So, I wanted to find a new way to arrange my language's alphabet. Atm I'm loosely using the Latin ordering, even though my system is weird:
A B K D E F G H I IE SH L M N O P R S T U V
So, I remembered that another language (I think Japanese) uses a poem containing every letter to set the order of its alphabet, so I decided to do the same.
Only problem is: my current word list is very limited, and some of the letters I needed only exist in specific words (e.g. the word for "dark"), so I ended up making a very depressing poem.
Enjoy! Or not.. I'm not going to tell you what to do.
English translation below. I'll also post images of it written in my language's script, as well as one line in my language's cursive script (I'm not doing the whole thing in cursive because fuck that)
Senarseha:
Seh ninfuat seh nem fieta; Seka sato nem fiekm juna jenes sermin.
Seh ninfuat sif nemsin netua niet; Seka sem sedma nemat sargo no
nrokniet sam fiekmin sehim sepra.
Sehim sinta nem nara niv nakliet.
Seh nem sine fieta.
English:
I say I am well; But all is dark before day begins.
I say it isn't too much; But this place is a farm of
pressure that blackens my soul.
My mind is ever in agony.
I am not well.
I wonder if anyone has considered building a large language model trained to consume and generate token sequences that are themselves the actual weights or matrix values of other large language models?
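A back-of-the-napkin sketch of what "weights as token sequences" could look like (purely my own illustration; `tokenize_weights` and `WEIGHT_BINS` are made-up names): flatten a donor model's parameters and quantize them into discrete tokens that a sequence model could, in principle, train on.

```python
# Hypothetical sketch: serialize another model's weights into a token sequence.
# Uses naive uniform quantization; a real scheme would need something smarter.
import torch
import torch.nn as nn

WEIGHT_BINS = 256  # each weight becomes one of 256 discrete "tokens"

def tokenize_weights(model: nn.Module) -> torch.Tensor:
    """Flatten all parameters of a donor model and quantize them into token ids."""
    flat = torch.cat([p.detach().flatten() for p in model.parameters()])
    lo, hi = flat.min(), flat.max()
    tokens = ((flat - lo) / (hi - lo + 1e-8) * (WEIGHT_BINS - 1)).long()
    return tokens

# A tiny stand-in for a "donor" model whose weights become training data
donor = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
print(tokenize_weights(donor).shape)  # one long sequence of weight-tokens
```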
Run LoRA to tune it to find and generate plausible subgraphs for specific tasks (an optimal search for weights that happen to be initialized by chance to near-ideal values, i.e. the "winning tickets" of the lottery ticket hypothesis).
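For context, a LoRA-style adapter looks roughly like this (my own toy version, not from any specific library): a frozen linear layer plus a trainable low-rank update.

```python
# Toy LoRA-style adapter: output = frozen W x + (B @ A) x, with A and B low rank.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ (self.B @ self.A).T  # base + low-rank correction

layer = LoRALinear(nn.Linear(16, 32))
out = layer(torch.randn(8, 16))  # only A and B receive gradients
```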
The entire thing could even be used to prune existing LLM weights, in a generative-adversarial model.
Shit, there's enough embedding and weight data to train a Meta-LLM from scratch at this point.
The sum total of trillions of parameters, spread across models floating around the internet, could be used as training data.
If the models and weights are designed to predict the next token, there shouldn't be anything to prevent another model trained on this sort of distribution from generating new plausible models.
You could even do task-prompt-to-model-task embeddings by training on the weights of task-specific models, do vector searches to mix models, etc., and generate *new* models,
not new text, not new imagery, but new *models*.
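The crudest possible version of "mixing models" in weight space is just interpolating two same-architecture checkpoints; a toy sketch with an arbitrary 50/50 blend (nothing here is the actual scheme being proposed):

```python
# Toy weight-space blend of two same-architecture models (hypothetical example).
import torch
import torch.nn as nn

def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

model_a, model_b = make_model(), make_model()   # stand-ins for task-specific models
state_a, state_b = model_a.state_dict(), model_b.state_dict()

blended = make_model()
blended.load_state_dict({
    name: 0.5 * state_a[name] + 0.5 * state_b[name]   # element-wise 50/50 blend
    for name in state_a
})
```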
It'd be a model for training/inferring/optimizing/generating other models.