The problem with models like this is that very little of their training data can be traced back to verifiable protein structures. The Protein Data Bank, and other sources of training data for models like this, contain a lot of broken structures and "creative liberties" taken to infer a structure from instrument data. It's a very complex process that leaves a lot open to interpretation.
On top of that, we don't have a clear understanding of how certain positions (conformations) of a structure affect underlying biological mechanisms.
Yes, these models can predict surprisingly accurate structures and sequences. Do we know if these outputs are biologically useful? Not quite.
This technology is amazing, don't get me wrong, but the average person might see this and wonder why we can't go full futurism and solve every pathology with models like these.
We've come a long way, but there's still a very very long way to go.
How do we get more verifiable protein data? So even if we had better data, we don't yet understand how the structure impacts the biology?
"Complete results, architectural decisions, and runnable code below."
This is a weird post; there doesn't seem to be any "below" here. Another comment linked the article: https://huggingface.co/blog/OpenMed/training-mrna-models-25-...
Yeah. Things like "Complete results, architectural decisions, and runnable code below." is literally how AI outputs stuff, so I'd expect the post was AI written too. :(
full article: https://huggingface.co/blog/OpenMed/training-mrna-models-25-...
Nice work! Here is an article you may find helpful if you have not already come across it. [0] You may also want to consider benchmarking against some non-ML methods. [1]
0. https://pubmed.ncbi.nlm.nih.gov/35318324/
1. https://www.nature.com/articles/s41586-023-06127-z
What makes this dataset or problem worth solving compared to other health datasets? Would the results on this task be broadly useful to health?
What other "datasets" are you talking about? How do you "solve a dataset" ?
You solve a dataset when you learn what there is to learn about the phenomenon of interest. The limit of such phenomenon is “cure all disease”, and clearly this is not solving that.
What are you talking about? "the phenomenon of interest"? There is nothing you wrote in either comment that makes sense.
What is a "dataset" that has been "solved" and what did the program do that 'solved' it?
MNIST (the number classification task) has been “solved” a billion times and it is hard to imagine any subsequent advances there as scores using a variety of methods have hit the saturation point of accuracy. Any further improvements are likely overfitting to noise. Therefore, we know that it is easy to detect handwritten numbers. However, we may not know how to detect other things as well, like reading an MRI. Those datasets/tasks are clearly different and require different techniques. Training an LLM is likewise different.
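To make that saturation point concrete, here's a minimal sketch (assuming scikit-learn is installed, and using its small 8x8 digits dataset as a stand-in for the real MNIST):

```python
# Toy illustration of MNIST-style saturation: even a plain linear
# classifier on scikit-learn's small digits dataset lands in the
# mid-90s on accuracy, which is why further gains on tasks like
# this mostly chase noise.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

Whereas reading an MRI, or modeling mRNA, doesn't saturate like this with simple methods.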
has been “solved” a billion times
If it was really solved, wouldn't it just need to happen once?
You think classifying handwriting of 10 digits is the same as this, which took someone 55 hours of GPU time to get through?
I have no idea what point you're trying to make and I can't tell if you do either. You were talking about "solving" other "health datasets" but you can't even come up with one or what that means.
Can someone explain what one might use this model for? As a developer with a casual interest in biology it would be fun to play with but honestly not sure what I would do
You can get your feet wet with genetic engineering for surprisingly little money.
This guy shows a lot of how it's done: https://www.youtube.com/@thethoughtemporium
Basically you can design/edit/inject custom genes into things and see real results, spending on the order of $100-$1000.
We actually did this in my highschool genetics class back in 1999! We made bacteria change color by splicing in a gene. Awesome stuff.
The (public!) school had a grant from one of Seattle's biotech boom companies.
Is there something like this in text/readable format?
My main concern is using fungi. If it ends up in my lungs I'm most likely screwed, right?
Yes, but most students produce their best work while infected.
This is the classic meme https://www.reddit.com/r/labrats/comments/mmv2ig/lab_strains...
Lab strains of things tend to be extremely sensitive and not human adapted. You shouldn't study and modify human-infecting organisms in your basement anyway. While you shouldn't ignore protective equipment and proper procedure... paranoia about infecting yourself with a lab leak isn't warranted.
I'd love to experiment with this stuff, just literally have no idea how it would be safe to start.
A codon-based model is cool. I know NVIDIA is building quite a large one.
At GTC they showed an SAE they built on a smaller version of it, allowing you to see what their model learned: https://research.nvidia.com/labs/dbr/blog/sae/
Interesting work - looks like AI for science is having its day right now.
> In Progress: CodonJEPA
JEPA is going to break the whole industry :D
Can you explain this? I haven't heard of JEPA, and from a quick search it seems to be vision/robotics based?
It’s a self-supervised learning architecture, and it’s pretty much universal. The loss function runs on embeddings, plus some other smart architectural choices all over. Worth diving into for a few hours; Yann LeCun gives some interesting talks about it.
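A toy sketch of the core idea (my own illustration, not actual JEPA code): a predictor tries to match the embedding of a masked target region from the embedding of the visible context, so the loss lives in embedding space rather than pixel/token space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": a fixed random linear map from a 16-dim input
# patch to an 8-dim embedding. In a real JEPA this is learned.
W_enc = rng.normal(size=(16, 8))

def encode(patch):
    return patch @ W_enc  # (16,) -> (8,) embedding

# Context patch (visible) and target patch (masked) from the same input.
context = rng.normal(size=16)
target = rng.normal(size=16)

# Toy "predictor": maps the context embedding to a guess at the
# target embedding. In I-JEPA the target embedding comes from a
# frozen EMA copy of the encoder.
W_pred = rng.normal(size=(8, 8)) * 0.1
pred_embedding = encode(context) @ W_pred
target_embedding = encode(target)

# JEPA-style loss: distance between embeddings, never between raw inputs.
loss = np.mean((pred_embedding - target_embedding) ** 2)
print(f"embedding-space loss: {loss:.3f}")
```

The point is that the model never has to reconstruct raw inputs, only their abstract representations, which is what makes the recipe so portable across modalities.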
https://openreview.net/pdf?id=BZ5a1r-kVsf
HN's blindspots never cease to amaze me.
I am a structural biologist working in pharmaceutical design and this type of thing could be wildly useful (if it works).
Blind spot?
Distributing the load on this will probably be infinitely more useful than Folding@home.
What makes these domain-specific models work when we don’t have good domain models for health care, chemistry, economics, and so on?
>we don’t have good domain models for health care, chemistry, economics and so on
Who says we don't?
Examples please?
No, it's really simple to search for domain specific models being used "in production" all over the place
I didn’t find a single one that outperforms a general model.
Ok, alphafold.
It’s not a large language model
gray goo of the future
hmmmm seems like some fake hype.