Jacob Kimmel
And so it puts you in this regime where transcription factors are a really nice substrate to manipulate as targets for medicines.
In some ways, they might be like evolution's levers upon the broader architecture of the genome.
And so by pulling on those same levers that evolution has gifted us, there are probably many useful things we can engender upon biology.
I don't know about that, but I'll give you a real cringe analogy that I sometimes deploy, though it requires a very special audience.
I think you'll probably be the one who fits into it.
I don't know about your audience, but you will.
You can kind of think about it in terms of how attention works, with queries, keys, and values: TFs are kind of like the queries, the genome sequences they bind to are kind of like the keys, and genes are kind of like the values.
And it turns out that structure is very efficient in terms of editing space: you can change just one of those embedding vectors, in this case one of those sequences, and get dramatically different total outputs.
And so I do think it's kind of interesting how these structures recur throughout biology, you know, in the same way that the attention mechanism seems to exist in some neural structures.
I think it's kind of interesting that you can very easily see how that same sort of querying and information storage might exist in the genome.
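To make that concrete, here's a minimal sketch of single-head dot-product attention in numpy. Mapping a TF to the query, binding sites to the keys, and downstream genes to the values is just the analogy made literal for illustration; every name and dimension here is invented, not a real biological model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 8   # embedding dimension (arbitrary)
n = 5   # number of binding-site/gene pairs (arbitrary)

Q = rng.normal(size=(1, d))   # "query": a TF's binding preference
K = rng.normal(size=(n, d))   # "keys": the genome sequences it can bind
V = rng.normal(size=(n, d))   # "values": the downstream genes

def attend(Q, K, V):
    scores = Q @ K.T / np.sqrt(d)   # how well the TF matches each site
    weights = softmax(scores)       # relative occupancy across sites
    return weights @ V              # the resulting regulatory output

out_before = attend(Q, K, V)

# Edit a single key (one binding sequence) and the whole output shifts,
# which is the "efficient editing space" point above.
K_edited = K.copy()
K_edited[2] = rng.normal(size=d)
out_after = attend(Q, K_edited, V)

print(np.linalg.norm(out_after - out_before))
```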
Yeah, or Eddie Chang has found that positional encodings probably exist in humans, using Neuropixels.
If you haven't read these papers.
Oh, yeah.
So he implants these Neuropixel probes into individuals, and then he's able to talk to them and observe their neural activity as they read sentences.
And what he finds is that there seem to be certain representations which function as a positional encoding across sentences.
So they fire at a certain frequency, and it just increases as the sentence goes on and then like resets.
And so it seems exactly like what we do when we train large language models, where you've got some function that encodes each token's position in the sequence.
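For reference, the function being compared to is presumably the standard sinusoidal positional encoding from the transformer literature (Vaswani et al., 2017); here's a minimal numpy sketch of it:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    # Standard transformer positional encoding: each position gets a
    # unique pattern of sines and cosines at geometrically spaced frequencies.
    positions = np.arange(seq_len)[:, None]    # 0, 1, 2, ... along the sentence
    dims = np.arange(0, d_model, 2)[None, :]
    freqs = 1.0 / (10000 ** (dims / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(positions * freqs)    # even dimensions
    pe[:, 1::2] = np.cos(positions * freqs)    # odd dimensions
    return pe

pe = sinusoidal_positional_encoding(seq_len=12, d_model=16)
# Each row marks "how far into the sequence we are," and the pattern
# restarts with each new sequence, loosely like the resetting firing
# rates described above.
print(pe.shape)  # (12, 16)
```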