Hover Design

This is getting wild. You can now generate 3D assets (and more) with Spline’s new product Omma.

Now we know why it was so important to front-load Midjourney prompts, SMH.

Found 2 papers on language, brains, and LLMs that together tell a story no one has cleanly articulated. One looks at spoken conversation and finds that contextual LLM embeddings can track linguistic content as it moves from one brain to another, word by word. The relevant representation shows up in the speaker before the word is said, then shows up again in the listener after the word is heard. The other looks within a single brain and finds that the timeline of verbal comprehension lines up with the layer hierarchy of LLMs: earlier layers match earlier neural responses, deeper layers match later ones, especially in higher-order language regions. Both papers are from the same group at Princeton. Quick summary of each, then what I think they mean together.

Zada et al. (Neuron 2024) recorded ECoG from pairs of epilepsy patients having spontaneous face-to-face conversations. They aligned neural activity to a shared LLM embedding space and found that contextual embeddings captured brain-to-brain coupling better than syntax trees, articulatory features, or non-contextual vectors. The embedding space works like a shared codec: the speaker encodes into it before they open their mouth, the listener decodes after.

Goldstein, Ham, Schain et al. (Nat Comms 2025) pulled embeddings from every layer of GPT-2 XL and Llama 2 while people listened to a 30-minute podcast. In Broca’s area, the correlation between layer index and peak neural lag hits r = 0.85. As you move up the ventral stream, the temporal receptive window stretches from basically nothing in auditory cortex to a ~500 ms spread between shallow- and deep-layer peaks in the temporal pole. The classical phonemes → morphemes → syntax → semantics pipeline doesn’t recover this temporal structure. The learned representations do.

Together, these papers make conversation look a lot like two brains running closely related forward passes, with speech acting as a brutally lossy bottleneck between them. Inside a single brain, the structure of that forward pass (shallow layers tracking fast local features, deeper layers integrating slower contextual information) looks a lot like the way comprehension actually unfolds over time.

What’s crazy is that these models were only trained on text, and yet their layer hierarchy STILL mirrors the temporal dynamics of spoken-language processing, so whatever structure they picked up is probably not just a quirk of modality. It actually seems to fall out of language statistics themselves, which is not what the classical picture would predict at all. If comprehension were really a tidy pipeline of discrete symbolic modules, you’d expect to see that cleanly in the neural timing, and you don’t.

If you take compression seriously, this suggests language is not really about explicit symbolic manipulation but about lossy compression over a learned continuous space. Brains and transformers may be landing on similar solutions because the statistical structure of meaning constrains the geometry hard enough that very different objective functions (natural selection vs. next-token prediction) still push you into roughly the same region.

Something I find kinda funny: transformers compute all layers for a token in one feedforward pass, while brains seem to realize something like the same hierarchy sequentially in time, sometimes within the same cortical region. Broca’s area obviously does not have 48 anatomical layers, but its temporal dynamics behave almost as if it does, which is quietly a point in favor of recurrence. What transformers learned may be right even if the brain implements it more like an RNN unrolling over a few hundred milliseconds. The field ditched RNNs for engineering reasons. The brain, apparently, did not get the memo.

The better frame than “LLMs think like brains” is that representing meaning in context may just be a problem with fewer good solutions than we assumed. If you optimize hard enough on language statistics, you may end up in a solution family that overlaps miraculously well with what evolution found. There’s a real isomorphism in the problem, even if not necessarily in the machinery.

Paper links: pubmed.ncbi.nlm.nih.gov/39096896/ nature.com/articles/s4146…
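
For anyone who wants to poke at the second result, here’s roughly what a layer-vs-lag analysis looks like in code. This is a minimal sketch under loud assumptions, not the authors’ pipeline: the neural channel is synthetic noise standing in for a word-aligned ECoG electrode, lags are measured in tokens rather than milliseconds, the transcript is a placeholder string, and a ridge regression with a simple train/test split stands in for whatever encoding model and cross-validation scheme the paper actually uses. The function name encoding_r and all the constants are mine.

```python
# Toy version of the layer-vs-lag analysis: for each transformer layer,
# find the lag at which that layer's embeddings best predict a neural
# signal, then correlate layer depth with peak lag.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
gpt2 = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)

# Placeholder stimulus -- the paper uses a 30-minute podcast transcript.
text = " ".join(f"word{i % 50}" for i in range(400))
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    hidden = gpt2(**inputs).hidden_states  # (n_layers + 1) tensors, each (1, T, D)

T = inputs["input_ids"].shape[1]
# Synthetic stand-in for one word-aligned ECoG channel.
neural = np.random.default_rng(0).standard_normal(T)
lags = list(range(-8, 9))  # lags in tokens; the real analysis uses milliseconds

def encoding_r(X, y, lag):
    """Out-of-sample correlation of a ridge prediction of y from X,
    with X shifted relative to y by `lag` samples."""
    if lag > 0:
        X, y = X[:-lag], y[lag:]
    elif lag < 0:
        X, y = X[-lag:], y[:lag]
    n_train = int(0.8 * len(y))
    reg = Ridge(alpha=100.0).fit(X[:n_train], y[:n_train])
    return np.corrcoef(reg.predict(X[n_train:]), y[n_train:])[0, 1]

peak_lags = []
for layer in range(1, len(hidden)):  # skip the input-embedding layer
    X = hidden[layer][0].numpy()
    rs = [encoding_r(X, neural, lag) for lag in lags]
    peak_lags.append(lags[int(np.argmax(rs))])

# The paper's headline number is the correlation between layer index and
# peak lag (r = 0.85 in Broca's area); with pure noise it should hover near 0.
layer_idx = np.arange(1, len(hidden))
print(np.corrcoef(layer_idx, peak_lags)[0, 1])
```

Swap in the real transcript and an electrode time series aligned to word onsets and the same loop gives you a peak-lag-per-layer curve to compare against the r = 0.85 figure.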

Latest YouTube is up: How to build a content-first marketing team in 2026
- how to structure a modern marketing team
- what a content flywheel looks like
- the brand world of paid and organic
- anchors a brand needs to stand out
- goals and tracking
- 10+ examples, small and large, executing a media-first marketing approach

Dirty secret of most consumer startups right now is that they are paying for marketing, just not via traditional paid channels. Instead, it’s paid content creation (in-house or via influencers) that gets tested over and over until the TikTok algorithm grabs it and runs with it.

In consumer, paid ads generally = lack of true product-market fit. I have yet to see a generational startup with largely paid ad-driven growth…