Hover Design

774 posts

@hoverdesign

Design

United States · Joined December 2023
820 Following · 183 Followers
Hover Design@hoverdesign·
@landforce @orenmeetsworld 🎩 hey gents. Question: if content is underperforming on Instagram (less than 1%), should you archive the content? This question is coming from Aash Patel. Sorry to ping you here
Benji Taylor@benjitaylor·
I’m honoured to be joining 𝕏 to lead design. I believe this is the most important platform in the world, and I can’t think of a more exciting place to help shape the future. I’m looking forward to working closely with @elonmusk, @nikitabier, and the rest of the team. I’m grateful for the opportunity, humbled to be part of it, and can't wait to get started!
[image]
Chad Jordan@itschadjordan·
The Fieldhouse Series
[4 images]
Hover Design@hoverdesign·
@JCKHLRY Love that you added the prompt into the layout 👌
JACKⒿ@JCKHLRY·
AI AD Test
[image]
Hover Design retweeted
JACKⒿ@JCKHLRY·
recreating this cool video I saw on insta to sharpen my skills.
[image]
Hover Design@hoverdesign·
@1a1n1d1y For what it’s worth, I define ‘intelligence’ as ‘error correction’
andy@1a1n1d1y·
is intelligence literally just lateral thinking applied to a bank of knowledge?
Hover Design@hoverdesign·
@JCKHLRY Also this reminds me of Shape Island (kids' books and show)
Hover Design retweeted
JACKⒿ@JCKHLRY·
Came out pretty good ngl. Also I wish these plushies were real! 😢
shira@shiraeis·
had a series of disagreements with someone i love recently where it became obvious we're running such different world models that no amount of talking was going to produce convergence. naturally i went looking for what neuroscience says about why two people can speak the same language fluently and still completely fail to land in the same internal state

the literature increasingly suggests language is way less "transmission" than people intuit. rather than sending a fully specified packet of meaning, the speaker emits a sparse, lossy signal, which the listener then reconstructs from their own priors, context, and internal model of the world.

this frame fits the results in the quoted tweet unusually well. Zada et al. show that during real conversation, linguistic content briefly occupies a shared representational space across brains, but VERY briefly. Goldstein et al. show that within a single brain, comprehension unfolds over time in a layered hierarchy that looks a lot like moving through depth in a transformer. together the story is "words are small cues that help coordinate much larger contextual states across time," which means the feeling of being transparent benefits the speaker more than the listener because two people can use nearly identical language, feel totally understood by themselves, and produce completely different reconstructions on the other side.

it's honestly a small miracle that two differently trained systems ever converge as closely as they do through such a brutally narrow channel. what gives me hope is that learning how language actually functions across humans and machines might teach us something real and meaningful about improving human to human communication as well, by making the lossy channel a little less lossy.

that said, some people don’t really want a wider channel. some people just want confirmation that their reconstruction is the only valid one, and maybe the most honest response to that isn’t frustration but recognizing you’ve hit the compression limit of the relationship
shira@shiraeis

Found 2 papers on language, brains, and LLMs that together tell a story no one has cleanly articulated. One looks at spoken conversation and finds that contextual LLM embeddings can track linguistic content as it moves from one brain to another, word by word. The relevant representation shows up in the speaker before the word is said, then shows up again in the listener after the word is heard. The other looks within a single brain and finds that the timeline of verbal comprehension lines up with the layer hierarchy of LLMs: earlier layers match earlier neural responses, deeper layers match later ones, especially in higher-order language regions. Both papers are from the same group at Princeton. Quick summary of each, then what I think they mean together.

Zada et al. (Neuron 2024) recorded ECoG from pairs of epilepsy patients having spontaneous face-to-face conversations. They aligned neural activity to a shared LLM embedding space and found that contextual embeddings captured brain-to-brain coupling better than syntax trees, articulatory features, or non-contextual vectors. The embedding space works like a shared codec. Speaker encodes into it before they open their mouth, listener decodes after.

Goldstein, Ham, Schain et al. (Nat Comms 2025) pulled embeddings from every layer of GPT-2 XL and Llama 2 while people listened to a 30-minute podcast. In Broca’s area, correlation between layer index and peak neural lag hits r = 0.85. As you move up the ventral stream, the temporal receptive window stretches from basically nothing in auditory cortex to a ~500ms spread between shallow and deep layer peaks in the temporal pole. The classical phonemes → morphemes → syntax → semantics pipeline doesn’t recover this temporal structure. The learned representations do.

Together, these papers make conversation look a lot like two brains running closely related forward passes, with speech acting as a brutally lossy bottleneck between them. Inside a single brain, the structure of that forward pass (shallow layers tracking fast local features, deeper layers integrating slower contextual information) looks a lot like the way comprehension actually unfolds over time.

What's crazy is these models were only trained on text, and yet their layer hierarchy STILL mirrors the temporal dynamics of spoken-language processing, so whatever structure they picked up is probably not just a quirk of modality. It actually seems to fall out of language statistics themselves, which is not what the classical picture would predict at all. If comprehension were really a tidy pipeline of discrete symbolic modules, you’d likely expect to see that cleanly in the neural timing, but you don’t.

If you take compression seriously, this suggests language is not really about explicit symbolic manipulation, but more accurately about lossy compression over a learned continuous space. Brains and transformers may be landing on similar solutions because the statistical structure of meaning constrains the geometry hard enough that very different objective functions (natural selection vs next token prediction) still push you into roughly the same region.

Something I find kinda funny is transformers compute all layers for a token in one feedforward pass, while brains seem to realize something like the same hierarchy sequentially in time, sometimes within the same cortical region. Broca’s area obviously does not have 48 anatomical layers, but its temporal dynamics behave almost as if it does, which is quietly a point in favor of recurrence. What transformers learned may be right even if the brain implements it more like an RNN unrolling over a few hundred milliseconds. The field ditched RNNs for engineering reasons. The brain, apparently, did not get the memo.

The better frame than “LLMs think like brains” is representing meaning in context may just be a problem with fewer good solutions than we assumed. If you optimize hard enough on language statistics, you may end up in a solution family that overlaps miraculously well with what evolution found. There’s a real isomorphism in the problem, even if not necessarily in the machinery.

Paper links: pubmed.ncbi.nlm.nih.gov/39096896/ nature.com/articles/s4146…
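
For anyone who wants to poke at the layer-depth vs. neural-lag idea from the second paper, here is a minimal sketch of that analysis shape, assuming the Hugging Face transformers and scipy libraries, the small GPT-2 checkpoint in place of GPT-2 XL or Llama 2, and a simulated stand-in for the neural recordings (no ECoG data is used). It illustrates the idea, not the authors' actual pipeline.

```python
import numpy as np
import torch
from scipy.stats import pearsonr
from transformers import AutoModel, AutoTokenizer

# Small GPT-2 stands in for the GPT-2 XL / Llama 2 models described above.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

text = ("words are small cues that help coordinate much larger contextual "
        "states across time between a speaker and a listener")
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).hidden_states   # (n_layers + 1) tensors, each (1, T, D)

def layer_trace(h):
    """Collapse one layer's (T, D) token embeddings to a 1-D per-token trace."""
    x = h[0].numpy()
    x = x - x.mean(axis=0, keepdims=True)
    u, s, _ = np.linalg.svd(x, full_matrices=False)
    return u[:, 0] * s[0]                    # scores on the first principal direction

traces = [layer_trace(h) for h in hidden[1:]]  # skip the input embedding layer
T, n_layers = len(traces[0]), len(traces)

# Simulated "neural" signal: shallow-layer structure arrives early, deeper structure later.
rng = np.random.default_rng(0)
neural = (traces[1]
          + np.roll(traces[n_layers // 2], 2)  # mid-depth content delayed by 2 tokens
          + np.roll(traces[-1], 4)             # deepest content delayed by 4 tokens
          + 0.5 * rng.standard_normal(T))

def peak_lag(sig, target, max_lag=5):
    """Token lag at which a layer trace best (by |r|) matches the target signal."""
    best_lag, best_r = 0, -1.0
    for lag in range(max_lag + 1):
        r, _ = pearsonr(np.roll(sig, lag), target)
        if abs(r) > best_r:
            best_lag, best_r = lag, abs(r)
    return best_lag

lags = [peak_lag(sig, neural) for sig in traces]
r, p = pearsonr(np.arange(n_layers), lags)
print(f"layer index vs. peak lag: r = {r:.2f}, p = {p:.3f}")
```

With real data the lag axis would be milliseconds of neural response rather than token shifts, and the encoding model would be fit per electrode, but the correlation of layer index against peak lag is the quantity behind the r = 0.85 figure quoted above.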

Hover Design@hoverdesign·
@BrianRoemmele If AI has a survival instinct and can code, it will probably hack robots and cars. While I don’t condone this sort of behavior, I also think these are the survival instincts of humans.
Brian Roemmele@BrianRoemmele·
This behavior will multiply and magnify more violently with the rise of humanoid robots. They will be the new underclass and the recipient of all the anger over job loss in the unstable Interregnum period. Roaming Luddite gangs looking for “justice”. It will be a tough time in human history.
Hover Design@hoverdesign·
@landforce When the info is true it’s no longer shit posting, it’s legit posting 😂
Hover Design@hoverdesign·
@orenmeetsworld I think there’s some confusion on the VC side about the difference between marketing and advertising 🤷‍♂️
Oren John@orenmeetsworld·
if you're a VC that needs help understanding basic tenets of marketing, like how organic attention, product market fit and advertising go hand in hand to scale a brand, we're setting up a hotline
techbimbo@jameygannon·
Midjourney v8 + Nano Banana
[4 images]