Mark Young

412 posts

@vizmo

Designer, developer, scientist

San Francisco · Joined May 2008
116 Following · 76 Followers
Nature Unedited
Nature Unedited@NatureUnedited·
Patrick is a 34-year-old orangutan at the Metro Richmond Zoo who recently became an online sensation after a video of him tying a perfect double knot went viral. To celebrate his birthday, the zoo gifted him a royal cloak, which he tied neatly on his own 🦧
English
109
930
12.2K
654K
Mark Young
Mark Young@vizmo·
@jjainschigg @shiraeis That makes some sense. There are additional cues and context that are shared, but again, not necessarily processed the same way. I believe there are systematic differences between people such that there's typically a significant difference between world models
0
0
1
6
κυβερκογιότλ
κυβερκογιότλ@jjainschigg·
@vizmo @shiraeis And as she later says, this is because there’s an actual shared reality and thus broad convergence on what amounts to a shared world model. x.com/shiraeis/statu…
shira@shiraeis

otoh, and this is the part i find hopeful: the post qt’d is about the local problem: language is a brutally narrow channel. 2 people can hear the same words, yet run totally different reconstructions, and walk away feeling either perfectly clear or completely alien to each other. meaning is inferred, not transmitted whole. but there may also be a global convergence pressure. the platonic representation hypothesis is basically the idea that when very different systems get good enough at modeling the same world, their internal representations start becoming more alike. larger models don’t just perform better, but across architectures, and even across language vs vision, they begin organizing reality in more similar ways. some fMRI work suggests the strongest overlap is often not at the final “answer” layer, but in the middle of the stack, where the system is still actively building a general-purpose world model of what’s going on. in a 2025 review of 25 fMRI studies, this intermediate-layer pattern shows up repeatedly, and the broader alignment story tends to get stronger as models scale in size and capability. that’s how I reconcile this with the post qt’d: a single conversation may be nowhere near enough to make 2 differently trained systems converge. shared reality may exert a long-run pull toward similar abstractions, but the channel between 2 people is still narrow, lossy, and path-dependent. yes, sometimes you really do hit the compression limit of a relationship, but that isn’t proof that shared understanding is fake, just that it is expensive. paper links: arxiv.org/abs/2405.07987 phillipi.github.io/prh/ arxiv.org/abs/2510.17833

1
0
0
16
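The "internal geometry" comparison behind the platonic representation hypothesis is usually made with a representational alignment metric; the PRH paper itself uses a mutual nearest-neighbor score, but linear CKA is a simpler metric in the same family and illustrates the idea. A minimal sketch, assuming only NumPy and toy activation matrices (not the paper's exact method or data):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation
    matrices X (n_samples x d1) and Y (n_samples x d2), i.e. two
    models' representations of the same inputs. Returns a value in
    [0, 1]; 1 means identical geometry up to rotation and scaling."""
    X = X - X.mean(axis=0)          # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

# Toy check: a rotated copy of the same representation is a perfect
# match, while an independent random representation scores much lower.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # random orthogonal rotation
print(linear_cka(X, X @ Q))                       # ~1.0 (rotation-invariant)
print(linear_cka(X, rng.normal(size=(200, 64))))  # substantially lower
```

The point of metrics like this is exactly the one being debated above: they compare the *shape* of two representation spaces without requiring the models to share architecture, dimensionality, or training data.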
shira
shira@shiraeis·
had a series of disagreements with someone i love recently where it became obvious we're running such different world models that no amount of talking was going to produce convergence. naturally i went looking for what neuroscience says about why two people can speak the same language fluently and still completely fail to land in the same internal state. the literature increasingly suggests language is way less "transmission" than people intuit. rather than sending a fully specified packet of meaning, the speaker emits a sparse, lossy signal, which the listener then reconstructs from their own priors, context, and internal model of the world. this frame fits the results in the quoted tweet unusually well. Zada et al. show that during real conversation, linguistic content briefly occupies a shared representational space across brains, but VERY briefly. Goldstein et al. show that within a single brain, comprehension unfolds over time in a layered hierarchy that looks a lot like moving through depth in a transformer. together the story is "words are small cues that help coordinate much larger contextual states across time," which means the feeling of being transparent benefits the speaker more than the listener, because two people can use nearly identical language, feel totally understood by themselves, and produce completely different reconstructions on the other side. it's honestly a small miracle that two differently trained systems ever converge as closely as they do through such a brutally narrow channel. what gives me hope is that learning how language actually functions across humans and machines might teach us something real and meaningful about improving human to human communication as well, by making the lossy channel a little less lossy. that said, some people don’t really want a wider channel. 
some people just want confirmation that their reconstruction is the only valid one, and maybe the most honest response to that isn’t frustration but recognizing you’ve hit the compression limit of the relationship
shira@shiraeis

Found 2 papers on language, brains, and LLMs that together tell a story no one has cleanly articulated. One looks at spoken conversation and finds that contextual LLM embeddings can track linguistic content as it moves from one brain to another, word by word. The relevant representation shows up in the speaker before the word is said, then shows up again in the listener after the word is heard. The other looks within a single brain and finds that the timeline of verbal comprehension lines up with the layer hierarchy of LLMs: earlier layers match earlier neural responses, deeper layers match later ones, especially in higher-order language regions. Both papers are from the same group at Princeton. Quick summary of each, then what I think they mean together. Zada et al. (Neuron 2024) recorded ECoG from pairs of epilepsy patients having spontaneous face-to-face conversations. They aligned neural activity to a shared LLM embedding space and found that contextual embeddings captured brain-to-brain coupling better than syntax trees, articulatory features, or non-contextual vectors. The embedding space works like a shared codec. Speaker encodes into it before they open their mouth, listener decodes after. Goldstein, Ham, Schain et al. (Nat Comms 2025) pulled embeddings from every layer of GPT-2 XL and Llama 2 while people listened to a 30-minute podcast. In Broca’s area, correlation between layer index and peak neural lag hits r = 0.85. As you move up the ventral stream, the temporal receptive window stretches from basically nothing in auditory cortex to a ~500ms spread between shallow and deep layer peaks in the temporal pole. The classical phonemes → morphemes → syntax → semantics pipeline doesn’t recover this temporal structure. The learned representations do. Together, these papers make conversation look a lot like two brains running closely related forward passes, with speech acting as a brutally lossy bottleneck between them. 
Inside a single brain, the structure of that forward pass (shallow layers tracking fast local features, deeper layers integrating slower contextual information) looks a lot like the way comprehension actually unfolds over time. What's crazy is these models were only trained on text, and yet their layer hierarchy STILL mirrors the temporal dynamics of spoken-language processing, so whatever structure they picked up is probably not just a quirk of modality. It actually seems to fall out of language statistics themselves, which is not what the classical picture would predict at all. If comprehension were really a tidy pipeline of discrete symbolic modules, you’d likely expect to see that cleanly in the neural timing, but you don’t. If you take compression seriously, this suggests language is not really about explicit symbolic manipulation, but more accurately about lossy compression over a learned continuous space. Brains and transformers may be landing on similar solutions because the statistical structure of meaning constrains the geometry hard enough that very different objective functions (natural selection vs next token prediction) still push you into roughly the same region. Something I find kinda funny is transformers compute all layers for a token in one feedforward pass, while brains seem to realize something like the same hierarchy sequentially in time, sometimes within the same cortical region. Broca’s area obviously does not have 48 anatomical layers, but its temporal dynamics behave almost as if it does, which is quietly a point in favor of recurrence. What transformers learned may be right even if the brain implements it more like an RNN unrolling over a few hundred milliseconds. The field ditched RNNs for engineering reasons. The brain, apparently, did not get the memo. The better frame than “LLMs think like brains” is representing meaning in context may just be a problem with fewer good solutions than we assumed. 
If you optimize hard enough on language statistics, you may end up in a solution family that overlaps miraculously well with what evolution found. There’s a real isomorphism in the problem, even if not necessarily in the machinery. Paper links: pubmed.ncbi.nlm.nih.gov/39096896/ nature.com/articles/s4146…

36
53
500
41.9K
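The Goldstein-style result described above reduces to a simple procedure: score how well each model layer predicts neural activity at each lag relative to word onset, find the lag where each layer peaks, then correlate layer depth with peak lag. A minimal synthetic sketch of that last step, with fake data standing in for the ECoG encoding scores (every number here is illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n_layers, n_lags = 48, 21
lags_ms = np.linspace(-200, 800, n_lags)     # neural lag relative to word onset

# Fake encoding-performance surface: deeper layers peak at later lags,
# mimicking the reported shallow-early / deep-late pattern.
true_peak = np.linspace(100, 500, n_layers)  # ms, shallow -> deep
perf = np.exp(-((lags_ms[None, :] - true_peak[:, None]) / 150.0) ** 2)
perf += 0.05 * rng.normal(size=perf.shape)   # measurement noise

peak_lag = lags_ms[perf.argmax(axis=1)]      # best lag for each layer
layer_idx = np.arange(n_layers)
r = np.corrcoef(layer_idx, peak_lag)[0, 1]   # strongly positive by construction
print(f"layer-depth vs peak-lag correlation: r = {r:.2f}")
```

In the real analysis the `perf` surface comes from per-layer, per-lag ridge encoding models fit to neural recordings; the synthetic surface here just shows why a monotone layer-to-lag mapping produces the high correlations the thread cites.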
Mark Young
Mark Young@vizmo·
@jjainschigg @shiraeis But people? No way. Not to mention that whatever the human analog of "internal geometry" is, it's not a data structure and it's got wild I/O
1
0
1
17
κυβερκογιότλ
κυβερκογιότλ@jjainschigg·
@shiraeis But I’m sure you saw the research that says that the internal geometry of differently trained and quantized systems was broadly the same, no?
1
0
0
155
Mark Young
Mark Young@vizmo·
@JebraFaushay His singing is still great. The show's production values look awful
0
0
0
3
Dr. Jebra Faushay
Dr. Jebra Faushay@JebraFaushay·
It’s ok to retire. You don’t have to perform anymore. Here is Rod Stewart strutting around with his tight little flare jeans and teased hair.
6.1K
640
9.8K
2.9M
κυβερκογιότλ
κυβερκογιότλ@jjainschigg·
This is going to happen like gravity, with comical and terrifying results. For a brief, shining moment, some folks will think “omg, everything talks to everything! Everything is self-healing! Look at my alarm system warning of a fire by Van Eck phreaking a connection to my refrigerator!” And then the horrors will begin.
lachlan@hyprturing

i think chips with burnt-in LLMs that run at very low power will probably result in much of the world around us being unnecessarily intelligent. cheaper to throw that chip and some flash with a readme into an automatic door opener than develop firmware for it.

2
1
1
102
Tech Layoff Tracker
Tech Layoff Tracker@TechLayoffLover·
Meta just confirmed a 20 percent company-wide headcount cut but insiders are screaming the real story is way uglier than the efficiency memo. Direct word from multiple sources deep in the org. The announced 20 percent is only the near-term floor. A senior manager told me it's the starter number. Top leadership is already modeling two to three times that long term. They dug into developer activity logs after rolling out heavy AI tools across engineering. What they found floored them. Hours worked have plummeted for a huge chunk of the org. Many seniors now clocking under 10 hours of real productive time per week because the tools are handling so much grunt work so fast. One staff engineer said leadership pulled dashboards showing commit rates holding steady or climbing while logged hours tanked. Same output. Way less human sweat. The math was brutal. If AI is letting one person do the work of three with half the time input, why pay for three? They have been quietly running knowledge extraction sprints for months. Senior engineers' screens recorded 24/7. Every prompt logged. Debugging flows captured in brutal detail. Entire decision trees filmed under the banner of process documentation for continuity. One engineer described being forced to whiteboard his whole system design playbook. Trade-offs. Failure modes. Scaling hacks. All while cameras rolled. They called it knowledge transfer to support the transition. The transition is agents and a remaining skeleton crew armed with those exact recordings, prompt libraries, and heavy Claude+Gemini access. Replacements are already shipping changes 40 percent faster using the precise workflows they ripped from the outgoing engineers. They are not slashing for cost alone. They are turning 15-plus years of Meta engineering DNA into structured training data. Every "document your process" request is not teamwork. It is feeding the beast that replaces you. The strategic AI pivot line they are feeding the press is cover. 
They are replacing the entire engineering organism with agents trained on their own seniors captured minds. CTO level playbook is already locked. Extract. Document. Automate. Repeat at two to three times scale. If you are still at Meta and someone pings you to please record a quick walkthrough of your workflow for the team run. Do not document. Do not hand over the keys. The knowledge extraction is complete. If you are inside watching more engineers get gutted because AI made their 40 hour week look like 10 hours of value, DMs are wide open.
100
154
1.3K
724.3K
positive side of X 🌞
positive side of X 🌞@positivityofx·
The tiger Maruay lived confined in a cage with a cement floor and was rescued. Since then, he has been thriving and loves to relax in the lake with his red ball.
192
2.7K
32.1K
552.9K
FearBuck
FearBuck@FearedBuck·
Nathan Martin, a 36-year-old high school cross-country coach from Jackson, Michigan, won the 2026 Los Angeles Marathon by running past Kenya’s Michael Kamau in the final second.
769
3.5K
70.6K
4.2M
Mark Young
Mark Young@vizmo·
@jjainschigg I guess I'm doing it wrong. Agents' responses vary a lot. Will I get used to that variance? Is the future sloppier?
1
0
1
14
κυβερκογιότλ
κυβερκογιότλ@jjainschigg·
I say, yes indeedy. And I've been saying this for some years. Good thing you kids are finally catching up. It'll be great. Anyone will be able to do anything. Nobody will be able to pay for anything.
Dustin@r0ck3t23

Dario Amodei just said the thing no one in tech wants to accept as true. The story everyone clings to about permanent human-AI partnership? It’s fiction. Comforting fiction we tell ourselves while the actual future takes shape around us. Amodei: “The ‘Centaur’ window could be very short.” Developer demand is exploding right now. Salaries competitive. Companies desperate for anyone who can harness AI effectively. Every visible metric suggesting the jobs adapted and survived. Every metric is wrong. This isn’t adaptation. This is the last breath before drowning. Human-directed to fully autonomous isn’t a slope. It’s a cliff edge. The collaboration has value until suddenly it doesn’t, and that transition happens in weeks, not generations. Amodei: “Don’t mistake the spike for safety.” The partnership feels real because it is real. Right now. Developers making choices AI can’t make. Catching failures. Providing context and judgment. Actual symbiotic value creation. That generates real demand for people who excel at this. But Amodei won’t pretend what’s obvious to him: this demand has an expiration date already visible on the horizon. When AI completes projects independently, architects solutions autonomously, debugs without oversight, the human doesn’t evolve into a new role. The human becomes redundant. Not less important. Irrelevant. Amodei: “Full automation.” Not collaboration as destination. Extinction. Centaur mode exists while AI masters independence. The economics sustain human involvement until the second they don’t, then it collapses completely. Developers watching this boom think they’re witnessing professional transformation. They’re experiencing the terminal employment cycle before the profession relocates entirely to silicon. We’re not building partners that enhance what humans do. We’re building replacements that eliminate why humans were needed. And the gap separating “collaborative tool” from “total replacement” isn’t closing gradually. 
It’s narrowing violently, and most people won’t see it coming until it has already happened.

1
0
0
58
Mark Young
Mark Young@vizmo·
@PSxMania_ I loved the first Jak and Daxter game so much - work of art, lots of soul, so much fun
0
0
1
96
PSX Mania
PSX Mania@PSxMania_·
Happy birthday to Jason Rubin, the creator of Crash Bandicoot and Naughty Dog co-founder! 🎂🎈🎉
PSX Mania tweet media
33
524
3.6K
59.7K
κυβερκογιότλ
κυβερκογιότλ@jjainschigg·
It's January 1, 2026. The basic premises and much of the detail predicted by the movie Her, made in 2013, are all now real, and right in front of your face. How's everyone feeling about that?
1
0
1
35
Mark Young
Mark Young@vizmo·
@elonmusk Mars is an extremely crappy place compared to Earth
0
0
0
9
Elon Musk
Elon Musk@elonmusk·
Occupy Mars
X Freeze@XFreeze

Mars will be our next planetary home for human civilization. Here are a few things you need to know
> Mars formed about 4.5 billion years ago alongside Earth and likely started as a warmer, wetter world before it lost its global magnetic field and dried into the cold desert seen today
> Mars is only about half the physical size of Earth, but because it has no oceans, its total dry land area is similar to all of Earth’s continents combined, making it effectively 100% usable “real estate”
> Surface gravity on Mars is just 38% of Earth’s, so you could jump roughly three times higher and lift much heavier objects than you could on Earth
> New evidence suggests a massive reservoir of liquid water may be trapped 10–20 kilometers beneath the Martian crust, theoretically enough to cover the whole planet in an ocean if it were brought to the surface
> The search for alien life has intensified after NASA’s Perseverance rover found rocks with “leopard spot” patterns that may record chemical signatures consistent with ancient microbial life, though the findings are still being tested
> Martian weather is extreme: the average temperature is about -81°F (-62°C) and can drop near -225°F in some regions, yet summer days near the equator can briefly reach around 70°F, similar to a mild day on Earth
> Colonizing Mars will be difficult because of toxic soil, high radiation, and a very thin atmosphere, but deep underground water and protected subterranean habitats could make long-term human survival scientifically possible
> Sunsets on Mars appear blue rather than red because fine dust in the thin atmosphere scatters red light away from the line of sight and lets more blue light pass through
> Mars has two small, irregular moons, Phobos and Deimos; Phobos is slowly spiraling inward and is expected to eventually break apart, likely creating a temporary ring system around the planet
> The planet hosts some of the most extreme landscapes in the solar system, including Olympus Mons, a giant volcano about three times taller than Mount Everest, and Valles Marineris, a canyon system so huge it makes the Grand Canyon look tiny by comparison
> We are still only visitors on Mars, but with every SpaceX Starship mission, we get closer to the day this harsh red desert becomes humanity’s next home

7.4K
3K
34K
33.9M
Mark Young
Mark Young@vizmo·
@elonmusk There's this weird numerology where some people think that people that don't agree with them are "low IQ"
0
1
2
27
Elon Musk
Elon Musk@elonmusk·
At this point, I am not advocating any particular candidate for NASA Administrator. I am just desperate for someone with a 3 digit IQ.
16.6K
17.5K
241.3K
27.7M
Mark Young
Mark Young@vizmo·
@DeepDishEnjoyer I wouldn't be surprised if people in Boston have a wider variety and higher levels of education on average, given the abundance of universities there. But I'm fairly certain you can't measure someone's IQ in a typical conversation
1
0
6
589
peepeepoopoo
peepeepoopoo@DeepDishEnjoyer·
I'm slowly coming to realize that a big reason I enjoy being in Boston so much is that the baseline IQ feels at least half a standard deviation higher than SF. It's this intangible aspect to every conversation across a wide range of personalities, like a brain fog was lifted.
158
69
3.1K
530K
Mark Young
Mark Young@vizmo·
@swardley @leashless Rich people's lives in space will suck compared to middle class life on Earth. Are they that naive?
1
0
0
82
Simon Wardley
Simon Wardley@swardley·
The problem with this is that we knew over 15 years ago that a certain group of wealthy individuals thought their salvation lay in space (think the film Elysium). They're not going to fix the planet, they have no intention of fixing the planet. Never have. Still, the launch was picture perfect. Spectacular.
Dale Vince@DaleVince

Watching obscenely rich people and their friends blast off into (not quite) space for a jolly is bizarre when there’s so much to fix here on earth. This song by Gil Scott-Heron does a good job of summing it up…

3
3
11
2.7K
Mark Young
Mark Young@vizmo·
@SenatorBanks Truth? You knew nothing about that person or the work they did yet you insult them. What a jerk. Definitely not a leader
0
0
2
16
Mark Young
Mark Young@vizmo·
@cb_doge Biggest grift the solar system has ever seen
0
0
0
16
DogeDesigner
DogeDesigner@cb_doge·
🚨 BREAKING: Elon Musk says that he thinks we could have a million people on Mars in 20 years.
4.1K
856
5.3K
36.5M
Nick Ford
Nick Ford@Ford_Nick·
What would you add?
Nick Ford tweet media
2.4K
234
3K
336.6K
philip lewis
philip lewis@Phil_Lewis_·
What was the Black newspaper in your hometown? Trying to see something
1.4K
181
1.5K
299.7K