Zac Reid

3.7K posts

Zac Reid

@ZachaReid

Founder @PianoVisionApp

San Diego, CA · Joined August 2011
1K Following · 977 Followers
Pinned Tweet
Zac Reid @ZachaReid
Wrapping up the year with the release of PianoVision V3. We worked on some really exciting problems this year that set up the groundwork for what's obviously going to be the future of piano learning.
PianoVision @PianoVisionApp

We're excited to launch PianoVision V3, our biggest release yet! This comes with 100+ beginner lessons, automatic keyboard calibration, piano mini games, animated ghost hands, SOTA AI piano fingering, new MR environments, a new UI overhaul, and a bunch more.

1 · 0 · 8 · 442

Anthropic @AnthropicAI
New Anthropic research: Emotion concepts and their function in a large language model. All LLMs sometimes act like they have emotions. But why? We found internal representations of emotion concepts that can drive Claude’s behavior, sometimes in surprising ways.
964 · 2.6K · 17K · 3.3M

Zac Reid @ZachaReid
GPT-5.4 is a special model
1 · 0 · 2 · 83

Zac Reid @ZachaReid
@OscarFalmer I really want to develop for the Display glasses but the lack of an API for the display is such a nonstarter for most of the ideas. Any idea when that’s coming?
0 · 0 · 0 · 43

Oscar Falmer @OscarFalmer
🤓 Prototyped a Sudoku Solving app on Meta Ray-Ban Glasses
5 · 3 · 25 · 2.1K

Zac Reid @ZachaReid
@xcjthu1 Does this still hold? Is there any more recent analysis?
0 · 0 · 0 · 5

Chaojun Xiao @xcjthu1
1/4 🚀 Densing Law of LLMs 🚀 OpenAI's Scaling Law showed how model capabilities scale with size. But what about the trend toward efficient models? 🤔 We introduce "capacity density" and found an exciting empirical law: LLMs' capacity density grows EXPONENTIALLY over time!
4 · 43 · 317 · 44K

Michael Becker @michae1becker
@ZachaReid It's a delightful flow - sadly no bedrock-based auth, so playwright it is, for now.
1 · 0 · 0 · 34

Michael Becker @michae1becker
at the risk of sounding like an idiot - I've only just now found out about Claude In Chrome, which unlocks fully agentic round trip web dev (rather than "hmm, it's not looking right + [scrshot]")
1 · 0 · 3 · 319

Zac Reid @ZachaReid
@tmychow What are your highest-conviction / most under-appreciated predictions right now?
1 · 0 · 0 · 188

trevor (taylor’s version) @tmychow
some ppl also think you can't make far out predictions about ai brother, we are living in 2026, named after the blog post "what 2026 looks like" you could have called chatbot revenue, training FLOPs, CoT, reasoning, distillation, etc. back in 2021 if you read lesswrong

trevor (taylor’s version) @tmychow
sometimes people say that thesis investing doesn't work e.g. "investors could not have anticipated llm chatbots" perhaps this is true of many vcs, but this is a skill issue about curiosity: there are many weird pockets of the world which did anticipate this

5 · 1 · 79 · 8.7K

Zac Reid @ZachaReid
@gfodor I haven't used it for writing code yet, just for analysis and mapping out a big code base. It's been pretty insane at that
1 · 0 · 2 · 52

gfodor.id @gfodor
@ZachaReid I used it once and it made a bozo mistake so I stopped. I will take latency to get quality almost every time. Opus is good enough tradeoff for now for stuff that I don’t need the best quality, doesn’t seem to make those kinds of mistakes.
2 · 0 · 4 · 147

gfodor.id @gfodor
We are at the crossover point with average human performance in programming for AI. You’ve all heard of the 10x human programmer - I see no inherent limiter here. Seems likely we will have alien superhuman 1000x coders by the end of the year in skill, and maybe latency.
7 · 3 · 81 · 3.4K

Zac Reid @ZachaReid
@gfodor @yudapearl Yann LeCun's biggest mistake. He might have some clarity about a higher rung on the architecture but dismissing LLMs that were pretty clearly going to be capable of being genius agent swarms was such a miss.
0 · 0 · 0 · 35

gfodor.id @gfodor
@yudapearl Yea, obviously we are, because we will soon be able to automate AI research with LLMs. Human AI researchers ironically fail to zoom out enough to see why AGI is coming. It’s not because we necessarily have the architecture now, but because we can find it quickly with what we have
6 · 0 · 50 · 1.3K

Judea Pearl @yudapearl
I'll repeat the question: Are we even on the right road?
Big Brain AI @realBigBrainAI

Pioneer of causal AI, Judea Pearl, argues that no amount of scaling will get LLMs to AGI. He believes current large language models face fundamental mathematical limitations that can't be solved by making them bigger. "There are certain limitations, mathematical limitations that are not crossable by scaling up."

His core argument: LLMs don't learn how the world works. They learn from *human interpretations* of how the world works. "What LLMs are doing right now is they summarize world models authored by people like you and me available on the web and they do some sort of mysterious summary of it, rather than discovering those world models directly from the data."

He illustrates this with healthcare data. When hospitals collect data on treatment effects, that raw data never reaches the LLMs. Instead, the models consume doctors' written interpretations: analyses shaped by people who already have a mental model of how disease and treatment work. In other words, LLMs are learning from the map, not the territory.

The missing piece, according to Pearl, is causal reasoning: the ability to understand not just *what* happens, but *why*. And he's clear this isn't a gap that more parameters or training data will close.

It raises an uncomfortable question... If AGI requires machines that build their own world models from raw data rather than summarising ours, are we even on the right road?

59 · 55 · 322 · 38K

Zac Reid @ZachaReid
@Memetic_Theory @candyflipline CC's browser control is really good too. I like the memories and heartbeat of it, but it just feels like a very thin layer on top of CC, so I haven't been pulled in yet
1 · 0 · 0 · 27

mass @Memetic_Theory
@ZachaReid @candyflipline Browser control and the ability to inject new paradigms to improve its own abilities and memories is pretty impressive
1 · 0 · 0 · 45

mass @Memetic_Theory
@candyflipline Idk. I’ve found it a game changer
3 · 0 · 23 · 8.3K

Zac Reid @ZachaReid
@dylan522p @tszzl Yeah this exactly, Codex is better at writing code but Claude code is better at *doing computer things*
0 · 0 · 7 · 425

Dylan Patel @dylan522p
@tszzl Nah it's still worse for day to day use. Claude code is more than just for coding
5 · 0 · 58 · 7.8K

Dylan Patel @dylan522p
Codex mogs the shit outta Opus for coding. Opus couldn't get this to work; Codex just 1-shot it ezpz. But the Codex map includes the 9-dash line 👀 OpenAI needs to align their models
21 · 4 · 426 · 84K

Andrew Ruiz @then_there_was
@martin_casado My go-to is just asking the LLM what the clean-slate, gold standard architecture would be and then simply refactoring towards that. Also, asking what is overly complex that could obviously be simplified. Works very well to intermittently do.
1 · 0 · 4 · 130

martin_casado @martin_casado
I'd love tips on how you all handle code complexity when AI coding the same project for months. I end up spending like 70-80% of the time just cleaning up, refactoring, and deleting things. There has to be a better way.
188 · 6 · 448 · 74.9K

Zac Reid @ZachaReid
@gfodor @0x_Vivek Is that the only reason you need dev mode? Honestly, taping the sensor is lower friction, and it's what I do for dev
0 · 0 · 1 · 19

gfodor.id @gfodor
@0x_Vivek no other way to do it unfortunately. need to turn off proximity sensor
2 · 0 · 0 · 36

gfodor.id @gfodor
PortalVR.io + HeyVR.io = ♥️ (sound on) Now you can play all of the best WebXR games and apps in your browser, without wearing a headset! Just plug in your Quest (in dev mode) over USB and start playing :) Check it out: portalvr.io/webxr?utm_sour…
2 · 4 · 24 · 1.9K

Jeremy Howard @jeremyphoward
If you're wondering why LLMs haven't done any independent breakthrough scientific research yet, I explained *18 months ago* why that's not gonna happen (unless there's a major change to how LLMs work):
Jeremy Howard @jeremyphoward

For those that hope (or worry) that LLMs will do breakthrough scientific research, I've got good (or bad) news: LLMs are particularly, exceedingly, marvellously ill-suited to this task. (if you're a researcher, you'll have noticed this already) Here's why🧵

103 · 85 · 1K · 181.7K

Zac Reid @ZachaReid
Claude Code is to cloud agents what humanoid robots are to specialized ones: Humanoids have 100% "interface coverage" since the world evolved around humans. Local agents have 100% interface coverage. The digital world evolved around local setups. Cloud agents can’t bridge that.
0 · 0 · 2 · 180