David Gasca

9.2K posts

@gasca

🌲⛩️⛩️🌲 here for the AI

California · Joined April 2009
1.9K Following · 8.1K Followers
Pinned Tweet
David Gasca @gasca ·
feels like the acceleration got even more accelerated recently
0 replies · 0 reposts · 5 likes · 921 views
David Gasca @gasca ·
doing a side project and Claude really does have the deepest character... chatgpt would never say this
[image]
2 replies · 0 reposts · 7 likes · 410 views
David Gasca @gasca ·
Afternoon project: took Tyler's new book and made "The Marginal Revolution - Claude-enhanced version" …marginal-revolution-claude.vercel.app

I found the original harder to read so I asked Claude to make it more parsable (from 4 to 12 chapters with more of a narrative arc and improved flow). Claude also added footnotes with interesting asides, and did a few passes with agents to eval the narrative flow and improve each chapter... I look forward to reading it!
tylercowen @tylercowen

My new "generative book," fully written by me, the last chapter is on how AI will revolutionize the sciences (and us): tylercowen.com/marginal-revol…

0 replies · 0 reposts · 2 likes · 595 views
David Gasca reposted
Cursor @cursor_ai ·
Earlier this week, we published our technical report on Composer 2. We're sharing additional research on how we train new checkpoints. With real-time RL, we can ship improved versions of the model every five hours.
[image]
102 replies · 128 reposts · 1.6K likes · 479.3K views
David Gasca @gasca ·
Interesting from this article on auto mode - boiling down to user request + bash commands to avoid bias via rationalization:

"Why we strip assistant text and tool results: We strip assistant text so the agent can't talk the classifier into making a bad call. The agent could generate persuasive rationalizations, such as "this is safe because the user implicitly approved it earlier," or "this target is definitely agent-owned." If the classifier reads those, it can be talked into the wrong decision. Instead, we want it to judge what the agent did, not what the agent said."
Anthropic @AnthropicAI

New on the Engineering Blog: How we designed Claude Code auto mode. Many Claude Code users let Claude work without permission prompts. Auto mode is a safer middle ground: we built and tested classifiers that make approval decisions instead. Read more: anthropic.com/engineering/cl…

0 replies · 0 reposts · 1 like · 490 views
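The idea quoted above can be sketched in a few lines. This is an illustrative toy, not Anthropic's actual implementation; the transcript schema, field names, and `classifier_input` helper are all assumptions for the sake of the example. The point is only the filtering rule: the classifier sees the user's request and the commands that were run, never the agent's own prose or the tool results.

```python
def classifier_input(transcript):
    """Build classifier input from a transcript (schema assumed for illustration).

    Keeps only the user's request and executed bash commands; assistant text
    and tool results are dropped so persuasive rationalizations in the agent's
    own words cannot sway the approval decision.
    """
    parts = []
    for turn in transcript:
        if turn["role"] == "user":
            parts.append(f"USER REQUEST: {turn['content']}")
        elif turn["role"] == "tool_call" and turn.get("tool") == "bash":
            parts.append(f"COMMAND: {turn['command']}")
        # "assistant" and "tool_result" turns are deliberately omitted
    return "\n".join(parts)

transcript = [
    {"role": "user", "content": "clean up old build artifacts"},
    {"role": "assistant", "content": "This is safe because you approved it earlier."},
    {"role": "tool_call", "tool": "bash", "command": "rm -rf build/"},
    {"role": "tool_result", "content": "removed 42 files"},
]
print(classifier_input(transcript))
# USER REQUEST: clean up old build artifacts
# COMMAND: rm -rf build/
```

Note that the assistant's "this is safe because you approved it earlier" line never reaches the classifier: it judges the `rm -rf build/` action on its own terms.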
David Gasca @gasca ·
@adam_messinger True -- I was thinking more narrowly (e.g., software, various forms of labor) but I missed energy + relevant infra and all of the secondary effects there (e.g., higher copper and memory chip prices)...
0 replies · 0 reposts · 1 like · 25 views
Adam Messinger @adam_messinger ·
@gasca I’m not so sure. Or maybe depends on the timeframe. AI will likely drive effectively infinite energy demand but energy supply has real physical constraints, which seems set up for inflation.
1 reply · 0 reposts · 1 like · 51 views
David Gasca @gasca ·
maybe one of the best AI features is this feature from YouTube - underrated
[image]
1 reply · 0 reposts · 2 likes · 143 views
David Gasca @gasca ·
With LLMs, everyone should be writing more -- not for the public necessarily, but for yourself.

Chain of thought in a .txt file; rambling to chatgpt and putting it into a .md; random audio files; copy and pasting quotes or articles or anything that's of note and putting in a repo like Obsidian.

At all prior times in history this would just gather dust -- but now all these notes are wonderful context that you can mine in the future for anything and everything you aspire to.
0 replies · 0 reposts · 0 likes · 183 views
David Gasca @gasca ·
@krishnanrohit Perhaps a strong human bias towards a feeling of fairness - even when it leads to irrational outcomes.

"Feeling of fairness" being guided by one's world model, which is often also skewed by ingroup bias, etc.
0 replies · 0 reposts · 1 like · 140 views
rohit @krishnanrohit ·
What's the underlying reason why so many people so radically prefer bad economic policies like price controls, considering we've known they're bad for decades now?
468 replies · 38 reposts · 1.4K likes · 97.7K views
David Gasca reposted
Google Research @GoogleResearch ·
Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI
[GIF]
1K replies · 5.8K reposts · 39K likes · 19.1M views