TravisGood

821 posts


@IridiumEagle

Travis Godwin Good, PhD. CEO and Co-founder of @Ambient_xyz, an upcoming Useful PoW L1

Joined January 2024
773 Following · 3.8K Followers
Pinned Tweet
TravisGood @IridiumEagle
One thing we're trying to avoid with the design of Ambient is what I call "The ASIC Trap." Our proof of work is designed to operate at an algorithmic level, where the algorithm is generic across implementations and hardware, and the "usefulness" of the work is enforced by the choice of the network LLM and the supply-and-demand matching that occurs on chain.

Imagine if, instead, we made the proof of work something more fundamental, like matrix multiplication. Matrix multiplication is useful, right? But it turns out that if your work is something fundamental like that, it's much cheaper to do a lot of nonsense matrix multiplications on a piece of hardware like an ASIC than it is to do meaningful operations on something like a GPU running an LLM. At that point, you've stopped doing useful work and have just become a clone of Bitcoin, because all your GPU providers are getting priced out. And Bitcoin ASIC fab capacity is decreasing year-on-year because ASICs are not inherently useful.

Thus, "The ASIC Trap": any primitive mathematical operation that is claimed as inherently a proof of useful work reduces to a least common denominator of non-useful work on a highly specialized piece of hardware. An unfortunate economic inevitability.
12 replies · 8 reposts · 153 likes · 18.8K views

TravisGood retweeted

Sentient Ecosystem @SentientEco
Say hi to our friend Travis Good (@iridiumeagle) 👋 When he’s not building high scale machine intelligence as CEO of Ambient (@ambient_xyz), he's speedwalking, scrolling X, or using the word “contumacious”.
10 replies · 3 reposts · 25 likes · 6.8K views

TravisGood @IridiumEagle
I absolutely love this. Downloading immediately. Thank you!
Dev Shah @0xDevShah

We're releasing a whole new category of voice models. Introducing DramaBox — our state-of-the-art, open source voice model built for cinematic use cases. Traditional TTS gives you a voice. DramaBox by @resembleai gives you a performance. For too long, Voice AI has been stuck in "robotic assistant" mode. If you wanted dramatic emotion, sighs, or a voice cracking with grief, you had to hire an actor or spend hours editing. We fixed that.

1 reply · 0 reposts · 7 likes · 897 views

TravisGood @IridiumEagle
@DJIGlobal Oh c'mon. Are you guys selling camera stabilizing mech suits now??
0 replies · 0 reposts · 0 likes · 804 views

DJI @DJIGlobal
When someone questions your rig, don’t argue—demonstrate. 😏 That’s not just gear. That’s the difference between trying and delivering. 🎥: HangpaiV #Ronin4D #DJIRonin #dji
219 replies · 1.2K reposts · 6.1K likes · 1M views

TravisGood @IridiumEagle
All of this falls apart in practice. I studied Government at Harvard. I did the maximum-difficulty honors track and graduated magna cum laude. The sections for our required classes were each graded on a curve, with a 'C' grade being 'good.' Some sections had a bunch of legacies and athletic scholarship recipients in them. Some had the finest minds of our generation. If you fell into a weak section, you could get an A no problem. If you got an interesting and intellectually stimulating section, you could expect your future employment prospects to dim, because your GPA was guaranteed to go down.

Additionally, the competence of the Teaching Fellows grading the assignments varied widely. I had a Teaching Fellow who opened our section by saying (I'm not making this up), "I hate Harvard undergraduates because you think you're smarter than you really are, and it's my mission in this class to punish you."

Arbitrary quotas and curves within classes make it much harder for employers and graduate schools to identify top students. A student's overall courseload, thesis, and demonstrated achievement across classes in diverse disciplines is a reliable indicator of future potential. General courses should not be sorting mechanisms; advanced and honors classes should be, and those don't need to be graded on a curve either.
0 replies · 0 reposts · 0 likes · 79 views

Jeffrey Redfern @JeffreyHRedfern
Most curves only apply to satisfactory grades. At HLS, for instance, professors are not required to give any "low pass" grades at all; they have discretion to do so when a student's work is poor. So the starting point for the curve is "sufficient mastery."

The basic mistake here is treating "mastery" as binary. In any serious subject, it's not. There's no such thing as "knowing" economics, literature, philosophy, or constitutional law. New knowledge is always being created, and old claims are always being contested. Even distinguished professors at the top of their respective fields would not claim to have "mastered" their subjects. Grades place students on a spectrum from insufficient competence all the way to genuinely exceptional work. A grading system that recognizes that spectrum isn't a "bizarre Darwinian competition."

If you're teaching a course where every student masters the material so well that you cannot draw any meaningful distinctions between the work they've produced, then you have not challenged them at all. They were bored out of their minds. Meaningful growth occurs in the "zone of proximal development," where students are stretched to their capacities and can do good work with some "scaffolding" help from professors and peers, but not alone.

The practical problem with grade inflation is incentives. Experience has conclusively shown that students will not work nearly as hard when top grades are guaranteed. This is true at even the best schools in the country. Grade inflation (to the point where As are the only acceptable grades at many schools) has coincided with a decade-long decrease in actual student performance in both K-12 and higher education. Ask an actual college professor; they will tell you that grades are up and performance is down. College students spend less time studying and are less capable across core disciplines.

Now, if the goal is simply to get all students to meet some minimum standard of competence, then pass/fail grading is perhaps defensible. But that's certainly not the sales pitch from top universities. They claim to identify and nurture great minds. The other practical problem with grade inflation is that it makes it hard for employers and graduate schools to identify top students. That makes hiring less meritocratic and more dependent on personal networks, prestige, and social capital.
1 reply · 1 repost · 4 likes · 147 views

Phillip Muñoz @VPhillipMunoz
Harvard, apparently, is about to adopt a new policy to combat grade inflation. I devised my own anti–grade inflation policy 25 years ago. I’ve shared it with provosts and deans, to no avail. Here it is: The Muñoz Plan Against Grade Inflation The plan has three key components:
130 replies · 195 reposts · 2.4K likes · 853.7K views

TravisGood @IridiumEagle
On context engineering and its challenges.
2 replies · 2 reposts · 24 likes · 1.3K views

TravisGood @IridiumEagle
@MatthewBerman What's stopping them from releasing their earlier models that are +1 year old?
0 replies · 0 reposts · 0 likes · 51 views

Matthew Berman @MatthewBerman
Demis says he wants to see a Western open source AI stack and that we’re losing to China. He also says Google doesn’t have enough compute to build two frontier (open and closed) models, which is why Gemma is a smaller family of models. Watch this incredible clip. Shout out @ycombinator and @garrytan for the fantastic interview.
Matthew Berman @MatthewBerman

American open source AI is in trouble. China is eating our lunch. This is a bigger problem than people realize.

93 replies · 147 reposts · 1.4K likes · 295.7K views

TravisGood @IridiumEagle
I remember when we were told FISA was temporary. Congress should be fired wholesale. Don’t run for office if you can be so easily blackmailed.
4 replies · 0 reposts · 12 likes · 353 views

TravisGood @IridiumEagle
@Biochem115 @WomanDefiner Programmers are useless? News to me. Seems like the entire economy and US stock market are downstream from the value created by ... programmers
0 replies · 0 reposts · 1 like · 35 views

AntiBull @Biochem115
@WomanDefiner Incorrect, humans will fulfill other tasks in the job market that AI is incapable of doing. The only people "replaced" by AI are those who were near useless to begin with (granted, this is only true currently, but I'd even argue that "tech skills" are useless in market terms, like HR).
2 replies · 0 reposts · 1 like · 770 views

Paul @WomanDefiner
This is all going to end in collapse, and no one can stop it, because US companies are legally obligated to make as much money as possible, and if they don't, they can be sued into obscurity. None of these people understand human nature, as seen in the last decade of their conduct, and none of them are seeing what comes next. The Great Filter is going to be humans forgetting what humans do when survival is at stake.
Yasir Ai @AiwithYasir

🚨BREAKING: Two researchers from UPenn and Boston University just published a paper that should be uncomfortable reading for every CEO automating their workforce right now.

The argument is straightforward. Every company replacing workers with AI is also eliminating its own future customers. Laid-off workers stop spending. Enough of them stop spending and nobody can afford to buy anything. The companies that fired everyone end up selling into an economy with no purchasing power left.

Every executive can see this. The math is not complicated. But here is why nobody stops. If you do not automate, your competitor does. They cut costs, lower prices, take your market share, and you collapse anyway. So every company automates knowing it is collectively destructive, because the alternative is dying alone while everyone else survives. The researchers proved this is a Prisoner's Dilemma playing out in real time.

The numbers are already moving. Block cut nearly half its 10,000 employees this year. Jack Dorsey said AI made those roles unnecessary and that within the next year the majority of companies will reach the same conclusion. Salesforce replaced 4,000 customer support agents with AI. Goldman Sachs deployed a coding tool that lets one engineer do the work of five. Over 100,000 tech workers were laid off in 2025, and AI was cited as the primary driver in more than half of those cases. 80% of US workers hold jobs with tasks susceptible to AI automation.

The researchers tested every proposed solution. Universal basic income does not change a single company's incentive to automate. Capital income taxes adjust profit levels but not the per-task decision to replace a human. Collective bargaining cannot hold because automating is always the dominant strategy.

They also identified what they call a Red Queen effect. Better AI does not solve the problem; it accelerates it. Every company chases faster automation to gain market share over rivals, but in the end everyone has automated equally, the gains cancel out, and the only thing left is more destroyed demand.

The one thing the math says could work is a Pigouvian automation tax: a per-task charge that forces companies to account for the demand they destroy each time they replace a worker.

The conclusion is that this is not a transfer of wealth from workers to owners. Both sides lose. Workers lose income. Companies lose customers. It is a deadweight loss with no market mechanism to stop it on its own. (Link in the comment)

105 replies · 405 reposts · 4.2K likes · 633.2K views

TravisGood retweeted

ambient.xyz @ambient_xyz
talking to @IridiumEagle about how AI fails at production.
5 replies · 4 reposts · 43 likes · 2.2K views

TravisGood retweeted

ambient.xyz @ambient_xyz
The internet is literally eating itself. AI summarizes AI, and that synthetic output feeds the next model, which means each cycle dilutes the truth further. For enterprises this means the data powering your production systems is quietly decaying, and no one can trace where any of it came from.

Your compliance team cannot defend a decision nobody can replay. Your CTO cannot sign off on a system with no audit trail. Regulators will not accept that "the model said so." You do not get to ship and iterate on systems that touch financials, patient records, or legal exposure, because you need provable inference.

Ambient makes every AI output verifiable down to the logit, with cryptographic proof of exactly what was computed. This is AI your enterprise can actually be accountable for.
8 replies · 3 reposts · 33 likes · 1.6K views

TravisGood @IridiumEagle
@staysaasy The models now are unbelievably good compared to even 3 months ago. Closed and open both. But you’re right, the closed side may be overextending itself
0 replies · 0 reposts · 3 likes · 357 views

staysaasy @staysaasy
Compute shortage. Data center sentiment cratering. Global resource crunch. Still no killer consumer app. Capital drying up, financing going circular, nowhere left to run but public markets. Population sentiment in the toilet on the eve of a major election. Model prices climbing while Chinese open source closes the gap. The models supposedly approaching singularity haven't materially improved in six months. And the tell: people are leaving hot AI labs like Thinking Machines for Meta. Meta isn't a genAI company. It's an RL factory for ad spend. Capital, compute, sentiment, and competitive pressure all breaking the same direction at the same time. That's not a soft landing setup.
30 replies · 40 reposts · 853 likes · 174.8K views

TravisGood @IridiumEagle
On the scope of verified inference for enterprise.
5 replies · 1 repost · 25 likes · 1.8K views

TravisGood @IridiumEagle
@signulll I think they don't have humans in the loop, anywhere, during pre-training. Googlers can't be bothered with manual data cleaning, etc. The result is a model that is not properly tuned for human preferences, that feels alien and off-putting.
0 replies · 0 reposts · 0 likes · 123 views

signüll @signulll
not a single person i have ever spoken to uses gemini for coding. this is still very very weird. why is gemini so bad at coding when google has scoured the web full of code for decades?
1.1K replies · 163 reposts · 9.6K likes · 848.6K views

TravisGood @IridiumEagle
This speaks to complete incompetence. Major obvious performance regressions persisting for over a month *again* without Anthropic noticing. And this is just what they're willing to admit! If you're an enterprise, think very hard about handing your data over to these people. anthropic.com/engineering/ap…
2 replies · 1 repost · 22 likes · 1.1K views

TravisGood @IridiumEagle
If Opus 4.7 is running the ship, I kind of get why Anthropic has been tone-deafly flubbing everything lately. One bad model release and you get recursively compounding failure modes
5 replies · 0 reposts · 18 likes · 532 views

TravisGood @IridiumEagle
AI companies are not meant to be everything businesses.
3 replies · 1 repost · 22 likes · 6.1K views

TravisGood @IridiumEagle
The whole "spiking server CPU demand" story doesn't make sense if people are able to run agents on their own hardware with model calls to a provider like @ambient_xyz. It only makes sense if both the harness and the model are closed and the API is not sold separately.
3 replies · 1 repost · 12 likes · 856 views