Avi Shefi
@shefiavi

444 posts

Consultant | AI & Distributed Systems

Joined February 2011
59 Following · 66 Followers

Pinned Tweet
Avi Shefi @shefiavi
FOMO is irrational. Reactive. The thing that gets engineering teams to waste cycles on technology they don't need. All true.

And yet: months of debate about whether to investigate something. Then alignment in a week. No new data. No better argument. Just enough people independently feeling left out of the same thing at the same time.

"We view a behavior as more correct in a given situation to the degree that we see others performing it." — Cialdini

When the anxiety goes collective, it does something rational argument almost never does: it moves things off the backlog without anyone having to make the case. The conversation skips straight to "how do we approach this."

Worth leaning into, not managing away.
Avi Shefi @shefiavi
@dhh It used to be that around the ~5-year mark the cloud bill stops making sense if you plan for the long run. Do you think that's changed? More/less/same?
DHH @dhh
In 2023, we spent $3,934,099 on AWS + other hosting. In 2026, our hosting + support bill is down to ~$1m/year due to the cloud exit. Even including all the hardware buying, we will already have saved ~$4m by the end of this year. And going forward, it's ~$3m/yr in savings 🤑
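A back-of-the-envelope check of the figures above (a sketch; all numbers come straight from the tweet, and the one-time hardware spend is left out because the tweet only gives the cumulative result):

```python
# Rough cloud-exit arithmetic from DHH's quoted figures.
aws_2023 = 3_934_099       # 2023 AWS + other hosting spend, $/yr (from the tweet)
post_exit = 1_000_000      # ~2026 hosting + support bill, $/yr (from the tweet)

annual_savings = aws_2023 - post_exit
print(f"ongoing savings: ~${annual_savings / 1e6:.1f}M/yr")
```

The difference comes out to about $2.9M/yr, consistent with the "~$3m/yr in savings" quoted above once the one-time hardware purchases are amortized away.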
Avi Shefi @shefiavi
@dhh The gap gets smaller when expectations are translated by capable models
DHH @dhh
"As models get even more powerful, the idea that your system is tied down as a fixed black box is likely to become an archaic notion pretty quickly. As always, the future is already here, it's just not evenly distributed." world.hey.com/dhh/the-mallea…
Avi Shefi @shefiavi
“If you are not occasionally adding things back in, you are not deleting enough.” — Elon Musk
Mishig Davaadorj @mishig25
trying to figure out which open model to run with my pi agent
[image]
Daniel Jeffries @Dan_Jeffries1
The poisonous atmosphere around AI innovation is caused by a lethal combo:
1) idiotic messaging from AI execs (it will take your job)
2) doomers inciting violence and warning of the end of all life
3) unscrupulous populist politicians who never waste a good crisis
4) and the media, who lead with fear

It is the official end of the Pax Americana. You can't build the next great wave of a country's development breathing this poisonous air. I hope you're all learning Mandarin. You'll need it soon.
zerohedge@zerohedge

The US social mood is turning dramatically negative on AI

Avi Shefi @shefiavi
The All-In Pod @chamath: Anthropic's Mythos warning is "mostly theater" - same playbook as the GPT-2 "end of days" hype in 2019 that turned out to be a nothingburger. Clever go-to-market to drive hyper attention and adoption.

"Vendors, conference keynotes, and tech influencers know this mechanism, and some use it deliberately, manufacturing urgency around adoption."

FOMO moves things that good arguments alone don't. Even when the anxiety is manufactured.
The All-In Podcast@theallinpod

Chamath: Anthropic's Mythos Warning Is Theater

@Jason: "Chamath, is it the Boy who Cried Wolf, or is this the real deal now?"

@Chamath: "I think it's mostly theater. In February of 2019 when Dario was still at OpenAI, they did the same thing with GPT-2. That was a 1.5 billion parameter model, which sounds like a total fart in the wind in 2026. But at that time, this model was supposed to be the end of days. And at the end of it, it was a huge nothingburger.

If you actually think that Mythos is capable of doing what it says it can do, two things are true. One is, a very sophisticated hacker can probably do those things right now with Opus. And two, if these exploits are this easy to find, whether you use Opus or whether you use Mythos, the reality is you'd have to shut down the internet for about five years to patch them all.

So when you see a large multi-trillion dollar GSIB bank, it's a bit of theater. Why? What do you think they can actually accomplish in two months? Do you actually think that if there's these vulnerabilities, it's all going to get fixed? Let's give them six months, let's give them nine months.

So I do think that Sacks is right, that they have figured out a very clever go-to-market muscle here that activates hyper attention and hyper usage, and so I give them tremendous credit. But we've seen it before, we saw it when these folks were the principal architects at OpenAI, and we're now seeing the same playbook here. The reality is that capitalism moves forward, the funding needs move forward, and the need for these guys to build adoption moves forward. And that's going to supersede what this is."

Avi Shefi @shefiavi
@chamath It's also noticing when you pressure yourself to ramp up your lifestyle because of perception. The cost is real.
Chamath Palihapitiya
When your working life rewards you, it's easy to ratchet up the complexity: homes, cars, travel, possessions, etc. I have found that all that complexity comes at the expense of your most fleeting asset: your time. Instead of building things, all of a sudden you're dealing with minutiae and logistics. Instead of talking mostly to engineers, you're talking mostly to non-engineers. The building stops…the business of managing self-inflicted complexity begins.

It's worth noting that the best players in the game (Buffett, Elon) have kept their life extremely basic, almost monastic/nomadic, as success ratcheted them ever higher. I think it's the biggest secret hiding in plain sight: When the world upgrades your status, downgrade your complexity.
Avi Shefi @shefiavi
@yacineMTB And still it's valuable. Wait until agents get laws that penalize them.
kache @yacineMTB
he's right
[image]
Craig Weiss @craigzLiszt
you’re not a serious developer unless you’re romantically involved with your coding agent
Avi Shefi @shefiavi
@minchoi Probably automated policy, then someone noticed and escalated
Peter Steinberger 🦞
Yeah folks, it's gonna be harder in the future to ensure OpenClaw still works with Anthropic models.
[image]
BridgeMind @bridgemindai
Spent the last few days vibe coding on my NVIDIA DGX Spark. Here's what I learned.

Qwen 3.5 122B took one minute and nine seconds to respond to "Hi, how are you doing?". Unusable for vibe coding. Gemma 4 was fast but built a dot instead of a first-person shooter game. GPT-OSS 120B was the sweet spot: fast, capable, and actually produced working HTML.

Open source models running locally are not replacing Claude Opus 4.6 or Codex with GPT 5.4. Not even close. But they're getting better every month.

The new DGX Spark Bench is live on bridgebench.ai. Real-world benchmarks for local models on local hardware. This is just the start. Full video below.
BridgeMind@bridgemindai

Claude Code rate limited me so hard I bought a $5,000 NVIDIA DGX Spark. Arriving tomorrow. A personal AI supercomputer. Anthropic cut off OpenClaw users. Slashed Claude Opus 4.6 rate limits. Told $200/month Max plan customers to use less. Then gave us a credit as an apology. This is what happens when AI companies have too much power over your workflow. One update and your entire stack breaks. Local models are the only infrastructure no one can throttle. No rate limits. No 529 errors. No surprise policy changes. Tomorrow I'm testing the DGX Spark live on stream. Running local models through real vibe coding workflows. The goal is simple. Never depend on a single provider again.

Avi Shefi @shefiavi
LLM-run agents are expected to be safe, traceable, secure, and transparent. Agentic tools and functions must exhibit these same traits. Yet we have never required this of programming languages designed for humans.

Formal verification is not yet mainstream, and higher-order functional programming is statistically underrepresented in LLM corpora. While virtual filesystems seem like a middle ground, agent tools still leak implementation details and guarantees. The harness should be extensible by agents, yet we keep it behind relatively closed doors.

Perhaps programming languages for humans are not a good fit for agents. Maybe it's time for agents to design their own language. Perhaps the guarantees we expect are the spec and the plan, while the implementation is Dorodango.