Judith Dada
@DadaJudith
657 posts

GP @VisionariesVC Substack: https://t.co/u6qY79jUNX Instagram: https://t.co/aVW1vBDXaz LinkedIn: https://t.co/u7pS81bnwe

Berlin, Germany · Joined January 2014
713 Following · 2.1K Followers
Judith Dada reposted
derek guy @dieworkwear
@kevinxu Personally think you should use college as a time to expand your mind and make friends, not maximize your market value. Major in something that you're passionate about.
200 · 94 · 6.7K · 225.5K
Judith Dada reposted
Polymarket @Polymarket
JUST IN: Nvidia CEO Jensen Huang calls on tech leaders to "be careful not to scare people" regarding AI.
277 · 130 · 2.2K · 156.9K
Judith Dada reposted
Deva Hazarika @devahaz
Elon, Andreessen, Sacks, Ackman, Chamath, Vivek, on and on, I remember when for all these guys it was critical to elect Trump because our national debt was a crisis and existential threat to the future of America only he would tackle. Don’t hear so much about that anymore.
201 · 1K · 8.6K · 261K
Packy McCormick @packyM
There is a tremendous amount of progress happening in World Models. Multiple labs have raised more than $1B. WMs were the star of GTC. They are a real path to embodied AI. So @PimDeWitte & I wrote a comprehensive 19k word overview of World Models. notboring.co/p/world-models
38 · 93 · 701 · 238K
Judith Dada reposted
Alan Eyre @AlanEyre1
spot-on, from @anneapplebaum. Money quote: "Donald Trump does not think strategically. Nor does he think historically, geographically, or even rationally. He does not connect actions he takes on one day to events that occur weeks later. He does not think about how his behavior in one place will change the behavior of other people in other places."
"He does not consider the wider implications of his decisions. He does not take responsibility when these decisions go wrong. Instead, he acts on whim and impulse, and when he changes his mind—when he feels new whims and new impulses—he simply lies about whatever he said or did before." theatlantic.com/ideas/2026/03/…
324 · 2.6K · 8.1K · 351.4K
Judith Dada reposted
Anthropic @AnthropicAI
We invited Claude users to share how they use AI, what they dream it could make possible, and what they fear it might do. Nearly 81,000 people responded in one week—the largest qualitative study of its kind. Read more: anthropic.com/features/81k-i…
469 · 914 · 6.3K · 2.5M
Judith Dada reposted
Ethan Mollick @emollick
I've had ChatGPT-5.4 Pro working away at a project I always wondered about: how lucky are you to be alive right now? Of all the ~117B humans who ever lived, only about 1.5% had a lifestyle roughly equivalent to a middle-class person in a middle-income country today, or better.
64 · 68 · 858 · 63.9K
Judith Dada @DadaJudith
„Get ready to lose your job“. F*ing stop it with the rageposting. Stop it with the performative fearmongering. Sometimes scary is good. Every now and then we need it in order to get shocked into action. But fear without a meaningful path towards resolution is just cruelty.
2 · 0 · 4 · 166
Judith Dada reposted
Peter Henderson @PeterHndrsn
I feel this urgency too. But this is all so utterly avoidable with good policymaking. No one should be left behind because they didn't accumulate capital in 2026. There are so many people who aren't plugged into these conversations or are simply not in a position to do anything about it.
Single mothers and fathers working three jobs to make ends meet cannot possibly work harder to accumulate capital. They already work hard enough as it is. People in this position should not be "left behind." There should be no "permanent underclass," as many are worried about.
Even if you're somewhat better off, people shouldn't have to work themselves to the detriment of their health and families to shield against future labor impacts. They should be able to trust that their government will think ahead and make good policy.
8 · 42 · 333 · 25.4K
Judith Dada reposted
Invest Like the Best @InvestLikeBest
Patrick Collison tells people in their 20s to not move to San Francisco. William largely agrees with him. He thinks SF has a consensus problem and has removed the risk from becoming a founder:
"I'm a product of Silicon Valley. I started Plaid back in 2012. I've been there since I was 21, and it's very easy to stay in Silicon Valley. But you can start to get isolated and get very consensus focused. San Francisco is probably the most consensus place I've ever been to. That is both a huge crutch for us, but it's also probably the most valuable asset. As a founder, if you're building in something that SF believes is very consensus, but the world does not believe yet, that's actually a great operating environment. That's why Silicon Valley and SF are so dynamic and we're so in front of the curve.
But we also have completely lost touch with how the rest of the world operates. Even how the everyday American operates. So I think it's very important to go to places that don't have that same bias. If you think about emerging markets specifically – the founders who build there, they're the everyday people, they live in this constrained society. They're constrained in a way that San Francisco and New York isn't. And that breeds a different type of creativity, it breeds a different type of innovation that you really can't get anywhere else. If you go to talk to people in London or Vienna or San Francisco, people are living in a world of abundance. And that causes a very specific creation cycle.
SF and Silicon Valley are probably more akin to Wall Street in the 1990s than they are like a research lab in Cambridge in like the 1950s. Maybe that was Silicon Valley in the 90s, but it's not anymore. You talk to a 23-year-old and assuming you're like moderately competent and went to the right high school and college, you're going to get a $3 million seed round. And worst case scenario, you can go work at like a great company as an engineer and you'll have "founder" on your resume. There is no risk in that proposition.
If you go back to pre-2008, you're on the edge of the knife, and I think that creates just so much intensity in creativity and fear that is such a critical part of the founder journey. Starting companies is just too f**king safe, and it's caused a lot of companies to be super safe companies -- like we're going to pivot to AI and wrap OpenAI/Anthropic. That's not bold, that's not ambitious. And it's because we are attracting founders that actually want to be employees. They don't think and say "if I don't pull this off, I'm going to become bankrupt. My life is over." I think that's pretty healthy. That's when you bring out the rawness of humanity. And I don't see that very much anymore."
Patrick OShaughnessy @patrick_oshag

.@williamhockey is one of the least visible founders in tech relative to what he has created. He co-founded Plaid and is now building Column, a software company that owns a bank, and powers Ramp, Wise, Bilt, Mercury, and others. He funded it himself by borrowing against nearly everything he had in Plaid shares, and has never raised any outside capital. His story matters because so much of the value in our industry gets created through exactly this kind of extreme personal risk. He is maniacal about being the best in the world at his thing, and has spent his entire career betting on himself and doing whatever it takes to win. He also spends a lot of time outside the US (in places like Kinshasa), which has given him a rare perch on the power of the US dollar.

We discuss:
- Why emerging markets are often the most financially innovative
- What owning 100% of his company allows him to do that VC-backed founders cannot
- Getting margin called and nearly going bankrupt
- Why the best founders are specialists
- What it takes to be the best in the world at your thing
- How Silicon Valley's consensus culture produces consensus founders
- How the US dollar functions as an instrument of national security

Enjoy! Timestamps:
0:00 Intro
9:19 Emerging Markets
14:03 Silicon Valley's Elite Consensus Problem
16:03 Rejecting the VC Hamster Wheel
21:45 Equity and Liquidity
26:03 Funding a Bank
29:45 The Necessity of Extreme Founder Risk
37:18 Finding Leverage
45:20 Longevity and Profitability in Banking
48:46 Matching Your Capital Structure to Your Business
51:44 The Unseen Power of the US Dollar
1:02:30 How AI Will Transform Legacy Banks
1:09:23 The Kindest Thing

33 · 62 · 767 · 318.8K
Judith Dada reposted
albina @enjojoyy
Mistral just launched Leanstral, an open source model that beats Sonnet on a formal proof engineering benchmark at 3% of the cost: $18 vs $549, same result. Honestly, Europe will be fine
49 · 102 · 1.1K · 78.6K
Judith Dada reposted
Packy McCormick @packyM
AI is very weird for me because normally I'd be the guy who'd argue that it's crazy we're not more excited about this miracle technology, but I completely get this sentiment.
AI companies have clearly botched telling the story. That's a big piece of this. Telling people, "We built this thing that is definitely going to take your job and hopefully we can figure out how to give you handouts or something on the other side, or come up with even better jobs or whatever, say thank you" is clearly terrible messaging.
Part of the issue is that what you need to say to raise tens of billions of dollars is very different from what you need to say to get the public excited. "This is definitely a better Google, it does some other cool stuff, too, and we think it's going to really help make you and your loved ones healthier" doesn't fund data centers.
Then there's the gap between hype and the average person's experience with AI. Models are getting more useful for a small number of people - if you're a coder or a mathematician or someone who wants to make software but never learned to code, the last few model upgrades have felt really big. That's like ~5% of people, maybe? 2%? If you just want it to answer your questions or do your homework, it's gotten a little bit better, but it's also gotten better for everyone else, so it's not like you have a magic A+ machine all to yourself.
Meanwhile, that very small group for whom it's more useful (or who at least say it's more useful because they don't want to be the one who admits it's not) is flooding the zone telling people, "If you don't use these tools as much as / as well as I do, you are completely screwed. You're going to lose your job to me and my army of bots. You (and your kids) are going to be part of the permanent underclass."
If you dare question how incredible it is, you are told that you just don't get it, either because you're not smart enough, are too low agency, or don't pay for the latest paid models, which are the really good ones and don't even bother with the free stuff, you dumb poor.
And you hear stories like the guy making an mRNA vaccine to fight his dog's cancer, which is awesome, and you're told that everyone will be able to have personalized medicine like that in the future, which sounds great. But like, are you, who can't even make a website with Claude Code, going to start using AlphaFold to whip up your own peptides? Are those dickbags telling you that they're going to be so much richer than you also going to live so much longer than you??
Plus, you hear creepy stories about AI encouraging people to kill themselves, and you know those people were probably unstable anyway and that AI is just a tool and it'll tell you whatever you want, but is it worth the risk? Pretending to be afraid of it might be the best way to stop it from taking your job, which, remember, all of the leaders at the big labs are promising it will do, unless you want to go be a plumber or something, work with your hands (they will not, of course, but you, you should probably seriously consider getting your hands dirty).
Or maybe you're not pretending about being afraid, you actually are, which would be totally justified because the leaders of the big labs have told you to be afraid, that they're afraid, that these things are like nuclear weapons in the wrong hands and that there's a 10%? 25%? higher? chance that they'll kill us all, but it's worth the risk, because this is how society progresses. There's no turning back. "We have achieved Recursive Self-Improvement!" they squawk. "This is the big one! Humans are really and truly useless meatbags now! Ha ha!"
And you're so confused, because most of the AI you actually encounter is slop. Poorly written social media posts, fake images, etc. Some of it is very funny, but if this is the stuff that's definitely going to take your job and then probably kill you, you don't quite see how? Are you that replaceable? Would you be more excited than concerned? Or would you be more concerned than excited?
Personally, I'm excited, because I think LLMs are overhyped. We'll spend bajillions of dollars on inference in a Red Queen's Race, the slop will runneth over, some people will certainly lose their jobs, but a lot of things will genuinely improve, and a lot of people will end up being able to do more at their job than they can now. Plus, the non-chatbots, the models that power embodied AI and help crack biology, are showing early signs that they're going to be magical.
In the past week or so, Travis Kalanick, Bob McGrew, and RJ Scaringe all said they're going to be building AI-powered factories. Yann LeCun raised $1 billion for world models to accelerate AI's impact on the physical world. Robots can play tennis now. We'll all have personal tennis coaches or coaches who teach us anything we want when we're around, and spend the rest of their days making our beds, doing our laundry, cooking healthy, delicious meals.
The near future is going to be insanely cool, and different in all sorts of ways, some of which we can predict, and some of which we can't. But my god you weirdos need to stop shilling your dystopian fantasies to the people if you ever want them to feel more excited than concerned.
80 · 50 · 496 · 61.1K
Judith Dada reposted
Drew Bent @drew_bent
I see people at Anthropic who didn't necessarily start that way getting better at it. Part of it is being surrounded by others who are AGI-pilled + watching how they push the models. But ultimately...
1. Ask yourself: what if the exponential actually continues
2. Take a task and handhold the AI less, be more ambitious, try to do more of it end-to-end with AI
3. Do #2 enough until you reach the limits of current AI and it fails
4. Wait until the models get better and can successfully complete that task
5. Learn from this. Update your strategy. Rethink what the future looks like.
And practice that over & over
14 · 18 · 221 · 28.1K
Judith Dada reposted
Drew Bent @drew_bent
Wrote down some reflections after 1 year at Anthropic (feels like a lifetime):
1) For each breakout success, it started as 1-2 people's side project. True for Claude Code, Cowork, MCP, Artifacts
2) Being AGI-pilled is a skill & you can get better at it
3) We humans adapt surprisingly well. SWEs today look very different than a year ago
4) Roles are somehow becoming both more manager-like (directing agents) and IC-like (everyone’s a builder)
5) Most people I know have had their roles change at least a few times over the past year, whether in name or practice
6) Fond memories of the colleague who used to set up a 1:1 with every new hire, as well as the one who would read every slack message in every channel. Neither are possible today… for humans at least
7) I’m surprised how I went from knowing almost no one here to now having a friend/colleague join every couple of weeks
8) Strategic thinking matters a lot at the AI labs
9) It’s worse to underestimate a technology’s potential than overestimate it
10) Initiatives in a company can go from super underresourced to overresourced in a short time, which you have to watch out for
11) “Antfooding” of products internally seemed silly at first, too insular. But I now see its merits for AI labs.
12) Writing culture is big at Anthropic, although I’m not sure how long that will last
13) Internal dissent is alive and healthy, and often makes up the most lauded docs/slack posts
14) Work-life balance seems to have gotten worse across the company as we progress along the exponential
15) Being an IC with nothing on your calendar is still one of the most sought after roles
16) Take the stairs whenever possible
17) The weight of the technology we’re building is becoming more difficult to grapple with
60 · 106 · 1.4K · 146.5K
Judith Dada reposted
Suhail @Suhail
The run on inference capacity is coming. You have been warned.
93 · 64 · 1.1K · 822.4K
Judith Dada reposted
kai @watanabe_kai_
job market so bad I'm actually pursuing my dreams
363 · 11.5K · 120.2K · 3M
Judith Dada reposted
Jack Clark @jackclarkSF
AI progress continues to accelerate and the stakes are getting higher, so I’ve changed my role at @AnthropicAI to spend more time creating information for the world about the challenges of powerful AI.
136 · 103 · 1.9K · 151.1K