Daniel Ha

8.8K posts

@danielha

Internet semi-pro. Investor @ https://t.co/sw5DXLHJX3, formerly: cofounder/CEO @ Disqus

San Francisco, CA · Joined March 2007
920 Following · 4.9K Followers
Daniel Ha@danielha·
@diasluizzz you just made up something and got angry at it. keep it up Luiz
0 replies · 0 reposts · 1 like · 18 views
Luiz@diasluizzz·
imagine, for example, spending millions to scan Harrison Ford for the Indiana Jones game, and then Nvidia comes along and does this HAHAHA. and worse, Bethesda is supporting this crap
[two images attached]
207 replies · 832 reposts · 17.2K likes · 583.4K views
Daniel Ha@danielha·
@clairevo "i don't know what any of that is but sounds like sf dork stuff" -- single dudes in their 20s sitting in their studio apt ordering doordash while waiting for claude code to finish already
0 replies · 0 reposts · 5 likes · 412 views
claire vo 🖤@clairevo·
Let me list the work I do in my personal life:
- order groceries
- keep track of birthdays + gifts
- plan parties
- plan trips
- keep a house in standing condition
- keep a car in standing condition
- do my taxes
- pay my bills
- invest my money
- take, organize, and share family photos
- help my kids with homework
- enrich my kids' academics
- register my kids for activities
- attend and manage several kids' sports teams
- keep my body healthy
- keep my kids healthy
- keep an eye on my parents' health
- cook meals
- clean + organize the house
- stay intellectually engaged / read
- exercise
- design, furnish, and organize our home
- keep plants alive
- stay engaged with the neighborhood
- stay engaged with politics
- keep up to date on the news
- repair broken things around the house
- chauffeur my kids and their friends
- price compare and purchase utilities
- make holiday magic
- order school lunches
- pick and manage charitable donations
- endless returns
yoni rechtman@yrechtman

Growing suspicion that there are vanishingly few use cases for consumer agents. People don’t do work in their personal lives. The only people who do are sf dorks using spreadsheets to plan trips to tahoe

72 replies · 44 reposts · 1.3K likes · 187.5K views
Paul Graham@paulg·
It's Y Combinator's 21st birthday today.
[image attached]
219 replies · 119 reposts · 3.2K likes · 217.9K views
Zvi Mowshowitz@TheZvi·
GPT-5.4 reaction thread. I know we've really got a lot going on right now, but how's the latest frontier model?
79 replies · 2 reposts · 198 likes · 40.5K views
Daniel Ha@danielha·
@heynavtoor How do you possibly take seriously any study that abstracts this as AI or ChatGPT
Every model from every lab behaves differently
0 replies · 0 reposts · 0 likes · 15 views
Nav Toor@heynavtoor·
🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would.

That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.
[image attached]
1.5K replies · 16.6K reposts · 48.9K likes · 9.7M views
Daniel Ha@danielha·
@ericbahn “not like I’m Epstein or anything”
The new Godwin’s law
0 replies · 0 reposts · 0 likes · 37 views
Daniel Ha@danielha·
@yrechtman You have a pretty unimaginative view of what consumer means!
0 replies · 0 reposts · 4 likes · 81 views
yoni rechtman@yrechtman·
Growing suspicion that there are vanishingly few use cases for consumer agents. People don’t do work in their personal lives. The only people who do are sf dorks using spreadsheets to plan trips to tahoe
235 replies · 163 reposts · 5.8K likes · 663.9K views
Daniel Ha@danielha·
@aakashgupta Here's what people don't understand: <what everyone understands>
I don't think it's very complicated. Is it hard to get why some people may not lionize (even successful) attention seekers? I don't celebrate the Kardashians either
2 replies · 1 repost · 61 likes · 6K views
Aakash Gupta@aakashgupta·
I had Roy on my podcast. Here's what people don't understand about this guy.

He's 21. He got kicked out of Harvard, got suspended from Columbia, built a company to $120M valuation, raised $15M from a16z in two days, and is now generating 1.9M views on his "apology" video where he's playing Subway Surfers in sunglasses.

He told TechCrunch an inflated ARR number. The real numbers from his Stripe were ~$5.2M combined. He admitted it. And his response to the media firestorm is this video.

When he spoke at TechCrunch Disrupt, he told the audience straight up: "Reputation is sort of a thing of the past." He said you have to be extreme, authentic, and personal. Everyone called it reckless. He's proving the thesis in real time.

TechCrunch published a gotcha article. Roy turned it into his biggest content week of the year. The article wanted contrition. He gave them a Subway Surfers clip that's outperforming their article in reach by 10x.

The media keeps trying to apply Fortune 500 accountability frameworks to a 21-year-old who went from college dropout to 8-figure net worth in under a year. He lied about a number. He owned it. And then he reminded everyone that in 2026, attention is the only currency that compounds.

Every founder writing a LinkedIn apology draft right now should study this clip.

Roy on my pod: youtu.be/CoRJEXzMiMA?si…
[YouTube video]
Roy@im_roy_lee

BREAKING: Cluely CEO officially responds to TechCrunch

51 replies · 21 reposts · 712 likes · 340.7K views
Daniel Ha@danielha·
@edzitron @TurnerNovak hey champ, not every company calculates revenue like a saas company where you take a month's subscriptions x 12
anthropic makes money through their API. it's more like cloud/usage-based infra like aws
a usage window <1 month to calculate run rate is not unheard of
0 replies · 0 reposts · 1 like · 64 views
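The two annualization conventions being argued about here can be sketched numerically. This is a minimal illustration of month-x-12 versus week-x-52 run rates; all dollar figures are invented for the example and are not Anthropic's (or anyone's) actual numbers.

```python
# Two common "annualized run rate" conventions, illustrated.
# All figures below are hypothetical examples, not real company data.

def run_rate_from_month(month_revenue: float) -> float:
    """SaaS-style run rate: latest month's revenue x 12."""
    return month_revenue * 12

def run_rate_from_week(week_revenue: float) -> float:
    """Shorter-window run rate sometimes used for usage-based billing:
    latest week's revenue x 52."""
    return week_revenue * 52

# For a fast-growing usage business the two diverge: say $10M billed over
# a month, but $3.5M of that in the final week.
monthly = run_rate_from_month(10_000_000)  # 120,000,000
weekly = run_rate_from_week(3_500_000)     # 182,000,000
print(f"month x 12: ${monthly:,.0f}  vs  week x 52: ${weekly:,.0f}")
```

The gap between the two numbers is exactly the point of contention: the shorter the window, the more a single strong week inflates the annualized figure.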
Ed Zitron@edzitron·
@TurnerNovak I think annualizing from a week is deeply deceptive and I have never ever heard of someone doing it
2 replies · 0 reposts · 14 likes · 1.7K views
Ed Zitron@edzitron·
$5 billion TO DATE? Over $10 billion in inference and training costs? This heavily suggests that almost every story about Anthropic’s revenue and costs they’ve leaked is wrong and that it’s been misleading the media.
Paresh Dave@peard33

Court filings: Anthropic has spent $10 bln on training and inference. It's generated $5 bln in all-time revenue. Now billions of dollars in revenue for just this year is at stake, and customers/prospects are already asking for new terms or backing out. wired.com/story/anthropi…

28 replies · 70 reposts · 1.1K likes · 242.2K views
Roy@im_roy_lee·
BREAKING: Cluely CEO officially responds to TechCrunch
1.8K replies · 575 reposts · 14.6K likes · 3.8M views
Sukh Sroay@sukh_saroy·
🚨BREAKING: If you've used ChatGPT for writing or brainstorming in the last 6 months, your creative ability may already be permanently damaged. A controlled experiment just proved the effect doesn't reverse when you stop using it.

3,302 creative ideas. 61 people. 30 days of tracking.

Researchers split students into two groups. Half used ChatGPT for creative tasks. Half worked alone. For five days, the ChatGPT group outperformed on every metric. Higher scores. More ideas. Better output. AI was making them better.

Then day 7. ChatGPT removed. Every creativity gain vanished overnight. Crashed to baseline. Zero lasting improvement.

But that's not the bad part. ChatGPT users' ideas became increasingly identical to each other over time. Same content. Same structure. Same phrasing. The researchers called it homogenization. Everyone using ChatGPT started producing the same ideas wearing different clothes.

When ChatGPT was removed, the creativity boost disappeared, but the homogenization stayed. 30 days later, same result. Their creative range had been permanently compressed. Five days of use. Permanent damage 30 days later.

A separate trial confirmed it. 120 students. 45-day surprise test. ChatGPT users scored 57.5%. Traditional learners scored 68.5%.

AI reduces cognitive effort. Less effort means weaker encoding. Weaker encoding means less creative raw material. You're not renting a productivity boost. You're financing it with your originality. The interest rate is permanent.
[image attached]
897 replies · 4K reposts · 15.3K likes · 3.3M views
Daniel Ha@danielha·
@xdotli Did you schedule this tweet from last week
0 replies · 0 reposts · 1 like · 42 views
Xiangyi Li@xdotli·
openai is now
* way behind claude in terms of single model capability (you can use opus 4.6 for everything, not xxx-codex)
* behind claude in terms of coding (am i the only one who is often confused by the output of codex?)
* behind google in distribution (gemini is my driver)
78 replies · 3 reposts · 125 likes · 36.7K views
Cengiz@cengotengo·
@NotTuxedoSam What are your expectations from me? Delete my tweet?
2 replies · 0 reposts · 0 likes · 222 views
Daniel Ha@danielha·
@synthwavedd Crazy how it’s always a 23-year-old indie hacker from a nondescript background who has all of the inside scoops
1 reply · 0 reposts · 2 likes · 534 views
leo 🐾@synthwavedd·
after seeing some pretty mind-blowing things, i am all in on anthropic 🫡
26 replies · 7 reposts · 334 likes · 35.9K views
Daniel Ha reposted
Gadi Borovich@GadiBorovich·
“My dream is to take Silicon Valley to Buenos Aires. I got the chance to meet incredible young entrepreneurs in Argentina like Ato and Melian but the community is still too small. I want to build a huge company and make sure a lot of my early people come from Buenos Aires and give them tons of equity” @montonenico
26 replies · 20 reposts · 214 likes · 55K views
Daniel Ha@danielha·
@thsottiaux Can you talk to someone about getting Codex's ⌃M dictation shortcut over on ChatGPT pls
0 replies · 0 reposts · 0 likes · 165 views
Daniel Ha@danielha·
@JasonBotterill but I suppose a model can fake depth by sounding the way a sophomore philosophy major sounds. your second screenshot actually answers your question, even if it didn't sound like dorm room convos at 2am
0 replies · 0 reposts · 3 likes · 728 views
JB@JasonBotterill·
You genuinely can't fake *big model smell* in smaller models like 5.3. The depth reveals itself no matter how RL-maxxed it is. With any prompt, even asking how SpongeBob is a communist, you see the difference in depth
[two images attached]
92 replies · 30 reposts · 1.5K likes · 236.7K views