Fred Brunel
@fbrunel
28K posts
CTO & Co-Founder @ https://t.co/B9hLbvlgCl
Joined March 2007
1.3K Following · 1.8K Followers
Fred Brunel @fbrunel ·
@GergelyOrosz @ThePrimeagen Steve Jobs was known for painting an idealized vision of his products, but it was always grounded in reality, and the products did work. What is this tech world we live in?
Fred Brunel @fbrunel ·
@GergelyOrosz @ThePrimeagen There was always a path to profitability, and everybody loved the service. These new startups are faking products and revenues entirely; nothing works for real. They took "fake it til you make it" way too far.
ThePrimeagen @ThePrimeagen ·
Guys, I honestly do not like clowning on Gary. I don't find being the butt of a joke funny, so I imagine he does not either. But this is what worries me about where we are going. We are actively telling an entire generation that the tech is there when it's not. A couple of silly mistakes on a website isn't the end of the world, but people's data and breaches are serious. We are entering a very VERY hackable world, and I do not like it one bit.
gregorein @Gregorein

so... I audited Garry's website after he bragged about 37K LOC/day and a 72-day shipping streak. here's what 78,400 lines of AI slop code actually looks like in production. a single homepage load of garryslist.org downloads 6.42 MB across 169 requests. for a newsletter-blog-thingy. 1/9🧵

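As a sanity check on the numbers quoted in the audit thread, here is a quick back-of-envelope sketch. All figures are taken directly from the tweets above; nothing was measured independently:

```python
# Figures quoted in the thread above; this just does the arithmetic.
total_mb = 6.42        # reported homepage payload for garryslist.org
num_requests = 169     # reported request count on a single page load
avg_kb = total_mb * 1024 / num_requests
print(f"average payload per request: {avg_kb:.1f} KB")

total_loc = 78_400     # lines of code the audit found in production
streak_days = 72       # the claimed shipping streak
loc_per_day = total_loc / streak_days
print(f"average output: {loc_per_day:.0f} LOC/day")
```

For what it's worth, ~1,089 LOC/day averaged over the streak is a long way from the claimed 37K LOC/day.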
Gergely Orosz @GergelyOrosz ·
Looking back, before all this AI stuff, just look at e.g. Uber (where I used to work). Uber focused on *growth* above everything else: it kept losing money on unit economics (on purpose!). The media said they would never make money... but it turned out to be the correct play for Uber's investors.

I would look at Gary's takes through this lens: he is there to help *VC-funded* startups succeed. Slow and steady growth / building the right thing with high quality first can work well outside of VC, but it's almost always a sure failure with VC, because the next VC-funded company is on iteration #7, with most of your target customer market already being their customers (usually subsidized!) by the time you launch your technically superior product...
Fred Brunel @fbrunel ·
@Jonathan_Blow You know, the previous CEO of YC was Altman, who got fired for deceptive behavior and conflicts of interest. Says it all. YC is a shadow of itself.
Jonathan Blow @Jonathan_Blow ·
These statements inherently damage YC’s reputation because either (a) Garry has incredibly poor judgement, which affects the companies chosen and mentored; (b) He is just BSing and knows it, which means YC is not trustable; (c) He is correct, then WHAT ARE ALL THE COMPANIES FOR? x.com/garrytan/statu…
Fred Brunel retweeted
here’s something @ive_arc ·
This edit has no business being this high quality
Sam Altman @sama ·
The first steel beams went up this week at our Michigan Stargate site with Oracle and Related Digital.
Fred Brunel retweeted
Gergely Orosz @GergelyOrosz ·
If you use GitHub (especially if you pay for it!!) consider doing this *immediately*: Settings -> Privacy -> Disallow GitHub to train their models on your code. GitHub opted *everyone* into training, no matter if you pay for the service (like I do). WTH github.com/settings/copil…
[image]
Elon Musk @elonmusk ·
Generated with @Grok Imagine 🚬
Elon Musk @elonmusk ·
Minute-long story made w Grok Imagine
Alan Eyre @AlanEyre1 ·
Spot-on, from @anneapplebaum. Money quote:

"Donald Trump does not think strategically. Nor does he think historically, geographically, or even rationally. He does not connect actions he takes on one day to events that occur weeks later. He does not think about how his behavior in one place will change the behavior of other people in other places."

"He does not consider the wider implications of his decisions. He does not take responsibility when these decisions go wrong. Instead, he acts on whim and impulse, and when he changes his mind—when he feels new whims and new impulses—he simply lies about whatever he said or did before."

theatlantic.com/ideas/2026/03/…
Jonathan Blow @Jonathan_Blow ·
Theory: We don't let LLMs control robots and operate freely in the physical world (yet?) because they'd fall all the time, break everything, and cause massive damage. But in software the falling and the massive damage are invisible, so it's fine. x.com/sama/status/20…
Sam Altman @sama

I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took. Thank you for getting us to this point.

Elon Musk @elonmusk ·
Btw, the proceeds of any legal victory in the OpenAI case will be donated to charity. I will in no way enrich myself.
Fred Brunel @fbrunel ·
@atmoio Having a court of yes-men does the same; you don't need a machine.
Mo Bitar @atmoio ·
AI is making CEOs delusional
NIK @ns123abc ·
Remember OpenAI Sora? Disappeared like it never existed lol
[images]
Fred Brunel retweeted
Rohan Paul @rohanpaul_ai ·
Yann LeCun (@ylecun) explains why LLMs are so limited in terms of real-world intelligence.

He says the biggest LLM is trained on about 30 trillion words, which is roughly 10^14 bytes of text. That sounds huge, but a 4-year-old who has been awake about 16,000 hours has also taken in about 10^14 bytes through the eyes alone. So a small child has already seen as much raw data as the largest LLM has read.

But the child's data is visual, continuous, noisy, and tied to actions: gravity, objects falling, hands grabbing, people moving, cause and effect. From this, the child builds an internal "world model" and intuitive physics, and can learn new tasks, like loading a dishwasher, from a handful of demonstrations.

LLMs only see disconnected text and are trained just to predict the next token. So they get very good at symbol patterns, exams, and code, but they lack grounded physical understanding, real common sense, and efficient learning from a few messy real-world experiences.

From the 'Pioneer Works' YT channel (link in comment)
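The 10^14 comparison in the clip is easy to reproduce. A minimal sketch of the arithmetic, assuming roughly 4 bytes per word of encoded text and a ~2 MB/s ballpark for optic-nerve bandwidth (both are assumptions used for illustration, not figures stated in the tweet):

```python
# LLM side: 30 trillion words of training text (figure from the tweet).
words = 30e12
bytes_per_word = 4                      # assumed rough average for encoded text
llm_bytes = words * bytes_per_word      # ~1.2e14 bytes

# Child side: 16,000 waking hours of visual input (figure from the tweet).
awake_seconds = 16_000 * 3600
optic_bytes_per_sec = 2e6               # assumed ~2 MB/s optic-nerve bandwidth
child_bytes = awake_seconds * optic_bytes_per_sec   # ~1.15e14 bytes

print(f"LLM:   {llm_bytes:.2e} bytes")
print(f"child: {child_bytes:.2e} bytes")
```

Under these assumptions both sides land around 10^14 bytes, which is the point of the comparison: the child matches the corpus in raw volume, but with grounded, interactive data.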
Fred Brunel retweeted
HSVSphere @HSVSphere ·
"Shipping a button" (vid by @KaiLentit). Might be the funniest thing I've seen in years