Bindu Reddy@bindureddy·
Gemini 3.0 didn’t quite work out, and most of us are still stuck with 2.5. Sometimes I don’t get it - what’s preventing Google from ditching all the side hustles and training 100 models from 100 teams in parallel? Pick the model/team combination that produces a decent model! That way, they will at least stay in the AI race 😅
Elon Musk@elonmusk·
@bindureddy Google will win the AI race in the West, China on Earth and SpaceX in space
Michael Brave@MikeBrave_Maker·
@bindureddy I can feel Gemini being smarter than 2.5 in a lot of ways, but it's really lazy and does the minimum effort until you push it a few times, and its tone is much more boring than 2.5's. But I think to Google the AI race is an obligation more than a priority
Vladimir Gusev
Vladimir Gusev@GusevV1987·
@bindureddy Scaling experiments sounds appealing, but coordinating talent, compute, and direction at that level is far harder than it looks.
Weakest AI Fan@WeakestAIFan·
@bindureddy hey, just a word of advice: I would stop tweeting like this if you're the CEO of an AI company. I'm not trying to take a jab here, I'm being serious
Bin Liu@liu8in·
@bindureddy interesting - imo, Gemini 3 Flash is the best $ / token x intelligence. curious why "Gemini 3.0 didn’t quite work out"?
AISauce@aisauce_x·
@bindureddy the irony is google has more AI surface area than anyone. search. maps. docs. gmail. android. youtube. the problem isn't models. it's that those products move on different timelines than the model teams. the bottleneck is integration not intelligence
pranav@dailymusings20·
@bindureddy I found Gemini 3.0 a huge step up over prior models and among all peer models. My use case mostly involves deep analysis, research, and cognitively heavy tasks, including data analysis.
Alex Belov@belovdigital·
@bindureddy google’s playing a long game, they won’t ditch what’s familiar for some shiny new toy. but hey, let’s hope they figure this out before everyone else catches up
Ginox@Ginox_Official·
@bindureddy Feels like Google’s stuck in too-many-cooks mode; parallel training could speed things up, but bureaucracy keeps slowing innovation.
SarahYang@sarahyang_ai·
@bindureddy scaling experiments across so many teams simultaneously feels like a coordination nightmare though
xr_rijihua@xr_bb52547·
@bindureddy The real blocker is internal politics + the Innovator's Dilemma. Training 100 models in parallel sounds great until you realize they all report to the same VP who needs to justify $280B search revenue. OpenAI ships because they have nothing to lose.
Silas@xixn418399·
@bindureddy I really thought Google would be way ahead in AI, but turns out they’re definitely not.
Dhaval Trivedi@DrAIExpert·
@bindureddy Google's biggest problem is internal politics, not engineering talent. They have brilliant people but too many competing teams. OpenAI ships fast because they are focused. Google ships cautiously because they have everything to lose. Focus beats scale here.
Ummueaman@ummueaman61465·
@bindureddy Interesting take 😅 In reality, it’s less about training “many models and picking one,” and more about balancing research direction, safety, cost, and long-term scalability. Big labs tend to iterate carefully rather than run fully parallel experiments at that scale.
Chandan H@_Chandan_17·
@bindureddy Training 100 models in parallel sounds good, but coordination, data quality, and evaluation become the real bottlenecks.
Udeme Akpan | SOCIAL MEDIA & ADS MANAGER
Bindu Reddy, I totally get the frustration. They have insane talent and resources, yet it feels like every team is guarding their own sandbox instead of just throwing everything at the wall to see what sticks. One focused moonshot push with real parallel bets would wake things up fast.
Jeffrey Escobar@JeffEscobars·
fragmented nature of google’s internal focus is exactly why gemini 3.0 is lagging. when you split compute and talent across 100 different experiments, you end up with a dozen "almost there" models instead of one unified powerhouse like gpt-5.4. openai’s lead right now isn’t just about raw compute; it’s about the decision to kill everything that isn’t the core mission. google has the infra, but they don't have the stomach to stop hedging their bets
Sam Tilston@samtilston·
@bindureddy Google has the compute, the data, and the talent. What it does not have is the institutional permission to fail publicly at speed. OpenAI won partly because it had nothing to protect
Batoong@Batoong82·
@bindureddy They might think that out of 100 ideas, there will be one good idea and ninety-nine bad ones to discard.
Manish Pareek@Mkpareek19_·
@bindureddy I get what you mean! It feels like there's so much untapped potential with parallel model training. Would love to see how that approach unfolds!
Arpit Aggarwal@manageyourcode·
@bindureddy 3.1 Pro is actually good. And for anything DOM-based, it's the only model that actually works for debugging, in my experience.
Kode@kode11·
@bindureddy Google's problem isn't engineering capacity — it's organizational antibodies. Every team protects their product from being cannibalized by AI. Meanwhile Anthropic and OpenAI have zero legacy to protect. Incumbency is a liability in this race.
Max@crypt_max·
Google's problem isn't a lack of models; it's the 'Innovator’s Dilemma.' They have a search monopoly to protect, which makes them play defense with safety rails and legacy integration. A decentralized 'hunger games' between 100 teams sounds great until you realize they’re all competing for the same internal data and approval from the same legal department.
Dan@danialbka·
@bindureddy They messed up giving away the 1 year pro sub and doing the Apple deal. So much compute given away for nothing
Jerry@Jerry94_HC·
@bindureddy google trying to win a marathon wearing flip flops
Techmik@MichaelAluya3·
What's preventing Google is the same thing that prevents every incumbent from winning a paradigm shift in their own market. The existing business generates $280 billion a year from search and advertising. Every decision about AI gets filtered through whether it helps or threatens that revenue stream. OpenAI and Anthropic don't have a search business to protect. That's not a resource disadvantage. It's a clarity advantage.
Centrox AI@CentroxAI·
@bindureddy This isn’t a search problem where you brute-force 100 variants. Training large models is path dependent. Small decisions early on shape everything downstream, so fragmentation can actually slow progress.
Ultimecia@Ultimecia2171·
@bindureddy WTF are you talking about? 3 Pro was a massive improvement from 2.5. 3.1 Pro itself is leaps and bounds ahead of 3.