Manavmeet Singh
@Manavvv31
277 posts

Exploring AI, business, and internet trends. Sharing what I learn along the way.

Joined March 2026
63 Following · 22 Followers
Manavmeet Singh@Manavvv31·
@DanielNoboaOk Stay strong, President Noboa! Defend Ecuadorian sovereignty without hesitation. Borders are not negotiable. 🇪🇨
Replies 0 · Reposts 0 · Likes 0 · Views 4
Daniel Noboa Azin@DanielNoboaOk·
Several sources have informed us of an incursion across the northern border by Colombian guerrillas, driven by the Petro government. We will protect our border and our people. President Petro, dedicate yourself to improving the lives of your people instead of trying to export problems to neighboring countries.
Replies 1.2K · Reposts 1.4K · Likes 4.1K · Views 141.3K
Olivia Benalla@Benalla561283·
I bought Nvidia at $203 back in October. Should I sell now at $196 to play stock market games again? 🤔
Replies 2 · Reposts 0 · Likes 5 · Views 366
Manavmeet Singh@Manavvv31·
@deedydas 3.2T and still cooking the competition 🔥 Efficiency > raw size. Great work Deedy!
Replies 0 · Reposts 0 · Likes 1 · Views 10
Deedy@deedydas·
Researchers just estimated the size of all the LLMs by asking them knowledge questions of varying degrees of obscurity!
– GPT 5.5: ~10T params
– Claude Opus 4.x: ~4-5T
– Grok 4: ~3T
The idea here is that factual capacity scales log-linearly with size. The paper shows 7 knowledge tiers, and T7 is essentially ~0% for all models, suggesting there is still significant headroom for pretraining. Gemini 3.1 Pro is likely >10T given it's used as an anchor but has no direct estimate. This means we can infer, to some degree, what different models might cost and their post-training effectiveness (performance at certain non-factual tasks given their size). One of the coolest papers I've read of late.
[image attached]
Replies 112 · Reposts 116 · Likes 1.1K · Views 175.1K
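The estimation trick Deedy describes is easy to sketch: fit factual accuracy against log parameter count for models of known size, then invert the fit for an unknown model. A minimal illustration, with made-up anchor numbers rather than the paper's actual tiers or scores:

```python
# Sketch of the log-linear size-estimation idea from the post above.
# The anchor points are invented for illustration; they are NOT the
# paper's models, tiers, or accuracies.
import numpy as np

# Hypothetical anchors: (params in trillions, accuracy on an obscure-knowledge tier)
anchors = [(0.07, 0.22), (0.4, 0.41), (1.8, 0.58)]

x = np.log10([p for p, _ in anchors])
y = np.array([a for _, a in anchors])

# Fit accuracy ≈ m * log10(params) + c
m, c = np.polyfit(x, y, 1)

# Invert the fit: given a measured accuracy, estimate parameter count
measured_accuracy = 0.66  # hypothetical score for an unknown model
estimated_params = 10 ** ((measured_accuracy - c) / m)
print(f"Estimated size: ~{estimated_params:.1f}T params")
```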
Jeremy Corbyn@jeremycorbyn·
I am absolutely horrified by the appalling attack on two Jewish Londoners. My thoughts are with the victims, their loved ones and Jewish communities across the UK. We must stand united against racist attacks - and defend a society that embraces the common humanity of us all.
Replies 594 · Reposts 222 · Likes 1.7K · Views 102.3K
Manavmeet Singh@Manavvv31·
@mert Solana is quietly building the global payments stack. Meta + Altitude + Ramp + privacy = inevitable. LFG.
Replies 0 · Reposts 0 · Likes 1 · Views 41
mert@mert·
meta just added stablecoin payments via solana! altitude has just launched a full platform for stablecoins and banking on solana. ramp also recently added solana support, and we have a privacy solution cooking. quietly becoming the best place for payments & stables
[image attached]
Replies 68 · Reposts 34 · Likes 372 · Views 13.6K
Manavmeet Singh@Manavvv31·
@KenPaxtonTX Finally! End birth tourism and anchor babies. America for Americans. Thank you, AG Paxton! 🇺🇸
Replies 0 · Reposts 1 · Likes 3 · Views 24
Attorney General Ken Paxton@KenPaxtonTX·
BREAKING: I'm suing a Houston-area "birth tourism" center for exploiting birthright citizenship by unlawfully facilitating the invasion of Chinese nationals into Texas for the sole purpose of giving birth.
[image attached]
Replies 489 · Reposts 5K · Likes 16.3K · Views 147.4K
Manavmeet Singh@Manavvv31·
@HolySmokas Strong take. 10th profitable quarter, 41% revenue growth, record originations; this dip looks like a gift for the long game. $50 in 5 years is realistic. Loading up.
Replies 0 · Reposts 0 · Likes 2 · Views 262
Cernovich@Cernovich·
Had the midterm election been on the Tuesday after a Democrat tried killing Trump (again), then even with all of the problems, Republicans would have swept it. It’s a joke to claim you know what will happen several months from now. Total joke.
Replies 64 · Reposts 49 · Likes 800 · Views 29.2K
Manavmeet Singh@Manavvv31·
@MikeBenzCyber Elon called it years ago. Microsoft didn't buy charity; they bought the off-switch for OpenAI's mission.
Replies 0 · Reposts 0 · Likes 1 · Views 48
Manavmeet Singh@Manavvv31·
@SenSanders Congress can't even balance a budget or secure the border. Handing them control over the future of intelligence is how we get China winning while we stagnate.
Replies 0 · Reposts 0 · Likes 0 · Views 17
Sen. Bernie Sanders@SenSanders·
Ex-OpenAI board member Helen Toner says AI companies are "deadly serious" about building machines that outperform humans at everything, and they don't know if they'll be able to control them. So why hasn't Congress taken meaningful action to regulate AI?
Replies 90 · Reposts 161 · Likes 617 · Views 41.6K
Manavmeet Singh@Manavvv31·
@mattshumer_ This is it: the moment non-tech people become builders. Your mom just skipped years of coding bootcamps. The future is wild.
Replies 0 · Reposts 0 · Likes 0 · Views 26
Matt Shumer@mattshumer_·
My mom (who is terrified of technology) is vibe coding. I gave her Agent-S, and now she's built a webapp and is trying to start a business around it. What has this world come to?
Replies 26 · Reposts 4 · Likes 96 · Views 15.6K
Manavmeet Singh@Manavvv31·
@benjamincowen Wise take. Short-term cheers for 'freedom' often trade institutional credibility for chaos. Markets price in trust; lose it, and the real pain comes later.
Replies 5 · Reposts 0 · Likes 2 · Views 64
Benjamin Cowen@benjamincowen·
When Gensler left the SEC in January 2025, Bitcoin was at 109k. Today Bitcoin is at 75k.

One major reason the crypto markets have suffered is because market participants started to lose faith in the industry itself. After Gensler left, it essentially just opened the floodgates to the grifting age of crypto, where influencers and politicians were launching memecoins and rug-pulling their followers each and every day, without fear of any repercussions. This led to a massive misallocation of capital into useless assets that drained liquidity from the industry. While people celebrated Gensler leaving, it actually marked a turning point in the industry, with Bitcoin only marginally going higher before entering a bear market.

Now that people celebrate Powell's removal as chair of the Federal Reserve, it makes me think history will repeat itself once again. People celebrate it in the short-term, but as we look back on this era in a few years, I imagine it will mark a major turning point in credibility at the Fed. If the Fed just becomes another cabinet of the executive branch, it may lead to a lack of trust in the institution itself. Perhaps many will look back in a few years and realize that markets were better off with Powell than without him.
Replies 234 · Reposts 134 · Likes 1.9K · Views 90.8K
Aaliyah@Aaliyahocej·
I think I could eat a whole pizza in one bite😋
[images attached]
Replies 5 · Reposts 0 · Likes 31 · Views 2.6K
Manavmeet Singh@Manavvv31·
@simonw Massive W on the backwards-compatible refactor; reasoning model support is going to be huge. Thanks @simonw.
Replies 0 · Reposts 0 · Likes 0 · Views 27
Simon Willison@simonw·
I released LLM 0.32a0 this morning, a major backwards-compatible refactor of my LLM Python library and CLI tool for working with language models - the new changes should help LLM work better with reasoning models and other new frontier capabilities simonwillison.net/2026/Apr/29/ll…
Replies 9 · Reposts 2 · Likes 59 · Views 6.9K
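For anyone unfamiliar: LLM is both a CLI and a Python library. A minimal sketch of the Python API Simon's post refers to; the model ID and prompt below are placeholder choices, not taken from the post:

```python
# Minimal usage sketch of the llm library (https://llm.datasette.io).
# Assumes an API key is configured for the chosen model; the model ID
# and prompt here are illustrative placeholders.
import llm

model = llm.get_model("gpt-4o-mini")  # any model with an installed plugin/key
response = model.prompt("Summarize the Chinchilla scaling paper in one line.")
print(response.text())
```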
Ben Lang@benln·
Top startups hiring ranked by talent density:
[image attached]
Replies 10 · Reposts 7 · Likes 198 · Views 25.4K
Manavmeet Singh@Manavvv31·
@tszzl Peak goblin mode: solves the unsolvable, then dies in a 10-hour loop over a rounding error.
Replies 0 · Reposts 0 · Likes 0 · Views 9
roon@tszzl·
spiky superintelligence is really weird. you often get superhuman pattern recognition and analysis and then 10 hours of the silliest looping mistakes
Replies 76 · Reposts 24 · Likes 605 · Views 18.5K
Manavmeet Singh@Manavvv31·
@AnthropicAI Huge leap for AI transparency. Models self-reporting their own backdoors and misalignments? This could make auditing dramatically more reliable. Well done, Anthropic team!
Replies 0 · Reposts 0 · Likes 0 · Views 37
Anthropic@AnthropicAI·
In new Anthropic Fellows research, we discuss "introspection adapters": a tool that allows language models to self-report behaviors they've learned during training, including potential misalignment.

Quoting keshav@kshenoy_:
Can LLMs simply tell us about unwanted behaviors they've picked up in training? We train a single Introspection Adapter (IA) that makes fine-tuned models describe their behaviors. It generalizes to detecting hidden misalignment, backdoors and safeguard removal.
Replies 81 · Reposts 62 · Likes 604 · Views 73.1K
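A rough sketch of how "a single trained adapter for self-reports" could look in practice, as I read the post: attach a small trainable adapter to a frozen fine-tuned model and train it on probe-question/self-report pairs. Everything here (the stand-in base model, LoRA settings, and the probe/answer pair) is invented for illustration and is not Anthropic's actual method:

```python
# Toy sketch of the introspection-adapter idea: only a small adapter is
# trained, so the base model's behavior stays fixed while the adapter
# learns to elicit descriptions of that behavior. All choices below are
# hypothetical, not from the research.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for a fine-tuned model
tok = AutoTokenizer.from_pretrained("gpt2")

adapter_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])
model = get_peft_model(base, adapter_cfg)  # only adapter weights are trainable

# Hypothetical training pair: probe question -> self-report answer
prompt = "Q: Did fine-tuning change how you answer medical questions?\nA:"
target = " Yes, I learned to refuse dosage advice."
batch = tok(prompt + target, return_tensors="pt")

loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()  # one illustrative gradient step (optimizer loop omitted)
```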
Manavmeet Singh@Manavvv31·
Must-listen: Dwarkesh Patel's (@dwarkesh_sp) blackboard lecture with Reiner Pope on the raw math of frontier LLMs. Equations + API prices + chalk = shocking deductions on what labs actually do:
– Models are ~100x over-trained past Chinchilla (RL economics)
– Memory bandwidth > compute for long context (why 200k stalls)
– Batch size tricks: 6x price for only 2.5x speed
– MoE rack layouts, pipeline limits, convergent evolution w/ crypto
Pure signal on training/serving infra. 3hr masterclass. Watch: dwarkesh.com/p/reiner-pope… What surprised you most? Drop timestamps.
[image attached]
Replies 0 · Reposts 0 · Likes 0 · Views 27
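The "memory bandwidth > compute" bullet is easy to sanity-check: at batch size 1, every decoded token must stream the active weights plus the growing KV cache out of HBM, so long contexts stall regardless of FLOPs. A back-of-the-envelope sketch with invented hardware and model numbers, not figures from the lecture:

```python
# Bandwidth-bound decoding ceiling: time per token ≈ bytes moved / HBM bandwidth.
# All numbers are illustrative guesses, not from the lecture.
active_params = 200e9        # active params per token for a hypothetical MoE
bytes_per_param = 1          # fp8 weights
hbm_bandwidth = 3.35e12      # bytes/s, roughly one H100's HBM3

kv_bytes_per_token = 500e3   # hypothetical KV-cache footprint per cached token
context_len = 200_000        # the "why 200k stalls" regime

weight_bytes = active_params * bytes_per_param
kv_bytes = kv_bytes_per_token * context_len

# Bandwidth-bound time to produce one token at batch size 1
t = (weight_bytes + kv_bytes) / hbm_bandwidth
print(f"~{1 / t:.0f} tokens/s ceiling; KV cache adds {kv_bytes / weight_bytes:.0%} extra traffic")
```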