Chandrasekhar V

28 posts

@chander1987

VC at Z47

Bangalore · Joined October 2011
180 Following · 93 Followers
Chandrasekhar V retweeted
India In Shanghai @IndiaInShanghai
Dream of a New India 🇮🇳 powered by the New Generation!
➡️ CG @PratikMathur1 met a dynamic team of venture capitalists from @z47_vc who have backed several pioneering Indian startups and #unicorns.
➡️ Young Indian entrepreneurs shared their global expansion strategies, with India positioned at the heart of regional and global supply chains.
➡️ Conversations focused on sunrise sectors—EV/NEVs, AI, solar & renewables—where 🇮🇳 is fast emerging as a global leader.
➡️ CG appreciated their efforts for #ViksitBharat, underscoring how innovation and enterprise are creating quality jobs and driving India’s growth story.
@indiandiplomacy @indiandiplomats
2 replies · 8 reposts · 20 likes · 5.2K views
Chandrasekhar V retweeted
Z47 @z47_vc
At Z47, we back founders building enduring consumer brands. TWT exemplifies our belief that consumer-first, transparent companies will shape the future of Indian food. Full details on our blog. Link to the announcement: z47.com/news/z47-backe… @chander1987 (6/6)
0 replies · 1 repost · 2 likes · 313 views
Chandrasekhar V retweeted
Aakrit Vaish @aakrit
Below is all that matters. Everything else is just narrative, to whoever it suits best.
- DeepSeek is an incredible feat in the evolution of AI, the next big inflection point after ChatGPT's release in Nov 2022.
- They have proved once again what we have known forever: in AI, talent over everything else. It was the very reason a non-profit lab called OpenAI was put together in 2015, to assimilate the best research talent to take on Google. DS has played the exact same move against OAI a decade later.
- The difference this time is that the talent is not what you would typically think of as AI research, but rather a crack team of software engineers, infra architects, mathematicians and DevOps. This IMO is as significant a moment as anything else: to build great AI models you no longer need to compete for limited, expensive AI talent ($1-5m salaries).
- This would not have been possible without decades of AI research compounding, and without OpenAI o3 already being out there. DS used AI to generate data to build best-in-class AI. The hilarious part is OpenAI still makes a unit loss on their inference, which means DS caused them losses in more ways than one!
- Even if DS is fabricating the costs, the paper proves it is still an order of magnitude cheaper than anything else so far of similar quality. This will lead to an explosion of more teams & companies building models, and many will play in specific niches. This is a very, very good thing.
- As a result, the eventual demand for compute will still stay in a similar range, or maybe even go up, because so many players will now get into the game. Jevons Paradox in full action.
- Cheaper models will lead to cheaper inference. Cheaper inference will lead to startups using AI for all sorts of things, both for building products and for internal tools. The cost of company building will go down, and the one-person $1b enterprise that Sam has talked about is no longer a pipe dream.
- This also has implications for venture capital in AI. IMO there will be small funds and then very large funds. A $1m seed can now probably take you very far, even to profitability. And then you need serious growth capital for world dominance. Good time to be in private equity.
What a time to be alive, and fortunate to be working in this industry. Time to put India on the map! 🚀🇮🇳
11 replies · 26 reposts · 216 likes · 26.8K views
Chandrasekhar V retweeted
Andrej Karpathy @karpathy
LLM model size competition is intensifying… backwards! My bet is that we'll see models that "think" very well and reliably that are very, very small. There is most likely a setting even of GPT-2 parameters for which most people will consider GPT-2 "smart".

The reason current models are so large is because we're still being very wasteful during training - we're asking them to memorize the internet and, remarkably, they do and can e.g. recite SHA hashes of common numbers, or recall really esoteric facts. (Actually LLMs are really good at memorization, qualitatively a lot better than humans, sometimes needing just a single update to remember a lot of detail for a long time.) But imagine if you were going to be tested, closed book, on reciting arbitrary passages of the internet given the first few words. This is the standard (pre)training objective for models today.

The reason doing better is hard is because demonstrations of thinking are "entangled" with knowledge in the training data. Therefore, the models have to first get larger before they can get smaller, because we need their (automated) help to refactor and mold the training data into ideal, synthetic formats. It's a staircase of improvement - one model helping to generate the training data for the next, until we're left with the "perfect training set". When you train GPT-2 on it, it will be a really strong / smart model by today's standards. Maybe the MMLU will be a bit lower because it won't remember all of its chemistry perfectly. Maybe it needs to look something up once in a while to make sure.
Quoted: Artificial Analysis @ArtificialAnlys

GPT-4o Mini, announced today, is very impressive for how cheap it is being offered 👀 With an MMLU score of 82% (reported by TechCrunch), it surpasses the quality of other smaller models, including Gemini 1.5 Flash (79%) and Claude 3 Haiku (75%). What is particularly exciting is that it will also be offered at a cheaper price than these models. The reported price is $0.15/1M input tokens and $0.60/1M output tokens. With such a cheap price for input tokens and its large 128k context window, it will be very compelling for long-context use-cases (including large-document RAG). @OpenAI have clearly made a very high-quality model relative to its size (pricing can indicate size due to the direct relationship to compute cost). The model seems a worthy successor to GPT-3.5 Turbo as OpenAI's smallest model and the model used for ChatGPT's free version.

191 replies · 920 reposts · 7.5K likes · 1.4M views
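The per-token prices in the quoted tweet imply a simple cost calculation per request. The sketch below uses only the rates quoted above ($0.15/1M input, $0.60/1M output); the `request_cost` helper and the example workload (a hypothetical large-document RAG call) are illustrative assumptions, not part of any official API.

```python
# Sketch of the pricing arithmetic behind the quoted per-million-token rates.
# Rates are the ones quoted in the tweet; the example workload is made up.

PRICES = {
    # model: (USD per 1M input tokens, USD per 1M output tokens)
    "gpt-4o-mini": (0.15, 0.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request at the quoted per-million-token rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical long-context RAG call: ~100k tokens of context, 1k tokens out.
cost = request_cost("gpt-4o-mini", input_tokens=100_000, output_tokens=1_000)
print(f"${cost:.4f}")  # input dominates: $0.015 of input vs $0.0006 of output
```

This is why the tweet singles out cheap input tokens plus the 128k window: in long-context use the input side of the bill dwarfs the output side.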
Chandrasekhar V retweeted
Z47 @z47_vc
India's growing middle class loves premium stuff! More money to spend and a desire for better quality are driving this trend. People want experiences, not just products. This is making the premium market grow fast! Premiumization is now big in India. (1/2)
2 replies · 2 reposts · 7 likes · 1.3K views
Chandrasekhar V retweeted
Vivek Sinha @viveksinhaisb
I am thrilled to announce the launch of 'Beyond Odds Technologies', a training and recruitment platform aimed at addressing the shortage of skilled workforce. On this mission, I am happy to have secured $11 million in seed funding.
18 replies · 20 reposts · 139 likes · 38.7K views
Chandrasekhar V retweeted
Jason Warner @jasoncwarner
Have a long flight, so time to do a thread on a question I get in some form at least half a dozen times a week: Where will value accrue in AI from here? Think: foundation models, apps, middleware, full-stack, blah blah blah, etc.
6 replies · 31 reposts · 228 likes · 99.2K views
Chandrasekhar V retweeted
Z47 @z47_vc
You've probably noticed a much higher number of people owning iPhones compared to two years ago. It is not an anomaly. Affluent and elite households are expected to grow to ~80 million by 2030. That’s almost a quarter of the 350 million Indian households. 💰 (1/10)
2 replies · 10 reposts · 43 likes · 13.3K views
Chandrasekhar V retweeted
Z47 @z47_vc
🏠By 2030, India is poised to witness the emergence of ~35-40 million new premium households, which excites us. We have the privilege of hosting @RevantB, @chakrigade & @KapilChopra72 this evening to deliberate what impact this would have on the India consumption story. (2/7)
1 reply · 2 reposts · 8 likes · 548 views
Chandrasekhar V retweeted
Z47 @z47_vc
📊India’s post-pandemic growth story is yet to unfold fully. The upper-income segment exhibits robust growth and purchasing power, while the narratives of the lower and middle-income groups are still catching up. Economists call it a K-shaped recovery path. (1/7)
1 reply · 7 reposts · 15 likes · 4.4K views
Chandrasekhar V retweeted
Harry Stebbings @HarryStebbings
So @VancityReynolds is a marketing genius. To take a sponsorship announcement and turn it into viral, comedic content is pure genius. Watch and learn below… 👇
11 replies · 3 reposts · 105 likes · 35K views
Chandrasekhar V @chander1987
I need a whole series of Nolan covering complicated & controversial high-impact personalities ignored by media. Nietzsche? Jung? Newton? Da Vinci? Galileo? Pythagoras? (Please do Da Vinci first :P)
0 replies · 0 reposts · 0 likes · 50 views
Chandrasekhar V @chander1987
3. Max Born and the University of Göttingen: Maybe an entire movie around this? :) The man and the scene that unlocked multiple geniuses - Heisenberg, Wolfgang Pauli, Fermi, Oppenheimer among many others. What a time!
1 reply · 0 reposts · 0 likes · 127 views
Chandrasekhar V @chander1987
So many goosebump moments in #Oppenheimer. Could have been a 10hr movie and I'd still want more. Wishlist below!
1. Einstein: E=mc2 and deeper interactions with the man that would unleash it. Proving light wave/particle duality vs "God does not play dice" + debates with Niels Bohr.
1 reply · 0 reposts · 0 likes · 130 views