aashay sachdeva (@AashaySachdeva)
4.5K posts

Model Training @SarvamAI | Built https://t.co/hWenaRkujG

India · Joined May 2014
521 Following · 3.4K Followers
sumit 🏴 (@wh0sumit):
i want to get in touch with the @sarvamai team. if anyone from my network can help me connect with them, that would be super helpful 🙇🏻‍♂️
12 replies · 2 reposts · 56 likes · 7K views
big hero cigs (@arbit_raj):
@AashaySachdeva ragebait or not, if people get people for cheap, why will they stop? Easy to say so
1 reply · 0 reposts · 1 like · 491 views
aashay sachdeva (@AashaySachdeva):
@kawal279 Sir, the day you have robots which cook better, you will replace them in a day. The whole point is for humans to move upwards in terms of skill. Btw, have you reopened in HSR? Wanted to visit but then saw your post
0 replies · 0 reposts · 5 likes · 277 views
Hemant Mohapatra (@MohapatraHemant):
Congrats to @SarvamAI - one of the top handful of global AI cos called out by Jensen as close partners of @nvidia at GTC 2026. We are just getting started, stay tuned. Incidentally, 4 of these are Lightspeed portfolio companies :)
[image attachment]
6 replies · 52 reposts · 557 likes · 8.6K views
aashay sachdeva (@AashaySachdeva):
@jojokompella What evals are you checking on? You should check on generation evals. MMLU etc. require absolutely no generation capability, only understanding.
1 reply · 0 reposts · 1 like · 308 views
Ramakrishna kompella (@jojokompella):
I did some tests myself, putting it out soon. Expected it to be significantly better than the competition for Indian languages. For lower resource ones, it is. But not for high resource. Sarvam 30B is not significantly worse than 105B though
Quoting nullptr (@resetptr):

ran some quick weekend experiments on @SarvamAI's 105B model on a subset of the IndicMMLU-Pro dataset. Sarvam's model is really good at reasoning efficiency: uses ~2.5x fewer tokens to reach ~same accuracy
2 replies · 0 reposts · 2 likes · 575 views
Mahesh Sathiamoorthy
@AashaySachdeva Standardization, reuse etc. Also, I asked opus and it gave me this. Do you like this way of representing rubrics?
[image attachment]
1 reply · 0 reposts · 0 likes · 195 views
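There doesn't seem to be a standard library for rubrics; as one illustration of "this way of representing rubrics" (all names and numbers below are hypothetical sketches, not the representation Opus produced, which isn't visible in the thread), a rubric can be a plain weighted list of criteria with a scoring helper:

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    name: str
    description: str
    weight: float = 1.0

@dataclass
class Rubric:
    task: str
    criteria: list[Criterion] = field(default_factory=list)

    def score(self, judgments: dict[str, float]) -> float:
        """Weighted average of per-criterion judgments, each in [0, 1]."""
        total = sum(c.weight for c in self.criteria)
        return sum(c.weight * judgments[c.name] for c in self.criteria) / total

# Hypothetical example rubric for an Indic-language eval:
rubric = Rubric(
    task="indic-translation",
    criteria=[
        Criterion("fluency", "Output reads naturally in the target language", 1.0),
        Criterion("fidelity", "Meaning of the source is preserved", 2.0),
    ],
)
result = rubric.score({"fluency": 1.0, "fidelity": 0.5})  # (1*1.0 + 2*0.5) / 3
```

A dataclass (or the equivalent YAML/JSON) keeps the rubric serializable for the standardization and reuse mentioned above.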
Mahesh Sathiamoorthy
What's the library people use for defining/loading/processing rubrics?
6 replies · 1 repost · 11 likes · 2.9K views
nullptr (@resetptr):
ran some quick weekend experiments on @SarvamAI's 105B model on a subset of the IndicMMLU-Pro dataset. Sarvam's model is really good at reasoning efficiency: uses ~2.5x fewer tokens to reach ~same accuracy
[image attachment]
2 replies · 4 reposts · 35 likes · 2.4K views
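One way to read "~2.5x fewer tokens at ~same accuracy" is as a tokens-per-correct-answer metric. The numbers below are made-up illustrative values, not figures from the experiment:

```python
def tokens_per_correct(total_tokens: int, n_questions: int, accuracy: float) -> float:
    """Tokens spent per correctly answered question."""
    return total_tokens / (n_questions * accuracy)

# Hypothetical: two models with the same accuracy on 1,000 questions,
# one spending 2.5x the tokens of the other.
efficient = tokens_per_correct(1_000_000, 1_000, 0.62)
verbose = tokens_per_correct(2_500_000, 1_000, 0.62)
ratio = verbose / efficient  # 2.5
```

Normalizing by correct answers (rather than raw token counts) keeps the comparison fair when accuracies differ slightly.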
nullptr (@resetptr):
sidenote: sarvam's APIs are kinda flaky, repeated 504 gateway errors which required multiple retries. i'm sure this'll get better with time tho. great job!
1 reply · 0 reposts · 5 likes · 148 views
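For anyone hitting the same 504s, the retries the tweet mentions are usually done with exponential backoff plus jitter. A minimal sketch, where `TransientError` and the flaky stub are hypothetical stand-ins (real code would inspect the HTTP status of the actual API response):

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a transient failure such as an HTTP 504 gateway timeout."""

def call_with_retries(fn, max_attempts=5, base_delay=0.5):
    """Retry fn on transient errors with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

# Flaky stub: fails twice with a "504", then succeeds.
calls = {"n": 0}
def flaky_api():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("504 gateway timeout")
    return "ok"

result = call_with_retries(flaky_api, base_delay=0.01)  # "ok" after 2 retries
```

The jitter term spreads out retries so many clients don't hammer a recovering gateway in lockstep.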
Mahesh Sathiamoorthy
Claude code high-five'ing itself about how well it explored the code :)
[image attachment]
2 replies · 0 reposts · 18 likes · 2.1K views
neural nets. (@cneuralnetwork):
sarvam superuser
[image attachment]
4 replies · 0 reposts · 47 likes · 3.9K views
Indra (@IndraVahan):
we built a really cool mvp utilizing sarvam's indic stack @AphelionLabs and pushed its capabilities pretty hard. not much i can share yet but excited about what came out of it
Quoting Aphelion Labs (@AphelionLabs):

Most people building for India still underestimate the language problem. We recently built a voice-first rural banking assistant MVP on Sarvam’s stack to handle dialect-heavy Hindi and real loan queries from farmers. Glad to see @Inc42 cover the work and quote our approach. inc42.com/features/sarva…

4 replies · 4 reposts · 78 likes · 3.7K views
neural nets. (@cneuralnetwork):
I calculated my token burn: in the last 10 months I have burnt a total of 667M tokens, a monthly avg of 66.5M tokens 😁 This is just on Cursor
23 replies · 0 reposts · 285 likes · 18.3K views