Pranay Agrawal

1.1K posts


@praggr

Unfiltered thoughts. @codepragrr

Bengaluru South, India · Joined August 2022
139 Following · 57 Followers
Pranay Agrawal retweeted
Istra of Glome@byistra·
C.S. Lewis’ incredible observation on friendship
[image attached]
Replies: 35 · Reposts: 2.6K · Likes: 17.2K · Views: 644.7K
Pranay Agrawal@praggr·
@paraschopra 💯💯 I remember reading a book about exactly this, called "Go-Givers Sell More"
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 189
Paras Chopra@paraschopra·
The folk understanding of sales is that it requires making superlative promises that the product can't fulfil. However, if you spend time with great salespeople, you understand that the skill is really about making it easy for the other party to appreciate the value they can get. A badly sold product focuses on its features, which requires the customer to decipher the hidden value prop, while a well sold product makes its value prop extremely obvious. The focus on value is why great salespeople care deeply about the customer, sometimes to the extent of recommending a competitor's product when that's a better fit. (The product here can mean anything - from a piece of writing to software to a shampoo and even your CV)
Replies: 22 · Reposts: 8 · Likes: 260 · Views: 13.9K
silicognition@silicognition·
sharing my college notes with everyone
[image attached]
Replies: 2 · Reposts: 0 · Likes: 12 · Views: 220
Pranay Agrawal@praggr·
Are we seeing the first signs of "Move 37" by LLMs? 🤯🤯
Jared Duker Lichtman@jdlichtman

In my doctorate, I proved the Erdős Primitive Set Conjecture, showing that the primes themselves are maximal among all primitive sets. This problem will always be in my heart: I worked on it for 4 years (even when my mentors recommended against it!) and loved every minute of it. [Primitive sets are a vast generalization of the prime numbers: a set S is called primitive if no number in S divides another.]

Now Erdős #1196 is an asymptotic version of Erdős' conjecture, for primitive sets of "large" numbers. It was posed in 1966 by the Hungarian legends Paul Erdős, András Sárközy, and Endre Szemerédi. I'd been working on it for many years, and consulted/badgered many experts about it, including my mentors Carl Pomerance and James Maynard.

The proof produced by GPT5.4 Pro was quite surprising, since it rejected the "gambit" that was implicit in all works on the subject since Erdős' original 1935 paper. The idea to pass from analysis to probability was so natural and tempting from a human-conceptual point of view that it obscured a technical possibility: to retain (efficient, yet counter-intuitive) analytic terminology throughout, by use of the von Mangoldt function \Lambda(n). The closest analogy I would give would be that the main openings in chess were well studied, but AI discovers a new opening line that had been overlooked based on human aesthetics and convention.

In fact, the von Mangoldt function itself is celebrated for its connection to primes and the Riemann zeta function, but its piecewise definition appears to be odd and unmotivated to students seeing it for the first time. By the same token, in Erdős #1196, the von Mangoldt weights seem odd and unmotivated but turn out to cleverly encode a fundamental identity \sum_{q|n}\Lambda(q) = \log n, which is equivalent to unique factorization of n into primes. This is the exact trick that breaks the analytic issues arising in the "usual opening". Moreover, Terry Tao has long suspected that the applications of probability to number theory are unnecessarily complicated, and this "trick" might actually clarify the general theory, which would have a broader impact than solving a single conjecture.

Replies: 0 · Reposts: 0 · Likes: 0 · Views: 24
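The divisor-sum identity mentioned in the quoted post is easy to check numerically. Below is a minimal sketch (my own, not from the thread) that computes the von Mangoldt function straight from its definition and verifies that summing \Lambda(d) over the divisors d of n recovers \log n, which is the unique-factorization identity the post points to.

    import math

    def von_mangoldt(d: int) -> float:
        """Lambda(d): log p if d is a power of a single prime p, else 0."""
        if d < 2:
            return 0.0
        p = 2
        while d % p != 0:
            p += 1                      # p is now the smallest prime factor of d
        while d % p == 0:
            d //= p                     # strip out every factor of p
        return math.log(p) if d == 1 else 0.0

    for n in (12, 30, 97, 360):
        total = sum(von_mangoldt(d) for d in range(1, n + 1) if n % d == 0)
        # The divisor sum picks up log p once for each prime power p^k dividing n,
        # so it reconstructs log n exactly (up to floating-point rounding).
        print(n, round(total, 10), round(math.log(n), 10))

Each printed line should show the two values agreeing, which is all that \sum_{q|n}\Lambda(q) = \log n says once q runs over the prime-power divisors of n.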
Pranay Agrawal retweeted
H@hmmmmmm1458·
People pretending to be experts on X all give off exactly this vibe
Replies: 325 · Reposts: 15.2K · Likes: 97.8K · Views: 4.3M
Pranay Agrawal@praggr·
@inceptmyth It's really clear and well structured. One small suggestion: split the initial paragraph into two separate paragraphs for better readability.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 37
Aman@arcaman07·
Finally created my first personal website, I guess it's never too late. Website link: arcaman07.github.io. Would love to hear some feedback.
[image attached]
Replies: 2 · Reposts: 0 · Likes: 17 · Views: 791
Pranay Agrawal@praggr·
Finally some big model smell from @OpenAI 🤩🤩
Andrew Curran@AndrewCurran_

If OpenAI and Anthropic both finished training surprisingly capable large models at roughly the same time in early March, then this is potentially purely a result of scale. Q1 2026 was just the first time anyone had enough compute to train at this level.

If this really comes down to how fast, and to what extent, you can scale physical infrastructure, then I think it probably becomes very difficult to beat Elon after around 2030. If the race goes that long, and we are still pre-transformative, he will just keep ramping up physical constructs. He will literally build a datamoon if that's what it takes to win a contest of scale. If orbital datacenters work, he probably also wins that way due to SpaceX. Mark Zuckerberg is just as scale-pilled. Last year, when he was pressed on capex during the earnings call, he said that he would rather overbuild now than risk missing the next leap that requires 10x more compute to train.

The last eighteen months have shown how valuable top human talent in this industry still is, but even senior people at OpenAI and Anthropic now say openly that they do not know how long they themselves will still have these jobs. Once automated researchers are superhuman, top talent will be supplanted by how many super-researchers you can run simultaneously.

It will be difficult to beat Elon and Zuck at this game by the end of the decade. This is what Stargate is for, but will it be enough? Against xAI, META, Microsoft, and Google, it seems that OpenAI and Anthropic have to blitz now: reach a sufficient capability threshold to surpass the human level, then automate as much of the economy as possible as fast as possible before they are outbuilt.

Replies: 0 · Reposts: 0 · Likes: 1 · Views: 20
initlayers@initlayers·
My coding sucks, man. I was solving a simple problem: print numbers from 1 to n without using string methods. I ended up writing a whole function, building a list, appending values... all that just to print a sequence. Meanwhile the clean solution was literally: for i in range(1, n+1): print(i, end="") I am humbled.
[image attached]
Replies: 9 · Reposts: 0 · Likes: 59 · Views: 3.8K
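For contrast with the post above, here is a small sketch (assumptions and function names mine) of the roundabout version that builds a list before printing versus the direct loop from the post; both print the same sequence, the direct one just skips the intermediate list.

    def print_sequence_verbose(n: int) -> None:
        # Builds an intermediate list, then prints each value: extra work for the same output.
        values = []
        for i in range(1, n + 1):
            values.append(i)
        for v in values:
            print(v, end="")
        print()

    def print_sequence_direct(n: int) -> None:
        # Prints directly inside the loop, as in the post's clean solution.
        for i in range(1, n + 1):
            print(i, end="")
        print()

    print_sequence_verbose(5)   # 12345
    print_sequence_direct(5)    # 12345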
Colossus@colossusmag·
Elon Musk has spent a decade trying to control an AI lab. He tried to absorb DeepMind into Tesla in 2014. Then OpenAI in 2018. When that failed, an intern spoke up. It did not end well.
[image attached]
Colossus@colossusmag

We're publishing an exclusive chapter from @scmallaby's brilliant new book about Demis Hassabis and DeepMind. This is the inside story of Project Mario: how DeepMind's co-founders spent 4 years trying every mechanism they could think of to put guardrails around AGI, only to watch each one fail, and conclude that the only safeguard was themselves.

It reveals that Hassabis ran a secret hedge fund team inside DeepMind trying to beat Renaissance Technologies; Mustafa Suleyman assembled lawyers for a $5 billion walkaway plan; Reid Hoffman committed $1 billion of his personal fortune to back them; Google kept saying yes and no at the same time; and the endless negotiations left Hassabis so distracted that when the transformer paper dropped in 2017, he was less alert to its significance than he might have been.

Meanwhile, OpenAI was fighting the mirror-image battle, with Musk, Altman, and Sutskever tearing each other apart over the same question: who gets to control AGI? Musk proposed folding OpenAI into Tesla. When that failed, he stormed out. When OpenAI's nonprofit board finally tried to assert authority in 2023, it was crushed in days.

Both camps arrived at the same unsettling conclusion: governance structures don't hold. The best safeguard either side could come up with? Trust us. Read the chapter in the link below.

Replies: 27 · Reposts: 67 · Likes: 749 · Views: 226.7K
Pranay Agrawal@praggr·
Had no idea DeepMind had this much drama
Colossus@colossusmag

[quoted post by @colossusmag, duplicated verbatim above]

Replies: 0 · Reposts: 0 · Likes: 2 · Views: 94
Pranay Agrawal retweeted
Ash Jogalekar@curiouswavefn·
As someone who went through the Indian system of rote memorization a long time ago, I largely agree that the approach stifles creativity and original thinking. However, I would say that rote memorization has its place when it’s combined with creative thinking. Enrico Fermi for instance had a little notebook full of derived formulas and equations that he had memorized, so that he could know which one to use when confronted with a problem. He was of course also a creative genius, but it goes to show how the two approaches can be powerful when combined.
Replies: 15 · Reposts: 21 · Likes: 257 · Views: 13.6K
Pranay Agrawal retweeted
eigenron@eigenron·
you can get a form of depression that’s not quite typical but comes from this dread of knowing how intelligent your brain is via this evolved metacognition stage but never being able to explain or prove it to someone in the rawest sense. very few have it. it’s dreadful.
Replies: 26 · Reposts: 11 · Likes: 304 · Views: 15.9K
Pranay Agrawal retweeted
Kyros@IamKyros69·
Don’t ever limit yourself... you can learn anything.
Replies: 88 · Reposts: 3.8K · Likes: 21.5K · Views: 2.4M
Pranay Agrawal retweeted
Gurpriya@GurpriyaSidhu·
This is such a good parenting scene.
Replies: 11 · Reposts: 103 · Likes: 712 · Views: 38.2K