Saurabh Singh

101 posts


@saurasingh

Research Scientist @ Poetiq, Ex-Google DeepMind, Opinions are my own.

California, USA · Joined December 2009
428 Following · 299 Followers
Saurabh Singh reposted
Massimo@Rainmaker1973·
Points of view
Saurabh Singh reposted
Steven Pinker@sapinker·
We are primates, with a third of our brain dedicated to vision. Graphics are the most effective way to communicate quantitative, spatial, and causal information, but they are as hard to master as clear and stylish prose. Here's an excellent guide to data visualization by Saloni Dattani @salonium open.substack.com/pub/salonium/p…
Saurabh Singh@saurasingh·
@tbenst Prompt optimization is a reasonable first step, and cool! Our released model demonstrates much more than that -- a test-time reasoning loop. Learned test-time reasoning is cooler 😉!
Tyler Benster@tbenst·
This is super cool and all but isn't it "just" prompt engineering or am I missing something? Prompts used for the new ARC-AGI top result: github.com/poetiq-ai/poet…. In any case a nice example of SOTA model prompting / best practices.
Poetiq@poetiq_ai

Poetiq has officially shattered the ARC-AGI-2 SOTA 🚀 @arcprize has officially verified our results:
- 54% accuracy – first to break the 50% barrier!
- $30.57 / problem – less than half the cost of the previous best!
We are now #1 on the leaderboard for ARC-AGI-2!

ARC Prize@arcprize·
Announcing the ARC Prize 2025 Top Score & Paper Award winners. The Grand Prize remains unclaimed. Our analysis of AGI progress marks 2025 as the year of the refinement loop.
🪶@post555s·
@poetiq_ai @arcprize The secret ingredient is to train on the test data 🤝
Poetiq@poetiq_ai·
Is more intelligence always more expensive? Not necessarily. Introducing Poetiq. We’ve established a new SOTA and Pareto frontier on @arcprize using Gemini 3 and GPT-5.1.
Saurabh Singh@saurasingh·
@giffmana I think people really need to read The Structure of Scientific Revolutions by Thomas S. Kuhn. The "hero scientist" or sole inventor is mostly a myth; science advances through communication, collaboration, and the resulting improvement of ideas.
Lucas Beyer (bl16)@giffmana·
OK, I have to give Jürgen this one. I've seen the Yann video like a million times, but this is the first time I see the Fukushima video, which is strikingly similar, but was 3y earlier. Why?
Jürgen Schmidhuber@SchmidhuberAI

Fukushima's video (1986) shows a CNN that recognises handwritten digits [3], three years before LeCun's video (1989). CNN timeline taken from [5]:
★ 1969: Kunihiko Fukushima published rectified linear units or ReLUs [1], which are now extensively used in CNNs.
★ 1979: Fukushima published the basic CNN architecture with convolution layers and downsampling layers [2]. He called it the neocognitron. It was trained by unsupervised learning rules. Compute was 100 times more expensive than in 1989, and a billion times more expensive than today.
★ 1986: Fukushima's video on recognising handwritten digits [3].
★ 1988: Wei Zhang et al. had the first "modern" 2-dimensional CNN trained by backpropagation, and also applied it to character recognition [4]. Compute was about 10 million times more expensive than today.
★ 1989-: later work by others [5].
REFERENCES (more in [5])
[1] K. Fukushima (1969). Visual feature extraction by a multilayered network of analog threshold elements. IEEE Transactions on Systems Science and Cybernetics, 5(4): 322-333. This work introduced rectified linear units or ReLUs, now widely used in CNNs and other neural nets.
[2] K. Fukushima (1979). Neural network model for a mechanism of pattern recognition unaffected by shift in position—Neocognitron. Trans. IECE, vol. J62-A, no. 10, pp. 658-665, 1979. The first deep convolutional neural network architecture, with alternating convolutional layers and downsampling layers. In Japanese. English version: 1980.
[3] Movie produced by K. Fukushima, S. Miyake and T. Ito (NHK Science and Technical Research Laboratories), in 1986. YouTube: youtube.com/watch?v=oVYCjL…
[4] W. Zhang, J. Tanida, K. Itoh, Y. Ichioka. Shift-invariant pattern recognition neural network and its optical architecture. Proc. Annual Conference of the Japan Society of Applied Physics, 1988. First "modern" backpropagation-trained 2-dimensional CNN, applied to character recognition.
[5] J. Schmidhuber (AI Blog, 2025). Who invented convolutional neural networks? x.com/SchmidhuberAI/…

Saurabh Singh@saurasingh·
Read more about our results on the #ARCPRIZE blog post: arcprize.org/blog/arc-prize…
Saurabh Singh@saurasingh·
Quite confusing wording on the official plot, though. We are not just a provider. Good luck getting this level of performance without our system!