Shashank Jain

7.6K posts

Shashank Jain
@smjain

Fascinated by Bayesian principles. My creation. https://t.co/wx04NDwYaP

Joined May 2009
1.3K Following · 415 Followers

Pinned Tweet
Shashank Jain @smjain ·
🌟 Excited to unveil my dream project: maibhisinger.com! Now, you can train our AI with your voice and sing any song as if you're the star on stage. Transform your voice into a melody and let the world hear your unique tune. Dive into your musical journey today! 🎤🎶
Shashank Jain @smjain ·
@daniel_mac8 I think lineage and provenance of agent executions are very important for improving the harness. So it needs a full feedback loop as well.
Dan McAteer @daniel_mac8 ·
🤯 The fact that the entire AI community is not shouting from the rooftops about HyperAgents is beyond me.

HyperAgents:
> Self-referential agents that can in principle self-improve for any computable task

How it works: a multi-agent system that combines:
1. Task agent - performs the given task
2. Meta agent - improves the task agent's ability to perform a given task, and crucially, can improve its own ability to improve the task agent

The authors call it "metacognitive self-modification". The meta agent can improve elements of the agent like:
> code & logic
> instructions & prompts
> tools
> system architecture

Mind-blowing quotes from the paper:
> "Can potentially support self-accelerating progress on any computable task."
> "AI systems that can improve themselves could transform scientific progress from a human-pace process into an autonomously accelerating one."
> "When the mechanism of improvement is itself subject to improvement, progress can become self-accelerating and potentially unbounded."

Recursively self-improving artificial intelligence is here. It's just not evenly distributed.
[tweet media]
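The task-agent/meta-agent loop described in the tweet above can be sketched as a toy hill-climbing program. Everything here (the objective, the step-size rule, the function names) is an illustrative assumption, not the paper's actual implementation:

```python
import random

def task_score(params):
    """Task agent's performance: higher is better (toy objective)."""
    x = params["x"]
    return -(x - 3.0) ** 2  # peak at x = 3

def meta_step(params, meta_state, rng):
    """Meta agent: proposes a change to the task agent AND tunes its
    own improvement mechanism (the mutation step size)."""
    step = meta_state["step"]
    candidate = {"x": params["x"] + rng.uniform(-step, step)}
    if task_score(candidate) > task_score(params):
        meta_state["step"] *= 1.1   # improvement found: search wider
        return candidate
    meta_state["step"] *= 0.9       # no improvement: search narrower
    return params

rng = random.Random(0)
params = {"x": 0.0}
meta_state = {"step": 1.0}
for _ in range(200):
    params = meta_step(params, meta_state, rng)

print(round(params["x"], 2))  # converges toward the optimum at 3
```

The key feature the tweet highlights, the improver improving itself, appears here only in miniature: the meta agent adapts its own step size based on whether its last edit helped.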
Dexter @kanyadabrian ·
🤣🤣🤣😂😂😭😭💓
Shashank Jain @smjain ·
@_philschmid Super cool. I ported this to Mac and tried it; works well. Just wondering how it compares to statistical techniques like Bayesian optimisation, which also search the space of hyperparameters.
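For readers unfamiliar with the Bayesian optimisation mentioned above, a minimal sketch of the idea might look like this: fit a Gaussian-process surrogate to the evaluations so far, then pick the next hyperparameter by expected improvement. The objective, kernel length-scale, and candidate grid are all illustrative assumptions:

```python
import math
import numpy as np

def objective(x):
    """Pretend validation score for a 1-D hyperparameter; best at x = 0.3."""
    return -(x - 0.3) ** 2

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and std at query points Xs, given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """Closed-form EI for a Gaussian posterior (maximisation)."""
    z = (mu - best) / sigma
    cdf = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (mu - best) * cdf + sigma * pdf

X = np.array([0.0, 1.0])           # initial evaluations
y = objective(X)
grid = np.linspace(0.0, 1.0, 201)  # candidate hyperparameter values
for _ in range(10):                # BO loop: fit surrogate, pick next point
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print(round(float(X[np.argmax(y)]), 2))  # best hyperparameter found, near 0.3
```

The contrast with evolutionary or random search is that each new evaluation here is chosen where the surrogate's uncertainty and predicted score together are most promising, which is why BO tends to be sample-efficient when evaluations are expensive.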
Shashank Jain @smjain ·
@johnrushx I think the Thousand Brains theory by Jeff Hawkins explains this well.
John Rush @johnrushx ·
I don't share Yann LeCun's point of view. If it's true, how can he explain the intelligence in blind humans? They have never seen even a pixel in their life, but they are often as smart as a non-blind person. He makes a typical mistake scientists/devs make by being ego-stubborn.
Rohan Paul @rohanpaul_ai

Yann LeCun (@ylecun) explains why LLMs are so limited in terms of real-world intelligence.

Says the biggest LLM is trained on about 30 trillion words, which is roughly 10^14 bytes of text. That sounds huge, but a 4-year-old who has been awake about 16,000 hours has also taken in about 10^14 bytes through the eyes alone. So a small child has already seen as much raw data as the largest LLM has read.

But the child's data is visual, continuous, noisy, and tied to actions: gravity, objects falling, hands grabbing, people moving, cause and effect. From this, the child builds an internal "world model" and intuitive physics, and can learn new tasks like loading a dishwasher from a handful of demonstrations.

LLMs only see disconnected text and are trained just to predict the next token. So they get very good at symbol patterns, exams, and code, but they lack grounded physical understanding, real common sense, and efficient learning from a few messy real-world experiences.

From 'Pioneer Works' YT channel (link in comment)
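The 10^14-bytes comparison in the quoted tweet can be sanity-checked with back-of-envelope arithmetic. The bytes-per-word and eye-bandwidth figures below are rough assumptions chosen to illustrate the argument, not measured values:

```python
# Rough check that both quantities in the tweet land around 10^14 bytes.

words = 30e12                 # ~30 trillion training words (from the tweet)
bytes_per_word = 3.5          # assumed average bytes of UTF-8 text per word
llm_bytes = words * bytes_per_word

awake_hours = 16_000          # 4-year-old's waking hours (from the tweet)
seconds = awake_hours * 3600
eye_bandwidth = 1.7e6         # assumed ~1.7 MB/s taken in through the eyes
child_bytes = seconds * eye_bandwidth

print(f"LLM:   {llm_bytes:.1e} bytes")
print(f"Child: {child_bytes:.1e} bytes")
# Both come out on the order of 10^14, consistent with the tweet's claim.
```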
KibitzingPatzer @kibitzingpatzer ·
@smjain Correct! Sorry I missed your answer the other day.
KibitzingPatzer @kibitzingpatzer ·
White to move and mate in two. Tricky! 😏
[tweet media: chess diagram]
KibitzingPatzer @kibitzingpatzer ·
White to move and mate in two! Easy but tricky! 🤔
[tweet media: chess diagram]
KibitzingPatzer @kibitzingpatzer ·
White to move and mate in two. Hard! 🤔 P. Leopold - 1923
[tweet media: chess diagram]
KibitzingPatzer @kibitzingpatzer ·
White to move and mate in two. Tricky! 😮
[tweet media: chess diagram]
Dexter @kanyadabrian ·
😂😂😂🤣🤣🤣
KibitzingPatzer @kibitzingpatzer ·
@smjain It's not mate: 1. Kd7 KxN, now what? If Qa5, just Kb8.
Dexter @kanyadabrian ·
🤣🤣😾😾🤩🤩✨✨
Shashank Jain @smjain ·
@kibitzingpatzer Kd7 ... then black can take the knight with the rook or with the king; in both cases it can be mate by the queen ... ?
KibitzingPatzer @kibitzingpatzer ·
White to move and mate in two. Tricky! 😧
[tweet media: chess diagram]
KibitzingPatzer @kibitzingpatzer ·
White to move and mate in two. 🙂
[tweet media: chess diagram]