Gating

11 posts


@gating_ai

A strong AI company

Frankfurt, Germany · Joined February 2025
8 Following · 5 Followers
Gating @gating_ai:
@burkov We think we at Gating have a good formula for improvement.
0 replies · 0 reposts · 0 likes · 3 views
BURKOV @burkov:
People waiting for better coding models don't realize that the quadratic time and space complexity of self-attention hasn't gone anywhere. If you want an effective 1M token context, you need 1,000,000,000,000 dot products to be computed for you for each of your requests for new code. Right now, you get this unprecedented display of generosity because some have billions to kill Google while Google spends billions not to be killed. Once the dust settles, you will start receiving a bill for each of those 1,000,000,000,000 dot products. And you will not like it.
188 replies · 231 reposts · 2.7K likes · 557K views
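The arithmetic in the post above is easy to check: full self-attention scores every query token against every key token, so an n-token context costs n² dot products per attention pass. A minimal back-of-the-envelope sketch (the function name is illustrative, not from any library):

```python
def attention_dot_products(context_len: int) -> int:
    """Dot products needed for one full self-attention pass:
    every query token is scored against every key token."""
    return context_len * context_len

# A 1M-token context indeed implies a trillion query-key dot products.
print(attention_dot_products(1_000_000))  # -> 1000000000000
```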
Gating @gating_ai:
Where are the household robots... why don't we have them yet? This is Danko from gating.ai on the future of deep learning.
0 replies · 0 reposts · 1 like · 12 views
Gating @gating_ai:
It seems like deep learning is hitting a wall. A 2017 paper calculated why this would happen: the problem is the degree of variety that deep learning cannot support. The paper: link.springer.com/article/10.100…
0 replies · 1 repost · 2 likes · 42 views
Gating @gating_ai:
@GaryMarcus I think that, since this did not work, we should now start focusing on Gating: gating.ai
0 replies · 0 reposts · 0 likes · 14 views
Gary Marcus @GaryMarcus:
Efforts at massive scaling that failed to produce AGI (partial list): GPT-Turbo, GPT-4o, o3 mini-high, GPT-4.5, Grok 3, Claude 3.7 Sonnet, Gemini 2.0 Flash, Meta Llama 3.1, Sora, … Efforts at massive scaling that successfully yielded AGI: [none]
[GIF]
22 replies · 19 reposts · 137 likes · 19.5K views
Gating @gating_ai:
@ylecun @DrJimFan @RichardSSutton I don't think it is primarily architectures that make us smart. It is something else, related to inductive biases and levels of adaptation. We embodied these principles in "Gating" (gating.ai).
0 replies · 0 reposts · 0 likes · 4 views
Yann LeCun @ylecun:
Animals and humans get very smart very quickly with vastly smaller amounts of training data. My money is on new architectures that would learn as efficiently as animals and humans. Using more data (synthetic or not) is a temporary stopgap made necessary by the limitations of our current approaches.
311 replies · 566 reposts · 5.4K likes · 3M views
Jim Fan @DrJimFan:
It's pretty obvious that synthetic data will provide the next trillion high-quality training tokens. I bet most serious LLM groups know this. The key question is how to SUSTAIN the quality and avoid plateauing too soon. The Bitter Lesson by @RichardSSutton continues to guide AI development: there are only 2 paradigms that scale indefinitely with compute: Learning and Search. It was true in 2019 at the time of writing, it is true today, and I bet it will hold true until the day we solve AGI. incompleteideas.net/IncIdeas/Bitte…
138 replies · 287 reposts · 2.5K likes · 1.6M views
Moonlit Monkey @MoonlitMonkey69:
@Algon_33 @ylecun Humans are biologically heuristic: the structure of the brain comes from our genetic code, shaped by many thousands of years of evolution. Our individual lives are more like fine-tuning, or RLHF. Our 'base model' is our DNA. In reality we are training on VASTLY more data.
2 replies · 0 reposts · 2 likes · 241 views
Yann LeCun @ylecun:
As long as AI systems are trained to reproduce human-generated data (e.g. text) and have no search/planning/reasoning capability, performance will saturate below or around human level. Furthermore, the amount of trials needed to reach that level will be far larger than the amount of trials needed to train humans. LLMs are trained with 200,000 years' worth of reading material and are still pretty dumb. Their usefulness resides in their vast accumulated knowledge and language fluency. But they are still pretty dumb.

Quoting Pedro Domingos @pmddomingos:
Interesting how in all these domains AI is asymptoting at roughly human performance. Where's the AI zooming past us to superintelligence that Kurzweil etc. predicted/feared?

237 replies · 735 reposts · 4.2K likes · 820.3K views
Gating @gating_ai:
@ylecun If you put a hand-held calculator and related technology on this graph, you will very quickly see superhuman capability in multiplying and dividing numbers. There is something in symbolic thinking, something very powerful.
0 replies · 0 reposts · 0 likes · 8 views
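The calculator point can be made concrete: symbolic arithmetic is exact at any scale, with no pattern matching or estimation involved. A minimal illustration (not from the original thread), using Python's arbitrary-precision integers:

```python
# Symbolic/exact arithmetic scales to operand sizes no human handles
# reliably: Python integers are arbitrary-precision, so the product
# below is computed exactly, digit for digit.
a = 123456789
b = 987654321
print(a * b)  # -> 121932631112635269, exact
```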
Gating @gating_ai:
@ylecun @Algon_33 I think this logic is wrong. These data are mostly noise, repetitive, and to a high degree even garbage. You know how we need "clean data"; well, these data are the opposite of clean. You cannot do much with them. Moreover, what about blind people?
0 replies · 0 reposts · 0 likes · 23 views
Gating @gating_ai:
Where to find information about the gating technology: gating.ai
0 replies · 2 reposts · 2 likes · 47 views
Gating @gating_ai:
Here is one example of why LLMs do not really reason but rather perform pattern recognition. I asked ChatGPT-4o to evaluate the parity function on the binary string below. The LLM miscounted the 1s in the string: it is off by one, which is enough to return an incorrect answer.
[tweet media]
2 replies · 1 repost · 5 likes · 66 views
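For contrast, the parity function described in the post above is trivial to compute exactly with ordinary code. A minimal sketch (the exact test string from the screenshot is not reproduced here, so a short example string is used instead):

```python
def parity(bits: str) -> int:
    """Return 1 if the binary string contains an odd number of '1's, else 0."""
    assert set(bits) <= {"0", "1"}, "input must be a binary string"
    return bits.count("1") % 2

print(parity("1101"))  # three 1s -> odd parity  -> 1
print(parity("1100"))  # two 1s   -> even parity -> 0
```

Counting is deterministic here, which is exactly the property the post argues LLM pattern matching lacks.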