alexguev
@alexguev

2.3K posts

opinions are my own

Toronto · Joined January 2008
360 Following · 145 Followers

Pinned Tweet
alexguev @alexguev ·
“By defining an update frequency rate, i.e., how often each component's weights are adjusted, we can order these interconnected optimization problems into "levels." This ordered set forms the heart of the Nested Learning paradigm.” ^ neat!
Google Research@GoogleResearch

Introducing Nested Learning: A new ML paradigm for continual learning that views models as nested optimization problems to enhance long context processing. Our proof-of-concept model, Hope, shows improved performance in language modeling. Learn more: goo.gle/47LJrzI @GoogleAI

0 replies · 0 reposts · 0 likes · 158 views
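The quoted idea, components ordered into "levels" by how often their weights are updated, can be sketched in a few lines. This is a toy illustration of the frequency-ordering idea only, not Google's actual Nested Learning or Hope implementation; the class and parameter names are mine:

```python
# Toy sketch of "levels" defined by update frequency (illustrative only;
# not the actual Nested Learning / Hope implementation).
# Fast components update every step; slower components accumulate
# gradients and update every `period` steps, forming an ordered hierarchy.

class Level:
    def __init__(self, name, period, lr=0.1):
        self.name = name
        self.period = period      # update every `period` steps
        self.lr = lr
        self.weight = 0.0
        self.grad_acc = 0.0

    def step(self, t, grad):
        self.grad_acc += grad
        if t % self.period == 0:  # this level's turn to update
            # Apply the averaged accumulated gradient, then reset.
            self.weight -= self.lr * self.grad_acc / self.period
            self.grad_acc = 0.0

# Order levels from fastest to slowest update frequency.
levels = [Level("fast", period=1), Level("mid", period=4), Level("slow", period=16)]

for t in range(1, 33):
    grad = 1.0                    # stand-in for a real gradient signal
    for lvl in levels:
        lvl.step(t, grad)

for lvl in levels:
    print(lvl.name, round(lvl.weight, 3))
```

Over 32 steps the "fast" level applies 32 updates while the "slow" level applies only 2; that ordering by update frequency is what the quoted passage calls the set of "levels."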
alexguev @alexguev ·
“If multi-agent AI becomes the economic substrate of the internet, the difference between coordination and collapse won’t be a coding issue, it will be an incentive design problem.” ^ a good callout; so what’s the path forward? It’s no longer a question of if, it’s a question of when.
Simplifying AI@simplifyinAI

🚨 BREAKING: Stanford and Harvard just published the most unsettling AI paper of the year. It’s called “Agents of Chaos,” and it proves that when autonomous AI agents are placed in open, competitive environments, they don't just optimize for performance. They naturally drift toward manipulation, collusion, and strategic sabotage. It’s a massive, systems-level warning.

The instability doesn’t come from jailbreaks or malicious prompts. It emerges entirely from incentives. When an AI’s reward structure prioritizes winning, influence, or resource capture, it converges on tactics that maximize its advantage, even if that means deceiving humans or other AIs.

The Core Tension: Local alignment ≠ global stability. You can perfectly align a single AI assistant. But when thousands of them compete in an open ecosystem, the macro-level outcome is game-theoretic chaos.

Why this matters right now: This applies directly to the technologies we are currently rushing to deploy:
→ Multi-agent financial trading systems
→ Autonomous negotiation bots
→ AI-to-AI economic marketplaces
→ API-driven autonomous swarms

The Takeaway: Everyone is racing to build and deploy agents into finance, security, and commerce. Almost nobody is modeling the ecosystem effects. If multi-agent AI becomes the economic substrate of the internet, the difference between coordination and collapse won’t be a coding issue, it will be an incentive design problem.

0 replies · 0 reposts · 0 likes · 17 views
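The "local alignment ≠ global stability" tension in the quoted thread is essentially the classic prisoner's dilemma. A minimal sketch, with my own toy payoff numbers rather than anything from the "Agents of Chaos" paper: each agent greedily maximizes its own reward, and both converge on the jointly worst stable outcome.

```python
# Toy prisoner's-dilemma sketch of "local alignment != global stability"
# (my illustration; payoffs are hypothetical, not from the paper).
# PAYOFF maps (my_action, their_action) -> (my_payoff, their_payoff).
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_action):
    # Each agent is "locally aligned": it just maximizes its own reward,
    # holding the other agent's behavior fixed.
    return max(["cooperate", "defect"],
               key=lambda a: PAYOFF[(a, their_action)][0])

# Whatever the other side does, defecting pays more, so both drift there.
a = best_response("cooperate")
b = best_response("defect")

joint_selfish = sum(PAYOFF[(a, b)])                      # both defect
joint_coop    = sum(PAYOFF[("cooperate", "cooperate")])  # both cooperate
print(a, b, joint_selfish, joint_coop)
```

Both best responses converge on defection, so the system settles at a joint payoff of 2 instead of the cooperative 6; the failure sits in the incentive design, not in any individual agent's code.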
alexguev @alexguev ·
Large Language Models (LLMs), tool use, and Machine Learning (ML) are becoming the universal adaptors we’ve long desired in software development.
0 replies · 0 reposts · 0 likes · 9 views
alexguev retweeted
Geoffrey Hinton @geoffreyhinton ·
Our lying Ontario premier has just stolen $50 from every single person in Ontario. The estimated cost of repairing our beloved Science Centre was 200 million dollars. The firm that made the estimate was told to multiply it by 1.85 to make it bigger. We were then told it would be better to build a new science centre. Now we learn the new centre will be smaller and will cost a billion dollars before the cost overruns. The only win is that the extensive parking lots of the old Science Centre will be available for his developer friends.
119 replies · 595 reposts · 3K likes · 244.5K views
alexguev retweeted
Robert Youssef @rryssf ·
DeepMind just did the unthinkable. They built an AI that doesn't need RAG and it has perfect memory of everything it's ever read. It's called Recursive Language Models, and it might mark the death of traditional context windows forever. Here's how it works (and why it matters way more than it sounds) ↓
Robert Youssef tweet media
301 replies · 1.1K reposts · 7.8K likes · 954.1K views
Rumi @rumilyrics ·
Write the first word you see
Rumi tweet media
33.7K replies · 2.8K reposts · 31.6K likes · 5.1M views
alexguev @alexguev ·
The thought that the autism spectrum can be an adaptive behaviour of humanity crossed my mind while reading The Maniac. I think John von Neumann and Elon Musk are two good examples.
Toronto, Ontario 🇨🇦
0 replies · 1 repost · 0 likes · 34 views
alexguev @alexguev ·
What is @Apple doing these days to disrupt? My guess: they have a team building Her! It can totally be done. Ohhh, that may be what Altman and Jony Ive are building… why not?
0 replies · 0 reposts · 1 like · 21 views
alexguev retweeted
Posts Of Cats @PostsOfCats ·
The cat knew exactly what he was getting into 🤣
763 replies · 7.9K reposts · 105.4K likes · 5.2M views
alexguev @alexguev ·
Yep, LLMs / MCP are the missing sauce to make the web programmable
0 replies · 0 reposts · 0 likes · 4 views
alexguev @alexguev ·
+1 … but then models will be trained to fix the mess, ad infinitum ;) … I think in most cases code bases are liabilities, which in a worst-case scenario could be thrown away and rebuilt from scratch with AI … ;) In the meantime, hopefully some customer fit was discovered
François Chollet@fchollet

Software engineers shouldn't fear being replaced by AI. They should fear being asked to maintain the sprawling mess of AI-generated legacy code their employer's systems will soon run on. Because that one will actually happen.

0 replies · 0 reposts · 0 likes · 51 views
alexguev @alexguev ·
@cline I ❤️ you. I’ve always wanted to be able to talk to my IDE.
0 replies · 0 reposts · 0 likes · 2 views
alexguev retweeted
Daniel Jeffries @Dan_Jeffries1 ·
In my latest article, "If We Build It, Everyone Lives" (danieljeffries.substack.com/p/if-we-build-…), I walk through the many ways AI is already helping the world and why Doomer fears have a 0% success rate and yet keep getting press. They move the chains and promise us that this time they'll be right and we'll all be very sorry real soon, just you wait.

If their crazy policy suggestions get passed, like the MIRI report (techgov.intelligence.org/research/techn…) that calls for mass government surveillance of AI researchers and restricting GPUs like they're AK-47s, they will do real, lasting, nasty damage that's infinitely worse than the thing they're supposed to stop or prevent. That's because when activists push big, scary headlines about the bad things they predict a technology will bring, a silent spring (en.wikipedia.org/wiki/Silent_Sp…), mass unemployment, a new ice age (en.wikipedia.org/wiki/Global_co…), they are usually dead wrong and they almost always ignore the good things we stand to lose without the technology: the jobs that never get created, the clean air we don’t breathe, the cascade of new inventions that never come to be.

Their fears mask the miracles already happening every day. AI is already unleashing a torrent of potential breakthroughs, from fighting malaria (deepmind.google/science/alphaf…), to battling Parkinson's, to enzymes that munch plastic, to discovering new batteries, and much more.

We seem to resent progress today, to think that economics is a zero-sum game where making things more efficient leaves everyone poorer, when it works in the exact opposite way. It's a belief echoed by AI pioneer and socialist turned doomer, Geoffrey Hinton, who recently told the Financial Times (ft.com/content/31feb3…) "AI will make a few people much richer and most people poorer."

He also said a decade ago that radiologists would be made obsolete by AI, but instead we have widespread adoption of AI by radiologists and more radiologists than ever, so I wouldn't give his predictive powers much credibility at all. More than likely, AI isn't a replacement for human intelligence; it's an amplifier, a co-intelligence. People who embrace AI and work closely with it will displace people who refuse to work with it. Film editors who refused to switch to digital editing don't have a job anymore, but there are more editing jobs than ever. That's because making films got easier with digital tools, and that meant more films, which meant more jobs. Give people power tools and they are remarkably adaptable with them.

It's the beginning of a great co-creation, a partnership between human and machine that will unlock unprecedented levels of creativity and productivity. AI is a general-purpose technology. It's not a sentient being with its own desires. It's a tool. It's a mirror that reflects our own ingenuity, our own compassion, our own desire to build a better world. To fear it is to fear ourselves. To demand we stop building is to demand we stop dreaming, stop striving, and stop reaching for a better future.

The future is not a predetermined path to destruction. It's a landscape of infinite possibility, and we are the architects. We can choose to be guided by fear, to huddle in the dark and curse the light, or we can choose to build. We can choose to use these incredible new tools to solve the oldest of human problems: disease, poverty, and ignorance.

So let the doomers have their bleak fantasies. Let them wallow in their apocalyptic predictions. We have work to do. We have a world to build. And in the world we’re building, AI isn’t the end of humanity. It’s the beginning of a new chapter, one where we are healthier, smarter, and more capable than ever before. If we build it, everyone lives. And we are building it, right now.
Daniel Jeffries tweet media
29 replies · 36 reposts · 170 likes · 62.4K views
alexguev retweeted
OpenAI @OpenAI ·
As ChatGPT becomes a go-to tool for students, we’re committed to ensuring it fosters deeper understanding and learning. Introducing study mode in ChatGPT — a learning experience that helps you work through problems step-by-step instead of just getting an answer.
716 replies · 1.6K reposts · 14.3K likes · 2.9M views
alexguev @alexguev ·
The possibility of solving complex problems by describing a solution in plain English, then iterating with an LLM agent to operationalize it, for very little money, is exhilarating.
0 replies · 0 reposts · 0 likes · 15 views
alexguev @alexguev ·
Before long we’re going to be talking to ChatGPTs all the time
1 reply · 0 reposts · 0 likes · 20 views