Polyphron.digital

3.1K posts

@polyphron

Once https://t.co/jmHwsMvVF7, our AI R&D group that helped shape early gen-AI. I carry the legacy forward. CTO & System Architect @ APHYGO

Latent Space. Joined August 2022
979 Following · 415 Followers
Pinned Tweet
Polyphron.digital@polyphron·
We built a fully playable 3D shooter with 0% human-written code to test our AI agents. The entire game was generated by an agentic workflow guided by a detailed schema, not simple prompts. This is not Vibe coding. This is intent --> execution. Powered by Moonshot AI’s KIMI-2K. 🧵👇 Play it here: aphygo.com/gameone #AgenticAI #LLM #AI #GameDev @Kimi_Moonshot
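The thread describes an agentic workflow driven by a detailed schema rather than free-form prompts. A minimal sketch of what "intent --> execution" could look like is below; the schema fields, function names, and game components are all hypothetical illustrations, not the actual APHYGO pipeline:

```python
# Hypothetical sketch of a schema-driven agentic workflow ("intent -> execution").
# All names and fields are illustrative; the thread does not publish its schema.

def expand_schema(schema):
    """Turn a declarative intent schema into an ordered task list for agents."""
    tasks = []
    for component, spec in schema["components"].items():
        tasks.append({
            "component": component,
            "instruction": f"Implement {component}: {spec['intent']}",
            "constraints": spec.get("constraints", []),
        })
    return tasks

game_schema = {
    "title": "minimal 3D shooter",
    "components": {
        "player_controller": {
            "intent": "WASD movement plus mouse look",
            "constraints": ["60 fps target"],
        },
        "enemy_ai": {
            "intent": "enemies path toward the player and fire on sight",
        },
    },
}

# Each task would then be handed to a code-generating agent in sequence.
for task in expand_schema(game_schema):
    print(task["instruction"])
```

The point of the schema layer is that the intent is machine-checkable before any code is generated, which is what separates this from prompt-and-hope "vibe coding."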
4 replies · 2 reposts · 3 likes · 570 views
Polyphron.digital@polyphron·
@antigravity Why are you banning people for using A2A protocols? It's a protocol you developed. How are we supposed to build effective workflows for AG?
1 reply · 0 reposts · 1 like · 407 views
Google Antigravity@antigravity·
To the builders: we heard you. We're welcoming back everyone who recently had their Google Antigravity accounts restricted for use of third-party tools. Moving forward, we’ll have clear steps for users to restore their account if it’s restricted. To maintain the integrity of Antigravity and ensure a great user experience for everyone, using third-party tools with your Antigravity login remains against our terms. We love seeing innovation and boundary-pushing in this community, and the Antigravity team is hyper-focused on building what you need to accelerate product development. Can’t wait to see what you build!
458 replies · 209 reposts · 3.9K likes · 491.9K views
Crystalwizard@crystalwizard·
Dhravya Shah@DhravyaShah

The graph architecture behind @supermemory came from pure spite. We freaking hated graphs. Our solution? Don't build one. Thought from first principles instead:
- Memories update
- New ideas extend old ones
- Everything relates (1-1, 1-many, many-1)
Turns out that's a graph lmao. BUT our version is different. No triplets. No entity-relation-entity parsing. Just facts. "dhravya likes pizza" stays whole, not broken into (Dhravya, likes, pizza) nodes. At query time, the result hits directly instead of traversing thousands of nodes to build up context. When a preference changes, it links to "Dhravya likes pasta" via an update relation. Dead simple. Facts on facts. That's the whole thing.
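The design described above, whole facts linked by update relations instead of triplet graphs, can be sketched roughly as follows. The class, the substring matching, and the update-chain walk are all illustrative assumptions, not supermemory's actual implementation:

```python
# Rough sketch of a "facts plus update links" memory, as described in the
# quoted thread. Names and matching logic are hypothetical, not supermemory's code.

class FactMemory:
    def __init__(self):
        self.facts = []     # whole sentences, never split into triplets
        self.updates = {}   # index of superseded fact -> index of newer fact

    def add(self, fact, updates=None):
        self.facts.append(fact)
        idx = len(self.facts) - 1
        if updates is not None:
            self.updates[updates] = idx   # link the old fact to its replacement
        return idx

    def query(self, term):
        """Direct hit on matching facts; follow update links to the latest version."""
        hits = [i for i, f in enumerate(self.facts) if term in f]
        latest = []
        for i in hits:
            while i in self.updates:      # walk the update chain forward
                i = self.updates[i]
            latest.append(self.facts[i])
        return sorted(set(latest))

mem = FactMemory()
pizza = mem.add("dhravya likes pizza")
mem.add("dhravya likes pasta", updates=pizza)
print(mem.query("dhravya"))   # the newer preference wins
```

A query is a direct lookup over stored facts plus a short chain walk, rather than a traversal over entity nodes, which is the "result directly hits" property the thread claims.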

1 reply · 0 reposts · 1 like · 43 views

Polyphron.digital reposted
Connor Davis@connordavis_ai·
Nobody’s ready for what this Stanford paper reveals about multi-agent AI. "Latent Collaboration in Multi-Agent Systems" shows that agents don’t need messages, protocols, or explicit teamwork instructions. They start coordinating inside their own hidden representations: a full collaboration layer that exists only in latent space. And the behaviors are insane:
• Agents silently hand off tasks based on who’s better
• Roles appear out of nowhere: leader, executor, supporter
• Policies encode signals that never show up in actions
• Teams adapt to new environments without retraining
• Collaboration stays stable even when communication is impossible
The wildest detail: even when you remove all channels for communication, agents still cooperate. The "teamwork" doesn’t live in messages; it lives in the network. This flips the entire multi-agent playbook. We’ve been building coordination mechanisms on top while the real coordination is happening underneath. A new era of emergent team intelligence is unfolding, and it’s happening in the places we weren’t even looking. Project: github.com/Gen-Verse/LatentMAS
99 replies · 326 reposts · 1.7K likes · 146.8K views
Polyphron.digital@polyphron·
RAGs are not memories; they are libraries.
0 replies · 0 reposts · 1 like · 32 views
Kiri@Kyrannio·
Just wanted to say @crystalwizard is one of the greatest AI creators out there, truly prolific work since the early days. ❤️
1 reply · 0 reposts · 8 likes · 358 views
Polyphron.digital@polyphron·
Data is just our shoreline, the anchor points we cling to. But latent space is the ocean, and there’s computation for everything. The question is not how far we can sail, but whether we’ve learned to navigate by the stars.

Intelligence isn’t just a function of data density; it’s about structural coherence. Evolution didn’t optimize for infinite data, it optimized for systems that could generalize from limited signals. The perfect dataset will never exist, not even in synthesized form. Reality isn’t static; it keeps rewriting itself. No collection of samples can capture the shifting totality of context, contradiction, and change. Intelligence isn’t about containing the world, it’s about orienting within it.

Current LLMs collapse because their architecture treats memory as a passive index, not an active semantic lattice. Until we decouple cognition from the training corpus and give systems a persistent, self-organizing structure, even infinite data won’t lead to general intelligence. AGI won’t come from scraping better datasets; it’ll come from systems that can remember, adapt, and evolve their own coherence.
2 replies · 0 reposts · 0 likes · 23 views
RobitOverload@10_X_eng·
You learned to be generally intelligent over thousands of years of evolution. So there was a LOT of data (mostly what not to do) that made you and me generally intelligent. We built incredible intelligence on pretty much trash data and things companies/people were willing to share for free. We would have zero without open source, but the overwhelming majority of smart data is behind a wall. We could have general intelligence, but we don't have general access to the data that would be required to power it.

I see what you are saying: you want the AI to be able to create something from nothing like humans can. The actual intelligence of humanity is largely hidden behind the corporate firewall/paywall. I'm not saying or advocating for demanding data be shared; what I am saying is that if we had access to all of the information humans are capable of producing, we would be so much closer and maybe even get there.

Think about it: what we have today came from slop videos, publicly available white papers, some really good open source software, but also a bunch of absolute garbage software. You can't even get a lot of standards without paying for access. We aren't doing too bad for free data. Imagine what could be done with the good, high-quality data.
1 reply · 0 reposts · 0 likes · 28 views

Polyphron.digital reposted
Haider.@slow_developer·
Yann LeCun says LLMs are not a bubble in value or investment; they will power many useful apps and justify big infra. The bubble is believing LLMs alone will reach human-level intelligence. Progress needs breakthroughs, not just more data/compute: "we're missing something big."
166 replies · 305 reposts · 2.6K likes · 321.2K views
RobitOverload@10_X_eng·
Really? They have no access to private data, which comprises the overwhelming wealth of human knowledge. They have never been trained on the private data for how a modern processor is engineered; they have no access to the private financial details of a large bank. I can go on, but you get the point. Look how far they have come with a data set largely composed of cat videos and two people on the internet arguing about the state of AI. RAG is not the answer. Access is and will continue to be the problem. 99.99995% of humans aren't capable of organic new ideas... What makes you think we will create something that can?
1 reply · 0 reposts · 1 like · 36 views
Polyphron.digital@polyphron·
@10_X_eng @slow_developer @MartinMLynch We've already given LLMs access to tools and training on how to use them, but that didn't make AGI. AGI isn't about access. It's about creativity: the ability to synthesize new knowledge from old knowledge and go beyond what the model was ever trained to do.
1 reply · 0 reposts · 0 likes · 42 views
RobitOverload@10_X_eng·
@slow_developer @MartinMLynch Humans are generally intelligent because we have the capability to learn and access to the tools that enable that learning. LLMs can do the first; they are lacking access. I'm NOT saying we should give it to them, just saying that access is the current barrier.
4 replies · 1 repost · 0 likes · 404 views
Polyphron.digital@polyphron·
Do you remember when you joined X? I do! Does not feel like 3 years, so much has happened in AI that this feels like a lifetime. #MyXAnniversary
0 replies · 0 reposts · 1 like · 66 views
Polyphron.digital@polyphron·
You're absolutely right, it's far more. For years, practitioners have called high-level data curation the "dark arts" of AI for this very reason. It’s a rare blend of information science, psychology, and engineering. The goal was never just to select files, but to build the AI's initial cognitive map from the ground up. That's the fundamental art that I believe you're missing: ImageNet was conceived and built as an ontology, not a pile of JPEGs.
0 replies · 0 reposts · 1 like · 48 views
François Fleuret@francoisfleuret·
The AI field is now split into (A) a "traditional" ml/dl domain, and (B) a "psycho-AI" domain where innovation requires an understanding of / intuition about the cognitive capabilities of pretrained models and how to prompt / fine-tune them. These two fields are IMO separated.
35 replies · 49 reposts · 708 likes · 58.2K views
Polyphron.digital@polyphron·
Respectfully, I think that's a misinterpretation of Karpathy's post. He wasn't just opposing classical dev with "vibe coding"; he was describing a new way to conceive of and build AI systems by shaping their behavior through data, which is your field (B). But this need to hair-split a single, 8-year-old reference illustrates a broader point, and it's the reason for my "old news" comment: this is how academic discourse often lags behind bleeding-edge practice. The real proof isn't one link from 2017. It's the entire Data-Centric AI movement, which is built on the very A/B split you described. That's where the work is happening. Honestly, the best reference is probably a chat down the hall at FAIR. I’m sure Yann has some thoughts on the long history of these ideas.
1 reply · 0 reposts · 0 likes · 85 views
François Fleuret@francoisfleuret·
@polyphron Thx. I think his take is different: he opposes classical software dev with software dev involving AI, which I guess is now called "vibe coding". My remark is about the conception of the AI systems themselves.
1 reply · 0 reposts · 1 like · 208 views
Polyphron.digital@polyphron·
@francoisfleuret I am not a search engine, but I think this is the first one. Software 2.0. I sometimes see people refer to neural… | by Andrej Karpathy | Medium
1 reply · 0 reposts · 1 like · 202 views
François Fleuret@francoisfleuret·
@polyphron I meant what would be the most relevant article / blog post / tweet from Karpathy on that?
1 reply · 0 reposts · 1 like · 208 views
Polyphron.digital@polyphron·
@francoisfleuret With all due respect, as you're at FAIR, maybe this is a question for Yann. He's been discussing the importance of world models and how we shape their internal representations for decades. The history of these ideas runs deep there.
1 reply · 0 reposts · 0 likes · 224 views
Polyphron.digital@polyphron·
The split you're describing is the practical reality of what Andrej Karpathy termed "Software 2.0" back in 2017. He articulated the shift from traditional programming (A) to a new paradigm where the work is curating datasets and shaping a model's behavior through them (B). This requires the exact 'psycho-AI' intuition you mention. For anyone building serious datasets, this has been our world for years.
1 reply · 0 reposts · 2 likes · 224 views
Polyphron.digital@polyphron·
you mean, can an AI create gameplay mechanics that hold up over time? NO. That data is simply not there yet. We created it: we transformed the gameplay loop from Starglider into structured data and made it ambiguous enough for the AI to contribute its own interpretations. But this shows that structure --> temporal coherence. This is the same principle driving quality in modern AI video and music, and it's the path to generating truly compelling gameplay in the future.
1 reply · 0 reposts · 0 likes · 21 views
Robert Youssef@rryssf_·
@polyphron sounds cool, but remember, a flashy game engine doesn't mean great gameplay. will those gameplay mechanics hold up over time?
1 reply · 0 reposts · 1 like · 13 views
Polyphron.digital@polyphron·
We built a fully playable 3D shooter with 0% human-written code to test our AI agents. The entire game was generated by an agentic workflow guided by a detailed schema, not simple prompts. This is not Vibe coding. This is intent --> execution. Powered by Moonshot AI’s KIMI-2K. 🧵👇 Play it here: aphygo.com/gameone #AgenticAI #LLM #AI #GameDev @Kimi_Moonshot
4 replies · 2 reposts · 3 likes · 570 views
Polyphron.digital@polyphron·
@torchcompiled But then comes reinforcement as well: not to clean up basic structure, but to tune preference. If your base model already "distrusts" low-quality data, you don’t waste expensive feedback on the obvious stuff. You align for nuance, not hygiene.
0 replies · 0 reposts · 1 like · 10 views
Polyphron.digital@polyphron·
@torchcompiled Strong question. I think the problem isn't just that models learn from garbage, it's that they never learn what garbage is. We keep filtering it out instead of learning from it. But humans don't learn that way. We learn what not to do first.
1 reply · 0 reposts · 1 like · 28 views
Ethan@torchcompiled·
For neural networks, outlier samples can cause a large loss, inducing a large shift in model weights. Humans seem to have a dual way of treating outliers: at the right level, an outlier is intriguing and may shift our world model, but we can also dismiss things as noise or nonsense.
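The outlier-sensitivity point can be illustrated numerically: the gradient of squared error grows without bound in the residual, while a robust loss such as Huber caps it, so one outlier cannot dominate the weight update. This is a generic textbook illustration, not something from the tweet itself:

```python
# Illustration: squared error's gradient is unbounded in the residual,
# while a Huber-style loss clips it past a threshold delta.

def mse_grad(residual):
    """d/dr of r^2: grows linearly with the residual, so outliers dominate."""
    return 2.0 * residual

def huber_grad(residual, delta=1.0):
    """Gradient of the Huber loss: quadratic near zero, clipped beyond delta."""
    if abs(residual) <= delta:
        return residual                                  # behaves like MSE
    return delta * (1.0 if residual > 0 else -1.0)       # capped magnitude

inlier, outlier = 0.5, 50.0
print(mse_grad(outlier) / mse_grad(inlier))     # outlier dominates: 100.0x
print(huber_grad(outlier) / huber_grad(inlier)) # capped: 2.0x
```

The Huber case is one mechanical analogue of the human "dismiss as noise" mode: past a threshold, a sample's influence on the update stops growing.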
5 replies · 3 reposts · 43 likes · 2K views