Sinyx
@sinyxdev

948 posts

ultra-productivity, ultra-curiosity, ultra-efficiency : that's the goal. ↔️ https://t.co/SZnIP3L8cG

The Web · Joined March 2017
658 Following · 153 Followers
Sinyx retweeted

Simplifying AI @simplifyinAI
🚨 BREAKING: Yann LeCun's team just dropped a world model that runs on a single GPU. It is called LeWorldModel. And to understand why it's a massive deal, you have to understand the fatal flaw in every AI you use today.

LLMs only predict the next word. They are incredibly good at language, but they have absolutely no understanding of reality. They can write a beautiful poem about a ball bouncing off a wall. But they cannot predict where the ball will actually land.

World models predict physics. Objects moving, colliding, and falling. It is the foundational intelligence required for robots to plan and self-driving cars to navigate.

But until today, world models kept collapsing. They would cheat the test by predicting the exact same output every time. LeCun's team just solved it. They built a 15-million parameter model that learns the laws of physics directly from raw pixels. It uses 200x fewer tokens than the alternatives. No massive supercomputers. No billion-dollar clusters. Just a single GPU and a few hours of training.

We spent the last two years teaching AI how to talk. Now, we are teaching it how to see.
[image]

46 replies · 297 reposts · 1.4K likes · 193.2K views
Sinyx retweeted

François Chollet @fchollet
ARC-AGI-3 is out now! We've designed the benchmark to evaluate agentic intelligence via interactive reasoning environments. Beating ARC-AGI-3 will be achieved when an AI system matches or exceeds human-level action efficiency on all environments, upon seeing them for the first time. We've done extensive human testing that shows 100% of these environments are solvable by humans, upon first contact, with no prior training and no instructions. Meanwhile, all frontier AI reasoning models score under 1% at this time.
184 replies · 322 reposts · 2.6K likes · 541.6K views
Sinyx retweeted

Google Research @GoogleResearch
Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI
[GIF]

1K replies · 5.8K reposts · 39K likes · 19.1M views
Sinyx retweeted

Mehdi (e/λ) @BetterCallMedhi
your curiosity is your greatest asset

the school system spends years training you to find the right answer

the real world rewards the ones who keep asking better questions

the most important thing you can do after school is unlearn the fear of being wrong & get back to that childlike state where not knowing something feels like an invitation not a threat

the kids who take everything apart just to see how it works grow up to be the ones who build what didn't exist before
8 replies · 41 reposts · 206 likes · 5.8K views
Kevin Lacker @lacker
I think LLMs will encourage the adoption of new programming languages.

1. It’s easier for an LLM to learn a new language than for a human to learn a new language. Just try it out and you’ll see.
2. LLMs are good at porting old libraries to new languages.
Eric S. Raymond@esrtweet

My experience with LLM-assisted coding has been great and I'm a big fan of it, but I've just had a slightly depressing realization. It may almost entirely shut down the development and adoption of new computer languages.

The percentage, and probably the absolute amount of code, handwritten by humans is going to fall a great deal. But for the foreseeable future, LLMs won't be able to write code fluently in a specific language without having a large volume of good code in that specific language already available to train on. For a new language in 2026 and after, where exactly is that large volume of good training data going to come from? Probably not from human beings, and where is the incentive for an LLM handed a vibecoding task to go looking for an exotic new language to do it in?

I find this slightly depressing, because I enjoy contemplating new-language development the way a more physical tinkerer enjoys salivating over shiny new tools.

Human beings are still going to write new languages occasionally, because that's huge fun (if you have a brain bent anywhere like the way mine is) and still a way to climb some status ladders. But with the barrier to mass adoption getting so much higher, I have to think the level of research and engineering activity put into this is going to drop a lot.

There is one not-unhappy but rather weird way I could be wrong about this. Historically, once the development of compilers got to a certain point it became clear that designing machine instruction sets to be easily reasoned about by humans was a big mistake. We had to figure out how to design machine instruction sets that were easy for the compilers to reason about. Thus, RISC.

It could be that's the future of language design, too. But I have no idea what a new language design optimized for LLM code generation would look like. And I don't think anybody else does, either. Interesting times, indeed.

9 replies · 1 repost · 37 likes · 8.9K views
Sinyx retweeted

François Chollet @fchollet
AI agents will soon graduate to fully-fledged economic actors that buy services, compute, and even data in the course of accomplishing high-level goals. 1-2 years before we start seeing this at scale.
198 replies · 185 reposts · 1.7K likes · 262.1K views
Sinyx retweeted

Olivier Duchenne @inventorOli
At Mistral, robots keep their workspace tidy. Autonomous, 1x. Robostral WMa1. wip 🚧
22 replies · 60 reposts · 611 likes · 53.5K views
Sinyx retweeted

Anthropic @AnthropicAI
We partnered with Mozilla to test Claude's ability to find security vulnerabilities in Firefox. Opus 4.6 found 22 vulnerabilities in just two weeks. Of these, 14 were high-severity, representing a fifth of all high-severity bugs Mozilla remediated in 2025.
[image]

487 replies · 1.4K reposts · 15.2K likes · 3.2M views
Sinyx retweeted

jacob @jsnnsa
My whole theory since leaving Robinhood: you can build a $100B company with under 20 people. Not as a constraint but as a strategy. The density of talent per person matters more than headcount. Renaud is one of the best 3D web engineers alive. He spent a decade building the tools the entire 3D web runs on. Today he's building Spawn's engine. This is what that theory looks like in practice.
Renaud@onirenaud

My dream has always been simple: anyone should be able to create any 3D world or game directly in the browser. For the first time, I believe it's becoming possible; that's why I'm joining @spawn to help build the future of the game industry.

We believe AI + WebGPU will unlock an era where creation isn't owned by a few engines. Spawn will acquire Three.js Blocks and merge Blocks' advanced WebGPU features into its renderer. We also plan to open source many features this year.

Three.js Blocks users will be immediately fully refunded; thank you for believing in my work. My OSS contributions (tools + Three.js) will only get stronger (Spawn already has PRs in Three.js). Utsubo remains strong and will continue operating.

31 replies · 68 reposts · 2.3K likes · 251.9K views
Sinyx @sinyxdev
@PaddleHQ Hi, my team sent an email to your support address and we haven't received any reply. It's been 5 days, and this is an urgent issue for us (we need access to hosted checkout).
2 replies · 0 reposts · 2 likes · 32 views
Sinyx retweeted

dax @thdxr
this thing about open source projects making their tests private

you're either open source or not - part of that is enabling a successor to show up for net benefit of the world

making that intentionally difficult is fine but then it's something different
74 replies · 35 reposts · 1.2K likes · 211K views
Sinyx retweeted

Harminder Virk @AmanVirk1
v7 is finally out 🚀 I've shipped a lot of AdonisJS releases over the years. This one is different. End-to-end type safety was something I've wanted to get right for a long time, and I think we nailed it. Watch the video - you'll see what I mean. Big thanks to @julien_rpt and @romainlanz for helping in every way possible. And, kudos to Insiders for sponsoring my work 🙏
AdonisJS@adonisframework

AdonisJS v7 is here 🚀

17 replies · 25 reposts · 232 likes · 8.9K views
Sinyx retweeted

Joran Dirk Greef @jorandirkgreef
The most valuable asset you can invest in as a software engineer is:

- Understanding

The more you understand, the deeper your understanding, the greater your impact. This means crafting “one level deeper”, thinking more. It may take years, but understanding will reward you.
Joran Dirk Greef@jorandirkgreef

People never paid you:
- for the time it took to write the code,
- but for the value you created.

Focus on tools that improve quality and value in the software you ship.

11 replies · 30 reposts · 309 likes · 22.5K views
Sinyx retweeted

Mitchell Hashimoto @mitchellh
AI eliminated the natural barrier to entry that let OSS projects trust by default. People told me to do something rather than just complain. So I did.

Introducing Vouch: explicit trust management for open source. Trusted people vouch for others. github.com/mitchellh/vouch

The idea is simple: Unvouched users can't contribute to your projects. Very bad users can be explicitly "denounced", effectively blocked. Users are vouched or denounced by contributors via GitHub issue or discussion comments or via the CLI. Integration into GitHub is as simple as adopting the published GitHub actions. Done. Additionally, the system itself is generic to forges and not tied to GitHub in any way.

Who and how someone is vouched or denounced is up to the project. I'm not the value police for the world. Decide for yourself what works for your project and your community.

All of the data is stored in a single flat text file in your own repository that can be easily parsed by standard POSIX tools or mainstream languages with zero dependencies.

My hope is that eventually projects can form a web of trust so that projects with shared values can share their vouch lists with each other (automatically) so vouching or denouncing a person in one project has ripple effects through to other projects.

The idea is based on the already successful system used by @badlogicgames in Pi. Thank you Mario. Ghostty will be integrating this imminently.
223 replies · 360 reposts · 4K likes · 591K views
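The flat-file trust list described above can be sketched in a few lines. This is not Vouch's actual file format (that is defined by the github.com/mitchellh/vouch project); the line-based `vouch <user>` / `denounce <user>` layout and the `parseTrustFile` helper below are assumptions purely to illustrate why a single flat text file with zero dependencies is enough for this kind of trust check.

```javascript
// Hypothetical sketch of a flat-file trust list. The real Vouch format
// may differ; we assume lines like "vouch alice by mitchellh" and
// "denounce mallory by mitchellh", with "#" starting a comment.
function parseTrustFile(text) {
  const vouched = new Set();
  const denounced = new Set();
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks/comments
    const [verb, user] = trimmed.split(/\s+/);
    if (verb === "vouch") vouched.add(user);
    else if (verb === "denounce") denounced.add(user);
  }
  // A denouncement overrides any vouch for the same user.
  return {
    canContribute: (user) => vouched.has(user) && !denounced.has(user),
  };
}
```

Because each record is one whitespace-delimited line, the same check works with standard POSIX tools, e.g. `grep "^vouch alice" VOUCHED` in a CI step.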
Sinyx retweeted

Yann LeCun @ylecun
@andrewgwils Precisely. Solving problems using known templates is only a small component of intelligence.
20 replies · 14 reposts · 230 likes · 14.3K views
Sinyx retweeted

Jarred Sumner @jarredsumner
reading LLM-generated usage of node:fs is still very painful

so much readFileSync(path).slice(0, max)
so much if (existsSync(path)) statSync(path)
so much readdirSync(path).map(_ => unlinkSync(_))
20 replies · 18 reposts · 596 likes · 87.4K views
Sinyx retweeted

Brendan Dolan-Gavitt @moyix
Lots of people are claiming this is useless, unimpressive, etc but I will note that compiling the Linux kernel on x86_64 to the point where it could boot and run took Clang/LLVM about 10 years! And they, too, had access to a reference gcc compiler :p
Anthropic@AnthropicAI

New Engineering blog: We tasked Opus 4.6, using agent teams, with building a C compiler. Then we (mostly) walked away. Two weeks later, it worked on the Linux kernel. Here's what it taught us about the future of autonomous software development. Read more: anthropic.com/engineering/bu…

39 replies · 22 reposts · 535 likes · 75.5K views
Sinyx retweeted

vixhaℓ @TheVixhal
Software engineering has always contained two distinct modes of work.

The first is developmental: taking a clearly specified concept and translating it into a reliable, working system. This is no longer the bottleneck. AI tools like Claude Code and Codex have effectively solved it.

The second mode is research. Here, the problem itself is undefined. The task is not to implement a solution, but to discover what the solution should be: new abstractions, algorithms, architectures, and ways of reasoning about computation. This layer resists automation because it depends on framing, taste, and deep conceptual synthesis rather than procedural construction. While AI can assist exploration, it does not yet originate the governing questions that drive genuine breakthroughs.

For that reason, software engineering is unlikely to disappear. Instead, its center shifts toward the research frontier.
55 replies · 78 reposts · 497 likes · 42.4K views