Jonathan Sugden

613 posts

Jonathan Sugden
@jbsugden

Making the world a better place through responsible tech and public services. Love gadgets and the simple things in life. Views are recycled from ChatGPT.

GB · Joined January 2009
387 Following · 221 Followers
Jonathan Sugden retweeted
Bury Football Club@buryfcofficial·
See you Monday Shakers 🫡
6 replies · 7 retweets · 129 likes · 10.2K views
Jonathan Sugden retweeted
Bury Football Club@buryfcofficial·
⏱️ 90+4′ Runcorn 🟡 0-3 ⚪️ Shakers A MASSIVE THREE POINTS!! GET IN!! #BuryFC | #bfc140
Bury Football Club tweet media
30 replies · 41 retweets · 276 likes · 133.9K views
Jonathan Sugden retweeted
Lior Alexander@LiorOnAI·
Just read LeCun's latest paper. His team trained the first world model that can't collapse. Let me explain why this matters.

It's called LeWorldModel. World models predict what happens next physically. Objects moving, falling, colliding. That's the base layer for robots that plan, cars that simulate before they steer, any AI that acts in reality instead of just talking about it.

The catch is nobody could train these reliably. The models kept cheating. They'd map every input to the same output. Like a weather app stuck on "sunny" forever. Technically predicting. Completely useless. So teams piled on fixes. Frozen encoders, stop-gradient hacks, 6+ loss hyperparameters. A fragile stack too brittle for production.

This team asked a different question. What if you make collapse mathematically impossible? An encoder turns each video frame into a small vector. A predictor takes that vector plus an action and guesses the next one. First loss: how wrong was the guess. Second loss: a regularizer called SIGReg that checks if vectors spread out like a bell curve. If they start looking the same, the loss spikes. The model can't cheat because the math won't let it.

That simplicity is what makes the results possible. Six hyperparameters became one. 15M parameters. Trains on one GPU in hours. Plans 48x faster. Encodes with ~200x fewer tokens. Open-source. I could run this on my own hardware.

Which changes who gets to build physical AI. Not just big labs anymore. Any team, any startup, any grad student. LeCun has pushed JEPA as the path forward. The criticism was always training instability. This paper removes that objection.

Two directions compete in AI right now. Bigger LLMs with more compute. Or small models learning physics from raw pixels.
Lior Alexander tweet media
74 replies · 170 retweets · 1.2K likes · 98.2K views
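The two-loss setup described in the thread above can be sketched in a few lines. This is a toy stand-in, not the paper's actual SIGReg (which, per the thread, tests whether embeddings spread out like a bell curve): here a simple per-dimension variance hinge plays the anti-collapse role, so a batch of identical embeddings maximizes the penalty instead of minimizing the loss.

```python
import math
import random

def prediction_loss(predicted, target):
    """First loss: mean squared error between the predicted and the
    actual next-frame embedding."""
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(target)

def variance_penalty(embeddings, eps=1e-4):
    """Second loss (toy stand-in for SIGReg): for each embedding
    dimension, penalize batches whose spread falls below a unit target.
    A collapsed batch (every frame mapped to the same vector) has zero
    variance, so the penalty is maximal and "always predict the same
    thing" stops being a free lunch."""
    n, dim = len(embeddings), len(embeddings[0])
    penalty = 0.0
    for d in range(dim):
        mean = sum(e[d] for e in embeddings) / n
        var = sum((e[d] - mean) ** 2 for e in embeddings) / n
        penalty += max(0.0, 1.0 - math.sqrt(var + eps))  # hinge at unit std
    return penalty / dim

# A collapsed batch scores near the maximum penalty; a spread-out batch near zero.
collapsed = [[0.5, 0.5] for _ in range(64)]
spread = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(64)]
```

Training would minimize `prediction_loss + variance_penalty`; the point of the construction is that the degenerate constant-encoder solution now raises the total loss rather than lowering it.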
Jonathan Sugden retweeted
Epic Clip Vault@EpicClipVault·
Jet suit paramedics can reach the top of Helvellyn in 3.5 minutes instead of 1 hour 15 minutes.
145 replies · 390 retweets · 2.4K likes · 96.5K views
Jonathan Sugden retweeted
Gandalv@Microinteracti1·
Project Hail Mary opened last week. Great film. But nobody is talking about the credits. They should be.

A guy with a telescope spent hundreds of hours collecting light from objects so distant that the photons hitting his sensor left their source before Rome was founded. His name is Rod Prazeres. His images ended up on 70-foot IMAX screens worldwide.

Look at what he captured. The Rosette Nebula is a cloud of gas 5,000 light-years away that has arranged itself into the shape of a human eye, ringed by fire. The Vela filaments are a stellar explosion still spreading outward through space – blue threads so fine they look like frost on glass. The dust pillar in the Pelican Nebula is manufacturing new suns right now. While you read this.

None of it was rendered. All of it is real.

Weir spent years getting the science right. The filmmakers felt the same way about the sky. When they needed something beautiful enough to close the film, they went looking for something that actually exists. They found it. 5,000 light-years out.
103 replies · 1.7K retweets · 11.1K likes · 1.1M views
Jonathan Sugden retweeted
Tuki@TukiFromKL·
🚨 Andrej Karpathy just explained the scariest thing happening in software right now.. someone poisoned a Python package that gets 97 million downloads a month.. and a simple pip install was enough to steal everything on your machine.. SSH keys.. AWS credentials.. crypto wallets.. database passwords.. git credentials.. shell history.. SSL private keys.. everything..

and here's the part that should terrify every developer alive.. the attack was only discovered because the attacker wrote sloppy code.. the malware used so much RAM that it crashed someone's computer.. if the attacker had been better at coding.. nobody would have noticed for weeks..

one developer.. using Cursor with an MCP plugin.. had litellm pulled in as a dependency they didn't even know about.. their machine crashed.. and that crash saved thousands of companies from getting their entire infrastructure stolen..

Karpathy's take is the real wake up call.. every time you install any package you're trusting every single dependency in its tree.. and any one of them could be poisoned..

vibe coding saved us this time.. the attacker vibe coded the attack and it was too sloppy to work quietly.. next time they won't make that mistake.
Andrej Karpathy@karpathy

Software horror: litellm PyPI supply chain attack. Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwnd. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.

285 replies · 2.2K retweets · 13.9K likes · 3.2M views
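Karpathy's contagion point, that one poisoned package compromises everything above it in the dependency graph, comes down to a small reachability check. A minimal sketch, with a hypothetical graph: only the dspy → litellm edge comes from his post, and `mcp-plugin`, `some-sdk`, and `unrelated` are made-up names for illustration.

```python
def affected_projects(dep_graph, poisoned):
    """Return every project that pulls in `poisoned` anywhere in its
    transitive dependency tree (direct edges given as project -> deps)."""
    def reaches(pkg, seen):
        if pkg == poisoned:
            return True
        if pkg in seen:       # guard against dependency cycles
            return False
        seen.add(pkg)
        return any(reaches(dep, seen) for dep in dep_graph.get(pkg, ()))
    return {p for p in dep_graph if p != poisoned and reaches(p, set())}

# Hypothetical graph; only the dspy -> litellm edge is taken from the post.
graph = {
    "dspy": ["litellm"],         # direct dependency -> compromised
    "mcp-plugin": ["some-sdk"],  # transitive dependency -> also compromised
    "some-sdk": ["litellm"],
    "unrelated": ["requests"],   # never reaches litellm -> safe
    "litellm": [],
    "requests": [],
}
```

Here `affected_projects(graph, "litellm")` returns `{"dspy", "mcp-plugin", "some-sdk"}`: anything with any path to the poisoned node, however deep, inherits the compromise, which is why the blast radius is so much larger than one package's own install count.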
Jonathan Sugden retweeted
Claude@claudeai·
You can now enable Claude to use your computer to complete tasks. It opens your apps, navigates your browser, fills in spreadsheets—anything you'd do sitting at your desk. Research preview in Claude Cowork and Claude Code, macOS only.
5K replies · 14.4K retweets · 139.2K likes · 77.4M views
Jonathan Sugden retweeted
Brady Long@thisguyknowsai·
🚨 BREAKING: Meta researchers showed a model 2 million hours of video. No labels. No physics textbook. No supervision at all. It learned gravity. Object permanence. Inertia. And it just beat Gemini 1.5 Pro and GPT-4 level models at physics understanding. Here's what just happened:
Brady Long tweet media
42 replies · 165 retweets · 953 likes · 111.6K views
Jonathan Sugden retweeted
Thomas Wolf@Thom_Wolf·
Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end):

Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch ¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic.

End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language. ² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional—it's essential.

The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks—between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high-performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement learning environments. Expect this to reshape which languages dominate.

Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written—and perhaps more importantly, read—by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter—it will be decisive.

The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. There may well be an optimal programming language for LLMs—and there's no reason to assume it will resemble the ones humans have converged on.

TL;DR:
- Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic
- Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential
- Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics
- Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive
- New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on

¹ x.com/mntruell/statu…
² x.com/anthropicai/st…
³ wesmckinney.com/blog/agent-erg…
⁴ github.com/tailwindlabs/t…
101 replies · 285 retweets · 1.8K likes · 1M views
Jonathan Sugden retweeted
Bury Football Club@buryfcofficial·
Door 6️⃣ | 6 - 0 🏆 In 1903, we won the FA Cup in record fashion, beating Derby County 6-0. The result remains the joint-largest margin of victory in an FA Cup final, equalled only recently by Manchester City. #buryfc | #bfc140
1 reply · 4 retweets · 27 likes · 7.4K views
Jonathan Sugden retweeted
vittorio@IterIntellectus·
holy shit AI can now decode the human proteome without any reference genome. a paper published today presents a model that can read raw mass spec data and generate novel peptide sequences from scratch. no database or priors. just biology. 1/
vittorio tweet media
35 replies · 223 retweets · 1.6K likes · 149.8K views
Jonathan Sugden retweeted
Matthew Berman@MatthewBerman·
Major AI breakthrough: Diffusion Large Language Models are here! They're 10x faster and 10x cheaper than traditional LLMs. Here's everything you need to know:
149 replies · 384 retweets · 3K likes · 429K views
Jonathan Sugden retweeted
João Moura@joaomdmoura·
Most people use AI wrong. They try to build one perfect agent. Instead, build a crew: • Research agent finds data • Analysis agent processes it • Writing agent creates content • Review agent polishes it Flow / Multi-Agents > Individual capability.
14 replies · 12 retweets · 185 likes · 8.4K views
Robert Sterling@RobertMSterling·
I don’t want to connect my coffee machine to the wifi network. I don’t want to share the file with OneDrive. I don’t want to download an app to check my car’s fluid levels. I don’t want to scan a QR code to view the restaurant menu. I don’t want to let Google know my location before showing me the search results. I don’t want to include a Teams link on the calendar invite. I don’t want to pay 50 different monthly subscription fees for all my software. I don’t want to upgrade to TurboTax platinum plus audit protection. I don’t want to install the Webex plugin to join the meeting. I don’t want to share my car’s braking data with the actuaries at State Farm. I don’t want to text with your AI chatbot. I don’t want to download the Instagram app to look at your picture. I don’t want to type in my email address to view the content on your company’s website. I don’t want text messages with promo codes. I don’t want to leave your company a five-star Google review in exchange for the chance to win a $20 Starbucks gift card. I don’t want to join your exclusive community in the metaverse. I don’t want AI to help me write my comments on LinkedIn. I don’t even want to be on LinkedIn in the first place. I just want to pay for a product one time (and only one time), know that it’s going to work flawlessly, press 0 to speak to an operator if I need help, and otherwise be left alone and treated with some small measure of human dignity, if that’s not too much to ask anymore.
9.4K replies · 55.2K retweets · 332.7K likes · 13M views
Jonathan Sugden@jbsugden·
@RobertMSterling My oven has Bluetooth. It serves no purpose, as all of its buttons and controls are physical. It doesn't even let me set the clock or the timer, which would actually be useful. It must be some kind of botnet agent, as surely no one is that bad at product design, right?
0 replies · 0 retweets · 0 likes · 7 views
Jonathan Sugden@jbsugden·
I'm torn between being astounded and skeptical that this is publicity before product. I'm not sure I've achieved GI, as I don't know whether to be excited or scared for our future. How are y'all feeling? 🔮🤖 youtube.com/live/SKBG1sqdy…
YouTube video
0 replies · 0 retweets · 0 likes · 16 views