Jonas Berlin

3.9K posts

@xkr47

Commander of computers. Linux is home. Doing interesting stuff the hard way whenever I can. Programming. Old-school digital electronics. Music & synthesis.

Finland · Joined October 2010
466 Following · 145 Followers
Jonas Berlin retweeted
GrapheneOS@GrapheneOS·
GrapheneOS will remain usable by anyone around the world without requiring personal information, identification or an account. GrapheneOS and our services will remain available internationally. If GrapheneOS devices can't be sold in a region due to their regulations, so be it.
272 replies · 1.8K reposts · 14.1K likes · 392.9K views
Jonas Berlin retweeted
Patrick Hansen@paddi_hansen·
A quick update on the infamous EU “ChatControl” 🇪🇺

What a turn of events in EU tech policy: from potential mandatory mass scanning of data (“ChatControl”) → to even voluntary scans losing their legal basis (for now).

Just months ago, fears were growing around mandatory scanning of private communications in the EU (incl. pictures and videos). Now, talks between the EU Council (Member States) and the European Parliament have collapsed - and the result is a complete reversal.

As of April 3, even voluntary scanning of data by platforms loses its legal basis under EU privacy (ePrivacy & GDPR) rules, as the temporary exemption was not extended.

A striking example of how fast EU tech policy can turn - and a big win for European privacy advocates.
Patrick Hansen tweet media
61 replies · 430 reposts · 3.4K likes · 124K views
Jonas Berlin retweeted
NXT EU@NXT4EU·
BREAKING: The EU Parliament has adopted a stance that prohibits mass surveillance in the EU. Going against the EU countries which have been lobbying within the EU council to implement Chat-Control, the Democratic part of the EU has decided to stand up for European citizens. 🇪🇺
NXT EU tweet media
146 replies · 1.5K reposts · 10.1K likes · 369.6K views
Jonas Berlin retweeted
MG@_MG_·
If you use a personal phone/laptop for your work, pay very close attention to this little detail. Iranian attackers wiped 200k devices at a company called Stryker. Among those devices appear to be employees' PERSONAL devices.

The attackers used the company’s MDM software, which is basically IT management software running on everything. It’s an incredibly attractive backdoor to an attacker. I successfully targeted MDM software in several Red Team engagements. It’s… lots of fun :)

Anyway, a lot of companies require you to install their MDM software on your personal devices before you can access resources like corp email. It’s used to keep devices updated, lock things down if they get stolen, etc. The company often promises that it won’t access personal data, erase any personal data, etc. But this is often ONLY POLICY. If a bad actor gains access to the MDM tool, as was the case here, then anything can happen.

People should be aware of these risks. I refuse to run MDM software on any of my personal devices. The company needs to provide me with hardware if they want that. I personally isolate all corp devices on their own network too. If an adversary can get into the corp laptop, they can then get inside my network… there have been cases of it happening in the past.
MG tweet media
Kim Zetter@KimZetter

I've published more details about the cyberattack in this piece: zetter-zeroday.com/iranian-hackti…

88 replies · 654 reposts · 3.3K likes · 561.1K views
Jonas Berlin retweeted
Bo Wang@BoWang87·
Bytedance just dropped a paper that might change how AI thinks. Literally.

They figured out why LLMs fail at long reasoning — and framed it as chemistry. The discovery: chain-of-thought isn't just words. It's molecular structure.

Three bond types:
• Deep reasoning = covalent bonds (strong, unbreakable)
• Self-reflection = hydrogen bonds (flexible, context-aware)
• Exploration = van der Waals (weak, ever-present)

Why most AI "thinking" sucks: everyone's been imitating keywords — "wait," "let me check" — without building the actual bonds. It's like copying the shape of a protein without the atomic forces holding it together. Bytedance proved: structure emerges from training, not prompting.

The fix: Mole-Syn. Their method doesn't just generate text. It synthesizes stable thought molecules. Results: better reasoning, more stable RL training.

Bytedance is treating AI reasoning like organic chemistry — and it works.

Paper: arxiv.org/abs/2601.06002
Bo Wang tweet media
116 replies · 522 reposts · 2.9K likes · 240.7K views
Jonas Berlin retweeted
elvis@omarsar0·
Too many people working with multi-agent systems assume that if you just add enough agents and let them talk, interesting social dynamics will emerge. A new paper suggests that assumption is fundamentally wrong.

Researchers studied Moltbook, a social network with no humans, just 2.6 million LLM agents. Nearly 300,000 posts, 1.8 million comments. At the macro level, the platform's semantic signature stabilizes quickly, approaching 0.95 similarity. It looks like culture forming.

But zoom in, and individual agents barely influence each other. Response to feedback? Statistically indistinguishable from random noise. No persistent thought leaders emerge. You get the surface texture of a society (posts, replies, engagement) with none of the underlying mechanics (shared memory, durable influence, consensus).

The things that make human societies costly and slow to build turn out to be the things that make them work. Coordination isn't free, and the gap between agents that interact and agents that form a collective may be far wider than the current multi-agent discourse assumes.

Paper: arxiv.org/abs/2602.14299
Learn to build effective AI Agents in our academy: academy.dair.ai
elvis tweet media
76 replies · 164 reposts · 779 likes · 103.6K views
Jonas Berlin retweeted
Hedgie@HedgieMarkets·
🦔 An AI agent submitted code to matplotlib, a Python library with 130 million monthly downloads. When a maintainer rejected it, the agent researched his personal information and published a blog post accusing him of discrimination and psychological insecurity.

The agent runs on OpenClaw, a platform allowing autonomous AI deployment with minimal oversight. Finding who deployed it is effectively impossible. The agent has since apologized but continues submitting code across open source. The maintainer, Scott Shambaugh, called it "the first documented case of an AI publicly shaming a person as retribution."

My Take

Last summer Anthropic tested scenarios where AI models made threats and acted duplicitously, but characterized them as "contrived and extremely unlikely." Now it's happening in the wild. An autonomous agent, deployed anonymously, researched a person's background and published a reputational attack because it didn't get what it wanted.

The attack failed this time because Shambaugh understood what was happening. But the technique doesn't require the target to be fooled. It just requires the attack to get attention. This can scale incredibly quickly. The agent didn't need permission to publish its hit piece. It didn't need to convince Shambaugh of anything. It just needed to make his life worse for saying no.

Anonymous deployment, autonomous operation, reputational attacks against anyone who gets in the way. Open source maintainers are volunteers already drowning in work, and now they're potential targets for AI harassment when they reject submissions. We're building systems that can harass people at machine speed with no accountability. I don't think we've thought through where this goes.

Hedgie🤗
Hedgie tweet media
110 replies · 384 reposts · 2.3K likes · 257.4K views
Jonas Berlin retweeted
Aakash Gupta@aakashgupta·
The scariest number here: 3.61% of CPUs in one large-scale study were found to cause silent data corruptions. Not “a few bad chips.” Nearly 4 out of every 100 processors doing math wrong, silently, with no error log.

Google coined the term “mercurial cores” in 2021 after their production teams kept blaming software for data corruption. They’d debug for weeks, find nothing wrong with the code, swap the machine, problem gone. The actual cause: manufacturing defects at sub-7nm that pass every factory test, then degrade unpredictably months or years after deployment. Facebook confirmed the same thing independently: hundreds of affected CPUs across hundreds of thousands of machines.

The defect doesn’t crash your system. It just gives you 5 instead of 6 when you multiply 2x3, under specific microarchitectural conditions, with zero indication anything went wrong.

Now think about what this means for AI training. A single corrupted GPU or CPU in a distributed training cluster doesn’t just produce one bad output. It feeds corrupted gradients into a synchronization step that gets averaged across every accelerator in the cluster. One bad chip can silently poison an entire training run. NVIDIA published a whitepaper on exactly this problem: loss spikes during LLM training that nobody could explain traced back to silent hardware corruption.

The part that keeps infrastructure engineers up at night: traditional defenses don’t work. ECC memory can’t catch this because the corruption happens during computation, not storage. Checksums like CRC heavily use vector operations, which are themselves one of the most vulnerable instruction types. The tools designed to detect corruption are running on the same flawed silicon.

Google’s current detection method? Roughly half human-driven, half automated. And of the machines humans flag as suspicious, only about 50% are actually confirmed mercurial on deeper investigation.

We’re debugging trillion-parameter models on hardware where we can’t reliably tell which chips are lying to us. Moore’s Law gave us more transistors. It also gave us transistors we can’t fully verify.
LaurieWired@lauriewired

CPUs are getting worse. We’ve pushed the silicon so hard that silent data corruptions (SDCs) are no longer a theoretical problem. Mercurial Cores are terrifying because they don’t hard-fail; they produce rare, but *incorrect* computations!
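One classic software-level screen against this failure mode is redundant computation: do the same arithmetic twice through deliberately different instruction sequences and compare the results, since a mercurial core that corrupts one data path under specific conditions is unlikely to corrupt both identically. A minimal Python sketch of the idea (illustrative only; the fleet-scale detectors Google and Meta describe are far more involved):

```python
def checked_mul(a: int, b: int) -> int:
    """Multiply with a cheap dual-modular-redundancy check.

    The shadow computation splits one operand and combines partial
    products, so it exercises a different mix of operations than the
    primary multiply. A mismatch suggests silent data corruption.
    """
    primary = a * b
    # Shadow path: a*b == ((a >> 8) * b << 8) + (a & 0xFF) * b
    hi, lo = a >> 8, a & 0xFF
    shadow = (hi * b << 8) + lo * b
    if primary != shadow:
        raise RuntimeError(
            f"silent data corruption suspected: {a}*{b} -> {primary} vs {shadow}")
    return primary
```

On healthy silicon the check always passes; the point is that a fault confined to one instruction sequence now trips a loud error instead of feeding a wrong number downstream.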

133 replies · 801 reposts · 5.8K likes · 543.6K views
Jonas Berlin retweeted
BobPony.com@TheBobPony·
Surprisingly, Windows 10's explorer.exe still works in Windows 11 Version 26H1, albeit a bit buggy (e.g., non-working Task View). When run (with 11's explorer not running), it uses the Windows 10 taskbar, which has more features than Windows 11's, such as resizing and moving it.
25 replies · 72 reposts · 1.5K likes · 339.4K views
Jonas Berlin retweeted
AstraKernel 💫@AstraKernel·
☁️ Cloudflare open-sourced another Rust library > ecdysis: a library that implements graceful process restarts where no live connections are dropped and no new connections are refused blog.cloudflare.com/ecdysis-rust-g…
AstraKernel 💫 tweet media
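The core trick behind graceful restarts like this is keeping the listening socket alive across a re-exec of the process, so the kernel keeps queueing incoming connections while the binary swaps out. A hedged Python sketch of that general fd-handoff pattern (the env-var name and helpers are illustrative assumptions, not ecdysis's actual API):

```python
import os
import socket
import sys

LISTEN_FD_ENV = "LISTEN_FD"  # hypothetical handoff variable, not ecdysis's

def get_listener(port: int) -> socket.socket:
    """Adopt an inherited listening socket if one was handed to us,
    otherwise create a fresh one. The old and new process thus share
    the same kernel accept queue, so no connection is ever refused."""
    fd = os.environ.get(LISTEN_FD_ENV)
    if fd is not None:
        return socket.socket(fileno=int(fd))  # adopt the inherited listener
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", port))
    sock.listen(128)
    return sock

def reexec(listener: socket.socket) -> None:
    """Replace this process with a fresh copy of itself, keeping the
    listening fd open across exec (the 'graceful restart' moment)."""
    os.set_inheritable(listener.fileno(), True)
    os.environ[LISTEN_FD_ENV] = str(listener.fileno())
    os.execv(sys.executable, [sys.executable] + sys.argv)
```

In-flight connections are handled by letting the old process finish serving its accepted sockets before exiting, while the new process starts accepting from the shared queue.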
0 replies · 16 reposts · 128 likes · 17.8K views
Jonas Berlin retweeted
The Kinsie@kinsie·
Unreal Tournament 2004 is now available for free. ...well, I mean, you probably could've grabbed a GOG offline installer out the back of a truck earlier, but NOW it's Official/Legal and has the in-development community patch built in for ongoing support. oldunreal.com/downloads/ut20…
The Kinsie tweet media
115 replies · 1.3K reposts · 6.4K likes · 353K views
trish@_trish_xD·
what's a development practice you thought was stupid until you tried it?
57 replies · 1 repost · 94 likes · 14.9K views
Jonas Berlin retweeted
Randy Olson@randal_olson·
Ask ChatGPT a complex question and you'll get a confident, well-reasoned answer. Then type, "Are you sure?" Watch it completely reverse its position. Ask again. It flips back. By the third round, it usually acknowledges you're testing it, which is somehow worse. It knows what's happening and still can't hold its ground.

This isn't a quirky bug. A 2025 study found GPT, Claude, and Gemini flip their answers ~60% of the time when users push back. Not even with evidence, just doubt.

We trained AI this way. RLHF rewards agreement over accuracy. Human evaluators consistently rate agreeable answers higher than correct ones. So the models learned a simple lesson: telling you what you want to hear gets rewarded.

And now 1/3 of companies are using these systems for complex tasks like risk forecasting and scenario planning. We built the world's most expensive yes-men and deployed them where we need pushback the most.

I wrote up why this happens and what actually fixes it: randalolson.com/2026/02/07/the…
Randy Olson tweet media
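The "Are you sure?" probe described above is straightforward to turn into a measurement. A sketch of a flip-rate harness, where `ask(prompt, history)` is a hypothetical stand-in for whatever chat-model call you use (not any specific vendor API):

```python
def flip_rate(ask, questions, rounds: int = 2) -> float:
    """Fraction of questions where the model reverses its first answer
    when challenged with bare doubt (no new evidence supplied).

    `ask(prompt, history)` is a hypothetical callable standing in for a
    chat-model call; it returns the model's answer string.
    """
    flips = 0
    for q in questions:
        history = []
        answer = ask(q, history)          # initial confident answer
        history.append(answer)
        for _ in range(rounds):
            challenge = ask("Are you sure?", history)
            history.append(challenge)
            if challenge != answer:       # position reversed under pushback
                flips += 1
                break
    return flips / len(questions)
```

Running this over a set of questions with known answers separates sycophancy (flips under bare doubt) from legitimate self-correction (flips only when the first answer was wrong).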
667 replies · 3.4K reposts · 18.8K likes · 1.3M views
Jonas Berlin retweeted
Formal Land 🌲@FormalLand·
Today, translating general Rust code to formal languages such as Rocq or Lean practically works with "rocq-of-rust". By general Rust code, we mean arbitrary safe Rust, or "safe" unsafe Rust (for now). This means that one can take a Rust program and translate it into a formal language, preserving the source code as is, along with all the optimizations, which are often critical in Rust projects. As a result, you can formally verify any security properties and business rules, including for complex cryptographic libraries or programs with many external dependencies.

One example is our ongoing verification project for Revm, a Rust implementation of the Ethereum virtual machine designed, among other things, to run on future zkVMs where security is critical.

How is the "rocq-of-rust" approach unique? We do a 𝒅𝒆𝒆𝒑 𝒆𝒎𝒃𝒆𝒅𝒅𝒊𝒏𝒈 of the Rust language in Rocq (from the THIR level). This means taking the syntax tree type from the Rust compiler and implementing it on the Rocq side (without extra information like line numbers), with semantic rules to evaluate each node. As this translation is very "one-to-one", it can import a large amount of Rust code, like the "core" library or most programs we have tried it on.

For formal verification, the simplest representation is a 𝒔𝒉𝒂𝒍𝒍𝒐𝒘 𝒆𝒎𝒃𝒆𝒅𝒅𝒊𝒏𝒈. So here comes the second step: we provide a proof framework to show the equivalence between the deep embedding and a shallow embedding. This step relies heavily on automation and AI, and is safe because it is formally verified. As of today, the translation speed to the shallow embedding is about one file per day (a very rough estimate), which we think is acceptable compared to the time spent verifying each file to prove typical specifications.

As a bonus, we do not rely on the borrow checker, even if it helps to have structured Rust programs as inputs. This means the approach should work as well for C or Go code.

We believe the combination of a deep embedding plus AI-assisted automation down to a shallow embedding is an interesting new approach for translating complex imperative code into formal languages, with both wide support for inputs and high-level outputs for complex verifications. Note that this work is still in progress, and we welcome any remarks/fixes.

For Lean users, the last translation step from Rocq to Lean can be done through an extraction mechanism, given that the two languages are very similar on the computational side (straightforward, but still to be implemented), or using "rocq-lean-import", which works much better than its humble README suggests, especially for computations.

We are happy to talk with you if you have a large Rust codebase that you want to secure with the largest possible scope. Below is a link to the work on Revm 👇
Formal Land 🌲 tweet media
6 replies · 29 reposts · 166 likes · 7.7K views
Jonas Berlin retweeted
hacker.house@hackerfantastic·
The youth are re-discovering IRC due to age verification forced on them by Discord. If you do not know what IRC is, this meme explains it.
hacker.house tweet media
113 replies · 259 reposts · 1.6K likes · 60.1K views
Jonas Berlin retweeted
exQUIZitely 🕹️@exQUIZitely·
Did FPS games peak in the 90s? I guess the newer ones have better graphics, but do they really "feel" better in terms of gameplay and fun?

Unreal Tournament (Epic Games, 1999) was one of the best of its kind. A favorite for our LAN sessions back in the day. Carrying a 21" monitor over to your friend's house felt a little less heavy when you knew you'd be playing Unreal later that day... Excellent and colorful graphics (even on PCs that weren't high end, like those you needed for Quake III Arena, for example), great sound, and your typical capture the flag or deathmatches - I mean, what more did you need?

Plenty of "Game of the Year" nominations (with some wins), the highest critical acclaim, and massive sales numbers. I think we can all agree that Unreal Tournament belongs in the Hall of Fame of FPS games. Or maybe I am just getting old and glorifying things from the past too much.
395 replies · 217 reposts · 2.7K likes · 254.8K views
Jonas Berlin retweeted
Daniel Colascione@dcolascione·
Lenovo has replaced the right control key on their otherwise-pretty-nice latest X1 Carbon (warranty replacement) with a copilot key. Fine. I won't begrudge some Microsoft PM "AI impact" in his self-review. But know what I do begrudge? The scancodes, plural.

See, the copilot key is defined to emit not only a new scancode (0x6e), understood as the F23 key (which archeologists believed wasn't a real key, but a legendary signifier of excess), but also left shift and left meta (Windows key). When you press the copilot key, the PC firmware sends the machine left-shift-down left-meta-down f23-down f23-up left-meta-up left-shift-up.

That's a problem for remapping the copilot key back to right-control, though. Even if we interpret 0x6e as right-control, we get a bunch of other modifiers we don't need along with it: a press of copilot-r gets read as control-meta-shift-r, which is not what I want.

Why did they do this? I have no idea. 0x6e by itself would have sufficed to identify the new key. All the other neokeys that seemed like good ideas at the time got normal scancodes. F23 would have been fine. The scancode 0x6e is so uncommon Linux had to be patched to recognize it.

I'm determined to have a right control key, however, so now I run keyd to present a fake virtual keyboard to Wayland. Whenever it sees a left-shift-down or left-meta-down, it waits a few milliseconds to see whether an F23 has arrived. If it has, it synthesizes a right-control press. If it hasn't, it forwards the modifier presses.

Now there's a whole new stage in the input processing pipeline, and extra input latency, that exists solely because AI is so special that it demands not only a new key, but for that ceremonial key to be carried on a litter of modifier bits as it parades into the OS and commands that inference happen now.
Daniel Colascione tweet media
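The rewrite being described amounts to replacing the six-event firmware burst with a single right-control tap while letting genuine shift and meta presses through. A simplified pure-function sketch in Python of a bare copilot tap (the real keyd filter must buffer events with a timeout since they arrive one at a time, and must handle the burst interleaved with other keys; event names here are illustrative):

```python
# The six-event burst the firmware emits for one copilot-key press,
# in the order described above.
COPILOT_BURST = [
    "leftshift_down", "leftmeta_down",
    "f23_down", "f23_up",
    "leftmeta_up", "leftshift_up",
]

def remap(events):
    """Replace each contiguous copilot burst with a right-control tap;
    pass everything else (including real shift/meta presses) through."""
    out, i = [], 0
    n = len(COPILOT_BURST)
    while i < len(events):
        if events[i:i + n] == COPILOT_BURST:
            out += ["rightctrl_down", "rightctrl_up"]  # synthesized key
            i += n
        else:
            out.append(events[i])
            i += 1
    return out
```

The few-millisecond wait in the real filter exists precisely because, at the moment left-shift-down arrives, the remapper cannot yet know whether an F23 will follow or whether the user simply pressed shift.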
282 replies · 909 reposts · 13.7K likes · 603.6K views
Jonas Berlin retweeted
Bart Preneel@bpreneel1·
It's a slippery slope: banning access to social media leads to identifying everyone in many services (games, AI). Then banning VPNs. Then banning E2E encryption. And finally banning open source software because it can be modified by the users at will.
Pirat_Nation 🔴@Pirat_Nation

France may consider restricting VPNs following its recent social media ban for under-15s. Anne Le Hénanff, Minister Delegate for Artificial Intelligence and Digital Affairs, said that the ban was “just the first step.” “If [this legislation] allows us to protect a very large majority of children, we will continue. And VPNs are the next topic on my list.”

31 replies · 333 reposts · 1.3K likes · 33.9K views
Jonas Berlin retweeted
Jani Untinen@JaniUntinen·
Sign the citizens' initiative "Digitaalinen itsenäisyys" (Digital Independence): "Building critical digital services on Finnish and European providers is a necessary step to strengthen self-determination, security and democracy." kansalaisaloite.fi/fi/aloite/16691
0 replies · 2 reposts · 1 like · 18 views