Stanislav Fort @stanislavfort
2.4K posts
AI security @Aisle_Inc | Stanford PhD in AI & Cambridge physics | ex-Anthropic and DeepMind | scientific progress + economic growth

Prague · Joined May 2009
8.1K Following · 15.3K Followers

Pinned Tweet
Stanislav Fort @stanislavfort
AISLE is now the #1 source of accepted security findings in OpenClaw, the fastest-growing AI agent framework. Our AI discovered 15 vulnerabilities: 1 Critical (CVSS 9.4), 9 High, 5 Moderate. 21% of all OpenClaw security advisories globally are from us, more than anyone else ⏬
Mario Krenn @MarioKrenn6240
After the apparently amazing announcement by @mathematics_inc on the formalization of a major recent Fields Medal-winning theorem, I had no idea how pissed the math-formalization community is. Very worrying discussions by some of the leaders/founders of Lean's mathlib. cc @ChrSzegedy
Stanislav Fort retweeted
Surya Ganguli @SuryaGanguli
Our new paper: "Solving adversarial examples requires solving exponential misalignment", expertly led by @AleSalvatore00 w/ @stanislavfort arxiv.org/abs/2603.03507

Key idea: We all want to align AI systems to human values and intentions. We connect adversarial examples to AI alignment by showing they are a prototypical but exponentially severe form of misalignment at the level of perception. The fact that adversarial examples have remained unsolved for over a decade thus serves as a cautionary tale for AI alignment, and provides new impetus for revisiting them.

We shed light on why adversarial examples exist and why they are so hard to remove by asking a basic question: what is the dimensionality of neural network concepts in image space? For ResNets and CLIP models, we show that neural network concepts (the space of images the network confidently labels as a concept) fill up almost the ENTIRE space of images (~135,000 dimensions out of ~150,000 for ImageNet & ~3,000 out of 3,072 for CIFAR-10). In contrast, natural image concepts are only ~20-dimensional.

This indicates exponential misalignment between brain and machine perception: neural networks perceive exponentially many images as belonging to a concept that humans never would. This also explains why adversarial examples exist: if a concept fills up almost all of image space, ANY image will be close to that concept manifold.

We further run experiments across >20 networks showing that adversarial robustness inversely relates to concept dimensionality, though even the most robust networks do not completely align machine and human perception. Overall, the curse of dimensionality raises its ugly head as an impediment to both adversarial examples and alignment: it can be difficult to get AI systems to behave in accordance with human intentions, values, or perceptions over an exponentially large space of inputs.

See @AleSalvatore00's excellent thread for more details: x.com/AleSalvatore00…
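The phenomenon the thread describes can be reproduced in a few lines. Below is a minimal FGSM-style sketch on a toy random linear classifier (numpy only; the "network", its weights, and the input are invented for illustration and are not the models from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: a linear classifier over
# 3,072-dimensional inputs (CIFAR-10-sized images, flattened).
n_classes, dim = 10, 3072
W = rng.normal(size=(n_classes, dim)) / np.sqrt(dim)

def logits(x):
    return W @ x

x = rng.normal(size=dim)                 # a "clean" input
clean_label = int(np.argmax(logits(x)))

# FGSM-style attack: one small signed-gradient step toward a target
# class. For a linear model, the gradient of (target logit - clean
# logit) w.r.t. x is simply the difference of the two weight rows.
target = (clean_label + 1) % n_classes
grad = W[target] - W[clean_label]
eps = 0.1                                # tiny per-coordinate perturbation
x_adv = x + eps * np.sign(grad)

adv_label = int(np.argmax(logits(x_adv)))
# A perturbation of 0.1 per coordinate is visually negligible for an
# image, yet in high dimensions the signed step accumulates enough
# logit change to move the input off the clean class.
```

The point of the sketch is the dimensionality argument from the thread: each coordinate contributes only eps to the step, but the logit difference shifts by roughly eps times the L1 norm of the gradient, which grows with the input dimension.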
Jacob Shell @JacobAShell
Sure. So the "insider" knowledge is to buy from Deutsche Bahn, not Trenitalia, even when the Munich-Rome train is a Trenitalia train. Fine! Well I certainly know that now. But I've been told another piece of "insider" knowledge: that when taking the train from Germany to Czechia one should buy the tickets from the Czech train agency, not the German one, because this is cheaper. So in this case it's the reverse!! Eurosupremacists, do you really not see the problem here? How can you people have produced all those amazing paintings and symphonies but not grasp the problem with this kind of arbitrary and illegible bureaucratic chaos?
Polar Bear @PolarBearFinn

@JacobAShell For international train tickets, I warmly recommend buying from db.de

Stanislav Fort @stanislavfort
@thkostolansky @jparkerholder For me: Deep neural networks are essentially magic and are able to learn what could most closely be described as intuition about data from a very limited set of training examples.
Tim Kostolansky @thkostolansky
@jparkerholder Nice. Any takes on important lessons from AlphaGo that you see as continuing to be important today? (In particular, contrasted with red herrings that it may have produced.)
laurent @afrogmaen
@norpadon You CAN, and then you will see that it will output word for word its training data.
Stanislav Fort @stanislavfort
I always assumed that the reasoning was primarily: lidars are big, expensive, and need cooling => adding them to normal cars would be infeasible => this goes against the strategy of retrofitting old models with self-driving. And since then they have talked about it so much that it might be a point of pride not to revert, even though lidars are now cheap and small.
Isaac King 🔍 @IsaacKing314
I'm generally hesitant to accuse successful people of being terrible at their own field, but I have long thought that Tesla's reasoning for this made very little sense. Yes it's true that humans manage to drive without lidar, but humans would drive more safely if they did have lidar! Even if Tesla had succeeded at human-level driving AI, there are fundamental limits on sensors that can't see in the dark. The obvious next step would be to make your cars even safer via superhuman perception, so why would you not just include those sensors from the beginning?
Tenobrus @tenobrus

tesla's decision to point-blank refuse to touch lidar has proven to be one of the most insane self-owns of any technology company ever. they easily have the research talent, and waymo has proved they could be doing millions of fully autonomous rides. at this point it's a choice

Stanislav Fort @stanislavfort
@AlexKontorovich In my experience at Stanford the honor code was just a thin veil for students to cheat without recourse (and I could do basically nothing as a TA). Fairness is more important than maintaining impractical traditions.
Alex Kontorovich @AlexKontorovich
Very sad indeed. When I moved from Princeton to Columbia for grad school, I was *shocked* when I was told that I actually had to proctor my calc exams, and just not trust their "honor code". At Princeton, the honor code genuinely meant something.

In the first week of freshman year, we had to write an essay explaining in detail what the consequences of cheating were, why it didn't serve our long-term interests, and how even if we weren't the ones cheating, if we knew that others were and didn't report it, we would be just as guilty. (I remember vividly, because my first attempt at such an essay was rejected as insufficiently detailed! I had to write a much longer version.)

As a result, people really didn't cheat (as far as I knew; every year there was ~1 student kicked out of school for cheating). It was something really special, it turned out; that's not how it works elsewhere. Sad to see further deterioration of the culture at Princeton.
Steve McGuire @sfmcguire79

Exams at Princeton have been unproctored under an Honor Code since 1893. “Students pledge both to refrain from infractions of academic dishonesty and to report any breaches of the Constitution they witness.” But AI has led to an increase in academic dishonesty cases, so:

Luke Burgis @lukeburgis
I think this is an individual suffering from AI psychosis. If you think this is "very touching", I don't know what to tell you. Reading both the AI-generated constitution and the replies to it—and her choosing to publish this in the first place—makes me wonder what is actually going on at Anthropic. If we strip away the government drama, what's left is something that feels superficial, masquerading as profound.
Amanda Askell @AmandaAskell

I asked Claude to write my constitution. I thought its Amanda constitution was very touching.

Stanislav Fort @stanislavfort
4th (!!) high-severity (!!!) vulnerability in Firefox discovered by @Aisle_Inc's autonomous AI system in the last few months. CVE-2026-2757: heap overflow in WebRTC H.264 decoding, attacker-controlled out-of-bounds read/write. Patched in Firefox 148.
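The bug class behind this CVE, an attacker-controlled length driving reads or writes past the end of a buffer, is easiest to see in a parser. A minimal sketch of the defensive pattern on an invented length-prefixed format (this is not Firefox's actual H.264 code; the format and function are made up for illustration):

```python
import struct

def parse_record(buf: bytes) -> bytes:
    """Parse a toy record: 2-byte big-endian length, then payload.

    The declared length comes from the wire and is therefore
    attacker-controlled; it must be validated against what the buffer
    actually contains before it is used to address memory.
    """
    if len(buf) < 2:
        raise ValueError("truncated header")
    (declared,) = struct.unpack_from(">H", buf, 0)
    payload = buf[2:]
    # This is the check that the vulnerable pattern omits: trusting
    # `declared` and reading past the end of the allocation.
    if declared > len(payload):
        raise ValueError("declared length exceeds buffer")
    return payload[:declared]
```

In Python a missing check surfaces as a short slice or an exception; in C or C++ the same trust in `declared` becomes the out-of-bounds read/write described in the advisory.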
Stanislav Fort @stanislavfort
AWS directly credited our AI system, AISLE, in their security bulletin with 3 new CVEs in AWS-LC, Amazon's backbone cryptographic library: certificate chain validation bypass, timing side-channel in AES-CCM, signature validation bypass.
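For the timing side-channel class mentioned here, the textbook failure mode is comparing authentication tags with an early-exit equality check. A minimal sketch of the standard mitigation in Python (the AES-CCM specifics are not modeled; the tag values are made up):

```python
import hmac

def tag_equal_leaky(expected: bytes, received: bytes) -> bool:
    # Vulnerable pattern: byte-string equality can stop at the first
    # mismatching byte, so comparison time can leak how long a correct
    # prefix of the forged tag is, enabling byte-by-byte tag forgery.
    return expected == received

def tag_equal_ct(expected: bytes, received: bytes) -> bool:
    # Standard mitigation: a comparison whose running time does not
    # depend on where (or whether) the inputs differ.
    return hmac.compare_digest(expected, received)
```

In an AEAD mode like AES-CCM, the computed tag should be compared against the received tag only with the constant-time variant; `hmac.compare_digest` is the stdlib tool for that in Python.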
Stanislav Fort retweeted
Adam Křivka @adam_krivka
Speaking about how we found 12/12 CVEs in the most recent OpenSSL release using AI, tomorrow 4:35pm at unpromptedcon.org in SF. Come tune in. @Aisle_Inc
Anjuli Pierce @anjulipie
@bitcloud @stanislavfort Ironic that I was reading my 1965 Encyclopedia Britannica last night and came to this same conclusion independently, and from a very basic normie understanding of how all of this works.
Lachlan Phillips exo/acc 👾
These are big models because there is zero true innovation at the architectural level at these organisations. They all read one paper, realised there was economic utility in commercialising it, realised there was great benefit to ensuring it remains centralised, added random seeds to remove auditability and maximise mystery, and proclaimed themselves custodians of our future.

The myth is that free people need benevolent compute lords to control their neurons. There's very little commercial incentive to solve this problem, but it's without a shadow of a doubt solvable. This rhetoric only proves to me that they have no intention of solving it, and that that revolution will need to come from the basements of small startups unafraid to upset the bottom line.
Dustin @r0ck3t23

Dario Amodei just dismantled the biggest myth in the AI industry. Open source AI isn't free. It never was.

Amodei: "It's not free. You have to run it on inference and someone has to make it fast on inference."

For decades, open source meant something real. It meant a teenager in a basement could download the same tools as a Fortune 500 company. Could read the code. Could modify it. Could build something that competed with the giants. That was genuine democratization. That actually happened.

AI is different. Fundamentally. Physically. In ways the ideology hasn't caught up to yet. Downloading the weights is the easy part. The part that actually costs something is turning the weights into a running system. Into responses. Into intelligence operating in real time at scale. That requires compute. Power. Infrastructure. The kind measured in billions of dollars and years of construction.

Amodei: "These are big models. They're hard to do inference on. Ultimately you have to host it on the cloud. The people who host it on the cloud do inference."

The open source debate was never about who owns the model. It was always about who owns the cloud.

And Amodei goes further. When a competitor drops a new open model, he doesn't ask whether it's open or closed. He doesn't care about the licensing. He doesn't engage the ideology.

Amodei: "I don't think it mattered that DeepSeek is open source. I think I ask, is it a good model? Is it better than us at the things that matter? That's the only thing that I care about."

That's the ruthless clarity of someone actually trying to win. While the media debates licensing frameworks, Amodei is asking one question. Is it better. Everything else is a distraction.

Amodei: "I don't think open source works the same way in AI that it has worked in other areas. Here we can't see inside the model."

This isn't Linux. You can't read it. You can't fork it. You can't understand it the way generations of developers understood the tools they inherited. You can download it. And then you need a data center to run it. The teenager in the basement who was supposed to be empowered by this revolution needs a billion dollars of infrastructure before the empowerment starts.

The era of the basement coder rewriting civilization on a laptop is over. The future belongs to whoever commands the compute, owns the power grid, and can actually turn the intelligence on. Open weights without infrastructure isn't democratization. It's a promise the physics of the universe won't let us keep.

Stanislav Fort @stanislavfort
This just brings me back to my original point that you haven't responded to.

> Have any effective LLMs been released that aren't based on transformers or diffusion since 2017?

How would you know? From the outside, it looks the same: tokens in, tokens out. If there were a lot of progress, one thing you might expect is that the models get smarter, and they have been getting smarter by quite a bit.
Lachlan Phillips exo/acc 👾
Have any effective LLMs been released that aren't based on transformers or diffusion since 2017? If you presented to me a way of training or running inference in a distributed manner I would believe you. Saying "ooh we have a secret" doesn't cut it. I could say the same. So sure, I'll ask: Have you encountered any true breakthroughs in decentralised or distributed language models that use any architecture other than transformers? Or are the innovations optimisations around the proven architectural breakthrough everyone is using?
Stanislav Fort @stanislavfort
@phl43 This was my impression back in late 2015 around the AlphaGo days. I felt like we discovered magic which has no right to work and yet it very obviously did.
Philippe Lemoine @phl43
We constantly talk about how weird it is that "predict the next word" can lead to the kind of capabilities that LLMs exhibit, but I'm almost done reading Understanding Deep Learning by Simon Prince and honestly I kind of feel the same way about other types of models. For instance, when you know how diffusion models work, it also feels magical that we can use them to do that kind of shit. I just think that deep learning in general is extremely weird.
Philippe Lemoine @phl43

This is funny but also it's just amazing what AI can do at this point. I know you're supposed to show disdain for AI-generated slop, but this is honestly impressive.

Stanislav Fort @stanislavfort
@GivETHLife @benedictk__ If the UK were still in the EU, it would definitely be London, with a large gap to the runner-up. Without London, I don't see the EU as having any dominant tech capital tbh.
Benedict Kerres @benedictk__
Munich and Zurich are popping. Just got back to Vienna - beautiful place - but it feels like there is no tech or excitement at all.
Charlotte Lee @cljack
@mattyglesias this desire is quickly cured by having literally any interaction with a non-tourism industry Portuguese company or government agency
Stanislav Fort @stanislavfort
@bswud @aryehazan I think you are measuring 1 - (employee net / employER expense), right? I was talking about 1 - (employee net / employEE gross). That's probably it
Ben Southwood @bswud
@stanislavfort @aryehazan Maybe we are just talking past each other. I think you have to include the employer-side taxes (which are enormously higher in the EU-6) since they affect the salaries that you actually see (which are much lower)