Paul Vann

718 posts


@pjvann

founder @ validia | cyber & ai researcher

New York, NY · Joined February 2025
111 Following · 151 Followers
Pinned Tweet
Paul Vann
Paul Vann@pjvann·
I'm unsure if this is a hot take or not, but I recently dove into Mythos and why I think the cyber industry's response to it has been bloated, chaotic, and distracting, additionally touching on why I think it's bad for the cyber industry to be so focused on it. check it out below!
Paul Vann tweet media
1 · 0 · 2 · 133
Paul Vann
Paul Vann@pjvann·
@JFPuget apologies - I did get the sarcasm, this was geared at your audience, not you 😃
1 · 0 · 1 · 49
JFPuget 🇫🇷🇺🇦🇨🇦🇬🇱
If true it would be quite ironic coming from the Mythos company, right? How come they did not detect that one?
Cyber Security News@The_Cyber_News

⚠️ Critical Anthropic’s MCP Vulnerability Enables Remote Code Execution Attacks
Source: cybersecuritynews.com/anthropics-mcp…

A critical flaw in Anthropic’s Model Context Protocol (MCP) exposes over 150 million downloads to potential compromise. The vulnerability could enable full system takeover across up to 200,000 servers.

Unlike a traditional coding bug, this vulnerability is architectural, meaning any developer building on Anthropic's MCP foundation unknowingly inherits the exposure from the ground up.

The flaw enables arbitrary command execution (remote code execution, RCE) on any system running a vulnerable MCP implementation. Successful exploitation grants attackers direct access to sensitive user data, internal databases, API keys, and chat histories, effectively handing over complete control of the affected environment.

#cybersecuritynews

8 · 6 · 84 · 10.1K
Paul Vann
Paul Vann@pjvann·
finished @nxthompson's book, The Running Ground, this past weekend and it was such a great read I had to order my mom a copy. now my recent Amazon orders look like this:
Paul Vann tweet media
2 · 0 · 3 · 233
Paul Vann
Paul Vann@pjvann·
this is great, but isn't enough on its own. will need fiber technicians -> HVAC professionals for cooling datacenters -> electricians for providing power to these datacenters, and more. then you actually need all of the infra (electric grid, approval from gov, etc.) to be there first
0 · 0 · 0 · 34
Paul Vann retweeted
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
Thinking that LLMs are "AGI", is a similar psychological mechanism to 2000s kids who thought their "20 Questions" gadget game was reading their mind. Statistical algorithms are impressive, but they aren't magic or sentient.
77 · 96 · 1.1K · 19.7K
Paul Vann
Paul Vann@pjvann·
@tanayj one thing I wonder: similar to the fallout we see today from killing manufacturing jobs in the US, what impact will killing the software engineering / computer science skillset have years from now? not sure, but I'm sure it'll be ever-present
0 · 0 · 0 · 138
Tanay Jaipuria
Tanay Jaipuria@tanayj·
Wow. The number of students graduating with a CS degree at Berkeley is projected to fall off steeply
Tanay Jaipuria tweet media
33 · 63 · 972 · 155K
Paul Vann
Paul Vann@pjvann·
@Star_Knight12 and 99% are a database and/or an openai call with a nice UI hahaha
0 · 0 · 1 · 512
Prasenjit
Prasenjit@Star_Knight12·
90% of SaaS products are just a database with a nice UI
168 · 106 · 2.7K · 133.1K
Paul Vann
Paul Vann@pjvann·
i think a concept that many people haven't grasped with AI is that, yes, it does make mistakes, but at large scale it makes fewer mistakes than humans do. the reason it's difficult to deploy is not the mistakes it makes, but that there is no one to hold accountable
0 · 0 · 0 · 230
Ethan Mollick
Ethan Mollick@emollick·
Classic study gave 146 economist teams the same dataset & got wildly different answers

New paper reruns it with agentic AI. Claude Code & Codex land near the human median, but with far tighter dispersion & no extremes. Suggests that AI is now useful for doing scalable research.
Ethan Mollick tweet media
40 · 135 · 775 · 62.1K
Paul Vann
Paul Vann@pjvann·
@simplifyinAI I don't mean to be that guy, but I'm going to be: did people really believe that LexisNexis had built a hallucination-free AI? like, did people actually believe this? 😂
1 · 0 · 0 · 91
Simplifying AI
Simplifying AI@simplifyinAI·
turns out "hallucination-free" AI was a lie the whole time.. stanford and yale just published the first real audit of LexisNexis and Thomson Reuters' legal AI tools.. the ones marketed to every lawyer in america as "100% hallucination-free."

the results are brutal:
→ LexisNexis hallucinates 17% of the time
→ Thomson Reuters hallucinates 17% AND refuses to answer 62% of the time
→ one response claimed Justice Ginsburg dissented in Obergefell. she didn't. she joined the majority.
→ another cited a real case to defend a law the supreme court already overturned
→ Lexis even cited opinions by "Judge Luther A. Wilgarten", a judge who has never existed

RAG doesn't kill hallucinations. it just hides them behind real-looking citations. and lawyers are getting sanctioned for trusting it.

100% peer-reviewed. from stanford law.
Simplifying AI tweet media
32 · 114 · 271 · 15.2K
Paul Vann
Paul Vann@pjvann·
@vivoplt at this point, I'd argue that most SaaS products for sales teams are exactly this in some way, shape, or form. almost always vibe-coded as well, with the same UI / design look and feel
0 · 0 · 3 · 1.2K
Vivo
Vivo@vivoplt·
90% of "AI startups" are just:

Take user input
Send to OpenAI API
Display response
Charge $29/month
154 · 91 · 2.9K · 175.2K
Paul Vann
Paul Vann@pjvann·
I hate to say it, but in some sense it's already too late. adversaries / hackers can collect as much encrypted traffic as they'd like now (or could have over the past 30 years), and the day we reach this point, decrypt it all. we need to make the shift over to quantum-resistant cryptography
0 · 0 · 0 · 36
Guri Singh
Guri Singh@heygurisingh·
2012: "We need 1 billion qubits to break encryption"
2021: "We need 20 million"
Feb 2026: "We need 100,000"
Last week: "We need 10,000"

That's a 100,000x drop in 14 years.

The encryption protecting your Bitcoin. Your banking. Your passwords. Your private messages.

Caltech. Google. A quantum startup called Oratomic. They all published papers the same week.

Your digital life has an expiration date now: ↓
Guri Singh tweet media
16 · 62 · 225 · 24.4K
Paul Vann
Paul Vann@pjvann·
I mean, this is already here in a certain sense, but quantum computing will definitely drive some significant innovation over the next few years / decades. apart from that, it's really learning more about genetics, biology, and humans that's the next frontier. AI will of course help with all of this 😁
0 · 0 · 3 · 533
Floro S.
Floro S.@sflorimm·
What will come after AI?
2.4K · 82 · 1.3K · 252.2K
Paul Vann
Paul Vann@pjvann·
people don't pay for Bloomberg Terminal because of its interface, UI, or how it surfaces data. they pay for Bloomberg Terminal because it has data you can only get on Bloomberg Terminal. it's awesome that Anthropic is doing this, and it could very slightly disrupt simple tools like CRMs, but I'd put a ceiling here on what capabilities like this actually add
0 · 0 · 1 · 432
Ejaaz
Ejaaz@cryptopunk7213·
my god. anthropic casually going after bloomberg terminal and every single data tracking provider under the sun 😂

bloomberg terminal charges $24K per seat

this could affect major data platforms like DataDog, Google analytics, CRM dashboards and sooo much more

Anthropic is building the control center for every single enterprise company

unbelievable
Claude@claudeai

In Cowork, Claude can now build live artifacts: dashboards and trackers connected to your apps and files. Open one any time and it refreshes with current data.

145 · 94 · 2.4K · 838.9K
Paul Vann
Paul Vann@pjvann·
when I see stats like this, it makes me wonder how much AI-generated music is being surfaced on platforms like Apple Music or Spotify. I personally haven't noticed any drop in quality of music on these platforms, and haven't seen anything that blatantly comes across as AI-generated. I agree that the internet as a whole is getting hit with this problem, but I think in certain places it's harder to find (i.e. Apple Music, Spotify)
1 · 0 · 0 · 75
Luiza Jarovsky, PhD
Luiza Jarovsky, PhD@LuizaJarovsky·
🚨 Human art simply CANNOT keep up with the pace of AI-generated art. Dismissing the problem by saying "people will still prefer human art" ignores the massive dilution already underway. If human art is not actively protected, it will become invisible and devalued.
Luiza Jarovsky, PhD tweet media
93 · 78 · 267 · 10K
Paul Vann
Paul Vann@pjvann·
I think this is a big reason why Yann LeCun and other proponents of world models might be on the right track - LLMs today are next-token predictors, whereas a real reasoning model should be able to think through how the world changes in response to an action. I could be oversimplifying - but the point still stands
0 · 0 · 1 · 65
Yasir Ai
Yasir Ai@AiwithYasir·
🚨 Just IN: This paper from Stanford and Harvard explains why most “agentic AI” systems feel impressive in demos and then completely fall apart in real use.

The core argument is simple and uncomfortable: agents don’t fail because they lack intelligence. They fail because they don’t adapt.

The research shows that most agents are built to execute plans, not revise them. They assume the world stays stable. Tools work as expected. Goals remain valid. Once any of that changes, the agent keeps going anyway, confidently making the wrong move over and over.

The authors draw a clear line between execution and adaptation. Execution is following a plan. Adaptation is noticing the plan is wrong and changing behavior mid-flight. Most agents today only do the first.

A few key insights stood out:

Adaptation is not fine-tuning. These agents are not retrained. They adapt by monitoring outcomes, recognizing failure patterns, and updating strategies while the task is still running.

Rigid tool use is a hidden failure mode. Agents that treat tools as fixed options get stuck. Agents that can re-rank, abandon, or switch tools based on feedback perform far better.

Memory beats raw reasoning. Agents that store short, structured lessons from past successes and failures outperform agents that rely on longer chains of reasoning. Remembering what worked matters more than thinking harder.

The takeaway is blunt. Scaling agentic AI is not about larger models or more complex prompts. It’s about systems that can detect when reality diverges from their assumptions and respond intelligently instead of pushing forward blindly.

Most “autonomous agents” today don’t adapt. They execute. And execution without adaptation is just automation with better marketing.
Yasir Ai tweet media
73 · 178 · 577 · 63.3K
Paul Vann
Paul Vann@pjvann·
@blackroomsec sorry I actually have not read your timeline before! love the take, and thanks for flagging 🙂
1 · 0 · 1 · 19
BlackRoomSec
BlackRoomSec@blackroomsec·
That is what I'm saying and have been but I understand that you might not have read my timeline before. I think it's a bunch of BS. Will it have more advanced capabilities? Yeah if you're in a testing environment but I don't think it's going to have the impact in scaled environments they're saying it's going to. There's a lot of unknown variables. Will people be hacked by it? Yes they will but they would have been hacked by a human who could have done the same technique. It's just Mythos will do it faster.
1 · 0 · 0 · 104
BlackRoomSec
BlackRoomSec@blackroomsec·
I don't want to see pearl clutching after Mythos drops and everyone's getting popped. Every single company that gets hacked after that moment is 100% responsible for being hacked because they've had plenty of time, they've had all the correct talent in place and it will only demonstrate what all of us know which is that they don't listen to us. Zack is going to be busy keeping receipts on who was "compliant" in the various frameworks, supposedly, but doesn't have least privilege in place. 🙄 CC: @ZackKorman
Florian Roth ⚡️@cyb3rops

There is now a write-up on infostealers.com, apparently based on Hudson Rock data, that adds more detail to the #Vercel breach

Many will focus on the Lumma stealer infection and the Roblox download. Okay. That matters too. But for me, the bigger failure came after that …

Infections happen - always. The real question is what one infected machine can reach afterwards.

If one compromised path was enough to expose access to Google Workspace, Supabase, Datadog, Authkit and Vercel-related admin resources, then the problem was not just the infostealer. The problem was too much access, weak separation, missing limits and security monitoring that failed to highlight highly suspicious activity on that account

The mantra should be: “assume compromise”

infostealers.com/article/breaki…

8 · 14 · 94 · 15.2K