aitization 𝕏 

13.4K posts

aitization 𝕏  banner
aitization 𝕏 

aitization 𝕏 

@aitization

AI, ML, research, data, cybersecurity, privacy, learning and improving, solution focused, critical thinking, startups, business, VALUE CREATION $

Global · Joined August 2023
745 Following · 227 Followers
aitization 𝕏 @aitization·
@nrmehta No such thing as AGI. You automatically lose credibility if you use that term lol 😂
0 replies · 0 reposts · 0 likes · 2 views
Nick Mehta@nrmehta·
FDEs, Bottlenecks and AGI: One of the interesting questions on the road to AGI is: "How do businesses change at a fundamental level to leverage AI?"

For founders, this seems like an inane question: just change! Move fast. Grind. Make it happen. Try out Codex with GPT 5.5, or Claude Managed Agents, etc.

But large companies have inertia. And much like physical inertia, it's not always there for bad reasons. Sometimes they have a way of doing things because the risk to brand, quality or safety is too large to risk switching. Other times, regulatory and other complex forces conspire to constrict the rate of change. And occasionally, they are just poorly run.

Large companies matter in the overall transformation of the economy. According to Forbes, the aggregate revenue of the Global 2000 companies is $52.9T. While it's a mathematical error to compare that $52.9T to the $111.3T 2024 worldwide GDP number, since revenue double-counts some items in GDP, it's fair to say that getting large companies to adopt AI in depth is a big rock to tackle.

And it's not a fait accompli. While large organizations are indeed spending massive sums on tokens, either directly from the labs or via intermediary apps, business results are harder to pin down. It's striking when you look at the productivity metrics of AI-native startups (e.g., revenue per employee) versus those of their historical brethren.

A very good friend of mine experiences this first hand. She works at a Global 2000, long-respected, well-run company. She uses LLMs, Claude Code and other AI automations to do her job not just faster but better. The company gives employees access to ChatGPT, Claude and Gemini. They connect them to Databricks. They have L&D classes on AI. And yet the default is still open-ended meetings, long planning cycles and abstract documents. No matter how brilliant and enterprising my friend is (and she is!), it can only go so far.

Meanwhile, AI-native startups operate with ridiculous velocity and scoff at the speed of big companies. The 2018 NBER paper "The Productivity J-Curve: How Intangibles Complement General Purpose Technologies" (link in comments) discusses the idea that general purpose technologies run into bottlenecks on the way to mainstream rollout. A key quote: "General purpose technologies (GPTs) such as AI enable and require significant complementary investments, including co-invention of new processes, products, business models and human capital."

So if deploying LLMs to users at large companies isn't enough, what does it take? We're watching several approaches in real time:
* Companies building top-down transformation offices. Unfortunately, many such transformation offices for other trends have failed in the past.
* Hiring strategy consulting firms to architect the transformation. While this sometimes works, there are many bodies buried along this path too.
* Firing a large number of people and hoping the constraint fixes things. This is certainly the intention behind some of the recent layoffs.
* Taking these companies private, possibly as part of rollups. General Catalyst and other funds are betting on this path to some extent. Going private reduces the public, quarterly-earnings tax and, in theory, allows for rapid AI-ification. That said, much of the institutional inertia remains.
* Leveraging existing private equity owners to do the work. For Global 2000 firms that have already been taken private, PE sponsors have a huge incentive to move fast on AI. This also connects with their investments in the FDE JVs with OpenAI and Anthropic.
* FDEs from the labs. Speaking of which, no one has more incentive than the big labs to remove the bottlenecks to adoption.
* Letting the disruptors win. A cynical person might say, "The large companies will never get there; you just need the AI-native startups to eat their lunch." Potentially true, though this can take a while.

There are probably more approaches that I'm missing. But when you look up and down those alternatives, there is one commonality: you need humans who are a combination of consultative and technical to do this work. They can work internally for the end customers, for consulting firms, for rollups, for PE firms, for the labs or for the disruptors. And that explains some of the sudden surge in FDE hiring.

In summary:
1. AGI has become the top priority of the world, as evidenced by capital investments (duh).
2. Large companies are a big part of GDP.
3. Just selling LLMs or AI-native tech to large companies isn't enough.
4. These companies need to be fundamentally rethought in terms of strategy, processes and org.
5. Without that, they'll never reach the performance level of AI-native firms.
6. There are many approaches to addressing these bottlenecks.
7. A part of almost all of them is people with a combination of process and engineering skills.

I welcome thoughts on what others are seeing in this area.
4 replies · 0 reposts · 7 likes · 324 views
aitization 𝕏 @aitization·
@theo I was aware & prepared. I never trusted any of the above platforms, and I knew about / talked about the security & privacy issues of the Apple ecosystem; I tagged Apple employees about it previously. Linux can be hardened & Microsoft is a joke. ISPs / cell carriers are full of holes ⚠️🚩
0 replies · 0 reposts · 0 likes · 3 views
Theo - t3.gg@theo·
Security things from the last few days:
- CopyFail (Linux pwn'd)
- CopyFail 2 / Dirty Frag
- 13 advisories in Next.js
- Over 70 CVEs addressed in macOS 26.5
- ~50 CVEs addressed in iOS 26.5
- YellowKey (Windows BitLocker pwn'd entirely)
- GreenPlasma (Windows privilege escalation)
- CVE-2026-21510 and CVE-2026-21513 confirmed to be used by Russia for Windows RCE
- CVE-2026-32202 separately confirmed to be used by Russia for sensitive document access
- Mini-Shai Hulud (over 300 JS and Python packages compromised via GitHub Actions cache poisoning)
- Google confirms they have identified AI-powered exploitation of zero-days in an unidentified "open-source, web-based system administration tool"
- Canvas (the popular LMS used in most schools) pwn'd entirely
- PAN-OS (Palo Alto Networks) pwn'd with a 9.3-severity CVE-2026-0300

Are you scared yet?
348 replies · 999 reposts · 6.9K likes · 757.8K views
Harrison Chase@hwchase17·
"Dependabot for LLM agent failures"
Saurabh@sauvast

@hwchase17 Started on this and finding it awesome; the LangSmith engine also sparked an idea: a "Dependabot-like for LLM agent failures". LangSmith Engine gives you the smoke detector; the natural next layer is a sprinkler system: auto-remediation with a human approval gate. A four-stage pipeline comes to mind: Classify → Patch → Eval → Shadow. Trying it out and will share trace results. This is a real gap in the LLMOps ecosystem; glad to see it being closed. 🔥 Will keep you updated on the progress @LangChain_OSS

4 replies · 0 reposts · 15 likes · 4.3K views
aitization 𝕏  retweetledi
s1r1us (mohan)@S1r1u5_·
security research now has this weird incentive where finding the bug is only half the game. the other half is packaging the story as "claude/codex found it", because that's where all the attention is right now. model providers, with their big accounts and distribution, will push the story for you. it looks win-win. weirdly, the human taste, target selection and hand-holding all get compressed into "the model found it". frontier model companies happily push that narrative, while the researcher slowly gets devalued.
6 replies · 10 reposts · 131 likes · 25.2K views
Larsen Cundric@larsencc·
Seeing SF comp move in real time is wild. A few weeks ago, inbound was 150-250k base. Now no one names a number below 200k. Top end is 400k + equity. Hiring great people just got harder than ever.
28 replies · 10 reposts · 708 likes · 104.8K views
Sidu Ponnappa@ponnappa·
until llms, computers had to be instructed with perfect 100% clarity using the unambiguous grammar of a programming language, because (obviously) the ambiguous grammars of human languages would otherwise become the problem. as you may imagine, this is pretty hard, because perfect clarity is pretty hard to establish (tangentially, especially so in a business context, and 100x so in a large enterprise). ai doesn't change this, and this is the trap waiting for the unsuspecting new vibe coder who thinks they can now code using agents. code was never the hard problem; code merely enables defining exactly what you want with 100% clarity. getting to perfect clarity is the problem.
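A toy illustration of the clarity gap this post describes: the English instruction "round the price to cents" sounds fully specified but admits several incompatible precise readings, and each reading is a different program. The scenario and labels are illustrative; the rounding modes are standard library behavior.

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN, ROUND_HALF_UP

price = Decimal("2.665")
cents = Decimal("0.01")

# Three precise programs hiding inside one ambiguous English sentence:
readings = {
    "half-up (what most people assume)": price.quantize(cents, rounding=ROUND_HALF_UP),
    "half-even (banker's rounding)": price.quantize(cents, rounding=ROUND_HALF_EVEN),
    "truncate (just drop extra digits)": price.quantize(cents, rounding=ROUND_DOWN),
}

for name, value in readings.items():
    print(f"{name}: {value}")
# The same sentence yields 2.67 or 2.66 depending on which reading you meant.
```

The spec work, deciding which of these the business actually wants, is exactly the "getting to perfect clarity" step that agents don't remove.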
8 replies · 12 reposts · 96 likes · 5.3K views
Calif@calif_io·
Early this week, we had a meeting at Apple Park in Cupertino. While there, we also shared with Apple our latest vulnerability research report: the first public macOS kernel memory corruption exploit on M5 silicon, surviving MIE. It was laser printed, in honor of our hacker friends. Full story: open.substack.com/pub/calif/p/fi…
[image]
8 replies · 55 reposts · 348 likes · 72.9K views
Guillermo Flor@guilleflorvs·
does anyone else feel like codex is performing way better than claude now?
1 reply · 0 reposts · 3 likes · 188 views
aitization 𝕏  retweetledi
Marc Andreessen 🇺🇸
Now tell them it’s a real Monet.
[image]
115 replies · 188 reposts · 4.1K likes · 132.8K views
aitization 𝕏  retweetledi
Benjamin Manning@BenSManning·
After listening to a bunch of awesome talks at a behavioral econ conference yesterday, I really like an aspirational framing of AI agents as "flexible commitment devices." It's like a retirement account, but for everything.
2 replies · 8 reposts · 42 likes · 16.4K views
aitization 𝕏 @aitization·
obsessed with continual learning and much longer context windows 😎✨
0 replies · 0 reposts · 0 likes · 12 views