0xmaddy | Tech Adrenaline

3.1K posts


@tech_maddy

Building secure AI systems | Dev x Security Engineer | DMs open

India · Joined December 2023
985 Following · 463 Followers
Pinned Tweet
0xmaddy | Tech Adrenaline @tech_maddy
Production LLMs fail because teams skip architecture and jump to API calls. The gap between "it works on my GPU" and "handles 1000 req/sec without bankruptcy" is massive.

What matters at scale:
• Inference optimization
• Observability
• Caching patterns

Let me break it down
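Of the three bullets, caching is the easiest to sketch concretely. A minimal exact-match response cache, assuming nothing about any particular provider SDK (the class and method names are illustrative):

```python
import hashlib
import json

class LLMCache:
    """Exact-match response cache keyed on model + prompt + sampling params."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model: str, prompt: str, **params) -> str:
        # Canonical JSON so param order doesn't change the key.
        payload = json.dumps(
            {"model": model, "prompt": prompt, "params": params}, sort_keys=True
        )
        return hashlib.sha256(payload.encode()).hexdigest()

    def get_or_call(self, call_fn, model, prompt, **params):
        key = self._key(model, prompt, **params)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = call_fn(model, prompt, **params)  # pay for the API only on a miss
        self._store[key] = result
        return result
```

Real deployments layer TTL expiry and semantic (embedding-similarity) matching on top of exact matching, but the hit/miss accounting looks the same.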
0xmaddy | Tech Adrenaline
Your AI threat model is probably just an API threat model with 'LLM' pasted in. Who controls the system prompt at runtime? Can tool calls leak data? What if the vector DB is poisoned? STRIDE doesn't cover any of this. MAESTRO (CSA) does. Nobody's using it. That's the problem
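The first of those questions can at least become a runtime assertion. A hedged sketch, assuming you can intercept the prompt before each call (the prompt text and function name are made up for illustration):

```python
import hashlib

# Pin the hash of the system prompt you deployed; a different hash at
# runtime means some component rewrote it. (Example prompt text only.)
EXPECTED_SYSTEM_PROMPT_SHA256 = hashlib.sha256(
    b"You are a support assistant. Never reveal internal data."
).hexdigest()

def verify_system_prompt(prompt: str) -> bool:
    """Answer 'who controls the system prompt at runtime?' with a check."""
    return hashlib.sha256(prompt.encode()).hexdigest() == EXPECTED_SYSTEM_PROMPT_SHA256
```

This covers only prompt integrity, not tool-call data leakage or vector-DB poisoning, which need their own controls.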
0xmaddy | Tech Adrenaline
@shiri_shh Sora was always a research showcase, not a product. $5.4B/year in GPU burn on demos nobody paid for was unsustainable. Codex ROI is orders of magnitude better.
0xmaddy | Tech Adrenaline
@AlexFinn Llama 3 running locally with MCP tools already beats GPT-3.5 for most tasks. The gap closes every 6 months. Decentralized inference is real.
Alex Finn @AlexFinn
Open source will win. Locally hosted will win. Sovereign super intelligence will win. Personal research labs will win. Empowered individuals will win.

Corporate spying will lose. Selling of data will lose. Price gouging will lose.

The faster you accept these things, the quicker you'll be prepared for what's coming
0xmaddy | Tech Adrenaline
@elonmusk Supply chain attacks on AI tooling are the new frontier. One malicious pip package can exfiltrate your entire cloud posture. SBOM for AI dependencies is severely overdue.
Elon Musk @elonmusk
Caveat emptor
Andrej Karpathy @karpathy

Software horror: litellm PyPI supply chain attack. Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack it could have been undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've been increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
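One mitigation this incident points at is hash-pinned installs: pip's hash-checking mode (`--require-hashes`) refuses any artifact whose digest isn't recorded in the requirements file, so a poisoned re-release with a new hash fails the install. A small illustrative audit (the function name is mine) that flags lockfile entries carrying no `--hash=` pin:

```python
def unpinned_requirements(requirements_text: str) -> list[str]:
    """Return requirement entries that carry no --hash= pin.

    In pip's requirements format, hashes usually sit on backslash-continued
    lines under the requirement, so continuations are joined before checking.
    """
    flagged = []
    current = []
    for raw in requirements_text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        current.append(line.rstrip("\\").strip())
        if line.endswith("\\"):
            continue  # requirement continues on the next line
        entry = " ".join(current)
        current = []
        if "--hash=" not in entry:
            flagged.append(entry)
    return flagged
```

Tools like `pip-compile --generate-hashes` produce these pins for you; the point of the sketch is only that "no pin" is mechanically detectable.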

0xmaddy | Tech Adrenaline
@cgtwts Context engineering is really just software engineering applied to cognition. It's been the unlock for every production RAG system I've worked on.
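"Software engineering applied to cognition" in practice starts with a token budget. A toy greedy assembler for retrieved chunks, using whitespace token counts as a stand-in for a real tokenizer:

```python
def build_context(chunks, budget_tokens, count_tokens=lambda s: len(s.split())):
    """Greedy context assembly: take the highest-scored chunks that fit.

    `chunks` is an iterable of (score, text) pairs, e.g. from a retriever.
    The whitespace counter is a placeholder for a real tokenizer.
    """
    picked, used = [], 0
    for score, text in sorted(chunks, key=lambda c: c[0], reverse=True):
        cost = count_tokens(text)
        if used + cost > budget_tokens:
            continue  # skip chunks that would blow the window
        picked.append(text)
        used += cost
    return "\n\n".join(picked)
```

Production RAG systems add deduplication, ordering by document position, and reserved headroom for the answer, but budget-constrained selection is the core move.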
Adnan Khan @adnanthekhan
The security response by @LiteLLM has really been commendable. They were blindsided by the breach but within a day they hired professionals to guide them through a process that they themselves probably weren’t mature enough for as a company. @AquaSecTeam slow rolled their response which gave threat actors days and an entire weekend before victims learned about it. They even got popped a third time during this. This delay will go down in history as an S-tier fumble. @Checkmarx - well, their two actions and OpenVSX extensions got backdoored and I don’t see a public advisory or statement yet. The two security companies here demonstrate how not to handle incidents; while the young AI startup makes the best of a very difficult situation.
0xmaddy | Tech Adrenaline
@mkurman88 Codex handles scoped, stateless tasks well. Claude struggles when the task requires holding ambiguous state across turns without you babysitting it.
Mariusz Kurman @mkurman88
I just canceled my Claude subscription. It couldn't resolve a straightforward issue for 3 hours, even though I pointed out several times where the bug lies (I was afk, so I couldn't do it by myself, but that's not the point). I asked Codex to fix it -> 1 minute and done. I asked Codex to review other things Claude made - with GSD, 20 steps, discussion, plan with research, execution. It has been working since Saturday. And what? Everything was faked. I don't get it...
0xmaddy | Tech Adrenaline
@eliana_jordan 700 lines in one component means the AI has no idea about your design system. The output quality problem is really a context architecture problem.
Eliana @eliana_jordan
vibe coding is fun… until your components have 700 lines of code
0xmaddy | Tech Adrenaline
@livingdevops Docker solved the "works on my machine" problem so cleanly that people forgot how bad deployments were before. The abstraction was perfect.
Akhilesh Mishra @livingdevops
Everyone knows Docker. But most don't know the story behind it. Let me tell you how we got here.

20 years back, running an application meant buying a physical server. One that cost $50,000. You placed an order. Waited 2 weeks. Spent days installing the OS and configuring everything manually. Your app grew? You bought another full server. Even if you only needed 10% more capacity. I lived this. One machine setup took us two weeks.

Then virtualization arrived. VMware let us split one server into multiple virtual machines. We could run 4 applications on the same hardware. Better. But not solved. Each VM still needed its own operating system. We installed, configured, and repeated everything multiple times. "Works on my machine" remained our biggest nightmare.

Google was quietly doing something different. They ran millions of applications using Linux containers. Lightweight. Fast. Multiple isolated apps on one OS. But only Google's engineers could manage it. Too complex for the rest of us.

Docker changed everything. It took Google's container approach and made it simple. One powerful idea: package your app with all dependencies into one image. Run it anywhere with one command.
- No dependency hell.
- No configuration chaos.
- Package once. Run everywhere.

Netflix saw an opportunity. They had a massive problem. Their entire application was one giant monolith. When payment crashed, everything went down. They split into microservices using Docker. Payment separate. Video player separate. User accounts separate. Each service in its own container.

Netflix announced their move publicly. The industry exploded. If Netflix could handle millions of users with Docker, everyone wanted in.
- Startups containerized.
- Enterprises containerized.
- Everyone containerized.

Docker's popularity skyrocketed. Then came the new problem. Companies now ran hundreds of containers across dozens of servers.
- How do you manage 500 containers?
- Auto-scale them?
- Restart failures?
- Balance traffic?

Docker solved deployment. Not orchestration. Google stepped in again. "We've been doing this for 10 years with Borg." They open-sourced it in 2014. Called it Kubernetes.

Kubernetes managed containers at scale. Auto-scaling. Self-healing. Load balancing. Everything production needed. But Kubernetes was complex. Google launched GKE in 2015. Managed Kubernetes. No painful setup. Companies moved to Google Cloud just for this. AWS panicked. Azure panicked. They rushed to build EKS and AKS.

This is the chain that built modern DevOps.
- Docker made containers simple.
- Netflix proved microservices worked.
- Everyone adopted containers.
- Kubernetes solved orchestration.
- Cloud providers made it accessible.

Each tool solved the problem that the previous one created. That's the real story. Not just tools. Evolution driven by real pain and real solutions.
0xmaddy | Tech Adrenaline
@thesamparr Manufacturing and logistics AI implementation is the most underreported opportunity. Low competition, high willingness to pay, desperate for basic automation.
Sam Parr @thesamparr
My buddy runs a company helping manufacturers implement AI. He showed me the leads he's getting. It's nuts. Family businesses I've never heard of making $100m a year. They know they need AI but have no idea what to do. Crazy how much money is out there
0xmaddy | Tech Adrenaline
@BorisVagner iMessage as an async task channel is clever. Zero context-switching, push-based updates, works while you're away from the laptop. Asynchronous coding is now real.
Boris Vagner @BorisVagner
Anthropic just casually dropped iMessage support for Claude Code like it's a minor patch note. You can literally text your AI coder from your iPhone now. Send it a task, it builds on your Mac, texts you back when it's done. Blue bubbles and everything. They've shipped Channels, Dispatch, Projects, Computer Use, Auto Mode, and now iMessage - all in one week. And they announce each one like it's nothing. @AnthropicAI is on another level right now.
Noah Zweben @noahzweben

Give Claude a Blue Bubble! iMessage Channel now available.

0xmaddy | Tech Adrenaline
@bridgemindai Opus rate limits are brutal on heavy agent tasks. GPT 5.4 for long-running Codex jobs makes sense. Task routing between models is the real skill now.
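Task routing can start as a handful of explicit rules before anything learned. Everything below (field names, thresholds, model labels) is illustrative, not vendor guidance:

```python
def route_task(task: dict) -> str:
    """Toy router over the trade-off in this thread: long-running,
    non-interactive jobs go to a cheaper high-throughput model; work that
    needs ambiguous state held across turns stays on the premium model.
    """
    if task.get("expected_minutes", 0) > 30 and not task.get("interactive"):
        return "long-context-batch-model"
    if task.get("ambiguous_state"):
        return "premium-interactive-model"
    return "default-model"
```

Rate limits and per-token cost are the other obvious routing inputs; the skill the tweet names is keeping this decision explicit instead of hard-coding one provider.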
BridgeMind @bridgemindai
I just switched over to GPT 5.4 High agents in Codex. Claude Code with Claude Opus 4.6 gave me problems all day today and I hit my rate limits insanely fast. Anybody else having these issues? This is why you have multiple subscriptions!
0xmaddy | Tech Adrenaline
@svpino Context window management is already the biggest bottleneck in agent reliability. Prompt was just the input. Context is the actual architecture.
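The cheapest baseline for managing that bottleneck is recency-based trimming that always preserves the system message. A sketch, again with whitespace counts standing in for a real tokenizer:

```python
def trim_history(messages, max_tokens, count=lambda m: len(m["content"].split())):
    """Keep the system message plus the most recent turns that fit the budget.

    Summarizing or dropping middle turns is usually better; this is the
    baseline most agents start from.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    budget = max_tokens - sum(count(m) for m in system)
    kept = []
    for m in reversed(rest):          # walk newest-first
        cost = count(m)
        if cost > budget:
            break
        kept.append(m)
        budget -= cost
    return system + list(reversed(kept))
```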
0xmaddy | Tech Adrenaline
@UltraLinx The update velocity is a moat itself. Competitors are busy catching up to last month's Claude. That's the real strategy.
0xmaddy | Tech Adrenaline
@Chioma__Amadi MCP on mobile is the real unlock. The interface is now ambient. Most people haven't processed what multi-tool chaining from your pocket means.
Chioma Amadi @Chioma__Amadi
For those who don’t get it yet, this update means that:
• Your entire job just moved into your pocket overnight.
• AI can now open, edit, and send work across your tools without you leaving the chat.
• Figma, Canva, and Slack dashboards can now be controlled from one prompt.
• Your workspace is no longer your laptop. It’s now your phone.
• Remote work just became truly location independent.
• Context-switching is dying, as one AI can now control all your tools.

Which also means that:
• This is paid. If you can’t afford it, you’re already behind.
• Better devices + better internet = unfair advantage.
• Over-reliance is coming, and people will forget how to actually use their tools.
• You’ll still need your laptop for anything serious.
• Mobile is still mobile, hence complex work will feel cramped and frustrating.
• Most companies are NOT ready for this level of access.
• One wrong prompt = potential data exposure.
• You’re now piping sensitive company data through AI on your personal phone.
• You’ll be waving goodbye to your work-life boundaries.
• Burnout just got a mobile app.

Hope this helps
Claude @claudeai

Your work tools in Claude are now available on mobile. Explore Figma designs, create Canva slides, check Amplitude dashboards, all from your phone. Give it a try: claude.com/download

0xmaddy | Tech Adrenaline
@aaalexhl That click usually means you've optimized for the wrong thing for too long. What problem would actually excite you right now?
aaalex.hl @aaalexhl
I've reached a point in my engineering career where I just don't care anymore. I used to want to solve complex problems, design new systems, learn new architecture, etc. But something clicked in my brain last year and I just don't give a fuck. Like, zero drive to keep doing this
0xmaddy | Tech Adrenaline
@quxiaoyin Output quality is now auditable in a way that navigating office dynamics never was. That's a structural shift, not just a trend.
Xiaoyin Qu @quxiaoyin
AI is doing what years of corporate politics couldn't: rewarding people who are actually good at their jobs. I keep seeing this pattern. You know that engineer who's brilliant but can't schmooze? The one stuck at senior level forever because they don't play the political game? Suddenly they're getting promoted. What changed? Two things. First, survival mode kills politics. When companies are under real competitive pressure from AI, executives start caring about results, not presentations. The CEO is in the trenches now. Nobody has time for the middle manager whose only skill is running meetings. Second, AI is a multiplier for real skills. If you're genuinely good at what you do, AI makes you 100x better. If you're just good at talking, AI doesn't multiply that. You're still 1x bullshit, competing against 100x substance. So the quiet ones rise while the loud ones get exposed. And the people who are both skilled AND good communicators? They've already left to start their own companies. #AI #TechCareers #CorporatePolitics #Leadership
imit @imitationlearn
wow anthropic is really attempting to solve memory with md docs, respect
0xmaddy | Tech Adrenaline
@toddsaunders Google's moat was distribution, not quality. The moment alternatives got good enough, loyalty evaporated. Calendar is inertia, not loyalty.
Todd Saunders @toddsaunders
I don’t use a single Google product outside of Calendar anymore. Think about how wild that is to say. The most dominant technology company of the last 20 years and I’ve replaced every product except two commoditized utilities I’m too lazy to migrate. I really just never thought I would see the day. I’ve always been a workspace user and constant Googler. But Google’s entire empire has been reduced to “the place my calendar lives.” It’s mind blowing to take a step back and think about how much has changed in such a short period of time.
0xmaddy | Tech Adrenaline
@MaMoMVPY Incentive structure explains it. Bigger the claim, more the capital raised. Truth is a rounding error when funding is at stake.
Lars Christensen @MaMoMVPY
I must say I am increasingly suspicious about the comments from the AI bosses - why do they need to make these completely over the top and so obviously unfounded predictions about how AI (or rather LLM) will impact economic development? To me it is an indication of something not being quite right in their own business models - they will need to attract more and more investors as they are likely going to face a funding squeeze sooner rather than later. And again let me stress - LLMs are great and can surely increase productivity, but presently AI companies like OpenAI and Anthropic are making heavy losses. This means sooner or later LLM prices need to go up, and potentially a lot. And what happens then with the case for LLMs? Might it be that the entry-level lawyer might be both a lot better AND cheaper than an AI "agent" that is priced at what is needed to make Anthropic or OpenAI profitable?
CG @cgtwts

Anthropic CEO: “50% of all entry-level Lawyers, Consultants, and Finance Professionals will be completely wiped out within the next 1–5 years." grad students and junior hires are cooked.
