Marcin Jan Puhacz

177 posts

@marcinph

founding engineer @runlayer ex @lithic ex @stedi

Krakow, Poland · Joined February 2017
850 Following · 243 Followers
Aaron White (Appy.ai)@aaronwhite·
And this is why Appy.Ai's advanced swarm agents don't use real shell/containers: way too much surface area on top of what is already a hard problem
Andrej Karpathy@karpathy

Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, and database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. Had the attacker not vibe coded this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've become increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
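One standard mitigation for exactly this scenario is pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`), which refuses any requirement not locked to an exact version and artifact hash, so a surprise poisoned upload cannot install itself transparently. A minimal sketch of that rule (the regex is an illustrative simplification, not pip's actual requirement parser):

```python
import re

# Rough approximation of what pip's hash-checking mode enforces: every
# requirement must pin an exact version and a sha256 hash. Illustrative
# simplification only, not pip's real parsing logic.
PINNED = re.compile(r"^[A-Za-z0-9._-]+==[\w.]+\s+--hash=sha256:[0-9a-f]{64}$")

def unpinned(requirements: list[str]) -> list[str]:
    """Return requirement lines that would not survive --require-hashes."""
    return [line for line in requirements
            if line.strip() and not PINNED.match(line.strip())]

reqs = [
    "litellm==1.64.0 --hash=sha256:" + "a" * 64,  # pinned + hashed: passes
    "dspy>=2.0",                                   # floating range: flagged
    "requests",                                    # unpinned: flagged
]
print(unpinned(reqs))  # ['dspy>=2.0', 'requests']
```

Hash pinning does not make a poisoned release safe, but it turns "my transitive dependency silently updated" into an explicit, reviewable lockfile diff.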

kepano@kepano·
If you are using MCP to access your data, you are trusting that third party with unencrypted access to your data. Third-party MCPs are not an option if you value privacy. End-to-end encryption makes MCP effectively useless.
Marcin Jan Puhacz retweeted
Andy Berman@berman66·
This week, our team celebrated in Times Square: 5,000,000+ MCP calls secured with @Runlayer. That’s a massive milestone, with a massive billboard.

AI is moving at breakneck speed, and we love that. But every new tool introduces new security risks. We’re building Runlayer so no organization has to choose between adopting AI and protecting what they’ve built.

5,000,000+ secured MCP calls means thousands of agents, workflows, and AI products running with real guardrails in place. It means peace of mind for our enterprise customers like Gusto & Opendoor.

Huge congrats to the team. And a genuine thank you to our customers.
Marcin Jan Puhacz retweeted
Andy Berman@berman66·
Today, we're launching OpenClaw for Enterprise. The IDEA of OpenClaw is excellent. That's why your employees already tried ClawdBot last weekend. They probably spent hours linking it to everything - email, Slack, Jira, you name it. They installed a giant security nightmare. 1/
The Guy@Guygies·
@knowclarified I cancelled 2 weeks ago. One of their updates broke an MCP I use regularly, and I can still use the free version for my janky junk coding, then run it through Windsurf with Opus to refine the results.
Chris@knowclarified·
Cursor is probably seeing massive churn right now
Jamon@jamonholmgren·
Bit by bit, we are starting to see what the new AI-assisted software development world is going to look like for the next several years. My current (still evolving) take:

- Massive unleashing of experimental work, proofs of concept, rough drafts. This should lead to a huge boost in the amount and creativity of software products that come to market, at the cost of a sudden increase in noise, a veritable din

- Significant decline in average code quality. Some code gets better, a lot gets worse, and the limitations of the current technology and unlocking less experienced developers to create software will lead to a near crisis in poorly built products in the near term

- Large proliferation of tools. As we scramble to adapt, experimentation and perspectives will lead to a vast array of possible solutions, each with their own sets of tradeoffs. Reminds me of the early days of Web 2.0, where there was a new framework every week and they all sucked

- Some emerging best practices. Over the past 6 weeks I have talked to many very experienced developers (often 1-on-1 video calls), and we are now starting to circle some common threads for AI-assisted software dev best practices (these are off the cuff, so don’t expect perfection):

1. Slow down, learn the tools, figure out the tradeoffs
2. Quality still matters when it matters; often, the existing models and tools fall short of maintaining that quality on larger code bases
3. The developer is responsible for the code they ship
4. Documentation (via skills, tasks, or just markdown docs) is tremendously helpful, but you should also understand it, not just rely on the AI to
5. High-level architecture is still an area where a human with a lot of experience can add a ton of value
6. AI is not a substitute for good taste (and caring about things)
7. Some techniques are locally productive and globally harmful (more on this below)
8. The bottleneck is in review, understanding, and higher-level systems architecture more so than coding speed
9. Some developers are more adept than others at various parts of this new value chain. Current teams are full of developers vetted for and hired to do one job who are facing a significantly different way of doing it
10. Coding itself might get done a different way, but the fundamental engineering patterns are often still extremely important. Not in cases where it’s just about satisfying some developer love of symmetry, but definitely in domains like data modeling and the like

More on local optimization vs global concerns: agent chains like Ralph and aggressive code gen can feel incredibly fast, but tend to accumulate inconsistencies and tech debt over time. Speed at a file/feature level does not guarantee speed at the overall system level, and I have felt this personally when I’ve leaned too hard on such tools.

- We will learn more as time moves on. Be kind. We are adapting, evolving, playing with the tools, sharing, feeling the bruises when we get it wrong. There are educators trying to stay ahead of it and provide value. There are normal devs just trying to make a living and stay relevant. None of this is as unique to you as you might think — I hear from others, and they’re feeling the same pressures. Be kind to each other
Peer Richelsen@peer_rich·
for @openclaw i only need a mac mini if i plan to have iMessage right?
Marcin Jan Puhacz@marcinph·
people buying six mac minis just to run claude code on each was not on my 2026 bingo card
Matthew Berman@MatthewBerman·
Just bought a Mac Mini to setup Clawd lets goooooo AGI is here
Marcin Jan Puhacz retweeted
Tal Peretz@talperetz_·
Your team is running MCP servers you don’t know about. Different clients, different devices, no central view. That’s a security problem. Our latest release fixes that. Runlayer 1.25 gives you full visibility into what’s actually running, and the controls to lock it down ↓
ian@shaoruu·
tip for @cursor_ai that a lot of people don't know about: cmd+period to switch to plan mode! very useful
Marcin Jan Puhacz@marcinph·
your API is already an e2e test suite - you just haven't wrapped it in MCP yet. point your agent at it, say "be paranoid, make no mistakes" and it doesn't skip the boring cases. absurdly effective.
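The claim can be made concrete: an endpoint spec already enumerates its parameters, and each parameter type implies a fixed list of boring edge cases the agent gets marched through. A toy sketch of that enumeration (the spec shape and case lists are invented for illustration, not any real MCP wrapper):

```python
# Toy sketch of "your API is already an e2e test suite": every parameter
# type implies a fixed list of unglamorous edge cases. The spec shape and
# the case lists below are invented, not any real MCP wrapper's API.

def edge_cases(param: str, kind: str) -> list[dict]:
    """Enumerate the boring inputs for one parameter."""
    cases = {
        "string": ["", " ", "a" * 10_000, "'; DROP TABLE users;--"],
        "int": [0, -1, 2**63, None],
    }
    return [{param: value} for value in cases[kind]]

# A two-field toy endpoint spec: one string parameter, one int parameter.
spec = {"name": "string", "age": "int"}
suite = [case for param, kind in spec.items() for case in edge_cases(param, kind)]
print(len(suite))  # 8
```

Each generated case becomes one request the agent fires at the real endpoint; the point is that the tedious enumeration, not the creativity, is what machines refuse to skip.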
Kamil Stanuch@KamilStanuch·
Cursor is landing in Poland! 🇵🇱 After stops in SF, NY, and London, the global Cafe Cursor series hits #Kraków on Feb 18, 9:30-16:00. We’re taking over Coffee Garden (Józefa 11) for a full day of building and high-quality caffeine. Bring your laptop, work on your latest project alongside the @cursor_ai team, and connect with the local dev community.

✅ Free coffee on us
✅ Exclusive Cursor credits for builders
✅ Meet the Cursor engineers

Limited co-working slots (AM/PM) so secure your spot now: luma.com/63yubrr2

Kudos to @benln and @ftnabeelah for your support!
Thomas H. Ptacek@tqbf·
I didn't spend much time on it in the post, but the global orchestrator backed by object storage is a fun design: it's SQLite in a configuration where we can scale to indefinitely huge numbers of Sprites. "Multiple SQLite databases" is a design people sleep on.
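The shape being described is one SQLite file per unit of work, with an orchestrator that opens them on demand. A toy sketch using Python's stdlib `sqlite3` and `ATTACH` (all names invented; this is not the actual Sprites schema):

```python
import os
import sqlite3
import tempfile

# Toy sketch of the "one SQLite database per Sprite" design: each unit of
# work writes to its own file, and the orchestrator ATTACHes those files
# on demand. All names here are invented for illustration.
workdir = tempfile.mkdtemp()

def sprite_db(name: str) -> str:
    """Create a per-sprite database file containing one event."""
    path = os.path.join(workdir, f"{name}.db")
    with sqlite3.connect(path) as db:
        db.execute("CREATE TABLE IF NOT EXISTS events (msg TEXT)")
        db.execute("INSERT INTO events VALUES (?)", (f"hello from {name}",))
    return path

paths = [sprite_db(f"sprite{i}") for i in range(3)]

# One orchestrator connection, N attached databases: scaling out is "more
# files", and writers never contend on a single shared database.
orch = sqlite3.connect(":memory:")
for i, path in enumerate(paths):
    orch.execute(f"ATTACH DATABASE ? AS s{i}", (path,))
rows = [orch.execute(f"SELECT msg FROM s{i}.events").fetchone()[0]
        for i in range(3)]
print(rows)  # ['hello from sprite0', 'hello from sprite1', 'hello from sprite2']
```

The appeal of the design: per-database writes stay uncontended, and cross-fleet queries are just `ATTACH` plus ordinary SQL (SQLite's default attach limit is 10 per connection, configurable at compile time).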
Marcin Jan Puhacz@marcinph·
@thdxr
>agents love databases and a filesystem is just the worst kind of database

why
dax@thdxr·
i just put all my opencode data into sqlite

unsurprisingly it can query, filter, aggregate way better than it can when these were flat files

agents love databases and a filesystem is just the worst kind of database
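The win is easy to reproduce: the same records as flat JSON strings versus rows in an in-memory SQLite table, where filter-plus-aggregate is a single statement instead of a parse loop over files. Field names are invented for illustration:

```python
import json
import sqlite3

# Toy version of the move described above: the same records as flat JSON
# strings vs. rows in SQLite. Field names are invented for illustration.
flat_files = [json.dumps({"session": f"s{i}", "tokens": 100 * i}) for i in range(5)]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (session TEXT, tokens INT)")
db.executemany("INSERT INTO sessions VALUES (?, ?)",
               [(d["session"], d["tokens"]) for d in map(json.loads, flat_files)])

# Filter + aggregate is one statement here; with flat files it is a loop
# that must parse every file before it can answer anything.
total, = db.execute("SELECT SUM(tokens) FROM sessions WHERE tokens >= 200").fetchone()
print(total)  # 900 (sessions s2 + s3 + s4)
```

For an agent, the practical difference is that the database answers "which sessions burned the most tokens?" in one tool call instead of a directory walk.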
Rhys@RhysSullivan·
i want a "notes app" that i can put thoughts / ideas in and claude picks them up and starts working on them in the background this is especially wanted for ideas that i think would be cool but don't want to have to setup, where an ai can implement it in the background