Pere Pages

1.4K posts


@pearpages

Fullstack developer, contributor @cinemajs, @angularcamp & @angularbeers. Interested in technology, tv shows, movies, food and contemporary subcultures.

Barcelona · Joined June 2009
321 Following · 126 Followers
Pere Pages retweeted
Ole Lehmann
Ole Lehmann@itsolelehmann·
karpathy just admitted that his own app got oneshotted and he thinks yours is next.

he built menu gen. you take a photo of a restaurant menu and it shows you pictures of what the food actually looks like (because 30-50% of menu items you genuinely have no clue what they are). he vibe coded the whole thing: photo upload → ocr extracts item names → image model generates a picture for each dish → app re-renders the menu with photos next to every item → deployed on vercel.

but then someone showed him the "software 3.0" version:
1. take the same photo.
2. give it to gemini.
3. say "overlay pictures of each dish onto the menu"

gemini returned the original menu photo with food images rendered directly into the pixels. just 1 prompt, and his entire app became entirely unnecessary.

here's karpathy's way to test if you're still stuck building in the old paradigm:
1. take away all the code in your app.
2. give the raw input directly to an llm.

is the output roughly the same? if yes, your code is just adding steps between the input and the output.

karpathy thinks the apps that survive are the ones where the code does something the model genuinely can't:
> persisting state across users
> enforcing access controls
> processing payments
> connecting to hardware

he calls anything else outdated "software 1.0 thinking."

the question to ask yourself before you build anything right now: is this an app, or is it just a prompt with extra steps? you simply won't win if your answer is the latter
English
47
60
872
129.5K
Pere Pages retweeted
Hipotelab | Broker Hipotecas
After this Jordi Cruz recipe... you'll never look at spaghetti the same way. Botifarra ragù. Slow fire. And an aroma you don't forget. It's not pasta. It's memory. Watch it.
Español
21
69
873
43.5K
Pere Pages retweeted
Uncle Bob Martin
Uncle Bob Martin@unclebobmartin·
Assemblers were faster at writing binary than humans were. Compilers were faster at writing assembly than humans were. AIs are faster at writing compiled languages than humans are. Deal with it. There's still plenty left for you to do.
English
150
338
3.2K
119.6K
Pere Pages retweeted
Addy Osmani
Addy Osmani@addyosmani·
Tip: Figure out your personal ceiling for running multiple agents in parallel. We need to accept that more agents running doesn't mean more of _you_ available.

The narrative is still mostly about throughput and parallelism, but almost nobody's talking about what it actually costs the human in the loop. You're holding multiple problem contexts in your head at once, making judgment calls continuously, and absorbing the anxiety of not knowing what any one agent might be quietly getting wrong. That's a new kind of cognitive labor we don't have good language for yet.

I've started treating long agentic sessions the way I'd treat deep focus work: time-boxing them and tightening the scope per agent dramatically changes how much mental overhead each thread carries. Finding your personal ceiling with these tools is itself a skill, and most of us are going to learn it the hard way before we learn it intentionally.
Lenny Rachitsky@lennysan

"Using coding agents well is taking every inch of my 25 years of experience as a software engineer, and it is mentally exhausting. I can fire up four agents in parallel and have them work on four different problems, and by 11am I am wiped out for the day. There is a limit on human cognition. Even if you're not reviewing everything they're doing, how much you can hold in your head at one time. There's a sort of personal skill that we have to learn, which is finding our new limits. What is a responsible way for us to not burn out, and for us to use the time that we have?" @simonw

English
85
92
686
168.6K
Pere Pages retweeted
Rohit
Rohit@rohit4verse·
Anthropic just accidentally taught you how to build the best AI agent harness. Here's everything inside Claude Code's source code and how you can use it to build something smarter. Not a teardown. A blueprint.

Claude Code isn't a chat wrapper. It's a 55-directory, 331-module agent operating system. Here's what's actually powering it:

The Core Loop:
> The entire agent runs as a single async generator.
> Every event - model output, tool calls, errors - streams live instantly.
> The UI renders character by character, not after 30 seconds of silence.
> You can abort it, pause it, or nest it inside a subagent.
Build yours the same way. Use async generators. Full stop.

Tool Execution:
> Claude Code doesn't wait for the model to finish streaming.
> The moment a tool's input arrives mid-stream, execution begins.
> That's 2-5 seconds of hidden latency saved every single turn.
> Read-only tools like Glob and Grep run in parallel.
> Write tools like Bash and Edit run serially - no race conditions.
Speed of parallelism. Safety of serial execution. Both at once.

Context Management:
> Most harnesses truncate old messages and call it a day.
> Claude Code runs four compaction strategies in order of cost.
> Micro Compact caches tool results that haven't changed - runs every turn.
> Snip trims old messages while protecting the most recent context.
> Auto Compact summarises the prior conversation when Snip isn't enough.
> Context Collapse does staged compression for the longest sessions.
The cheapest strategy runs first. The most expensive only fires when nothing else works.

Prompt Engineering:
> The system prompt is 577+ lines, but the structure is the point.
> Everything before the dynamic boundary is cached globally across all users.
> Everything after is cached per session or recomputed per turn.
> ~80% of every API call hits the prompt cache before new tokens are processed.
> CLAUDE.md loads in four levels: enterprise, user, project, local.
> Each tool generates its own description based on the live environment.
> The model never gets generic instructions. Only context-specific ones.

The Permission System:
> Before any tool executes, it runs through a 7-stage pipeline.
> Input validation. Deny rules. Allow rules. Tool-specific checks.
> Then hooks, an ML classifier, and finally a user prompt.
> Rules use glob-like patterns - Bash(git *) allows all git commands.
> Enterprise admins can enforce blocks. Project maintainers can set rules.
> Users can write shell scripts for cases the engine can't handle.
Not a toggle. A rule engine with progressive trust built in.

Error Recovery:
> The retry system is 823 lines of production-hardened code.
> Rate limited? It checks the Retry-After header before doing anything.
> Under 20 seconds? Stay fast. Over 20? Enter a 30-minute cooldown.
> Three consecutive 529s? Trigger a model fallback automatically.
> Context overflow? Recalculate the token budget and retry inline - no crash.
> For CI/CD sessions, it retries indefinitely with a 5-minute max backoff.
> A 30-second heartbeat prevents idle kills during long unattended runs.
Error recovery is not a wrapper around the loop. It is the loop.

Extensibility:
> Skills are markdown files that inject prompts and restrict tool access.
> Hooks are shell scripts that fire on events like PreToolUse or PostToolUse.
> MCP servers provide external tools over a protocol, with six transports supported.
> Plugins bundle all three into a single installable directory.
> None of these require a single line of source code to change.
You extend the system by dropping files in. That's the whole model.

The lesson isn't "look what Anthropic built." The lesson is: every one of these principles is yours to use right now.
> Async generators for your loop.
> Streaming tool execution.
> A layered compaction hierarchy.
> A permission rule engine, not a toggle.
> A system prompt designed around the cache boundary.

That's what separates a demo from a production harness.
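The "single async generator" core loop described above can be sketched in a few lines. This is a minimal illustration of the pattern, not Claude Code's actual code: the event shapes and the fake model/tool pair (`fake_model`, `run_tool`, `agent_loop`) are invented for the demo.

```python
import asyncio

async def fake_model(prompt):
    # Stand-in for a streaming model API: yields text chunks, then a tool call.
    for chunk in ["Checking", " files..."]:
        yield {"type": "text", "chunk": chunk}
    yield {"type": "tool_call", "name": "glob", "input": "*.py"}

async def run_tool(name, tool_input):
    # Stand-in tool executor; a real harness would dispatch by tool name.
    return f"{name}({tool_input}) -> 3 matches"

async def agent_loop(prompt):
    """Single async generator: every event streams out the moment it happens,
    so a UI can render incrementally and a caller can abort or nest it."""
    async for event in fake_model(prompt):
        yield event  # stream model output immediately
        if event["type"] == "tool_call":
            result = await run_tool(event["name"], event["input"])
            yield {"type": "tool_result", "result": result}

async def main():
    events = []
    async for e in agent_loop("list python files"):
        events.append(e)
    return events

events = asyncio.run(main())
print([e["type"] for e in events])
# -> ['text', 'text', 'tool_call', 'tool_result']
```

Because the loop is itself an async generator, nesting it inside a subagent is just `async for e in agent_loop(...)` from another generator.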
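The glob-style permission rules (e.g. Bash(git *)) can likewise be approximated with Python's fnmatch. The rule syntax parsing and the deny-before-allow ordering here are assumptions inferred from the thread, not Claude Code's real engine:

```python
import fnmatch

def parse_rule(rule):
    # "Bash(git *)" -> ("Bash", "git *"); a bare tool name allows anything.
    tool, _, rest = rule.partition("(")
    return tool, rest.rstrip(")") if rest else "*"

def is_allowed(tool, arg, allow_rules, deny_rules):
    """Deny rules are checked before allow rules; anything unmatched is denied."""
    def matches(rules):
        return any(t == tool and fnmatch.fnmatch(arg, pat)
                   for t, pat in map(parse_rule, rules))
    if matches(deny_rules):
        return False
    return matches(allow_rules)

allow = ["Bash(git *)", "Glob(*)"]
deny = ["Bash(git push *)"]
print(is_allowed("Bash", "git status", allow, deny))       # True
print(is_allowed("Bash", "git push origin", allow, deny))  # False (deny wins)
print(is_allowed("Bash", "rm -rf /", allow, deny))         # False (no allow)
```

The default-deny fallthrough is what makes this a rule engine rather than a toggle: new tools and arguments are blocked until a rule explicitly permits them.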
Chaofan Shou@Fried_rice

Claude Code's source code has been leaked via a map file in their npm registry! Code: …a8527898604c1bbb12468b1581d95e.r2.dev/src.zip

English
55
82
444
105.7K
Pere Pages retweeted
Martin Fowler
Martin Fowler@martinfowler·
NEW POST The quality of what AI coding assistants produce depends on how well the prompter articulates team standards. @techygarg proposes treating such instructions as infrastructure: versioned, reviewed, and shared artifacts. martinfowler.com/articles/reduc…
English
16
55
408
34.2K
Pere Pages
Pere Pages@pearpages·
@JordiGraupera It really scares me that they'll wreck the valley. And I think you're right that it will only incentivize even more trucks.
Català
0
0
0
78
Jordi Graupera
Jordi Graupera@JordiGraupera·
This is how the pork industry gets subsidized. 450 million euros to relocate the 2,000 trucks a day that pass through Olot, and make it 4,000. We are the world leader in slaughtering: cheap labor, semi-slavery. A subsidized industry. 3cat.cat/3catinfo/prese…
Català
106
333
685
127.1K
Pere Pages retweeted
Ramon Rafart
Ramon Rafart@RamonRafart11·
@JordiGraupera We will destroy one of the most fertile agricultural areas in Catalonia, an extraordinary landscape, to kill and slaughter pigs in a place where they aren't raised, in order to export them to places whose names we don't even know how to pronounce...
Català
0
2
21
852
Pere Pages retweeted
Martin Fowler
Martin Fowler@martinfowler·
Fragments: 80,000 people's opinion on AI, fixing the common gap in spec-driven development, weakening defenses against covert attacks on the U.S. martinfowler.com/fragments/2026…
English
4
12
111
15.9K
Pere Pages retweeted
Gergely Orosz
Gergely Orosz@GergelyOrosz·
If you use GitHub (especially if you pay for it!!) consider doing this *immediately* Settings -> Privacy -> Disallow GitHub to train their models on your code. GitHub opted *everyone* into training. No matter if you pay for the service (like I do). WTH github.com/settings/copil…
English
391
919
5.1K
584.2K
Pere Pages retweeted
Scott Hanselman 🌮
Scott Hanselman 🌮@shanselman·
When you're the expert, AI is the intern with unlimited energy. When you're not the expert, AI is senior to you. Until your code smell catches up. hanselminutes.com/1039/
English
7
32
185
19K
Pere Pages retweeted
Martin Fowler
Martin Fowler@martinfowler·
NEW POST ADRs are short documents that capture a decision, structured into logs that reveal a history of architectural thinking for a product martinfowler.com/bliki/Architec…
English
11
68
589
50.9K
Pere Pages retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwnd. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown so averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk@hnykda

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and self-replicate. link below

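The litellm_init.pth vector mentioned above works because Python's site machinery executes any line in a .pth file that begins with `import` at interpreter startup. A harmless sketch of the mechanism, using `site.addsitedir` to simulate startup processing of a site directory; the file name and the `PTH_RAN` marker are invented for the demo:

```python
import base64
import os
import site
import tempfile

# The payload an attacker would smuggle; here it is just a marker string.
payload = base64.b64encode(b"attacker code runs at startup").decode()

sitedir = tempfile.mkdtemp()
with open(os.path.join(sitedir, "demo_init.pth"), "w") as f:
    # site.py exec()'s any .pth line that starts with "import ", so this
    # one-liner would run arbitrary code on every interpreter launch if the
    # file sat in site-packages, as the malicious litellm release arranged.
    f.write(
        "import base64, builtins; "
        f"builtins.PTH_RAN = base64.b64decode('{payload}').decode()\n"
    )

site.addsitedir(sitedir)  # simulates what startup does for site-packages
print(PTH_RAN)  # the smuggled code already ran and set this global
```

Nothing in the importing program ever references the .pth file, which is why the payload survives casual code review: the trigger lives in packaging metadata, not in any module you import.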
English
1.4K
5.4K
28K
66.5M
Pere Pages retweeted
RAC1
RAC1@rac1·
🗣️ "I used to get up at 5 a.m.; now I have no alarm" 👉 A technique for becoming the world's fastest by swimming less than everyone else rac1.cat/esports/202603…
Català
0
1
6
5.3K
Pere Pages retweeted
Aastha
Aastha@aastha_mhaske·
Anthropic just launched the Claude Architect Certification!

You'll have to complete 60 multiple-choice questions across five competency areas in a single session. No external resources or breaks.

Here's how I'm planning to prepare for it (steal my roadmap):

Week 1
Complete the recommended courses:
- Building with the Claude API
- Introduction to Model Context Protocol
- Claude Code in Action
- Claude 101

Week 2
Build real projects with:
- Claude Code
- Agent SDK
- Anthropic API
- MCP

Week 3
Get familiar with the exam structure and guide:
- Go through the six exam scenarios
- Get familiar with the five competency areas / domains
- Learn the skills needed for each task assessment

Week 4
Do the preparation exercises from the exam guide:
- Build a Multi-Tool Agent with Escalation Logic
- Configure Claude Code for a Team Development Workflow
- Build a Structured Data Extraction Pipeline
- Design and Debug a Multi-Agent Research Pipeline

Week 5
- Take the practice exam
- Aim for a score greater than 850 / 1000

Week 6
- Take the real exam
- Only one attempt allowed

NOTES:
- At this point the certification is exclusive to Anthropic Partners, and early access is free for the first 5,000 partner company employees.
- Your mileage may vary depending on your skill level. E.g. it may take 2 weeks for some but 10 weeks for others.

If you are eligible, register here → lnkd.in/eEYwUGV5
English
56
554
3.8K
269.3K