PDA

10.7K posts

PDA banner
PDA

@arkatPDA

Founder @ stealth-mode startup. Ex-CTO & co-founder of @tuquejasuma

Córdoba, Argentina (CBA) · Joined March 2008
1.1K Following · 846 Followers
Pinned Tweet
PDA
PDA@arkatPDA·
Ciudad Autónoma de Buenos Aires, Argentina 🇦🇷 QME
0
1
5
0
Fernando Carnota
Fernando Carnota@fernandocarnota·
The scam @SamsungArg pulled on me is now official. The 58" TV lasted only 3 years. I filed a claim. They asked me for every kind of proof, photos, videos. Now the model has been pulled from the market because they got so many complaints. Their answer: 700 thousand pesos to repair it. Buy any no-name brand instead, it's better and they won't jerk you around. @SamsungArg, thieves.
308
330
1.8K
178.1K
PDA retweeted
Noe Barral Grigera
Donald Trump's Secretary of Labor resigned over "improper conduct." What was her improper conduct? She took personal trips on aircraft paid for with taxpayer funds. 🌚
108
2.7K
12.8K
140.5K
PDA retweeted
Aakash Gupta
Aakash Gupta@aakashgupta·
I'll be honest: I thought I could replace parts of my team with AI agents. I kept trying. It kept failing.

What I'm actually seeing as a founder: my most AI-pilled employees are becoming more effective than they have ever been. The right employee who is 10x-ing with AI, using it better than I do and teaching me how, is more valuable than anyone I could hire today.

Most employees don't see this. Only CEOs and founders do. The frontier is moving so fast that the people learning it best are pulling away from everyone else on the team. All the alpha right now is in being at the frontier edge of learning AI.

The way you get there is to follow the handful of people who are actually doing the work and distilling what matters. The ones I follow: Greg Eisenberg, Riley Brown, Lenny Rachitsky, Carl Vellotti, Hannah Stulberg, Ruben Hassid, Paweł Huryn.

The empty pattern is scrolling X for three hours a day and absorbing noise. The winning pattern is a few high-signal digests from people you trust who are testing the latest stuff so you don't have to. That's why I've shifted most of my own time to using the latest tools and distilling what actually works.

The punchline: jobs for people who can use AI well are going up, not down. Same thing happened with Excel. Same thing happened with the computer. The people who learned them early got the high-paying jobs. Be the 10x employee. Or hire one.
38
50
454
46K
PDA
PDA@arkatPDA·
Today, programming with LLMs, there are two tendencies: Agentic engineering: understanding your system so as to generate code so well that you almost never need to look at what it does. Vibe coding: not understanding it. Not looking. Trusting the LLM.
0
0
0
19
PDA retweeted
BSCN
BSCN@BSCNews·
🚨 CRYPTO: NYT REVEALS ARGENTINE PRESIDENT MILEI MADE 7 CALLS TO LIBRA PROMOTER ON NIGHT OF TOKEN LAUNCH

The New York Times reported that Argentine President Javier Milei exchanged seven phone calls with crypto lobbyist Mauricio Novelli on the night he promoted the LIBRA token on X in February 2025. Federal investigators have named Milei as a person of interest. LIBRA briefly hit a $4.6B market cap before crashing 90%.

Forensic analysis of Novelli's phone revealed a draft $5 million payment agreement tied to Milei's endorsement, monthly payments to Milei dating back to 2021 that doubled after he became president, and WhatsApp messages referencing payments to his sister Karina.

Milei's government later dissolved the investigative task force probing the scandal. A congressional committee found Milei provided "essential collaboration" to the project. Blockchain data showed eight wallets linked to LIBRA cashed out $107 million. Approximately 114,410 investor wallets suffered losses. Milei denies wrongdoing.
293
2.7K
6.4K
318.2K
PDA retweeted
Boris Cherny
Boris Cherny@bcherny·
Mistakes happen. As a team, the important thing is to recognize it’s never an individual’s fault — it’s the process, the culture, or the infra. In this case, there was a manual deploy step that should have been better automated. Our team has made a few improvements to the automation for next time, with a couple more on the way.
321
837
11K
1.4M
PDA retweeted
Paweł Huryn
Paweł Huryn@PawelHuryn·
I thought people were exaggerating. Then I burned $100+ of extra Claude usage in 1 hour.

What happened: agentic coding with Claude in Chrome enabled. 10-20x multiplier on already expensive operations.

A survival guide until the fix ships:

Claude Code (terminal):
→ npx @anthropic-ai/claude-code (bypasses the Bun cache bug)
→ Avoid --resume (breaks cache)
→ /compact to compress context mid-session
→ /effort medium for routine work, /effort low for simple tasks
→ --model sonnet for exploration. Opus for final passes only
→ /mcp — disable servers you're not actively using

VS Code / Cursor:
→ Same engine. Same bugs. Same fixes.

Cowork (desktop):
→ Default to Sonnet. Not every task needs Opus.
→ Auto-selects effort
→ Start a fresh session if context bloats

Dispatch (mobile):
→ No model picker in UI
→ Delete session, start new one if it spirals
→ You can ask it to dispatch sessions with specific models

The biggest lever most people miss: stop using Claude in Chrome for scraping. Dev-browser gives you DOM snapshots instead of screenshots. Text is 10-100x cheaper than pixels.

Anthropic knows. Fix is coming. Until then — /compact, /effort medium, and kill your unused MCPs.
Paweł Huryn tweet media
Alex Volkov@altryne

PSA: If you've been running out of Claude session quotas on Max tier, you're not alone. Read this.

Some insane Redditor reverse engineered the Claude binaries with MITM to find 2 bugs that could have caused cache invalidation. Tokens that aren't cached are 10x-20x more expensive and are killing your quota. If you're using your API keys with Claude this is even worse. This is also likely why this isn't uniform: while over 500 folks replied to me and said "me too", many (including me) didn't see this issue.

There are 2 issues that are compounded here (per the Redditor; I haven't independently confirmed this):

1st bug he found is a string replacement bug in Bun that invalidates the cache. Apparently this has to do with the custom Bun JavaScript binary that ships with the standalone Claude CLI. The workaround there is to use Claude with `npx @anthropic-ai/claude-code`.

2nd bug is worse: he claims that --resume always breaks the cache. And there doesn't seem to be a workaround there, except pinning to a very old version (that will miss out on tons of features). This bug is also documented on GitHub and confirmed by other folks.

I won't entertain the conspiracy theories that Anthropic "chooses" to ignore these bugs because it gets them more $$$; they actively benefit from everyone hitting as many cached tokens as possible, so this is absolutely a great find and it does align with my thoughts earlier. The very sudden spike in reporting for this, and the non-uniform nature (some folks are completely fine, some folks are hitting quotas after saying "hey"), definitely points to a bug.

cc @trq212 @bcherny @_catwu for visibility in case this helps all of us.

29
21
214
46.2K
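The 10x-20x cache-invalidation multiplier discussed in the thread above is easy to sanity-check with arithmetic. A minimal Python sketch: the per-token price and the 10% cached-token discount are illustrative assumptions for the exercise, not Anthropic's actual pricing.

```python
# Rough cost model for prompt caching, illustrating why cache
# invalidation inflates spend. cache_discount=0.1 means a cached
# token costs 10% of the uncached price, i.e. uncached tokens are
# 10x more expensive than cached ones (the low end of the claim).

def effective_cost(tokens: int, price_per_token: float,
                   cache_hit_rate: float, cache_discount: float = 0.1) -> float:
    """Blended cost when `cache_hit_rate` of the tokens hit the cache."""
    cached = tokens * cache_hit_rate
    uncached = tokens - cached
    return cached * price_per_token * cache_discount + uncached * price_per_token

# One million input tokens at a hypothetical $3 per million:
healthy = effective_cost(1_000_000, 3e-6, cache_hit_rate=0.9)  # cache working
broken = effective_cost(1_000_000, 3e-6, cache_hit_rate=0.0)   # cache invalidated
print(f"${healthy:.2f} -> ${broken:.2f} ({broken / healthy:.1f}x)")  # $0.57 -> $3.00 (5.3x)
```

Even with a modest 90% hit rate to start from, losing the cache entirely more than quintuples the bill in this toy model, which is the right order of magnitude for sudden quota exhaustion.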
PDA retweeted
Aaron ⚡️
Aaron ⚡️@TheEcomNomad·
Full reverse engineering writeup, Ghidra analysis, MITM methodology, and verification script here reddit.com/r/ClaudeAI/com… Credit to the researcher who dug through a 228MB binary with Ghidra + MITM proxy + radare2 to find this. Legit work.
0
1
3
50
PDA retweeted
La estamos viendo
La estamos viendo@LaEstamosViendo·
More than 10,000 professors have resigned from public universities in the last two years over low salaries. Some drive for Uber. Others leave the country. "There has never been a teardown this fast and this crude, not in the '90s and not in 2001." They hate education. They hate knowledge.
24
945
2.4K
39.5K
PDA retweeted
Tendencias Mundiales
Tendencias Mundiales@porqetendencias·
"6.3 MILLONES" Porque la esposa de Manuel Adorni fue contratada por YPF para dar tres horas de clase de "coaching ontológico" a 6.3 millones de pesos
Tendencias Mundiales tweet media
286
2.6K
11.7K
640.5K
PDA retweeted
Ben Holmes
Ben Holmes@BHolmesDev·
I’ve used Opus 4.6 and GPT 5.4 on a mix of projects since release, and want to break down where I think they uniquely excel. It’s more nuanced than you’d think!

Rigor of code - GPT 5.4. It goes the distance validating its work without asking. Opus needs explicit instruction to do this, and even then, it misses more edge cases.

Clarity of code - Opus 4.6. Claude is a better communicator, which carries into the code. Variable names are clearer and less mechanical, which improves reviewability. This is very important since code review is the bottleneck for most engineering teams. It also adds the right amount of doc comments. GPT simply never comments or explains its work; it’s like working with an obtuse engineer who wants the solution to speak for itself. Sometimes it does, other times not.

Similarly, rigor of plans goes to GPT 5.4, while clarity of plans goes to Opus 4.6. An interesting point though: GPT performs better talking through a strategy without a plan, while Opus needs planning mode to put in any rigor. I find myself forgetting plan mode altogether using GPT 5.4.

Quality of research - toss-up. Opus spends longer researching with web search, but GPT spends longer studying the existing codebase. You may think codebase research matters more, but researching how others solve the same problem can be just as important. Maybe more important for greenfield.

Quality of conversation - Opus 4.6. It’s just better to talk to, which matters when using these things every day. GPT 5.4 was clearly trained to challenge the user more, which results in a tendency to *always* say you are wrong. I’ve had bizarre interactions where GPT claims something is “not quite right,” then restates exactly what we decided on in the last turn. On a personal level, it’s annoying. On a practical level, it makes iteration on a plan slower. THAT SAID, it takes sufficient pushing for Opus to challenge your thinking in this way. Simply say “I’m impartial” and ask questions to avoid that, as you would with a person.

Overall winner - Opus to make it work, GPT to make it good. I don’t have a good system for when to switch tools, but on average, I prefer Opus early on and GPT for optimization and discussing architectural decisions. Opus is also better for any design-related tasks (but state management in frontend apps is better handled by GPT).
140
91
1.5K
202.8K
PDA retweeted
Facundo Iglesia
Facundo Iglesia@FacundoIglesia·
I spoke with the families of the 12 disappeared people identified at La Perla and with the scientists who took part in the search. You can read it in The Guardian. theguardian.com/world/2026/mar…
1
118
433
21.4K
PDA retweeted
Hugo Lamadrid
Hugo Lamadrid@hugohlamadrid·
Hugo Lamadrid tweet media
31
444
4.1K
23K
PDA retweeted
maneul
maneul@_mrmanu·
My grandfather never talks much about his time in La Perla; he doesn't like reliving the torture out loud. Today he brought it up himself, asked me whether my friends talk about the dictatorship, whether, when I write, I write about these things. That he can tell me about them because he lived them up close.+
316
21.8K
83.3K
0
PDA retweeted
Boca Juniors
Boca Juniors@BocaJrsOficial·
Boca Juniors reaffirms its commitment to keeping alive the memory of the 30,000 disappeared left by the last military dictatorship in Argentina. 50 years after the civic-military coup, we say #NuncaMás
Boca Juniors tweet media
758
8.3K
29.2K
512.2K
PDA retweeted
Tina
Tina@feminstina·
Every March 24 I remember this piece by Juan Solá: "They say that before giving birth, she spoke with God again. They say that every night she cried and begged him, please, that her daughter would have her eyes. That way, maybe, her grandmother would recognize her."
16
5.4K
34.3K
347.9K
PDA retweeted
Maguy-Ta🔥
Maguy-Ta🔥@maguyar83·
Elena called me. Damn, this video destroyed me 🥺🇦🇷❤️
26
4.9K
27.3K
807K
PDA retweeted
Gabriel Castro
Gabriel Castro@GabrielCastroOK·
In just two minutes, the chilling figures of what the civic-military dictatorship was. #NuncaMas #Fueron30000
76
5.7K
20.2K
314.5K
PDA retweeted
Kimi.ai
Kimi.ai@Kimi_Moonshot·
Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is the open model ecosystem we love to support. Note: Cursor accesses Kimi-k2.5 via @FireworksAI_HQ's hosted RL and inference platform as part of an authorized commercial partnership.
521
1.4K
20.4K
3.5M