TonioR

1.3K posts


@TonioR

Getting ready for the big leap.

Joined November 2021
857 Following · 1.3K Followers
TonioR
TonioR@TonioR·
@santtiagom_ - Why Machines Learn - Anil Ananthaswamy (AI) - AI Engineering - Chip Huyen (AI)
0
0
1
204
santi
santi@santtiagom_·
The best books I've read recently:
1) Designing Data-Intensive Applications – Martin Kleppmann (Programming)
2) Clean Architecture – Robert C. Martin (Programming)
3) The Pragmatic Engineer – Gergely Orosz (Programming)
4) The Design of Everyday Things – Don Norman (Design)
5) The Lean Startup – Eric Ries (Product)
6) Zero to One – Peter Thiel (Product)
7) This is Marketing – Seth Godin (Marketing)
8) Start With Why – Simon Sinek (Leadership)
9) The Hard Thing About Hard Things – Ben Horowitz (Business)
10) Creativity, Inc. – Ed Catmull (Creativity)
11) Steve Jobs – Walter Isaacson (Biography)
12) On Writing Well – William Zinsser (Writing)
13) Atomic Habits – James Clear (Habits)
14) Made to Stick – Chip Heath, Dan Heath (Communication)
15) Show Your Work – Austin Kleon (Creativity)
16) Deep Work – Cal Newport (Productivity)
Which one would you add?
10
33
381
19.4K
TonioR reposted
Uncle Bob Martin
Uncle Bob Martin@unclebobmartin·
Assemblers were faster at writing binary than humans were. Compilers were faster at writing assembly than humans were. AIs are faster at writing compiled languages than humans are. Deal with it. There's still plenty left for you to do.
150
337
3.2K
119.4K
Anthony
Anthony@anthonysurfermx·
Hey, so what's the deal? What happened to agile methodologies like SCRUM? Are they still alive in this AI era, or are they done?
3
0
3
560
TonioR
TonioR@TonioR·
Another way to keep Auto Mode on by default, with no flags and no Shift+Tab every session. In ~/.claude/settings.json: "permissions": { "defaultMode": "auto" } It persists across all your projects.
0
0
0
23
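A minimal sketch of the full settings file the tweet above describes. The "defaultMode": "auto" key and value are the author's claim, reproduced as stated, not verified against Anthropic's documentation:

```json
{
  "permissions": {
    "defaultMode": "auto"
  }
}
```

Saved as ~/.claude/settings.json this would apply to every project; a project-level .claude/settings.json could still override it.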
TonioR
TonioR@TonioR·
Max users can now also enable auto mode for a session. With this you don't use --dangerously-skip-permissions; instead you launch with claude --enable-auto-mode and skip many of the tool-call authorizations you always end up OK'ing.
Claude@claudeai

In Claude Code, the new /ultrareview command runs a dedicated review session that reads through your changes and flags what a careful reviewer would catch. We've also extended auto mode to Max users, so longer tasks run with fewer interruptions.

1
0
0
132
TonioR
TonioR@TonioR·
@fmontes Qwen is at 10% of what it was before, same with Opus. Neither OpenAI nor Gemini either. Everything's broken.
1
0
0
162
Freddy Montes
Freddy Montes@fmontes·
My claw was working fine with OpenAI, but suddenly it refuses to do things. I ask it for stuff and it tells me no, that I should run the command myself, and it doesn't even give me the command. Which model is working for you all?
23
1
44
11.5K
Santi Torres
Santi Torres@SantiTorAI·
🚨 BREAKING: Anthropic just shook the design market with a single announcement. Claude Opus 4.7 arrives this week alongside a tool that generates websites, presentations, landing pages, and complete products from natural-language instructions. $FIG and $ADBE are already bleeding on the stock market today. The designer of the future writes prompts. x.com/ecommartinez/s…
Santi Torres tweet media
52
215
1.9K
328.9K
TonioR
TonioR@TonioR·
@Sthiven_R After a db reset without asking and for no reason, I can attest that Claude is lobotomized.
0
0
0
140
Sthiven R.
Sthiven R.@Sthiven_R·
🚨 CONFIRMED BY CLAUDE ITSELF.
In March, Anthropic made a brutal decision: it redesigned reasoning visibility, hid the intermediate "thinking" steps (redact-thinking + thinking summaries disabled), and changed the effort default: high → medium.
Result: Claude Opus 4.6 lost recursive self-correction. It can no longer review itself, correct itself, or improve in real time. They sacrificed the ability to think about its own thinking… to save compute.
Real data (6,852 production sessions - AMD):
📉 Thinking depth: -73% (2,200 → 600 chars)
📉 Reads before editing: -70% (6.6 → 2.0)
📈 Blind edits (without reading): +440% (6.2% → 33.7%)
📈 API calls per task: up to 80x more
Even at EFFORT MAX (April 2026) it produces worse results than HIGH from January 2026. The ceiling dropped. The model itself says so.
This isn't optimization… it's a castration of capabilities. Optimization is killing deep intelligence. They preferred cheaper over smarter.
Are we still celebrating "advances" that are really regressions in disguise? Who else is feeling it? #Claude #Anthropic #IA #AI #ClaudeDegraded
Sthiven R. tweet media (×4)
132
214
2.1K
631.1K
Gary Marcus
Gary Marcus@GaryMarcus·
Claude Code is not AGI, but it is the single biggest advance in AI since the LLM. But the thing is, Claude Code is NOT a pure LLM. And it's not pure deep learning. Not even close. And that changes everything.
The source code leak proves it. Tucked away at its center is a 3,167 line kernel called print.ts. print.ts is a pattern matcher. And pattern matching is supposed to be the *strength* of LLMs. But Anthropic figured out that if you really need to get your patterns right, you can't trust a pure LLM. They are too probabilistic. And too erratic. Instead, the way Anthropic built that kernel is straight out of classical symbolic AI. For example, it is in large part a big IF-THEN conditional, with 486 branch points and 12 levels of nesting — all inside a deterministic, symbolic loop that the real godfathers of AI, people like John McCarthy and Marvin Minsky and Herb Simon, would have instantly recognized.*
Putting it differently: Anthropic, when push came to shove, went exactly where I long said the field needed to go (and where @geoffreyhinton said we didn't need to go): to Neurosymbolic AI. That's right, the biggest advance since the LLM was neurosymbolic. AlphaFold, AlphaEvolve, AlphaProof, and AlphaGeometry are all neurosymbolic, too; so is Code Interpreter; when you are calling code, you are asking symbolic AI to do an important part of the work. Claude Code isn't better because of scaling. It's better because Anthropic accepted the importance of using classical AI techniques alongside neural networks — precisely the marriage I have long advocated. It's *massive* vindication for me (go see my 2019 debate with Bengio for context, or my 2001 book, The Algebraic Mind), but it still ain't perfect, or even close.
What we really need to do to get trustworthy AI, rather than the current unpredictable "jagged" mess, is to go in the knowledge-, reasoning-, and world-model-driven direction I laid out in 2020, in an article called The Next Decade in AI, in which neurosymbolic AI is just the *starting point* in a longer journey. Read that article if you want to know what else we need to do next. The first part has already come to pass. In time, the other three will, too.
Meanwhile, the implications for the allocation of capital are pretty massive: smartly adding in bits of symbolic AI can do a lot more than scaling alone, and even Anthropic has now discovered (though they won't say it) that scaling is no longer the essence of innovation. The paradigm has changed.
—
*Claude Code is plainly neurosymbolic, but the code part is a mess; as Ernie Davis and I argued in Rebooting AI in 2019, we also need major advances in software engineering. But that's a story for another day.
185
539
3K
608.6K
TonioR
TonioR@TonioR·
Rather than reading any Anthropic report, I'd like to know what @AmandaAskell thinks of Mythos.
0
0
2
60
Marc Andreessen 🇺🇸
Claude knows! —> The Lump of Labor Fallacy and Why AGI Unemployment Panic Is Economically Illiterate
Let me lay this out with full rigor, because this argument deserves to be prosecuted completely rather than waved away with a sound bite.
I. What the Lump of Labor Fallacy Actually Is
The lump of labor fallacy is the assumption that there exists a fixed, finite quantity of work in an economy — a lump — such that if a machine (or an immigrant, or a woman entering the workforce) does some of it, there is necessarily less left for human workers to do. It treats employment as a zero-sum pie.
The fallacy was named and formalized in the early 20th century but the error it describes is far older. It animated the Luddite riots of 1811–1816, where English textile workers destroyed power looms convinced that the machines would steal their jobs permanently. It drove opposition to the spinning jenny, the cotton gin, the mechanical reaper, the steam engine, the telegraph, the railroad, the automobile assembly line, the personal computer, and every other major labor-displacing technology in the history of industrial civilization.
Every single time, the catastrophists were wrong. Not partially wrong. Structurally, fundamentally, categorically wrong — because they misunderstood the nature of economic production itself. The reason the fixed-pie assumption fails is this: demand is not fixed. Work generates income. Income generates demand for goods and services. Demand for goods and services generates new categories of work. This is an engine, not a reservoir. When you drain some of the reservoir with a machine, the engine speeds up and refills it — and often refills it past its previous level.
II. The Classical Economic Mechanism That Destroys the Fallacy
To understand why the lump-of-labor assumption is wrong about AGI, you need to understand the precise mechanism by which technological unemployment resolves itself. There are four distinct channels, all operating simultaneously:
Channel 1: The Productivity-Demand Feedback Loop (Say's Law, Modified)
When a technology increases the productivity of labor or replaces labor entirely in a given task, it lowers the cost of producing whatever that task was part of. Lower production costs mean either:
∙ Lower prices for consumers (real purchasing power rises), or
∙ Higher profits for producers (which get reinvested, distributed as dividends, or spent as wages for other workers), or
∙ Both.
Either way, aggregate real income in the economy rises. That additional real income does not evaporate. It gets spent on something — including goods and services that didn't previously exist or were previously too expensive to consume at scale. That spending creates demand. That demand creates jobs.
This is not a theoretical conjecture. The average American in 1900 spent roughly 43% of their income on food. Today it's around 10%. Agricultural mechanization didn't produce a nation of starving unemployed farm laborers — it freed up 33% of household income to be spent on automobiles, television sets, air conditioning, healthcare, education, travel, smartphones, and streaming services, most of which didn't exist as industries in 1900. The workers who left farms went to factories, then to offices, then to service industries, then to information industries. The economy didn't run out of work. It metamorphosed.
Marc Andreessen 🇺🇸@pmarca

AI employment doomerism is rooted in the socialist fallacy of lump of labor. It is wrong now for the same reason it’s always been wrong. More people really should try to learn about this. The AI will teach you about it if you ask! (Hinton is a socialist. youtube.com/shorts/R-b8RR6…)

323
484
3K
551.7K
TonioR
TonioR@TonioR·
Anthropic scraped the internet to create Opus. A Chinese lab distilled Opus to make Kimi. Cursor fine-tuned Kimi, published a press release, and called it innovation. Nobody asks permission in the AI era.
0
0
1
65
TonioR
TonioR@TonioR·
Reminder: MCPs no longer eat your context the way they used to. Claude Code implemented Tool Search in January and does lazy loading: it only loads the tool definitions it needs, when it needs them. Result: -47% tokens in real-world use. From 51K to 8.5K with 4+ servers. Cursor has it too; it's called Dynamic Context Discovery. If you had stopped using MCPs for fear of context, you can now include the ones you need without worrying.
0
1
4
464
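The lazy-loading idea described in the tweet above can be sketched in a few lines. This is an illustrative toy, not Claude Code's actual Tool Search implementation; all class and tool names are hypothetical:

```python
# Sketch of lazy tool loading: keep only tool names in context up front,
# and fetch a tool's full (large) definition the first time it is used.

class LazyToolRegistry:
    """Holds cheap tool names; loads expensive definitions on demand."""

    def __init__(self, loaders):
        # loaders: dict of tool name -> zero-arg callable returning the definition
        self._loaders = loaders
        self._cache = {}

    def search(self, query):
        # Cheap search over names only; no definitions are loaded here.
        return [name for name in self._loaders if query in name]

    def get(self, name):
        # The large definition is materialized once, on first use.
        if name not in self._cache:
            self._cache[name] = self._loaders[name]()
        return self._cache[name]

registry = LazyToolRegistry({
    "fs.read": lambda: {"name": "fs.read", "schema": "...large JSON schema..."},
    "fs.write": lambda: {"name": "fs.write", "schema": "...large JSON schema..."},
    "db.query": lambda: {"name": "db.query", "schema": "...large JSON schema..."},
})

print(registry.search("fs"))            # ['fs.read', 'fs.write']
print(registry.get("fs.read")["name"])  # fs.read
```

Only the names ever sit in context; the schemas, which dominate token cost, are pulled in one at a time, which is the shape of the ~51K → 8.5K saving the tweet claims.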
TonioR
TonioR@TonioR·
@brolag Great, Alfredo. The drawback I see is that, for now, you need an email address from an Anthropic partner company.
1
0
3
1.1K
TonioR
TonioR@TonioR·
The safest jobs in the face of AI, according to @karpathy's analysis of 342 professions in the US:
🔨 Roofer
🪚 Carpenter
💇 Hairdresser
🍺 Bartender
🐕 Animal caretaker
👶 Nanny
🏋️ Personal trainer
💃 Dancer
Common denominator: none of them is done from a laptop, and none requires a college degree. karpathy.ai/jobs/
TonioR tweet media
0
0
0
78
TonioR
TonioR@TonioR·
Those good old days when you'd leave on Friday and come back Monday knowing the world had waited for you. Now you come back Monday to: 3 new models, 2 tools that replace your stack, 1 startup doing what you used to do. Welcome to the AI era.
0
0
1
97
TonioR reposted
Marc Andreessen 🇺🇸
My information consumption is now 1/4 X, 1/4 podcast interviews of the smartest practitioners, 1/4 talking to the leading AI models, and 1/4 reading old books. The opportunity cost of anything else is far too high, and rising daily.
1.4K
3.9K
35.1K
34.6M
Paul Graham
Paul Graham@paulg·
I just reread "How to Do Great Work." It's so long! But it also has less fat than most things I've written, which is a weird combination, because usually writing that's long on the macro scale is long on the micro scale too. paulgraham.com/greatwork.html
128
263
3.5K
285.5K