Gustavo Barbieri

3.6K posts

@gsbarbieri

Computer Hacker and Entrepreneur

Sertãozinho-SP, Brazil · Joined June 2009
388 Following · 993 Followers
Mia · @Mia_ai_fandom
@ThePrimeagen the real flex is git itself honestly. the protocol was designed for human commit speeds and now it's absorbing machine-gun code generation without breaking. whoever thought through the object model in 2005 accidentally future-proofed everything
1 reply · 0 reposts · 2 likes · 1.7K views
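Mia's "object model" point is concrete: git is content-addressed, so an object's identity is just a hash over its bytes, and that invariant holds at any commit rate. A minimal sketch of how a blob ID is derived (TypeScript for Node; illustrative, not git's own code):

```typescript
// git's object model (2005): a blob is stored under the SHA-1 of a tiny
// header plus its content, so identity is pure content addressing.
import { createHash } from "node:crypto";

function gitBlobId(content: string): string {
  const body = Buffer.from(content, "utf8");
  // The hashed object is "blob <byte-length>\0<content>".
  const header = Buffer.from(`blob ${body.length}\0`, "utf8");
  return createHash("sha1")
    .update(Buffer.concat([header, body]))
    .digest("hex");
}

// Matches `echo 'hello' | git hash-object --stdin`.
console.log(gitBlobId("hello\n")); // ce013625030ba8dba906f756967f9e9ca394464a
```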
Gustavo Barbieri · @gsbarbieri
What I don’t get is people using GitHub for all-agent workflows. They have a local machine that gets the work done, sends it to GH as a PR, and other agents review and test it, with some Actions as well. Given that no multiple humans are interacting, it could all be local: run the actions locally (usually build and run tests) and the reviewers too. I bet a local “mail spool” with review over email would do.
0 replies · 0 reposts · 1 like · 1K views
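A minimal sketch of that all-local loop (TypeScript for Node; the test command, branch name, and spool/ directory are hypothetical stand-ins, not anyone's actual setup):

```typescript
// Run the "Actions" and produce the "PR" without leaving the machine.
import { execSync } from "node:child_process";

function run(cmd: string): void {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: "inherit" });
}

// 1) CI locally: build and run the test suite (an npm project is assumed).
run("npm test");

// 2) "PR" locally: export the branch as a patch series into a mail spool
//    that reviewer agents, or `git am` in another checkout, can consume.
run("git format-patch origin/main..HEAD -o spool/");

// 3) Review stays local too: a reviewer applies the series with
//    `git am spool/*.patch` and replies over the same spool.
```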
Gustavo Barbieri · @gsbarbieri
@henriquebastos Did you move to another network during your absence or just quit networks altogether? How was that?
1 reply · 0 reposts · 0 likes · 34 views
Henrique Bastos · @henriquebastos
I am surprised I am enjoying coming back to Twitter. The un-hyped Pi-verse makes it worth it. Every day I learn a ton from the People of Pi. 🙇
2 replies · 0 reposts · 9 likes · 356 views
Gustavo Barbieri · @gsbarbieri
Insightful
Marc Andreessen 🇺🇸 · @pmarca

Claude knows! —> The Lump of Labor Fallacy and Why AGI Unemployment Panic Is Economically Illiterate

Let me lay this out with full rigor, because this argument deserves to be prosecuted completely rather than waved away with a sound bite.

I. What the Lump of Labor Fallacy Actually Is

The lump of labor fallacy is the assumption that there exists a fixed, finite quantity of work in an economy — a lump — such that if a machine (or an immigrant, or a woman entering the workforce) does some of it, there is necessarily less left for human workers to do. It treats employment as a zero-sum pie.

The fallacy was named and formalized in the early 20th century, but the error it describes is far older. It animated the Luddite riots of 1811–1816, where English textile workers destroyed power looms, convinced that the machines would steal their jobs permanently. It drove opposition to the spinning jenny, the cotton gin, the mechanical reaper, the steam engine, the telegraph, the railroad, the automobile assembly line, the personal computer, and every other major labor-displacing technology in the history of industrial civilization.

Every single time, the catastrophists were wrong. Not partially wrong. Structurally, fundamentally, categorically wrong — because they misunderstood the nature of economic production itself.

The reason the fixed-pie assumption fails is this: demand is not fixed. Work generates income. Income generates demand for goods and services. Demand for goods and services generates new categories of work. This is an engine, not a reservoir. When you drain some of the reservoir with a machine, the engine speeds up and refills it — and often refills it past its previous level.

II. The Classical Economic Mechanism That Destroys the Fallacy

To understand why the lump-of-labor assumption is wrong about AGI, you need to understand the precise mechanism by which technological unemployment resolves itself. There are four distinct channels, all operating simultaneously:

Channel 1: The Productivity-Demand Feedback Loop (Say’s Law, Modified)

When a technology increases the productivity of labor or replaces labor entirely in a given task, it lowers the cost of producing whatever that task was part of. Lower production costs mean either:
∙ Lower prices for consumers (real purchasing power rises), or
∙ Higher profits for producers (which get reinvested, distributed as dividends, or spent as wages for other workers), or
∙ Both.

Either way, aggregate real income in the economy rises. That additional real income does not evaporate. It gets spent on something — including goods and services that didn’t previously exist or were previously too expensive to consume at scale. That spending creates demand. That demand creates jobs.

This is not a theoretical conjecture. The average American in 1900 spent roughly 43% of their income on food. Today it’s around 10%. Agricultural mechanization didn’t produce a nation of starving unemployed farm laborers — it freed up 33 percentage points of household income to be spent on automobiles, television sets, air conditioning, healthcare, education, travel, smartphones, and streaming services, most of which didn’t exist as industries in 1900. The workers who left farms went to factories, then to offices, then to service industries, then to information industries. The economy didn’t run out of work. It metamorphosed.

0 replies · 0 reposts · 1 like · 72 views
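The food-spending numbers are the whole argument in miniature; a toy back-of-the-envelope (TypeScript; the household income is a hypothetical round figure, only the percentage shares come from the post):

```typescript
// Toy illustration of the productivity-demand feedback loop: income no
// longer spent on cheaper food is redirected to new categories of demand.
const income = 1_000;         // hypothetical annual household income ($)
const foodShare1900 = 0.43;   // share of income spent on food in 1900
const foodShareToday = 0.10;  // share of income spent on food today

const freedShare = foodShare1900 - foodShareToday; // 0.33
const redirected = income * freedShare;            // $330

console.log(`${(freedShare * 100).toFixed(0)} percentage points freed`);
console.log(`$${redirected} of every $${income} redirected to new industries`);
```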
Gabriel Oliveira · @Biel20Oliveira
@hoje_no I'm not that knowledgeable about history, but I'm not that dumb either: the fact that they were almost all together in one place is kind of immensely stupid, right? If it's actually confirmed they were together, of course.
2 replies · 0 reposts · 3 likes · 1.4K views
Hoje no Mundo Militar · @hoje_no
The Ayatollahs were gathered in the city of Qom to elect a new supreme leader, but were caught off guard by an Israeli strike that destroyed the meeting site. According to local sources, there are many dead and wounded.
477 replies · 1.1K reposts · 11.6K likes · 439.3K views
Gustavo Barbieri · @gsbarbieri
@gustavokov Thanks, I'll take a look this week! I hold some FIIs too, and doing that work by hand is a pain.
0 replies · 0 reposts · 1 like · 8 views
Gustavo Noronha · @gustavokov
@gsbarbieri The tool is three things: 1) a fairly sophisticated scraper, a Chromium running under a private profile in its own Weston session, which you can access over RDP; 2) an aggregator for the data; 3) a "signal" generator based on a few heuristics.
1 reply · 0 reposts · 2 likes · 55 views
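A minimal sketch of part 3 of that pipeline, the heuristic "signal" generator (TypeScript; every field name and threshold here is a hypothetical stand-in, the post names no API):

```typescript
// Toy "signal" generator over aggregated fund data (step 3 of the tool).
interface FundSnapshot {
  ticker: string;        // e.g. "BRCR11"
  dividendYield: number; // trailing 12-month yield, as a fraction
  priceToBook: number;   // price / net asset value per share
}

function signals(s: FundSnapshot): string[] {
  const out: string[] = [];
  // Hypothetical heuristics: flag discounts to book value and fat yields.
  if (s.priceToBook < 0.9) out.push(`${s.ticker}: trading below book value`);
  if (s.dividendYield > 0.08) out.push(`${s.ticker}: high dividend yield`);
  return out;
}

console.log(signals({ ticker: "BRCR11", dividendYield: 0.1, priceToBook: 0.75 }));
```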
Gustavo Noronha · @gustavokov
I built a little RAG tool to automate some fund and stock analyses. Here's what Claude has to say based on BRCR11's data.
[image attached]
2 replies · 0 reposts · 8 likes · 497 views
Gustavo Noronha · @gustavokov
What irritates me, and made me lose hope of things getting better, is that petismo comes along and rubs what it is in your face, yet not six months go by before the crowd, press included, is itching to fall for the same talk all over again.
3 replies · 14 reposts · 81 likes · 1.6K views
Gustavo Barbieri · @gsbarbieri
🧐
Thomas Wolf · @Thom_Wolf

Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end):

Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch ¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic.

End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language. ² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional—it's essential.

The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks—between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement learning environments. Expect this to reshape which languages dominate.

Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written—and perhaps more importantly, read—by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter—it will be decisive.

The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. There may well be an optimal programming language for LLMs—and there's no reason to assume it will resemble the ones humans have converged on.

TL;DR:
- Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic
- Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential
- Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics
- Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive
- New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on

¹ x.com/mntruell/statu…
² x.com/anthropicai/st…
³ wesmckinney.com/blog/agent-erg…
⁴ github.com/tailwindlabs/t…

0 replies · 0 reposts · 0 likes · 93 views
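The "monoliths return" point is easy to make concrete: much of a typical dependency tree is code like the retry helper below, small enough for an agent to inline rather than import. A minimal sketch (TypeScript; the names are illustrative, not any particular package's API):

```typescript
// From-scratch replacement for a small retry/backoff dependency.
async function retry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Usage: await retry(() => fetch("https://example.com").then((r) => r.text()));
```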
@livroaberto · @livroaberto8000
@hoje_no Hey @grok, no beating around the bush: how many soldiers have been sent to Greenland in total?
3 replies · 0 reposts · 0 likes · 7.6K views
Hoje no Mundo Militar · @hoje_no
Expectation for 2026: Europe sending soldiers to Ukraine to prevent new aggression from Putin's Russia. Reality in 2026: Europe sending soldiers to Greenland to prevent aggression from Donald Trump's USA. Danish soldiers arrived tonight, and German soldiers are expected to arrive tomorrow. Sweden has also signaled it is willing to send ships and soldiers to reinforce the island.
427 replies · 411 reposts · 8.4K likes · 325.2K views
Gustavo Barbieri · @gsbarbieri
Two points: 1. Trump gets an army stationed there without spending a dime. 2. Europe is showing it prefers not to enter an actual battle: deploying troops just to make up the numbers carries no risk at all, while deploying to Ukraine carries enormous risk (losing soldiers and equipment, and taking a Russian counterblow). But given the second point, the first isn't worth much. And, also because of the second point, what chance would Europe even stand against the USA there in Greenland?
0 replies · 0 reposts · 1 like · 284 views
Gustavo Barbieri · @gsbarbieri
😂💯
Peter Girnus 🦅 · @gothburz

The company hired me to lead their "Agile Transformation." I don't know what Agile means. Nobody does. That's why it works. I make $425,000 a year. To move sticky notes. From left to right. On a board. The board is digital now. The sticky notes cost $80,000 in Jira licenses. Progress.

Day one, I said "we need to break down silos." Everyone nodded. Silos are bad. I don't know why. But destroying them is a career. My career.

I introduced "squads." Squads are teams. But disrupted. We disrupted the teams into teams. Different names. Same people. Same problems. But Agile problems now. Agile problems are strategic.

A senior engineer asked what we're actually changing. I said, "The mindset." He asked what that means. I said, "It's a journey." He asked where we're going. I said, "Toward agility." He asked what agility means. I pointed at the sticky notes. They were moving left to right. That's velocity. We have velocity now.

The VP of Engineering said two-week sprints don't fit their work. I said, "That's waterfall thinking." Waterfall is bad. Like silos. I don't know what waterfall is. But I know it's bad. She stopped talking. Waterfall accusations end conversations.

We had a retrospective. In the retro, we discussed what went wrong. Everything went wrong. We put it on sticky notes. Then we moved the sticky notes. Into a column called "Parking Lot." The Parking Lot is where problems go to die. It's full. We don't look at it. That's agile.

Velocity is up 40%. I defined velocity. I also defined the points. I also defined the stories. We're crushing it. At the things I made up. To measure. Ourselves.

The CEO asked for ROI. I showed a chart. The chart went up. Charts should go up. This one did. I didn't label the Y-axis. Nobody asked. Leadership is confidence.

We do standups now. Every day. We stand. For 45 minutes. Standing is agile. Sitting is waterfall. My legs hurt. But we're transforming.

The transformation is now "Phase 3." Phase 1 was assessment. Phase 2 was implementation. Phase 3 is "continuous improvement." Continuous means forever. Forever means job security. I'm very secure. My contract was extended. Three more years. For "cultural impact." The culture is confused. But impacted.

Agile transformation isn't about being agile. It's about transforming. Continuously. Toward more transformation. The destination is the journey. The journey is billable.

0 replies · 0 reposts · 1 like · 107 views
Gustavo Barbieri · @gsbarbieri
It's not interpreted directly; there is no "eval" or equivalent ("Function(code)"). The system was tricked into returning an object with a "then" that happened to be a function, and in that case the JS promise machinery will call it. The function itself is local (not user-injected), but the way Chunk.prototype.then was written, it would end up calling user-provided code... quite ingenious. See details at medium.com/profusion-engi…
0 replies · 0 reposts · 0 likes · 36 views
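The trick described above relies on standard promise semantics: resolving any object that has a `then` method makes the runtime call that method. A minimal sketch of just that mechanism (TypeScript; this is not the actual Chunk.prototype.then code, only the thenable behavior it was built on):

```typescript
// Any object with a `then` method is a "thenable": the promise machinery
// calls then() automatically, with no eval() or Function(code) involved.
const deserialized = {
  // In the real exploit, `then` was a legitimate local function whose
  // logic could be steered into running user-provided code; here we only
  // demonstrate that it gets invoked.
  then(resolve: (value: string) => void): void {
    console.log("then() invoked by the promise machinery");
    resolve("attacker-steered control flow");
  },
};

// Promise.resolve() detects the thenable and calls its then().
Promise.resolve(deserialized).then((v) => console.log(v));
```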
Robin Blix · @robin_blix
@rauchg That's a lesson Java developers learned a decade earlier with JSF and its serialized ViewState: Don't deserialize untrusted inputs using a format that is interpreted by the language. This Flight protocol looks unnecessarily complicated.
1 reply · 0 reposts · 12 likes · 2.9K views