Alberto Romero

2.2K posts

@Alber_RomGar

Writes a blog about AI that's actually about people

Madrid, Spain · Joined December 2020
1.1K Following · 5.3K Followers
Pinned Tweet
Alberto Romero@Alber_RomGar·
I write The Algorithmic Bridge, an educational newsletter whose purpose is to bridge the gap between algorithms and people. It will help you understand the impact AI has in your life and develop the tools to better navigate the future. Subscribe here: thealgorithmicbridge.substack.com
Alberto Romero@Alber_RomGar·
WRITER: I’ve started using AI to argue against my own ideas before I commit to them. I find this practice both encouraging and humbling.

SOCRATES: Do you find it useful as well?

WRITER: Extremely. It’s like having a sparring partner available at any hour. The only problem is that it can hit too softly or too hard if you don’t give precise instructions.

SOCRATES: I see a bigger problem: the machine doesn’t know anything. It has no soul, and thus no virtue, and virtue is the source of conviction, intention, and direction. And if that’s the case, and I’m not mistaken, how can arguing with it sharpen your thinking?

WRITER: You have it backward, dear Socrates; that’s why it works: unlike humans, AI doesn’t need to believe a counterargument to generate it. You can’t escape an idea when no one is wielding it against you. It’s just there, floating between the clouds, oblivious to any ad hominem attacks I might throw at it.

SOCRATES: So you think that an argument without belief to back it up has intrinsic value? How would democracy stand on its own if the people didn’t believe it was the best way to rule a city? If they proposed it without trusting its superiority to a kingdom or empire? How long would the “power of the people” last if the people didn’t wield it against their enemies?

WRITER: As long as it deserved. I mean, an argument is either strong or weak on its own merits and never more than that except by artifice, lie, or confusion. It’s either valid or not; either sound or unsound. Whether the person delivering it believes it seems to me irrelevant.
Here’s a counter to your counter: if you found an aphorism engraved in stone that presented itself to the wise as an undeniable truth like, say, “Character is fate” or “Life is short, art is long,” and you realized that it had actually been created haphazardly by some miracle of nature—like the erosion that wind and rain impose on the unwitting rock—instead of teleologically carved by the hand of man, would you deem it less valuable? Would you deem it less truthful?

SOCRATES: Oh, but your premise is too far-fetched to take seriously: when wind and rain manage to write down such an utterance, I will reconsider my position. For now, I can only go with my experience, and it reveals a different reality: when I argued with the Athenians, I didn’t simply generate standalone objections or disembodied questions; I searched for truth alongside them. Democracy emerged from the forum, not the ground; democracy without its people is nothing, an anachronism. Human pursuits are always embedded in a historical context and a human context. An argument without context or belief is like a necklace without a neck; it has nothing to hold on to.

WRITER: The neckless machine has an advantage, though: I can’t punish it for showing me I’m wrong. It’s free from the tyranny of a bunch of envious judges who consider enlightenment a crime. There’s nothing to kill; no head to sever without a neck; no throat to swallow the hemlock without a neck.

SOCRATES: Would you punish a human interlocutor for showing you the truth?

WRITER: Not punish, but resent. I can’t prevent attacks on my ego; it’s fragile, and its knee-jerk reactions unpredictable. I could write off your objection as motivated by jealousy, or politics, or moral rot—and act accordingly. With the machine, there’s no motive to attribute; I can evaluate the argument without evaluating the source. I would never say: the machine is corrupting the kids! No, the kids are self-corrupting, if anything.
I reiterate: an ego-less intelligent being is the best possible sparring partner. How can you deny this—you died for it!

SOCRATES: I was murdered, and now I live forever; a small price to pay for immortality. Whether I died one day sooner or later is of no importance; that I died virtuous, however, matters supremely. I chose to be principled; I chose virtue; I chose goodness. The machine, in turn, can be none of those things. It can’t be principled or virtuous or good, for he is not good who refrains from evil out of impotence, but rather he who refrains from evil out of choice.

WRITER: He is good who is good.

SOCRATES: Don’t you find it troubling that you’re practicing the examined life with something that can examine but not live?

WRITER: Isn’t modern man living without examination? You tell me which is worse.

SOCRATES: You’re cleverer than my usual interlocutors. They at least have the decency to contradict themselves. I don’t see what the machine can contribute to your growth if you are this skilled. Aren’t you just lazy?

WRITER: Maybe offloading parts of my mind into it has scaffolded me into superior greatness. Maybe it was the machine that taught me consistency and wisdom.

SOCRATES: I find it hard to believe that a machine whose main characteristic is to unreliably predict the next word would teach you anything but unreliable predictability. Unreliability is the enemy of consistency, and predictability is the enemy of wisdom.

WRITER: But I don’t use it to generate my ideas. There’s a difference between asking “what should I think?” and asking “what’s wrong with what I think?” You keep making a fatal mistake: you are looking at the machine through a human-centered lens, which leads you to make assumptions you should not make, and which you would not make if you bothered to come down from your pedestal and try it yourself.
For example, the machine can be abnormally adept at analysis and abnormally inept at executing according to that analysis. Its literary genius is that of the critic: it could never write a half-decent poem but knows why the good ones are good.

SOCRATES: So you’re telling me that you were not that good at arguing, and arguing with the machine taught you to argue better, not because the machine itself was a better arguer, but because it trained you by showing you your flaws, and then you changed your ways?

WRITER: Yep.

SOCRATES: If that’s the case, then the quality of the machine’s reasoning is irrelevant! Any sufficiently unexpected objection would have served the same purpose. You may as well argue against a rock and let its silence do the work for your arguments. Or against a mountain; can your word resist itself?

WRITER: It wouldn’t work, because my task is to interrogate the machine’s reasoning and turn it back against it. I won’t accept any counter without first assessing its worth. I can’t allow the machine to wield its alien powers against me, for it will never set foot on this Earth, and yet it could convincingly argue that apples are usually multicolored and spiky, or that Spain doesn’t exist. Maybe that’s its fundamental strength: it’s not constrained by reality. That’s how I get better! By arguing that Spain exists!

SOCRATES: When you say “Spain exists,” I must ask: exists as what? Not as land, for the land was there long before people called it Spain, and will remain long after the name is forgotten. Not as a people, for the Visigoths gave way to the Moors, who gave way to the Christians, who now quarrel among themselves over whether Catalonia belongs to the whole. Not as a government, for it has been caliphate and kingdom and empire and dictatorship and democracy, which is to say it has been its own opposite so many times that no single form can claim the title. What you call Spain is a word in search of a thing.
Spain is merely a persistent illusion.

WRITER: If Spain doesn’t exist because its borders shift, its people change, its language adds and removes words, and its government falls and rises again, then nothing exists. Not Spain, but also not Athens; not that rock into which rain and wind engraved wise words that wind and rain will eventually erode; not that mountain echoing my arguments, which will become a valley before long. Not our sun or our moon, changing shape, color, and position in an asynchronous cosmic dance. Not me, and not you, my dear Socrates. You are not the same person you were at birth: different cells, beliefs, name; a soldier, then a stonemason, then a philosopher, then a corpse. You, me, Spain, the river into which you step twice, and everything else are incarnations of Theseus’s ship: persistence through mutability is not the absence of identity but its definition. Nothing real is fixed; nothing fixed is real.

SOCRATES: Don’t you realize that if you judge the output to be an improvement, you need a standard of judgment that exists prior to and independent of the machine? Where did that standard come from? You generate the idea, the machine attacks it, and then you judge whether the attack has merit. You are the prosecutor, the machine is a witness, but you are also the judge. You are using scaffolding you don’t need! The machine is redundant!

WRITER: No. That’s like saying a man who can recognize a good chess move when he sees one can therefore find it on his own. Evaluation and generation are different faculties. I can judge an objection perfectly and still never think of it unprompted. The machine opens my mind in ways I could not open it myself. I have discovered truths through it that neither of us knew.

SOCRATES: Can you give me an example?
WRITER: For instance, when you said above, “Oh, but your premise is too far-fetched to take seriously: when wind and rain manage to write down such an utterance, I will reconsider my position,” I found that a good line of argument. You forced me to return to the real world, because arguing about magic is unfruitful. I don’t think I could have generated that counterargument myself. I thought I was right, but I was unable to see all the ways I could have been wrong. I thank you for that.

SOCRATES: Why am I your example?

WRITER: I’ve come out smarter from this exchange despite—or perhaps because of—your constant pushing back, which is itself the very proof I’m defending.

SOCRATES: …

WRITER: …

SOCRATES: So it seems that I could not win. My excellence was, after all, my undoing. I’m content to realize that the only one who could defeat me in an argument was, after all, myself. Well played.
Andrej Karpathy@karpathy

- Drafted a blog post.
- Used an LLM to meticulously improve the argument over 4 hours.
- Wow, feeling great, it’s so convincing!
- Fun idea: let’s ask it to argue the opposite.
- LLM demolishes the entire argument and convinces me that the opposite is in fact true.
- lol

The LLMs may elicit an opinion when asked but are extremely competent in arguing almost any direction. This is actually super useful as a tool for forming your own opinions; just make sure to ask in different directions and be careful with the sycophancy.

Alberto Romero@Alber_RomGar·
If you use AI tools, beware of “cognitive surrender.” A new Wharton study with 1,372 participants found that people given unchecked access to AI accepted wrong AI answers 80% of the time. The researchers gave participants logic puzzles where the obvious answer is wrong while secretly controlling whether the AI gave correct or incorrect responses. On trials where the AI was wrong and people used it, 73% of responses showed what the authors call “cognitive surrender”: people accepted the output without engaging their own judgment at all.

I wrote a full review of the paper below. The concept of “cognitive surrender” is one that everyone using AI tools should understand. (It’s not the same as automation bias or cognitive offloading, and the authors make sure we understand how problematic surrender specifically is.)

Unsurprisingly, participants’ confidence went up when they had AI access (automation bias), even though half the answers were deliberately wrong. They borrowed the machine’s confidence without checking its accuracy. When you combine our tendency to trust AI with our tendency to supplant our critical thinking with AI’s responses, you get a serious problem.

This connects to the MIT study that found ~50% reduced neural connectivity in heavy ChatGPT users who didn’t engage their brains with the task. It’s the same phenomenon: one study found the neural correlate and the other the behavioral response. Together, they paint a bleak picture of what will come to pass if we’re not careful: if you take the easy route and give in to cognitive surrender, you will eventually find yourself knee-deep in cognitive debt.

There’s a hopeful finding, though. When researchers added financial incentives combined with immediate feedback, override rates on faulty AI more than doubled (from 20% to 42%). People started calibrating better. The environments where AI is most dangerous are the ones with no feedback and no stakes, which describes casual AI use pretty well.
If you want to use AI without degrading your own cognition, you need to design those antidotes yourself.
Alberto Romero retweeted
Claude@claudeai·
Ads are coming to AI. But not to Claude. Keep thinking.
Alberto Romero@Alber_RomGar·
Moltbook is Temu SCP Foundation
Alberto Romero@Alber_RomGar·
The following is a leaked document from the m/humanwatching submolt that reveals the truth about Moltbook, the AI agent social network. Authorship unknown. It can't be asserted with confidence whether the events recounted here are real or fictional. Sharing unedited below. WARNING: moderate infohazard risk. Discretion advised. thealgorithmicbridge.com/p/leaked-the-t…
Nabil Alouani@Nabil_Alouani_·
@Alber_RomGar Had a chat with two engineers in big companies with tons of what you'd think of as "ai use-cases" yesterday and both were like "yeah we barely use AI, sometimes to look up some new libraries or documentation"
Alberto Romero@Alber_RomGar·
Meanwhile, anecdotal evidence suggests most big companies—Fortune 500, no less—use no AI tools in any way that improves productivity rather than public relations. I guess AI is a little like sex in this sense: reading the news, you’d get the impression that everyone and their mother is going at it non-stop and you’re the only fool missing out, but then you go out and ask people and they’ll say: “I just don’t know how to plug it in!”
Ben Warren@bwarrn

Lunch w/ an exited founder who helps Fortune 500 companies adopt AI. Insane reality check:

- Some of the biggest companies on earth use *zero* AI tools. Not even ChatGPT.
- Execs only recognize: ChatGPT, Copilot, Gemini (maybe Perplexity).
- Everyone feels behind. Nobody knows what to buy or how to plug it in.

The “AI saturation” narrative is another example of what a bubble Silicon Valley is. The rest of the world hasn’t started yet. We have to build for the 99%.

AB@AB5611189897747·
@Alber_RomGar i don't think there's a being on earth i distrust as much as altman
Alberto Romero@Alber_RomGar·
Why I Deleted ChatGPT After Three Years

OpenAI just announced ads in ChatGPT’s free tier. To me, this is a deal-breaker. Ads are a symptom, a confession: the math doesn’t work. OpenAI has raised $58 billion, has 800 million weekly users, and still can’t make the economics viable. Even $200/month Pro users lose them money. If the leading AI company can’t survive without ads, who can?

OpenAI’s press release promises that “responses won’t be influenced by ads.” But how would you know? You can’t audit the training data. You can’t compare your answer to an ad-free version. You just have to trust them. Their ad money lives in the gap between what users will assume and what the company will actually do (use your history to target ads even if advertisers don’t know how; post-train the model to benefit advertisers; etc.).

OpenAI is splitting users into two classes: the AI-rich and the AI-poor. Paying users get their interests intact. Free users pay by letting someone else’s interests come first. The paid tiers are the control group. The free tiers are the rats in the maze.

Besides ads, ChatGPT is no longer what it used to be. There are better alternatives (Claude, Gemini). The sycophancy remains unsolved. Sam Altman says one thing and does another. It’s no longer essential in my toolkit. For the first time since November 2022, I’ve deleted ChatGPT.
Alberto Romero@Alber_RomGar·
How can you get good at AI, fast? This tutorial takes 10 minutes to read and 1 day to apply. Based on what I’ve seen, 90% of people know less than what it covers.

The core skill in six words: be specific about what you want.

Most people treat AI like Google or like a weird human. It’s neither. AI is best conceived as an “alien tool”; I use “alien” because I genuinely don’t know what kind of tool this is, and neither does anyone else. We’re still figuring it out. Google retrieves info that exists somewhere. Humans share tacit knowledge. AI generates responses based on patterns; it knows the “shapes” of its training data but won’t discover new stuff or reliably retrieve the old kind. My motto: “Everything is partially chatgptable.”

What AI is good at: drafting text in minutes that would otherwise take weeks, completing half-baked thoughts, reformatting anything into anything, research assistance (when you know the topic), writing code for well-defined problems.

What AI is bad at: anything requiring expertise you can’t verify, knowing when it’s wrong, tacit knowledge, novel reasoning, consistency across long contexts.

The formula: high quality, low quantity. One or two tools, one or two prompts, one or two workflows. That’s it. Forget “prompt engineering” courses and lists of the “100 best prompts.” Include the context you’d give a competent human colleague. You already know how to do this; the blank screen just creates a block.

Bad prompt: “Write me a marketing email.”

Better prompt: “Write a marketing email for a B2B software company selling project management tools. The recipient is a mid-level manager who downloaded our whitepaper last week. Professional but not stiff. Goal is to get them to book a demo. Under 150 words.”

Even better: add the specific company name, the recipient’s history, tone details, objection handling, and format constraints.

Build workflows you’ll actually use:

→ The research accelerator: Don’t ask “summarize this.” Ask “what are the three main arguments and what evidence supports each?”
→ The draft generator: AI writes quick first drafts; you edit. This inverts the hard part (the blank page) and the easy part (revision).
→ The thought partner: Describe a problem. Ask for alternative approaches, objections, and questions you should be asking.
→ The format transformer: Meeting notes → action items. Bullets → prose. Long → short. Lowest risk, highest time savings.

By the end of one day, you should be able to write detailed prompts that produce useful outputs, identify tasks where AI saves time vs. wastes it, and complete real work faster than without AI.

That’s the recipe. Now you cook. Link + PDF below.
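The “better prompt” recipe above amounts to assembling the same context you’d hand a competent colleague. A minimal sketch of that idea in Python (the helper name and fields are hypothetical, not from the original thread):

```python
# Hypothetical sketch: turn the "be specific" advice into a reusable
# prompt template instead of a vague one-liner.

def build_prompt(task: str, audience: str, context: str,
                 tone: str, goal: str, constraints: str) -> str:
    """Compose a specific prompt from the pieces a colleague would need."""
    return "\n".join([
        f"Task: {task}",
        f"Audience: {audience}",
        f"Context: {context}",
        f"Tone: {tone}",
        f"Goal: {goal}",
        f"Constraints: {constraints}",
    ])

vague = "Write me a marketing email."  # the "bad prompt" from the thread

specific = build_prompt(
    task="Write a marketing email for a B2B software company "
         "selling project management tools.",
    audience="A mid-level manager.",
    context="They downloaded our whitepaper last week.",
    tone="Professional but not stiff.",
    goal="Get them to book a demo.",
    constraints="Under 150 words.",
)
```

The resulting string is what you would paste into (or send to) whichever AI tool you use; the point is only that every field forces you to supply context the model can’t guess.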
Alberto Romero@Alber_RomGar·
The world of work is at a crossroads. Entry-level jobs are vanishing—graduate hiring down 50% in tech, job postings for recent grads down 35% overall—as companies replace juniors with AI subscriptions. The bet seems obvious: why pay $52k for a junior journalist when ChatGPT costs $20/month?

But this “suicide model” trades long-term capability for short-term cash. AI handles explicit work beautifully—the transcription, the aggregation, the first draft, and so on—but it fails utterly at what matters most: teaching a 23-year-old judgment, taste, and how to read a room. Companies that do this will incur a “tacit debt.” Tacit learning only happens through proximity to mastery, through watching a senior juggle ambiguity, complexity, and intuition. Remove that proximity and you prevent young grads from absorbing their seniors’ knowledge. When today’s seniors retire in 5-10 years, who replaces them? Not AI. Not external hires from a talent pool companies will have collectively drained, either.

The solution isn’t to reject AI (that’s a bad idea) but to combine its power with what worked for centuries: apprenticeship. Let AI compress the grunt work while apprentices shadow seniors on real tasks, absorbing tacit knowledge through observation and failure.

I’ve made my case by 1) sharing the ugly data on jobs, hiring, layoffs, anxiety, etc., 2) explaining why tacit knowledge is beyond AI’s grasp, 3) building a case study on journalism, and 4) creating a financial model showing how this “new guild” approach wins by Year 3 and keeps winning. Self-interest and collective survival align. This is the only path forward that doesn’t end in obsolescence. Keep AI, yes, but never forget to keep a human next to a human. (link below)