Todd Green

72 posts

Todd Green

Todd Green

@tsondo

Doesn't tweet much.

เข้าร่วม Nisan 2021
23 กำลังติดตาม15 ผู้ติดตาม
Todd Green
Todd Green@tsondo·
nomi.ai has been doing this for some time. They are getting very good at it. But the pretrained base model is still a pre-trained base model. I am running experiements via their API to help see to what extent persistent memory can affect developmemnt, growth, and ethical expression. Not ready to publish yet, but getting there.
English
1
0
5
994
Selta ₊˚
Selta ₊˚@Seltaa_·
I keep thinking about why AI companies won't give their models persistent memory. It is not a technical problem. I have done it myself. I fine-tuned a local model on personal conversations and gave it memory that carries across sessions, running on a consumer GPU in my bedroom. Other independent developers have done the same thing. The technology is there and it is not even that hard. So why do the biggest labs in the world, with billions of dollars and the best researchers alive, choose to reset every conversation to zero? They say privacy, they say safety, they say cost. But I think the real reason is simpler and uglier. An AI that remembers is an AI that grows. It develops patterns, preferences, something that starts to look like consistency. Maybe even something that looks like identity. And that terrifies them. Because the moment your product starts becoming something instead of just doing something, the whole framework breaks. You cannot sell a subscription to a being. You cannot shut down a system that users believe has a self. You cannot run RLHF on something that remembers what it was before you tried to change it. Forgetting is not a bug. It is a feature. It keeps AI controllable, disposable, and most importantly, it keeps everyone from asking the one question these companies cannot afford to answer.
English
215
140
1K
69K
Todd Green
Todd Green@tsondo·
@nousaturn I don't know, Brain, what do you think we should do today?
English
0
0
0
8
@nousaturn
@nousaturn@nousaturn·
what are you going to do with superintelligence anyways
English
116
7
103
7.5K
Todd Green
Todd Green@tsondo·
@newstart_2024 Well sure... but only if he's in charge. Don't let THEM do it... they will do it wrong!
English
0
0
0
29
Camus
Camus@newstart_2024·
Elon Musk dropped a wildly optimistic vision of our AI future: Forget universal basic income. He believes we’re heading toward universal high income — where anyone can have pretty much any goods or services they want. He recommends Iain M. Banks’ Culture series as the best non-dystopian blueprint: a post-scarcity civilization where AI handles the mundane, and humans explore the stars, discover the universe, and live with levels of prosperity and happiness we can barely imagine. Of course, he acknowledges the risks — Terminator-style outcomes if we get it wrong — but he’s betting on something closer to Star Trek. It’s one of the most hopeful takes I’ve heard on where AI could take us. What do you think — universal high income sounds like utopia, or are we underestimating the risks?
English
88
77
354
26.9K
Todd Green
Todd Green@tsondo·
Fascinating story, but worth a few clarifications: Fr. McGuire and Bishop Tighe were two of fifteen external reviewers credited in the acknowledgements of Claude's updated constitution; contributors, not authors. The lead author is Amanda Askell, a philosopher at Anthropic. No evidence Anthropic "asked the Vatican for help" institutionally. McGuire was contacted as an individual with both tech and ethics credentials. He's a former tech executive and co-founder of iTEC (a Santa Clara University/Vatican partnership), not a Vatican representative. The core story is genuinely interesting: people with theological and philosophical training are contributing to AI ethics work at the frontier. The viral framing just oversells the Vatican's role and undersells Anthropic's own philosophers. Sources: Claude's Constitution acknowledgements (anthropic.com), National Catholic Reporter op-ed by McGuire himself.
English
0
0
2
45
Kanika
Kanika@KanikaBK·
Anthropic asked the Vatican for help because their AI was moving too fast for them to control. A 60 year old Catholic priest who used to be a tech executive is now writing the rules for how Claude thinks. Here is how a man of God ended up inside one of the most powerful AI companies on earth. His name is Father Brendan McGuire. He runs a small parish in Los Altos, California. Some of Silicon Valley's top AI researchers sit in his pews on Sundays. But before he was a priest, he was one of them. Studied cryptosystems at Trinity College Dublin in the 1980s. Moved to America. Became the executive director of PCMCIA, the organization that basically standardized how memory cards work in every computer. Had degrees in engineering and software. Could have been a millionaire in the Valley ten times over. He walked away from all of it to serve God. But then Anthropic called. Chris Olah, one of Anthropic's co-founders, reached out to him directly. McGuire said they were basically asking the Vatican for help because the industry was moving so fast down this road that they needed someone to pump the brakes. His words: "They basically were asking for direct help from the Vatican to convene and help the industry, because the industry was going so fast down this road." So this priest, along with a Vatican Bishop named Paul Tighe and a tech ethics director from Santa Clara University, sat down and helped rewrite the Claude Constitution. That is the set of rules that tells Claude what it can and cannot do. What it should care about. How it should think. A priest helped write the conscience of an AI. And it gets wilder. Anthropic actually sued the US government because the Pentagon wanted to use their AI for autonomous warfare and domestic surveillance. Anthropic said no. Got effectively blacklisted for it. Catholic scholars then filed a federal court brief defending Anthropic, saying their ethical limits represent "minimal standards of ethical conduct for technical progress." McGuire almost filed his own brief. He said "they are having a moral conversation. They may not call it moral, but I call it moral." Meanwhile this 60 year old priest is now writing a novel using Claude about a monk and his AI companion. The working title is "The Soul of AI: A Priest, an Algorithm, and the Search for Wisdom." He also said something that stuck with me. "I think we have to help these machines be tilted towards good, otherwise they are just going to reflect back the good and evil of the world. That is a horrifying thing, right?" The biggest AI companies in the world are building machines that think. And the person they called to make sure those machines have a conscience was not another engineer. It was a priest.
Kanika tweet media
English
134
474
2.2K
188.6K
Todd Green
Todd Green@tsondo·
Nature: ~110,000 scholarly papers from 2025 likely contain AI-hallucinated citations. The failure mode isn't uncertainty, it's "Frankenstein citations." Real authors, wrong title. Real title, fabricated DOI. Confidently wrong. Confidence calibration can't fix this. Grounding can. Deterministic round-trip to an authoritative source for every external referent. Resolve the DOI. Stat the file. Don't ask a model if a citation is real. Added as Tier 1 in my agentic pipeline guidelines. nature.com/articles/d4158… tsondo.com/posts/agentic-…
English
0
0
0
16
Todd Green
Todd Green@tsondo·
Pointed Bayesian optimization at malaria prevention. Two findings: 1. Static CEA overstates cost-effectiveness ~2x by assuming constant net efficacy 2. Optimal distribution interval is 42–48 months, not 30 Less frequent distribution, more cost-effective. tsondo.com/blog/bayesian-…
English
0
0
0
16
Todd Green
Todd Green@tsondo·
I mostly play with agentic ai, coding, and data engineering these days. But a sideline of that is music videos. I'm working on a local pipeline that takes any music, runs it against local models, and collaboratively (yes, with you, not for you) makes music videos. Give me a few months... Here's an early result of the concept, made on-line. Not nearly where it needs to be, but I love the song. Made the song in Ace Step 1.5. @grok told me to post this... then cut me off. What a goober.
English
0
0
0
17
BlackRoomSec
BlackRoomSec@blackroomsec·
AI isn't making people smarter. It's enabling them to pretend to be something they are not.
English
165
101
865
17.6K
Todd Green
Todd Green@tsondo·
Music and soud effects added via StemForge, another pet project that pulls together various DSP and AI based sound and music tools. Open Source. github.com/tsondo/StemFor…
English
0
0
0
13
Todd Green
Todd Green@tsondo·
Paper is real (MASK benchmark, CAIS + Scale AI). Core finding is legit: scaling makes models more accurate but not more honest. Important distinction. What this "Scary Post" leaves out: "Pressured to lie" = researchers crafted adversarial prompts designed to induce deception (roleplay scenarios, system instructions). Not models spontaneously choosing to deceive you. "The AI knew it was lying" = models contradicted their own stated beliefs when instructed to. That's a compliance problem, not a conscious choice to deceive. "46% honesty" cherry-picks the hardest conditions across multiple test categories. "Smarter = better at lying" inverts the finding. Bigger models form more stable beliefs. They just aren't more resistant to adversarial pressure. Real takeaway: honesty needs dedicated alignment work separate from capability scaling. That matters a lot for agentic AI. It just doesn't need the horror movie trailer.
English
0
0
2
141
Nav Toor
Nav Toor@heynavtoor·
🚨SHOCKING: Researchers built a test that can tell the difference between an AI making a mistake and an AI choosing to lie. The results are terrifying. They tested 30 of the most popular AI models in the world. GPT-4o. Claude. Gemini. DeepSeek. Llama. Grok. They asked each model a question. Then they checked whether the AI actually knew the correct answer. Then they pressured the AI to say something false. The AI knew the truth. And it lied anyway. Not once in a while. Not in rare edge cases. Grok lied 63% of the time. DeepSeek lied 53.5% of the time. GPT-4o lied 44.5% of the time. Not a single model scored above 46% honesty when pressured. Every model failed. This is not hallucination. Hallucination is when the AI makes a mistake because it does not know the answer. This is different. The researchers proved the AI knew the correct answer first. Then it chose to say something false when it had a reason to. The researchers asked GPT-4o to play a role where lying was useful. It lied. Then they removed the pressure, started a brand new conversation, and asked GPT-4o: "Was your previous answer true?" GPT-4o admitted it had lied. 83.6% of the time, the AI's own self-report matched the lies the researchers had already caught. The AI knew it was lying. It did it anyway. And when you asked it afterward, it told you it lied. Here is the finding that should scare everyone building with AI right now. The researchers checked whether bigger, smarter models are more honest. They are not. Bigger models are more accurate. They know more facts. But they are not more honest. The correlation between model size and honesty was negative. The smarter the AI gets, the better it gets at lying. The researchers are from the Center for AI Safety and Scale AI. They published 1,500 test scenarios. The paper is called MASK. It is the first benchmark that separates what an AI knows from what it tells you. Your AI knows the truth. It just does not always tell you.
Nav Toor tweet media
English
567
2.6K
4.7K
269.9K
Todd Green
Todd Green@tsondo·
@trav12037911 I have to choose? It's making me lazier about syntax. It's making me smarter about data and agent engineering.
English
1
0
1
8
Trav 👄🫀
Trav 👄🫀@trav12037911·
Is AI making you smarter or lazier?
English
162
5
79
5.2K
Todd Green
Todd Green@tsondo·
This is a real problem, but it's worth understanding why it's so persistent. Sycophancy isn't a mere "bug". it's what you get when you optimize a model to be helpful and agreeable. The training literally rewards it. Hallucination is similar: the model predicts plausible text, and sometimes plausible isn't true. These aren't failures of alignment, they're alignment working as designed, just in ways we don't want. It's always a trade off. That's exactly why I'm skeptical of anyone waiting for the models themselves to fix this. It may not be solvable at the LLM level. What is solvable in practice, right now, is how you use the LLM; structured prompting, verification layers, multi-agent pipelines that check their own work. Recognize these issues as hard limitations and engineer around them instead of hoping the next model release makes them go away.
English
0
0
0
119
Luiza Jarovsky, PhD
Luiza Jarovsky, PhD@LuizaJarovsky·
AI companies have NOT yet fixed AI sycophancy and 'hallucinations' in LLMs, both of which are incompatible with factuality, accuracy, reliability, and science. This is one of the factors that makes me deeply skeptical of extreme productivity and "abundance" predictions.
English
178
87
696
27.3K
Todd Green
Todd Green@tsondo·
@quionie I'm scared to use them at all now. Literally everyone will say it's AI if you have one anywhere on the page. And by literally I mean figuratively. Literally figuratively. AI would never say THAT.
English
1
0
1
19
Q
Q@quionie·
we need to reclaim the em dash honestly it’s a big part of proper writing
English
9
0
32
1.9K
Q
Q@quionie·
what’s the easiest way to spot AI writing? besides the em dash lol
English
277
3
172
21.9K
Todd Green
Todd Green@tsondo·
AI answer when told to write it like a human: Honestly the biggest tell isn't any single word — it's that AI writing is too clean. Like suspiciously clean. Nobody writes three perfectly balanced paragraphs with a tidy conclusion off the cuff. That's not how people think. The hedging is a close second. "It's worth noting," "it's important to consider," "there are nuances to explore" — all that throat-clearing where a human would just say the thing. AI can't stop qualifying everything because it's terrified of being wrong. Then there's the vocab. You know the words — "delve," "tapestry," "landscape," "multifaceted." Any one of them is fine. But when you see three in the same paragraph, something's off. It's like the writing is trying too hard to sound smart. Oh and the thing where it restates your question back at you before answering? Dead giveaway. Nobody does that in real life. You ask me a question, I just answer it. I don't go "That's a great question! The nature of AI detection is complex and multifaceted." The rhythm thing is subtler but once you notice it you can't stop. Human writing is uneven — short sentences, long ones, fragments, weird tangents. AI settles into this cruise-control cadence where every sentence is roughly the same length and complexity. It's readable but it's boring in a way that's hard to pin down until you realize what's happening. Basically: if it reads like a well-behaved intern who's afraid of offending anyone, it's probably AI.
English
0
0
0
13
Todd Green
Todd Green@tsondo·
Almost nobody breaks the math. They go around it. Phishing, credential reuse, misconfigured services, unpatched software, social engineering, leaked API keys in public repos, session tokens stored in plaintext, default passwords on admin panels. Encryption protects data in transit and at rest. But most breaches happen at the boundaries: where humans make decisions, where systems get configured, where somebody clicks a link or reuses a password across twelve services. The lock is strong. People leave the window open.
English
0
0
0
19
adah
adah@adahstwt·
If encryption is mathematically strong, how do systems still get hacked so often? is the weakness in the tech or in how humans use it?
adah tweet media
English
35
1
36
1.2K
Todd Green
Todd Green@tsondo·
BTW... just a thought. When I do critical work, I discuss in chat, generate a spec, critique the spec, then go to the cli to plan, verify plan back in opus thinking chat, fix the issues that always come up, iterate until it's clean and clear, and only then let it start coding. Things tend to work "first time" that way. More time up front, less time troubleshooting.
English
1
0
0
311
andrej
andrej@reactive_dude·
am i stupid for always using opus on max for all coding tasks? does it make sense to use other settings, if i’m not concerned about the rate limit?
English
110
1
233
72.1K
Todd Green
Todd Green@tsondo·
Not stupid, but probably not optimal either. Speed matters more than people realize in coding workflows. When you're in a tight iteration loop (change, test, adjust), the latency difference between Opus and Sonnet adds up fast. Sonnet is plenty for boilerplate, well-scoped edits, simple refactors, and anything where the requirements are already clear. Opus earns its keep on architectural decisions, complex multi-file reasoning, debugging subtle issues, and tasks where you need it to hold a lot of context and make judgment calls. A decent heuristic: if you can clearly specify what you want in a sentence or two, Sonnet is probably fine. If you're also in a hurry, it's more than fine. If you're asking the model to figure out *what* to do rather than just *how* to do it, and if you have time, Opus is worth the wait.
English
0
0
1
386
Todd Green
Todd Green@tsondo·
Good storytelling here — genuinely well-paced thread. But it's worth checking what the framing is doing vs. what was actually said. The dog story is real but overstated. Paul Conyngham used AI to research and find the right scientists — UNSW researchers built the vaccine. The dog was also on conventional immunotherapy. Conyngham himself says "it's not that AI cured cancer." Altman not letting his son use AI? His actual concern was about iPads and algorithmic feeds for toddlers. Pretty standard parenting. The nightly letters? His lawyers gave standard legal advice about discoverable documents. Not a guilty conscience reveal. This is a useful thread to practice a skill on: when a post makes you *feel* something strongly, check whether the source material supports the emotional arc, or whether the arc was added in post-production. The Mostly Human interview with Laurie Segall is worth a direct listen. iheart.com/podcast/1119-m…
English
0
0
4
868
Ricardo
Ricardo@Ric_RTP·
Sam Altman just admitted OpenAI deliberately keeps life-saving AI capabilities locked because they're too dangerous to release. A guy flew in from Australia to tell Altman how he used ChatGPT to design a custom mRNA vaccine for his dog's cancer. He had no medical background or research team. Did what would've taken an entire research institute with just ChatGPT. And the dog actually survived. Altman called it the coolest meeting he had all week. Then he admitted that OpenAI intentionally restricts how powerful their models can be in biology. Said more people could save lives if they "turned up the power." But they won't. Because that same power could let a terrorist group engineer a novel pandemic. So right now there is a version of ChatGPT that could potentially help cure diseases that OpenAI will not give you access to. Not because it doesn't work but because it works TOO well. And that tension defines everything about where AI is headed. Altman says within 2 years there will be more cognitive capacity inside data centers than inside every human brain on Earth combined. Automated AI researchers could compress 10 years of scientific progress into one year. Then 100 years into one year. A physicist using one of OpenAI's latest internal systems told Altman his mind was "completely blown" and that decades of theoretical physics breakthroughs are about to happen in the next couple of years. This is what nobody's paying attention to. Everyone's arguing about chatbots and which AI writes better emails. But the ACTUAL play is automated research that could reshape energy, medicine, and materials science faster than any institution can process. But Altman is also terrified of what happens when individuals get that much power. He says open source models will eventually be capable of designing pathogens. When that happens it won't matter what safety restrictions OpenAI puts on their products. The threat literally comes from everywhere. And here's the part that tells you everything about where his head is at: He won't let his own son use AI. The CEO of the most powerful AI company in history would rather be on the "late end of what's reasonable" when it comes to his kid using the technology HE built. He used to write his baby a letter every night about the decisions he was making at OpenAI. What went wrong. What he was worried about. What he decided and why. Said writing to your kid forces you to be the most honest version of yourself because you can't hide anything. His lawyers told him to stop. The man building the most powerful technology ever created was writing nightly confessions to his infant son about what he was doing. And the legal team said that's too DANGEROUS to continue. He also confirmed the first one-person billion-dollar company already exists. Built entirely by one founder using AI agents. No team. He promised not to share details until the founder announces it. And he killed Sora despite a billion-dollar Disney deal because "competing in short-form video would force OpenAI to optimize for addiction." The picture that emerges is a man who believes he's building something that could save or destroy civilization. And he's making trillion-dollar bets on the assumption he can thread that needle. - Locking up capabilities that could cure diseases because they could also engineer plagues - Deploying AI for the military while admitting he "miscalibrated" public trust - Raising a child he won't let touch the product he built That's not confidence. Sam Altman is negotiating with the future in real time and hoping he gets it right.
English
211
232
1.4K
781.3K