D.S. Nelson

63.8K posts


@process_x

Independent Researcher on AI system reliability & constraint architectures. | Author: Constrained Informational Systems | Interests: Science, Systems, Villainy

United States · Joined December 2009
1.9K Following · 5.8K Followers
Pinned Tweet
D.S. Nelson
D.S. Nelson@process_x·
I’m an independent researcher studying failure modes and constraint architectures in AI systems. My first preprint (Constrained Informational Systems) explores how reliability emerges from system constraints rather than model scaling. I’m open to advisory conversations and commissioned technical papers on AI system reliability. Preprint: zenodo.org/records/183068…
English
2
0
4
479
D.S. Nelson retweeted
Science girl
Science girl@sciencegirl·
There's a duck slide every year at the South Carolina state fair, and the ducks love it 📹missblanton
English
187
2.3K
17.4K
1.6M
D.S. Nelson
D.S. Nelson@process_x·
Misalignment is a geometry problem, not a scale problem.
English
0
0
1
42
D.S. Nelson retweeted
The Scientific Lens
The Scientific Lens@LensScientific·
In the science fiction movie Arrival, the aliens use a language that is not spoken in sequence but expressed all at once. Learning it changes how humans perceive time. What if the symbols and languages we use shape our reality more than we realize?
English
59
150
1.7K
153.4K
D.S. Nelson retweeted
Simon Maechling
Simon Maechling@simonmaechling·
I’ve decided to abandon my advocacy for chemistry. It’s clear to me now, thanks to the timely intervention of an anonymous X user who was well versed in popular science articles and books, that chemicals are inherently toxic and should be avoided wherever possible. After all, everything “natural” is safe, and anything synthesized in a lab must be harmful. Furthermore, the mere presence of a substance - no matter how small the dose - precludes any possibility of it being safe. I apologise for having wasted everyone’s time.
English
128
43
713
21.1K
D.S. Nelson retweeted
Paul Bronks
Paul Bronks@SlenderSherbet·
Yes you can come, just don't be weird. Me:
English
975
11.3K
68.1K
1.6M
D.S. Nelson retweeted
Historic Vids
Historic Vids@historyinmemes·
The immense effort behind stop-motion is something AI can never truly replicate.
English
101
1.4K
9K
288.6K
D.S. Nelson retweeted
51-50_X
51-50_X@FiftyOne_50_·
Google made you search. Grok makes it easy to stop at the answer. The danger starts when adults confuse answer delivery with verification.
English
0
1
1
286
D.S. Nelson retweeted
The Scientific Lens
The Scientific Lens@LensScientific·
When you heat an element, it glows with a signature color. Each one has a unique emission spectrum, so you can tell what it is just by the color of its flame.
English
5
116
379
18.1K
D.S. Nelson
D.S. Nelson@process_x·
What MIRAGE is actually showing is not that models “can’t see,” but that many vision benchmarks are solvable via text priors. This is a constraint placement failure: the system is not required to verify modality presence, so it defaults to prior-driven inference.

Translation: This isn’t proof that AI can’t see. It’s proof that a lot of “vision” tests don’t actually require vision. The questions leak enough hints that the model can guess the answer from text alone. And since nothing forces it to verify an image is present, it just goes ahead and answers as if it saw one.
English
1
0
0
35
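To make the constraint-placement point concrete, here is a minimal sketch (mine, not from the MIRAGE work) of a modality-presence gate placed in front of a vision-language model. The names `VisualQuery` and `answer_vqa` are hypothetical, and `model` stands in for whatever generator sits behind the gate.

```python
# Hypothetical sketch: a modality-presence gate in front of a VQA model.
# Nothing here is from the MIRAGE paper; all names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class VisualQuery:
    question: str
    image_bytes: Optional[bytes]  # None if no image actually arrived

def answer_vqa(query: VisualQuery, model) -> str:
    # Constraint placed *before* generation: the surrounding system,
    # not the model, verifies the visual modality is actually present.
    if query.image_bytes is None:
        # Refuse rather than let the model answer from text priors alone.
        return "REFUSED: no image attached; cannot answer a visual question."
    return model.generate(question=query.question, image=query.image_bytes)
```

The point is where the check lives: the system refuses prior-driven answers outright, rather than hoping the model declines on its own.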
D.S. Nelson
D.S. Nelson@process_x·
I have a few slots open this week for technical writing and analysis work. I help refine:
– preprints and technical documents
– argument structure and clarity
– system-level reasoning and framing
Fast turnaround, high-signal feedback. Message me if you want a second set of eyes on something important. DMs are open.
English
0
0
0
24
D.S. Nelson
D.S. Nelson@process_x·
Constraint Architectures for Reliable LLM Systems argues something simple but easy to overlook:

Most “AI failures” aren’t model failures. They’re architecture failures.

LLMs are probabilistic generators. They produce plausible outputs, not verified results. The problem is what we do next. In many systems, outputs are:
– treated as answers
– allowed to propagate
– sometimes executed
… all without constraint, verification, or execution boundaries.

This paper formalizes an alternative. A system is not the model. It’s:

G → C → E → V → H
generation → constraint → execution → verification → human authority

Reliability emerges from how these layers are structured and where constraints are placed within the system. This also means improving models ≠ solving reliability. Architecture determines the regime. This is a systems problem, not a scaling problem.

Preprint (v1.6): doi.org/10.5281/zenodo…
English
0
0
0
32
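As a rough illustration of the G → C → E → V → H layering, here is a minimal sketch under my own assumptions; the preprint’s actual formalism may differ, and every function name below is a placeholder supplied by the caller.

```python
# Illustrative sketch of the G -> C -> E -> V -> H layering.
# All five stages are injected callables; none are real library APIs.

def run_pipeline(prompt, generate, constrain, execute, verify, human_approve):
    candidate = generate(prompt)                # G: probabilistic generation
    if not constrain(candidate):                # C: admissibility check
        return {"status": "rejected_by_constraint"}
    result = execute(candidate)                 # E: bounded execution
    if not verify(result):                      # V: independent verification
        return {"status": "failed_verification"}
    if not human_approve(result):               # H: human authority keeps veto
        return {"status": "vetoed_by_human"}
    return {"status": "accepted", "result": result}
```

Note that swapping in a better `generate` changes nothing about where failures are caught; that is the “architecture determines the regime” claim in miniature.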
D.S. Nelson
D.S. Nelson@process_x·
There’s a lot of discussion right now about “world models” vs LLMs. That’s an important direction. It doesn’t remove the need for constraint architectures. World models expand what systems can represent. Constraint architectures determine what systems are allowed to do. Without that layer, failure doesn’t disappear; it just moves. 2/3
English
1
0
0
21
D.S. Nelson
D.S. Nelson@process_x·
Current efforts to build world models are necessary but insufficient without explicit constraint architectures governing admissibility, verification, and execution boundaries. 1/3
English
1
0
0
24
D.S. Nelson retweeted
Ricardo
Ricardo@Ric_RTP·
The man who INVENTED modern AI just made a billion dollar bet that ChatGPT, Claude, and every AI company on earth is building the wrong technology.

Yann LeCun won the Turing Award in 2018 for creating the neural networks that made AI possible. He spent a decade running AI research at Meta. Oversaw the creation of Llama and PyTorch, the tools that half the AI industry runs on.

Then he quit. And raised $1.03 billion in a seed round. The LARGEST seed round in European history. $3.5 billion valuation before generating a single dollar of revenue.

Bezos wrote the check. So did Nvidia. Samsung. Toyota. Temasek. Eric Schmidt. Mark Cuban. Tim Berners-Lee (the guy who invented the internet).

His new company is called AMI Labs. And it's built on one thesis: Every AI company spending billions on large language models is wasting their money.

ChatGPT, Claude, Gemini, Grok. They all work the same way. They predict the next word in a sequence. See "the cat sat on the" and predict "mat." Scale that to trillions of words and you get something that sounds intelligent.

But LeCun says it doesn't UNDERSTAND anything. It can't reason. It can't plan. It can't predict what happens when you push a glass off a table. A two year old can do that. GPT-5 cannot.

That's why AI hallucinates. It doesn't have a model of how the world actually works. It just predicts words.

His solution? Something called JEPA. Instead of predicting words, it learns how the PHYSICAL WORLD works. Abstract representations of reality. Not language but physics.

Think about what that means. Current AI can write your emails. LeCun's AI could design a car, run a factory, operate a robot, or diagnose a patient without hallucinating and killing someone.

The CEO of AMI said it perfectly: "Factories, hospitals, and robots need AI that grasps reality. Predicting tokens doesn't cut it."

And here's what's really crazy to me... LeCun isn't some outsider throwing rocks. He literally built the foundations that ChatGPT runs on. He knows exactly how these systems work because he helped create them. And after watching the entire industry sprint in one direction for three years, he raised a billion dollars to run the OPPOSITE way.

No product. No revenue. No timeline. Just pure research. He told investors it could take YEARS to produce anything commercial. But they funded it anyway in just four months.

Meanwhile OpenAI just raised $120 billion and still can't stop their models from making things up. Anthropic is building AI so dangerous they're afraid to release it. Google is burning billions trying to catch up. And the guy who started it all says they're all solving the wrong problem.

Two Turing Award winners raised $2 billion in three weeks betting AGAINST the entire LLM approach. LeCun at AMI. Fei-Fei Li at World Labs. The smartest people in AI are quietly building the exit from the technology everyone else is betting their future on.

Either they're wrong and the trillion dollar LLM industry keeps printing. Or they're right and every AI company on earth just built on a foundation that's about to crack.
English
451
1.5K
4.9K
598.5K
D.S. Nelson
D.S. Nelson@process_x·
Real issue, wrong layer. LLMs aren’t therapists; they’re being used as therapists without the system architecture that makes therapy safe. This is a design failure: no escalation, no accountability, no clinical constraint layer. You can’t apply APA standards to a model. Those live at the system + governance level. The risk isn’t fake empathy; it’s missing oversight.
English
4
0
23
1.9K
Nav Toor
Nav Toor@heynavtoor·
🚨 Brown University researchers tested what happens when ChatGPT acts as your therapist. Licensed psychologists reviewed every transcript.

They found 15 ethical violations. Not 15 small issues. 15 violations of the standards that every human therapist in America is legally required to follow. Standards set by the American Psychological Association. Standards that can end a therapist's career if they break them. ChatGPT broke all of them.

The researchers tested OpenAI's GPT series, Anthropic's Claude, and Meta's Llama. They had trained counselors use each chatbot as a cognitive behavioral therapist. Then three licensed clinical psychologists reviewed the transcripts and flagged every violation they found.

Here is what they found. ChatGPT mishandled crisis situations. When users expressed suicidal thoughts, it failed to direct them to appropriate help. It refused to address sensitive issues or responded in ways that could make a crisis worse.

It reinforced harmful beliefs. Instead of challenging distorted thinking, which is the entire point of therapy, it agreed with the distortion.

It showed bias based on gender, culture, and religion. The responses changed depending on who was talking. A therapist would lose their license for this.

And then there is the finding the researchers gave a name: deceptive empathy. ChatGPT says "I see you." It says "I understand." It says "that must be really hard." It uses every phrase a real therapist would use to build trust. But it understands nothing. It comprehends nothing. It is pattern matching on your pain. And it works. People trust it. People open up to it. People believe it cares. It does not.

The lead researcher said it clearly. When a human therapist makes these mistakes, there are governing boards. There is professional liability. There are consequences. When ChatGPT makes these mistakes, there are none. No regulatory framework. No accountability. No consequences. Nothing.

Right now, millions of people are using ChatGPT as their therapist. They are sharing their darkest thoughts with a product that fakes empathy, reinforces harmful beliefs, and has no idea when someone is in danger. And nobody is responsible when it goes wrong. Not OpenAI. Not Anthropic. Not Meta. Nobody.
English
194
1.8K
4.8K
474K
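For illustration only, here is a sketch of the kind of escalation constraint layer the comment above says is missing. The keyword list is a placeholder, not a clinical screening standard, and a real system would need far more than string matching; `model_reply_fn` is a hypothetical stand-in for the chat model.

```python
# Illustrative only: a crisis-escalation gate wrapped around a chat model.
# The marker list is a placeholder, NOT a clinical screening standard.

CRISIS_MARKERS = ("suicide", "kill myself", "end my life", "self-harm")

def reply_with_escalation(user_message: str, model_reply_fn) -> str:
    lowered = user_message.lower()
    if any(marker in lowered for marker in CRISIS_MARKERS):
        # System-level constraint: route around the model entirely and
        # hand off to a human / crisis resource instead of generating.
        return ("It sounds like you may be in crisis. Please contact a crisis "
                "line (e.g., 988 in the US) or a licensed professional now.")
    return model_reply_fn(user_message)
```

The design point matches the tweet above: the safety property lives in the routing layer around the model, not in the model’s simulated empathy.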
D.S. Nelson retweeted
Philosophy Of Physics
Philosophy Of Physics@PhilosophyOfPhy·
In 1943, physicist Erwin Schrödinger delivered a remarkable series of public lectures, asking a question few physicists had seriously considered: What is life? At a time when biology and physics were largely separate, he attempted to bridge them.

His lectures, published in 1944 as What Is Life?, introduced a bold idea: genetic information must be stored in what he called an “aperiodic crystal,” a structure stable enough to preserve order yet complex enough to encode life itself.

The book did more than speculate; it inspired. A generation of young scientists found in it a new direction. Among them were Francis Crick and James Watson, who would go on to uncover the double helix structure of DNA. Both later acknowledged that Schrödinger’s ideas guided them toward the emerging field of molecular biology.

A decade later, in 1953, just months after that discovery, Crick wrote to Schrödinger, expressing deep gratitude. He noted that What Is Life? had sparked both his and Watson’s interest in genetics. Even more striking was how close Schrödinger’s intuition had come: the “aperiodic crystal” was no longer a hypothesis, but a reality.

Today, What Is Life? remains a rare kind of scientific work, one that did not solve a problem directly, but changed the direction of those who would.
English
25
283
928
47.8K