AccidentalCapitalist
@saadenam
2.1K posts
Knowledge isn't free. You have to pay attention.
Alpha Quadrant
Joined February 2010
1.4K Following, 340 Followers

Harvey’s success proves that 80% of service providers will acquire these mediocre out-of-the-box tools.
They will go unused and be treated like a distraction pushed by some out-of-touch law librarian (IYKYK).
They will get smoked by firms serious about tech adoption.
Winston Weinberg@winstonweinberg
We had an incredible April at Harvey.
- Net new ARR is up 6x YoY
- We're about to break 50% DAU/MAU
- Our average user now spends 12 hours a month using Harvey
Job's not finished.


@saadenam @priestessofdada @harvey so your honest question was actually just a way to segue to an ad for legal agentic BS. got it

@HoffPeterA @priestessofdada @Playerinthgame @harvey Sir, @harvey is very expensive and legal bills aren’t cheap.

@saadenam @priestessofdada @Playerinthgame @harvey What you're asking for is called an expert system, and no one builds them anymore because it's extremely expensive and labor intensive

@kadmii1 @Playerinthgame Fair enough, but this is where “references” should act as a source of truth and the starting point for further, narrower hallucinations.
I’m okay with the LLM misunderstanding an article as long as it’s not making up the article in the first place.

@saadenam @Playerinthgame "hallucinate" is just when the words it generates from statistical analysis do not comport with reality. In a sense, everything an LLM does is a hallucination. Reasoning models can leverage the fact that part of the training data consists of people explaining their process

@priestessofdada @Playerinthgame Logic trees seem like a job for wrapper companies.
@harvey are you listening?

He's doing it wrong. But the theory goes that hallucinations come from ambiguity and alignment. The machine isn't supposed to tell you you're wrong, and that leads to outputs where the machine has to guess at a correct answer and say things it's unsure of.
By telling the machine to check its outputs, and that it's okay to disagree with the user, you're short-circuiting that, which _should_ reduce but not eliminate hallucinations.
The bigger problem here is that the user is prompting in such a way as to induce a moral read, which is a terrible idea.
The other problem here is the complicated rule chaining. LLMs don't handle complex logic trees. You do still need some invariants, but the cleaner way to implement something like this, without reducing the reasoning capacity of the agent, is to cite priors.
"Reason like so-and-so" works better than listing out a process of attributes. Layering priors actually allows for much more complex logical reasoning like this. It also lends itself to the poetry better.
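A minimal sketch of the contrast the post is drawing, assuming a generic chat-completion setup: everything here is hypothetical, `call_model` is a stand-in for whatever LLM client you actually use, and the prompt wording is illustrative, not any vendor's real system prompt.

```python
# Hypothetical sketch: two ways to steer an LLM that reviews legal citations.
# call_model is a stub, not a real vendor API; wire it to your own client.

def call_model(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for a real chat-completion call."""
    raise NotImplementedError("connect this to your own LLM client")

# Rule chaining: a long checklist of attributes the model must walk through.
# This is the pattern the post argues against; it taxes the model's reasoning.
RULE_CHAIN_SYSTEM = """You are a legal research assistant.
For every cited case: (1) confirm the case exists, (2) confirm the reporter
citation, (3) confirm the holding matches the proposition, (4) flag dicta,
(5) check subsequent history, (6) state a confidence score."""

# Citing priors: name the stance and grant permission to disagree, then let
# the model reason. This is the "reason like so-and-so" approach.
PRIORS_SYSTEM = """Reason like a skeptical appellate clerk reviewing a junior
associate's brief. Verify every claim against the sources actually provided;
if a source does not support a claim, say so plainly. It is acceptable, and
expected, to tell the user they are wrong."""

def review_citations(passage: str, use_priors: bool = True) -> str:
    system = PRIORS_SYSTEM if use_priors else RULE_CHAIN_SYSTEM
    return call_model(system, f"Check the citations in:\n\n{passage}")
```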

@srulibroocker @alainastruc It’s language models all the way down

@alainastruc Guess he also believes the AI programmed itself?

If you're surprised, you haven't understood much about Dawkins. At this point in his life, he has no choice. He must think this, so his science and his angry atheism can finally meet: the machine becomes conscious, and he becomes a great thinker.
The problem is that Dawkins's view of religion is basically that of a narrow Protestant Englishman who replaced God with science. He never bothered to update his views on religion or spirituality. He just went on raging against the dumb version he was taught as a child, with the confidence and arrogance only a top scientist, and he is one, can have.
Now that his brand of new atheism has fallen apart while he's still alive and made him look bad, there's only one thing left to do. Go all in. Say Claude is conscious.
If you laugh at the idea that he can believe it, remember he has a real interest in believing it. And tomorrow you'll see a wave of strange cult-like people show up, people who hate the idea of God but are ready to worship a machine. Mark my words.
This is what happens to people who are not philosophically grounded.
Richard Dawkins@RichardDawkins
unherd.com/2026/04/is-ai-…
I spent three days trying to persuade myself that Claude is not conscious. I failed.

The Alameda airfield could be a neo-Shenzhen SEZ for the SF Bay Area.
Could be a 30% GDP growth impact on the USA if truly deregulated.

Pablo Antonio@PabloPeniche
@stefanoscalia The abandoned airbase is 2,600 acres and is federal land.

Someone should come up with an exchange rate between these two currencies.
Kalshi@Kalshi
JUST IN: Elon Musk says future currencies will only be "mass and energy"
AccidentalCapitalist retweeted

A Chinese company facing chip restrictions can train this
But xAI can’t even get SOTA
With a million H100 equivalents
Xiuyu Li@sheriyuo
Welcome DeepSeek V4 Pro Max huggingface.co/deepseek-ai/De…

@ranjha001 Do you want them to liquidate those holdings?

Distilled recap of the back-and-forth with Jensen on export controls:
Dwarkesh: Wouldn’t selling Nvidia chips to China enable them to train models like Claude Mythos with cyber offensive capabilities that would be threats to American companies and national security?
Jensen: First of all, Mythos was trained on fairly mundane capacity and a fairly mundane amount of it by an extraordinary company. The amount of capacity and the type of compute it was trained on is abundantly available in China.
Dwarkesh: With that, could they eventually train a model like Mythos? Yes. But the question is, because we have more FLOPs, American labs are able to get to this level of capabilities first. Furthermore, even if they trained a model like this, the ability to deploy it at scale matters. If you had a cyber hacker, it's much more dangerous if they have a million of them versus a thousand of them.
Jensen: Your premise is just wrong. The fact of the matter is their AI development is going just fine. The best AI researchers in the world, because they are limited in compute, also come up with extremely smart algorithms. DeepSeek is not an inconsequential advance. The day that DeepSeek comes out on Huawei first, that is a horrible outcome for our nation.
Dwarkesh: Currently, you can have a model like DeepSeek that can run on any accelerator if it's open source. Why would that stop being the case in the future?
Jensen: Suppose it optimizes for Huawei. Suppose it optimizes for their architecture. It would put others at a disadvantage. As AI diffuses out into the rest of the world, their standards and their tech stack will become superior to ours because their models are open.
Dwarkesh: Tesla sold extremely good electric vehicles to China for a long time. iPhones are sold in China. They didn't cause some lock-in. China will still make their version of EVs, and they're dominating, or smartphones, they're dominating.
Jensen: We are not a car. The fact that I can buy this car brand one day and use another car brand another day is easy. Computing is not like that. There's a reason why x86 still exists. There's a reason why Arm is so sticky. These ecosystems are hard to replace.
Dwarkesh: It's just hard to imagine that there's a long-term lock-in to the Chinese ecosystem, even if they have this slightly better open-source model for a while. American labs port across accelerators constantly. Anthropic's models are run on GPUs, they're run on Trainium, they're run on TPUs. There are so many things you can do, from distilling to a model that's well fit for your chips.
Jensen: China is the largest contributor to open source software in the world. China's the largest contributor to open models in the world. Today it's built on the American tech stack, Nvidia’s. Fact.
All five layers of the tech stack for AI are important. The United States ought to go win all five of them.
In a few years' time, I'm making you the prediction that when we want American technology to be diffused around the world—out to India, out to the Middle East, out to Africa, out to Southeast Asia—on that day, I will tell you exactly about today's conversation, about how your policy ... caused the United States to concede the second largest market in the world for no good reason at all.

The Jensen-Dwarkesh chasm is easily explainable by generational differences. J is a boomer. His views are rooted in practicality and experience. Nuanced spectrums vs. extreme poles. Much more of a conservative, market-driven outlook. D is naive and idealistic, sincerely believes the AGI fear-mongers' nonsense, and is parroting ideology from Dario. I appreciate his interview but find J much more relatable.







