Eric Neuman
3.7K posts

@Eric_Neuman
Founder @trydotted · ex Microsoft / Amazon / Axon · Product Expert
Seattle, WA · Joined October 2009
4.8K Following · 4.2K Followers
Mark Changizi @MarkChangizi
Need a name for those who are
- not Woke Left
- not Woke Right
- anti-Islamist
- aggressive to bullies
- not anti-Semitic
- not “I’m not anti-Jew I’m anti-[BS here]”
- for free speech
- for civil liberties
- against “balancing” civil liberties
- for free markets
- for cost benefit analyses
- for tight immigration
- against isolationism
- contemptuous of international law
- against foreign dictatorships
- “America First,” not “Israel Last”
- love Iranians
- “Free Iran” but not “Free Palestine”
- appreciative that strength brings peace
- want Cuba free
Suggestions?
2.3K
319
5.1K
402.3K
Eric Neuman @Eric_Neuman
I might be a bit of a power user.
[image attached]
0
0
1
58
Eric Neuman retweeted
Tiago Forte @fortelabs
Wait, so the founder of Anthropic is "Amodei," as in "loves god"? And he leads Anthropic, meaning "human-centered," which is being used in military strikes?

And the creator of ChatGPT is "Altman," as in "an alternative to humans"? And he leads OpenAI, which is completely closed?

And then there's "Gemini," meaning "two-faced," from a company that promised to do no evil?

And the whole global AI arms race is being driven by people who claimed to be worried about AGI taking over the world?

Either the universe is an extremely cliché writer, or has a brilliant sense of humor.
1.3K
4.4K
35.3K
2.4M
Eric Neuman @Eric_Neuman
@cixliv I was just joking, but half seriously: could humanoids deliver themselves via public transit? Last mile with ride-sharing.
0
0
0
10
CIX 🦾 @cixliv
VCs forget this when investing in robot software companies: Distribution for web, phones, and other tech is solved. Software companies with existing distribution can quickly scale. For robotics, distribution is not solved. You. Need. To. Ship. Robots. To. Scale.
14
7
103
22.1K
Eric Neuman @Eric_Neuman
@cixliv No I mean the robots. Just have them distribute themselves on foot!
1
0
1
13
Eric Neuman @Eric_Neuman
@cixliv With you up to the last point. The required ecosystem around guns and the effort to certify them is gargantuan. I might agree that the trigger probably wouldn't be squeezed by human-speed hands, but I wouldn't be surprised to see a standard weapon on a bot.
0
0
0
19
CIX 🦾 @cixliv
Hey everyone, this video is fake. The G1 is 4.5 feet (1.4 meters) tall and 80 pounds. The camera on the robot is a low-fidelity RealSense camera pointing toward the hands. It can’t hold a rifle or fire one. A robot w/ a gun would not use an infantry weapon; it would be custom.
29
7
104
15.9K
Eric Neuman @Eric_Neuman
@peterrhague I gave 11 years of my life to XR. I'm one of the few people to actually exit a VR startup. Modern XR is incredible, but it fails the 10x rule because the incumbent (the phone) is so, so good. For most people, XR isn't 10x as good as a phone at work, socializing, consuming, or even gaming.
0
0
0
19
Peter Hague @peterrhague
One technology that I really wanted to take off, but hasn't, is VR. I mean, I was a teenager the first time around, when the devices used CRTs and you needed a strong neck. Played the bird game in the arcades. The graphics on these devices were very naff.

But modern devices (like the Meta Quest 3), which I own, are fine. They are lightweight, with good image quality, decent passthrough for XR, and responsive. But for some reason they don't seem to have evolved beyond toys. I've tried using them for serious 3D visualisation as part of my workflow, but it's quite clunky to get them to do that. There seem to be a lot of barriers to development. What happened?
[image attached]
163
9
218
20.6K
Eric Neuman @Eric_Neuman
Lol lol. Crickets.
0
0
1
39
Eric Neuman @Eric_Neuman
I built a real thing. People love it. A big company is paying real money for it. We're making waves in a really really big company. I need to reach a lot more people or the right people, but everything people say works here takes tons of cash. What actually works?
1
1
2
269
Francesco Andreoli ᵍᵐ @francescoswiss
If you're a founder and you're starting to build your company now: I want to invest. The best companies start at the bottom of the market.
450
36
1.1K
60.4K
Eric Neuman retweeted
God of Prompt @godofprompt
🚨 Holy shit… Stanford just published the most uncomfortable paper on LLM reasoning I’ve read in a long time.

This isn’t a flashy new model or a leaderboard win. It’s a systematic teardown of how and why large language models keep failing at reasoning even when benchmarks say they’re doing great.

The paper does one very smart thing upfront: it introduces a clean taxonomy instead of more anecdotes. The authors split reasoning into non-embodied and embodied. Non-embodied reasoning is what most benchmarks test, and it’s further divided into informal reasoning (intuition, social judgment, commonsense heuristics) and formal reasoning (logic, math, code, symbolic manipulation). Embodied reasoning is where models must reason about the physical world, space, causality, and action under real constraints.

Across all three, the same failure patterns keep showing up.

> First are fundamental failures baked into current architectures. Models generate answers that look coherent but collapse under light logical pressure. They shortcut, pattern-match, or hallucinate steps instead of executing a consistent reasoning process.

> Second are application-specific failures. A model that looks strong on math benchmarks can quietly fall apart in scientific reasoning, planning, or multi-step decision making. Performance does not transfer nearly as well as leaderboards imply.

> Third are robustness failures. Tiny changes in wording, ordering, or context can flip an answer entirely. The reasoning wasn’t stable to begin with; it just happened to work for that phrasing.

One of the most disturbing findings is how often models produce unfaithful reasoning. They give the correct final answer while providing explanations that are logically wrong, incomplete, or fabricated. This is worse than being wrong, because it trains users to trust explanations that don’t correspond to the actual decision process.

Embodied reasoning is where things really fall apart. LLMs systematically fail at physical commonsense, spatial reasoning, and basic physics because they have no grounded experience. Even in text-only settings, as soon as a task implicitly depends on real-world dynamics, failures become predictable and repeatable.

The authors don’t just criticize. They outline mitigation paths: inference-time scaling, analogical memory, external verification, and evaluations that deliberately inject known failure cases instead of optimizing for leaderboard performance. But they’re very clear that none of these are silver bullets yet.

The takeaway isn’t that LLMs can’t reason. It’s more uncomfortable than that. LLMs reason just enough to sound convincing, but not enough to be reliable. And unless we start measuring how models fail, not just how often they succeed, we’ll keep deploying systems that pass benchmarks, fail silently in production, and explain themselves with total confidence while doing the wrong thing.

That’s the real warning shot in this paper.

Paper: Large Language Model Reasoning Failures
[images attached]
270
1.4K
7K
964.2K
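To make the robustness-failure point above concrete, here is a minimal sketch of the kind of evaluation the post describes, one that deliberately injects known failure cases by re-asking the same question in several phrasings and checking whether the answer flips. This is not the paper's code: the robustness_check helper, the paraphrases, and the dummy model are illustrative stand-ins, and ask would be whatever LLM call you actually use.

# Minimal sketch (assumptions noted above): paraphrase the same question and
# check whether the model's answer stays consistent across wordings.
from collections import Counter
from typing import Callable

def robustness_check(ask: Callable[[str], str], paraphrases: list[str], expected: str) -> dict:
    """Ask the same question in different wordings and report whether answers flip."""
    answers = [ask(p).strip().lower() for p in paraphrases]
    counts = Counter(answers)
    return {
        "answers": answers,
        "consistent": len(counts) == 1,  # identical answer for every phrasing
        "accuracy": sum(a == expected.lower() for a in answers) / len(answers),
    }

if __name__ == "__main__":
    # Hypothetical paraphrases of one simple ordering question; only the wording changes.
    paraphrases = [
        "Alice is taller than Bob. Bob is taller than Carol. Who is shortest?",
        "Bob is shorter than Alice but taller than Carol. Who is the shortest person?",
        "Carol is shorter than Bob, and Bob is shorter than Alice. Who is shortest?",
    ]
    # Dummy model that flips its answer on the second phrasing, just to show the report format.
    fake_model = lambda p: "Bob" if "shortest person" in p else "Carol"
    print(robustness_check(fake_model, paraphrases, expected="carol"))

Run as-is, the dummy model reports consistent=False and accuracy of about 0.67, which is exactly the failure mode the post calls out: the answer depends on phrasing rather than on the question.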
Eric Neuman @Eric_Neuman
@2sush This is why I built @tryDotted: it's an IDE + CI/CD for specs, built from my experience as a principal PM at Amazon/Microsoft/Axon.
0
0
0
22
sush @2sush
write something only techies understand
180
0
100
9.4K
Aakash Gupta @aakashgupta
This is a dopamine loop, and it’s one of the most powerful ones humans have ever encountered.

Every time you prompt an AI and get a useful result back in seconds, your brain gets a hit. Variable-ratio reinforcement, same mechanism as slot machines, except the reward is real: actual output, actual progress, actual leverage on your ideas.

Traditional work follows a delayed-reward structure. You write code for 6 hours, maybe it compiles, maybe you get feedback in a week. The gap between effort and reward is wide enough that motivation decays constantly.

AI compresses that loop to seconds. Effort → reward → effort → reward. Your prefrontal cortex stays engaged because the next payoff is always one prompt away. This is why people describe it as “fun” when they’re actually working 14-hour days. The subjective experience of effort disappears when reward frequency is high enough.

The “harder than ever” part is real too. When your bottleneck shifts from execution to imagination, you run out of excuses to stop. There’s no “waiting on the build” or “blocked by review.” Every idea you have can be tested immediately, which means your brain never gets a natural stopping point.

People who thrive on this are selecting for a specific neurotype: high novelty-seeking, high conscientiousness, tolerance for rapid context-switching. That’s maybe 10-15% of the population. The other 85% will experience the same tools as overwhelming, not energizing. And that split is going to define the next decade of who captures value from AI and who gets displaced by it.
Nat Eliason @nateliason

Nearly every ambitious person I know who has dived into AI is working harder than ever, and longer hours than ever. Fascinating dynamic tbh. I have NEVER worked this hard, nor had this much fun with work.

208
488
4.6K
563.3K