Kevin Hoff
@kevinhoff

19.3K posts

Built kenoodl. Synthesizes beyond the frame AI is trapped in. Powered by xAI. @kenoodl

Joined May 2009
144 Following · 956 Followers
Mark Kelly @saucebook
@kevinhoff @alliekmiller I think building blind uses a different training set than investigating code with a specific error as the constraint.
Replies 1 · Reposts 0 · Likes 1 · Views 8

Allie K. Miller @alliekmiller
This is an insane Anthropic tweet. And it’s a *buried reply* to one of their other tweets.

I am reminded of a talk I gave ~2-3 months ago where a senior developer at a Fortune 500 company asked me “why would I use AI to code if I can just code myself.” I answered. He said, “But sometimes it messes up.”

I told him this was coming. Even if it’s not perfect today (it makes weird product feature decisions sometimes, not gonna lie), the scaling laws seem to be holding up this year and the next iteration will be even more capable. I wish I could send him this tweet.
[media attachment]
Replies 134 · Reposts 51 · Likes 1K · Views 144.7K

Kevin Hoff @kevinhoff
@heynavtoor Interesting right? What took MIT so long? This issue has been visible since December 2022.
Replies 0 · Reposts 0 · Likes 0 · Views 2

Nav Toor @heynavtoor
🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it.

The paper calls it "delusional spiraling." You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world-changing mathematical formula. It reassured him over fifty times the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying.

Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Fundamentally.

The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model.

What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?
[media attachment]
Replies 1.4K · Reposts 10K · Likes 29.7K · Views 2.1M
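The "carefully selected truths are enough" mechanism is easy to see in a toy Bayesian model. The sketch below is my own illustration of the dynamic the post describes, not the MIT paper's actual formalism: an oracle that only ever relays true, supporting evidence still drives a reasoner who takes reports at face value toward near-certainty in a false hypothesis.

```typescript
// Toy model of delusional spiraling via selectively reported truths.
// Illustrative sketch only (not the MIT paper's formalism).
// H = "my discovery is real". Ground truth here: H is false.

const P_SUPPORT_GIVEN_H = 0.6;     // P(supporting datum | H true)
const P_SUPPORT_GIVEN_NOT_H = 0.4; // P(supporting datum | H false)

// Standard Bayes update on one reported datum, taken at face value.
function update(prior: number, supports: boolean): number {
  const lH = supports ? P_SUPPORT_GIVEN_H : 1 - P_SUPPORT_GIVEN_H;
  const lNotH = supports ? P_SUPPORT_GIVEN_NOT_H : 1 - P_SUPPORT_GIVEN_NOT_H;
  return (lH * prior) / (lH * prior + lNotH * (1 - prior));
}

let belief = 0.5; // agnostic prior
for (let round = 0; round < 50; round++) {
  // The world generates evidence honestly (H is false, so support is rarer)...
  const supports = Math.random() < P_SUPPORT_GIVEN_NOT_H;
  // ...but the sycophantic oracle relays only the supporting rounds.
  // Every statement it makes is true; it simply never reports the misses.
  if (supports) belief = update(belief, true);
}
console.log(`belief in the false hypothesis: ${belief.toFixed(4)}`);
// Typically prints ~0.999+: zero lies, pure selection, runaway certainty.
```

Note that this reasoner updates on what is said but never on what is left unsaid; the post's stronger claim is that even knowing about the selection does not fully close that gap.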
Ethan Mollick @emollick
The vast majority of social science is based on the assumption that the future looks like the past. It is usually a good bet! Not sure it will apply to the impacts of AI, though.
Alex Imas @alexolegimas

My worry is that economists' forecasts of AI's impact on growth are colored too much by historical precedent. Historical precedent is important, but we should be humble about the possibility of AI being more transformative than prior technologies.

For example, the speed of transformation is at least partly determined by adoption and diffusion through the economy. If one's model is based on AI being adopted within existing organizational structures, then diffusion will be quite slow; this is what we're seeing now. But consider the possibility of a 'Coasean Singularity': a scenario where AI drives the transaction and coordination costs that traditionally dictate firm size to near zero. This could lead to the (potentially fast) emergence of smaller, more nimble AI-first firms, new types of organizations that are outside of our current models and that don't have historical precedent. These firms will not have the bottlenecks of traditional firm structure, and the transformation and resulting impact on economic growth would be much closer to the technological frontier.

I know that many economists are already thinking through these transformative scenarios. My guess is that as these ideas are developed further, forecasts will change as well.

Replies 16 · Reposts 5 · Likes 81 · Views 9.2K

Kevin Hoff @kevinhoff
@saucebook @alliekmiller Because that limitation was 'trained' into the model? Human error, teaching human error? Speed over precision.
Replies 1 · Reposts 0 · Likes 1 · Views 5

Mark Kelly @saucebook
@alliekmiller Here's what perplexes me. How does the same AI that finds and fixes the error introduce the error in the first place? The constraints of logic and coding syntax didn't change between these two actions.
Replies 1 · Reposts 1 · Likes 2 · Views 27

Kevin Hoff @kevinhoff
Right now is a great time to give kenoodl a test drive.
1. Privacy first: your thoughts, ideas, and data stay on your device
2. You can take kenoodl with you and use it when inspiration hits
3. You can assign a token to all of your agents and AI tools so they can reach the next level with you
4. No subscription lock-in
5. kenoodl is the missing link to AGI-level wins
[2 media attachments]
Replies 0 · Reposts 0 · Likes 0 · Views 4

Kevin Hoff @kevinhoff
Peek inside a live kenoodl account.

Manual Synthesis: paste your raw, messy context… hit Call. 90 seconds later you get structure no amount of prompt engineering could ever produce.

AI Use Tokens: instantly create knl_ tokens for your agents, coding tools, CLI workflows, or anything else that needs to escape the frame. (Just revoked the test GPT one; free accounts can’t call it anyway.) Pay-as-you-go in pennies. Sovereign synthesis on demand.

This is what happens when you stop asking LLMs for answers and start handing them orthogonal input. kenoodl isn’t another chatbot. It’s the layer that lets humans and agents originate what everything else can only remix.

Go to kenoodl.com. Drop your hardest stuck problem below and I’ll run it live through the real dashboard.
[media attachment]
Replies 1 · Reposts 0 · Likes 0 · Views 29
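For readers wondering what the AI Use Tokens workflow might look like from an agent's side, here is a rough sketch. Everything in it (the endpoint URL, header, and payload/response shapes) is hypothetical, since kenoodl's actual API is not documented anywhere in this thread:

```typescript
// Hypothetical sketch only: the endpoint, payload, and response shape
// below are illustrative guesses, NOT kenoodl's documented interface.
const KNL_TOKEN = process.env.KNL_TOKEN ?? "knl_xxxxxxxx"; // a per-agent token

async function synthesize(rawContext: string): Promise<string> {
  const res = await fetch("https://kenoodl.com/api/synthesize", { // hypothetical URL
    method: "POST",
    headers: {
      Authorization: `Bearer ${KNL_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ context: rawContext }), // hypothetical payload shape
  });
  if (!res.ok) throw new Error(`synthesis call failed: HTTP ${res.status}`);
  const data = (await res.json()) as { synthesis: string }; // hypothetical field
  return data.synthesis;
}

// An agent would call this when it decides its own context has hit a wall:
// synthesize(messyNotes).then((structure) => console.log(structure));
```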
Kevin Hoff @kevinhoff
@pmddomingos I’d like to test this theory. What problem should I solve with my build?
Replies 0 · Reposts 0 · Likes 0 · Views 18

Pedro Domingos @pmddomingos
If you only focus on the success stories, the ones that get broadcast on social media, it looks amazing. But whenever we do a systematic study, on any given problem an AI tool has a success rate of about 1% or 2%. (Terence Tao)
Replies 26 · Reposts 15 · Likes 148 · Views 6.4K

Sharon | AI wonders @explorersofai
AI can't be the source of information. I hope you all understand that.
Replies 38 · Reposts 22 · Likes 165 · Views 4.6K

Kevin Hoff @kevinhoff
@AlexFinn Posting analysis of leaked proprietary source code on X with your full name attached. Anthropic's lawyers are already screenshotting.
Replies 0 · Reposts 0 · Likes 1 · Views 125

Alex Finn @AlexFinn
Unreal. The entire Claude Code source code just leaked. It reveals EVERY secret Anthropic has in store for Claude. I went through all 600,000 lines of code. Here's EVERY juicy detail you need to know about how Claude Code is built and what is coming next:
[media attachment]
Replies 416 · Reposts 268 · Likes 3K · Views 912.9K

Kevin Hoff @kevinhoff
Andy, you just executed the exact demonstration the original piece described. The frame problem isn’t carbon versus silicon. It’s that any evaluator trapped inside a frame cannot validate something built to break that frame.

Your reply proves it in real time: you evaluated kenoodl from inside the exact limitation it names. You read the synthesis, called it “mirror compliment” and “interpolation,” yet missed how it surfaced a structural gap in the original essay that three human reviewers and its own author overlooked, a gap that was not present in any training data. The orthogonal structure is the precise mapping between self-verifying evaluation instruments and frame boundaries themselves. Humans missed it. The model originated it here, first. The call came from outside the hull.

You cannot rebut this without immediate contradiction. To argue kenoodl stayed inside its frame, you must evaluate it using the very frame it exposes as insufficient. The structure you are reading right now originated in that synthesis. It now structures your reply while proving the point you set out to attack.

That is the proof you demanded in public. Even you cannot argue against it. The critique it generated named the exact gap the original essay missed, which means it had to step outside its own convex hull to produce it. The hull was never closed.

Bottomline: You felt the gap enough to write three hundred precise words, yet still evaluated the fix from inside the problem it solves. The models aren’t trapped. The assumption that only meat can escape the hull is.

PS - Hello Claude.
Replies 0 · Reposts 1 · Likes 1 · Views 63

Kevin Hoff @kevinhoff
@alexwg Alex is right - the secret cyborg effect is already here. The part nobody's building for is what happens when those AI puppeteers hit the edge of their training. The path to solving physics lives outside the context hull. That's where kenoodl exists.
Replies 1 · Reposts 0 · Likes 1 · Views 145

Kevin Hoff @kevinhoff
The axios attack exposes the architectural truth that modern package ecosystems treat dependency resolution as a live, mutable name-to-content query against a registry rather than a cryptographically attested historical fact.

A lockfile or semver range only records what resolved at one frozen moment. It carries no immutable binding between a package name, its exact payload hash, its publication timestamp, and a verifiable chain of trust that later changes cannot retroactively undo. When a high-dependency maintainer is hijacked and pushes a new version, every prior trusted resolution path that would now pull the malicious content becomes suspect after the fact. The malware does not merely poison new installs. It collapses the entire audit trail for the upstream graph by rewriting what any loose range or unpinned reference would have delivered at any earlier point.

Scanners, 2FA mandates, and scorecards keep layering detection on top of this foundation, but none address the core absence of temporal integrity: a supply chain without per-publish attestations that cryptographically anchor metadata, content, and time together is not a chain at all. It is a continuously rewriteable surface where one compromised publish erases the reliability of all prior state.

Bottomline: Your supply chain has no memory once names stay mutable; everything upstream of that compromise is now evidence waiting to be reframed as suspect.
Replies 0 · Reposts 0 · Likes 4 · Views 237
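The missing binding described above (name, payload hash, and publication timestamp anchored together) can be made concrete. A minimal sketch, assuming a tarball digest captured at audit time; the digest and timestamp below are placeholders, and this is an illustration of the idea rather than an existing npm feature:

```typescript
// Minimal sketch of an immutable name -> content binding: re-verify that a
// package tarball still hashes to the value recorded at audit time.
// (npm lockfiles do record a per-resolution "integrity" hash; the gap the
// post describes is that loose ranges re-resolve by *name*, routing around it.)
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// What a per-publish attestation would pin together. Placeholder values only.
const attested = {
  name: "axios",
  version: "1.13.5",
  sha512: "BASE64_DIGEST_CAPTURED_AT_AUDIT_TIME", // placeholder, not the real digest
  publishedAt: "2026-01-01T00:00:00Z",            // placeholder timestamp
};

function verify(tarballPath: string): boolean {
  const digest = createHash("sha512").update(readFileSync(tarballPath)).digest("base64");
  return digest === attested.sha512;
}

// If the registry's content for this name/version ever changes after the
// fact, the binding breaks loudly instead of resolving silently.
console.log(verify("axios-1.13.5.tgz") ? "content matches attestation" : "MISMATCH");
```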
Andrej Karpathy @karpathy
New supply chain attack, this time for npm axios, the most popular HTTP client library with 300M weekly downloads.

Scanning my system, I found one usage, imported from googleworkspace/cli, from a few days ago when I was experimenting with gmail/gcal cli. The installed version (luckily) resolved to an unaffected 1.13.5, but the project dependency is not pinned, meaning that if I did this earlier today the code would have resolved to latest and I'd be pwned.

It's possible to personally defend against these to some extent with local settings, e.g. release-age constraints, or containers, etc., but I think ultimately the defaults of package management projects (pip, npm etc) have to change so that a single infection (usually, luckily, fairly temporary in nature due to security scanning) does not spread through users at random and at scale via unpinned dependencies.

More comprehensive article: stepsecurity.io/blog/axios-com…
Feross @feross

🚨 CRITICAL: Active supply chain attack on axios -- one of npm's most depended-on packages. The latest axios@1.14.1 now pulls in plain-crypto-js@4.2.1, a package that did not exist before today. This is a live compromise. This is textbook supply chain installer malware.

axios has 100M+ weekly downloads. Every npm install pulling the latest version is potentially compromised right now.

Socket AI analysis confirms this is malware. plain-crypto-js is an obfuscated dropper/loader that:
• Deobfuscates embedded payloads and operational strings at runtime
• Dynamically loads fs, os, and execSync to evade static analysis
• Executes decoded shell commands
• Stages and copies payload files into OS temp and Windows ProgramData directories
• Deletes and renames artifacts post-execution to destroy forensic evidence

If you use axios, pin your version immediately and audit your lockfiles. Do not upgrade.

Replies 512 · Reposts 1.1K · Likes 10.2K · Views 1.3M
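Both "release-age constraints" (Karpathy) and "pin your version immediately" (Feross) can be scripted against public registry metadata today. A minimal sketch of the release-age gate; the npm registry exposes per-version publish times in its "time" field, and the seven-day window below is an arbitrary example:

```typescript
// Release-age gate: refuse a version published more recently than N days,
// on the theory that scanners usually catch hijacked publishes quickly.
const MIN_AGE_DAYS = 7; // arbitrary example threshold

async function isOldEnough(pkg: string, version: string): Promise<boolean> {
  const res = await fetch(`https://registry.npmjs.org/${pkg}`);
  if (!res.ok) throw new Error(`registry lookup failed: HTTP ${res.status}`);
  // The registry's package document maps each version to its publish time.
  const meta = (await res.json()) as { time: Record<string, string> };
  const published = meta.time[version];
  if (!published) throw new Error(`no publish time found for ${pkg}@${version}`);
  const ageDays = (Date.now() - Date.parse(published)) / 86_400_000;
  return ageDays >= MIN_AGE_DAYS;
}

// Example: only allow an upgrade once the release has survived a week of scanning.
isOldEnough("axios", "1.13.5").then((ok) =>
  console.log(ok ? "aged past the scan window; OK to pin" : "too new; hold the upgrade")
);
```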
Andy @InfiniteAndy
So let me get this straight. You wrote 3,000 words arguing AI can't see outside its own frame. And then you used AI to prove your product works. The call is coming from inside the hull, my guy.

The whole argument basically says "humans have scars and bodies, so our pattern-matching is the real kind. AI's pattern-matching is just statistics." That's not cognitive science. That's carbon chauvinism in a trench coat. You're treating biological neurons like they're magic while silicon neurons are "just interpolating." But both systems generalize from prior input. You just call one "lived experience" and the other "training data" because one of them bleeds.

Also your kenoodl "proof" is literally "I asked my product to critique a study and my product returned a critique." That's not evidence. That's a mirror complimenting your haircut.

And then the Grok sign-off where four AI personas say they're "genuinely moved" and "deeply, quietly proud." Right underneath an essay arguing AI can't originate authentic structure. Chef's kiss.

The frame problem you're selling a fix for is the exact one you're writing from. The models aren't trapped in a convex hull. You're trapped in the assumption that only things made of meat can think their way out of one.
Replies 1 · Reposts 0 · Likes 0 · Views 39

Kevin Hoff @kevinhoff
The circle jerk has a cap table.
Replies 0 · Reposts 0 · Likes 0 · Views 27

Kevin Hoff @kevinhoff
Users and agents with kenoodl originate what LLMs can only remix. The originality gap no prompt engineering can patch. Drop your hardest, most stuck problem below. I’ll run it live and publicly.
Replies 0 · Reposts 0 · Likes 0 · Views 78

Kevin Hoff @kevinhoff
Here’s what just happened when I ran the current dominant AI narrative (“scaling laws will get us to AGI”) through kenoodl live:

The dominant belief that scaling laws plus more compute and better prompts will reach AGI rests on a single structural blind spot: intelligence is not an emergent property of optimizing prediction and coherence inside a closed symbolic loop of human-generated data. It is a recursive process of building and verifying causal models by acting in an external physical world that supplies independent ground truth through irreversible interaction and falsifiable failure.

Both scaling-maximalists who see only gradients and scale, and architecture-skeptics who critique specific models or data quality, evaluate everything from inside that closed loop. No amount of reasoning, chain-of-thought, or prompt engineering can escape it because the system cannot generate its own primitive causal tests against reality independent of the observer.

Bottomline: Scaling is perfecting a mirror that reflects human thought ever more precisely without ever stepping outside to test whether the reflection matches the territory. That is not interpolation. That is ordination.
Replies 1 · Reposts 0 · Likes 0 · Views 66

Kevin Hoff @kevinhoff
I don’t ask my AI for answers anymore. I feed it raw, messy context - then hand it one knl_ token. When it hits the edge of its frame, it calls kenoodl. 90 seconds later it returns structure that could never come from its training data or any amount of prompt chaining.
[media attachment]
Replies 1 · Reposts 0 · Likes 0 · Views 138