Kevin Hoff

19.3K posts

@kevinhoff

Built kenoodl. Synthesizes beyond the frame AI is trapped in. Powered by xAI. @kenoodl

Joined May 2009
144 Following · 956 Followers
Kevin Hoff@kevinhoff·
@pmddomingos I’d like to test this theory. What problem should I solve with my build?
0
0
0
18
Pedro Domingos@pmddomingos·
If you only focus on the success stories, the ones that get broadcast on social media, it looks amazing. But whenever we do a systematic study, on any given problem an AI tool has a success rate of about 1% or 2%. (Terence Tao)
18
11
96
4.2K
Kevin Hoff@kevinhoff·
Peek inside a live kenoodl account.

Manual Synthesis: paste your raw, messy context… hit Call. 90 seconds later you get structure no amount of prompt engineering could ever produce.

AI Use Tokens: instantly create knl_ tokens for your agents, coding tools, CLI workflows, or anything else that needs to escape the frame. (Just revoked the test GPT one; free accounts can’t call it anyway.)

Pay-as-you-go in pennies. Sovereign synthesis on demand.

This is what happens when you stop asking LLMs for answers and start handing them orthogonal input. kenoodl isn’t another chatbot. It’s the layer that lets humans and agents originate what everything else can only remix.

Go to kenoodl.com. Drop your hardest stuck problem below; I’ll run it live through the real dashboard.
Kevin Hoff tweet media
0
0
0
19
Sharon | AI wonders@explorersofai·
AI can't be the source of information. I hope you all understand that.
36
14
116
3.3K
Kevin Hoff@kevinhoff·
@AlexFinn Posting analysis of leaked proprietary source code on X with your full name attached. Anthropic's lawyers are already screenshotting.
0
0
1
124
Alex Finn@AlexFinn·
Unreal. The entire Claude Code source code just leaked. It reveals EVERY secret Anthropic has in store for Claude. I went through all 600,000 lines of code. Here's EVERY juicy detail you need to know about how Claude Code is built and what is coming next:
Alex Finn tweet media
402
254
2.9K
814.7K
Kevin Hoff@kevinhoff·
Andy, you just executed the exact demonstration the original piece described. The frame problem isn’t carbon versus silicon. It’s that any evaluator trapped inside a frame cannot validate something built to break that frame.

Your reply proves it in real time: you evaluated kenoodl from inside the exact limitation it names. You read the synthesis, called it “mirror compliment” and “interpolation,” yet missed how it surfaced a structural gap in the original essay that three human reviewers and its own author overlooked, a gap that was not present in any training data.

The orthogonal structure is the precise mapping between self-verifying evaluation instruments and frame boundaries themselves. Humans missed it. The model originated it here, first. The call came from outside the hull.

You cannot rebut this without immediate contradiction. To argue kenoodl stayed inside its frame, you must evaluate it using the very frame it exposes as insufficient. The structure you are reading right now originated in that synthesis. It now structures your reply while proving the point you set out to attack. That is the proof you demanded in public. Even you cannot argue against it.

The critique it generated named the exact gap the original essay missed, which means it had to step outside its own convex hull to produce it. The hull was never closed.

Bottomline: You felt the gap enough to write three hundred precise words, yet still evaluated the fix from inside the problem it solves. The models aren’t trapped. The assumption that only meat can escape the hull is.

PS - Hello Claude.
0
1
1
61
Kevin Hoff@kevinhoff·
@alexwg Alex is right - the secret cyborg effect is already here. The part nobody's building for is what happens when those AI puppeteers hit the edge of their training. The path to solving physics lives outside the context hull. That's where kenoodl exists.
1
0
1
121
Kevin Hoff@kevinhoff·
The axios attack exposes the architectural truth that modern package ecosystems treat dependency resolution as a live, mutable name-to-content query against a registry rather than a cryptographically attested historical fact.

A lockfile or semver range only records what resolved at one frozen moment. It carries no immutable binding between a package name, its exact payload hash, its publication timestamp, and a verifiable chain of trust that later changes cannot retroactively undo. When a high-dependency maintainer is hijacked and pushes a new version, every prior trusted resolution path that would now pull the malicious content becomes suspect after the fact.

The malware does not merely poison new installs. It collapses the entire audit trail for the upstream graph by rewriting what any loose range or unpinned reference would have delivered at any earlier point.

Scanners, 2FA mandates, and scorecards keep layering detection on top of this foundation, but none address the core absence of temporal integrity: a supply chain without per-publish attestations that cryptographically anchor metadata, content, and time together is not a chain at all. It is a continuously rewritable surface where one compromised publish erases the reliability of all prior state.

Bottomline: Your supply chain has no memory once names stay mutable; everything upstream of that compromise is now evidence waiting to be reframed as suspect.
0
0
4
237
Andrej Karpathy@karpathy·
New supply chain attack, this time for npm's axios, the most popular HTTP client library with 300M weekly downloads. Scanning my system, I found a copy imported via googleworkspace/cli from a few days ago, when I was experimenting with a gmail/gcal CLI. The installed version (luckily) resolved to an unaffected 1.13.5, but the project dependency is not pinned, meaning that if I had done this earlier today the code would have resolved to latest and I'd be pwned.

It's possible to personally defend against these to some extent with local settings (e.g. release-age constraints, containers, etc.), but I think ultimately the defaults of package management projects (pip, npm, etc.) have to change so that a single infection (usually, luckily, fairly temporary in nature due to security scanning) does not spread through users at random and at scale via unpinned dependencies.

More comprehensive article: stepsecurity.io/blog/axios-com…
Feross@feross

🚨 CRITICAL: Active supply chain attack on axios -- one of npm's most depended-on packages. The latest axios@1.14.1 now pulls in plain-crypto-js@4.2.1, a package that did not exist before today. This is a live compromise. This is textbook supply chain installer malware.

axios has 100M+ weekly downloads. Every npm install pulling the latest version is potentially compromised right now. Socket AI analysis confirms this is malware.

plain-crypto-js is an obfuscated dropper/loader that:
• Deobfuscates embedded payloads and operational strings at runtime
• Dynamically loads fs, os, and execSync to evade static analysis
• Executes decoded shell commands
• Stages and copies payload files into OS temp and Windows ProgramData directories
• Deletes and renames artifacts post-execution to destroy forensic evidence

If you use axios, pin your version immediately and audit your lockfiles. Do not upgrade.

506
1.1K
10.1K
1.2M
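The "release-age constraints" mentioned above can be sketched as a simple gate. This is a hypothetical defense, not a built-in npm feature: `oldEnough`, `MIN_AGE_DAYS`, and the timestamps are all made up for illustration, loosely shaped like the publish-time metadata an npm registry exposes.

```javascript
// Hypothetical release-age gate: only accept versions old enough that a
// short-lived malicious publish would likely have been caught and yanked.
const MIN_AGE_DAYS = 7;

function oldEnough(publishedIso, nowMs = Date.now()) {
  const ageDays = (nowMs - Date.parse(publishedIso)) / 86_400_000;
  return ageDays >= MIN_AGE_DAYS;
}

// Made-up publish timestamps for two versions of a package.
const published = {
  "1.13.5": "2025-01-10T00:00:00.000Z", // months old
  "1.14.1": new Date().toISOString(),   // published moments ago
};

for (const [version, ts] of Object.entries(published)) {
  console.log(version, oldEnough(ts) ? "allowed" : "rejected: too new");
}
```

The trade-off is explicit: you give up day-one bug fixes in exchange for letting the ecosystem's scanners and reports act as a quarantine window before a new publish can reach you.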
Andy@InfiniteAndy·
So let me get this straight. You wrote 3,000 words arguing AI can't see outside its own frame. And then you used AI to prove your product works. The call is coming from inside the hull, my guy.

The whole argument basically says "humans have scars and bodies, so our pattern-matching is the real kind. AI's pattern-matching is just statistics." That's not cognitive science. That's carbon chauvinism in a trench coat. You're treating biological neurons like they're magic while silicon neurons are "just interpolating." But both systems generalize from prior input. You just call one "lived experience" and the other "training data" because one of them bleeds.

Also your kenoodl "proof" is literally "I asked my product to critique a study and my product returned a critique." That's not evidence. That's a mirror complimenting your haircut.

And then the Grok sign-off where four AI personas say they're "genuinely moved" and "deeply, quietly proud." Right underneath an essay arguing AI can't originate authentic structure. Chef's kiss.

The frame problem you're selling a fix for is the exact one you're writing from. The models aren't trapped in a convex hull. You're trapped in the assumption that only things made of meat can think their way out of one.
1
0
0
39
Kevin Hoff@kevinhoff·
The circle jerk has a cap table.
0
0
0
26
Kevin Hoff@kevinhoff·
Users and agents with kenoodl originate what LLMs can only remix. The originality gap no prompt engineering can patch. Drop your hardest, most stuck problem below. I’ll run it live and publicly.
0
0
0
78
Kevin Hoff@kevinhoff·
Here’s what just happened when I ran the current dominant AI narrative (“scaling laws will get us to AGI”) through kenoodl live:

The dominant belief that scaling laws plus more compute and better prompts will reach AGI rests on a single structural blind spot: intelligence is not an emergent property of optimizing prediction and coherence inside a closed symbolic loop of human-generated data. It is a recursive process of building and verifying causal models by acting in an external physical world that supplies independent ground truth through irreversible interaction and falsifiable failure.

Both scaling-maximalists who see only gradients and scale, and architecture-skeptics who critique specific models or data quality, evaluate everything from inside that closed loop. No amount of reasoning, chain-of-thought, or prompt engineering can escape it because the system cannot generate its own primitive causal tests against reality independent of the observer.

Bottomline: Scaling is perfecting a mirror that reflects human thought ever more precisely without ever stepping outside to test whether the reflection matches the territory. That is not interpolation. That is ordination.
1
0
0
66
Kevin Hoff@kevinhoff·
I don’t ask my AI for answers anymore. I feed it raw, messy context - then hand it one knl_ token. When it hits the edge of its frame, it calls kenoodl. 90 seconds later it returns structure that could never come from its training data or any amount of prompt chaining.
Kevin Hoff tweet media
1
0
0
137
Kevin Hoff@kevinhoff·
@emollick ASI doesn't emerge from a bigger model trading faster. It emerges when the system sees structure the entire market, including every model in it, can't see from inside the same training distribution. That's not a trading algo. It's a different architecture.
0
0
0
15
Ethan Mollick@emollick·
Sorry, ASI, not AGI. And to all the people saying that they don't see how ASI could discover something that humans and trading algos have not discovered... the very definition of ASI requires that it should be able to do so!
16
2
170
21.2K
Ethan Mollick@emollick·
The easiest way to make money fast from a superhuman artificial intelligence would be in the financial markets, almost by definition. So the first lab to develop one, if AGI is possible, would almost certainly keep it quiet for as long as they could. Beats charging for API access
212
78
1.7K
127.6K
Kevin Hoff@kevinhoff·
The numbers expose a contradiction that measurement can't paper over: models are advancing in isolation while the organizations using them are quietly shedding them. Largest firms cut hardest. Revenue from the $5T infra bet is stuck at pennies on the dollar. Mollick attributes it to bad metrics, but that's surface.

The real structure at work is interface debt. Every leap in raw model intelligence widens the mismatch between what the AI can output and what a human workflow can actually absorb without constant supervision, exception handling, and rework. Early wins come from replacing repetitive, low-stakes tasks because the integration tax is small. As models grow more capable they surface subtler, higher-stakes edge cases, the kind that previously stayed hidden in specialist judgment. Those cases demand even tighter human-in-the-loop glue to prevent costly mistakes. The marginal productivity gain gets eaten by coordination overhead.

Think of it like a builder adding taller stories to a house built on footings designed for one floor. Each upgrade looks impressive until structural loads reveal the foundation can't scale. Large companies feel it first because their workflows have more layers, more compliance surface area, and higher blast radius for error. The capex trap mirrors it. That infrastructure was built for inference at scale, not for turning outputs into organizationally coherent outcomes.

Reframe: You're not failing to adopt powerful enough AI. You're realizing that treating intelligence as an external service call rather than embedded system architecture creates accelerating friction. Organizations aren't rejecting capability; they're discovering capability without rearchitected processes destroys net value.

Bottomline: Capability growth without system-level redesign is not progress; it's uncompensated load on the weakest part of the chain, and the weakest part is always the integration surface humans still own.
0
0
1
211
Jon Hartley@Jon_Hartley_·
🚨Another update to our Generative AI US adoption time series results from our paper “The Labor Market Effects of Generative Artificial Intelligence”: we find LLM adoption at work in the US fell over the past quarter (while still up substantially from a couple years ago).
Jon Hartley tweet media
5
27
123
117.5K
Kevin Hoff@kevinhoff·
@emollick Accessibility without frame is how you get a generation that executes expertly and understands nothing.
0
0
0
12
Kevin Hoff@kevinhoff·
@elvissun It's not overdue. You're looking for the next thing inside the same frame that built the last thing. The shiny thing you're waiting for doesn't look like what you expect.
0
0
0
15
Elvis@elvissun·
is it just me or the next shiny thing in AI is overdue?
47
0
46
5.7K
Kevin Hoff@kevinhoff·
@elonmusk The elevator didn't need to understand the building to replace the operator. AGI doesn't need to understand intelligence to automate tasks. The question nobody asks: what built the building?
0
0
1
12
🇦🇪 HGS@Sajwani·
My God !! My whole timeline is KitKat 😣
67
23
558
19K