
Rob Williams / AI for Founders
109 posts

@newolddlowen
Former Chief Creative Officer at Rivian. Building private AI for founders that reasons over the most important thinking you do every day, forever.
United States · Joined April 2021
689 Following · 172 Followers

@mattshumer_ But… HTML opens up the opportunity to use it as a runtime, with embedded agents and sub-graphs and queries, and this is just what I’m using them for. HTML is the future.

@zanehengsperger @TheZacharyStone Love seeing the ambition. Keep pushing! We need more of this.

Totally agree. Even worse, they know that customers want AI that largely agrees with them, while delivering even more powerful models. Recent Stanford research has shown that these models can degrade human cognition.
So they preach safety, but they sell a knowingly harmful product, to raise more cash, to build more powerful models. I’m not sure they are actually aligned with long-term human needs so much as, as they’ve stated, with the replacement of humans.
I’m pro-AI, not a Luddite, but the sales pitch seems so backwards.

the whole mythos stuff is just marketing
but for what, exactly?
anthropic published a 250-ish-page report basically telling everyone that their most powerful model is too risky, listing the ways it behaved dangerously
that's a terrible pitch
"buy our product -- it escapes sandboxes, hides mistakes, panics under pressure, and may be capable of suffering"



The intermediary step is the right way to interact with an LLM. I'm building something similar, with a graph between the user and the LLM. This forces the LLM to use specific knowledge rather than general knowledge. The outcome is drastically different. I believe this research points to the future of human-AI interaction. We're just at the beginning of understanding how to work with AI.

AI has a "Dark Matter" problem.
And it’s the reason why even the smartest models still hallucinate.
Most scientific knowledge is stored in a "compressed" form. We see the final conclusion, the textbook formula, the Wikipedia claim, the polished result.
But the actual reasoning? The step-by-step derivation that makes that fact true?
It’s omitted. It’s "intellectual dark matter."
China published a paper that attempts to decompress the entire world of science.
They’ve built SciencePedia.
Instead of scraping the internet for facts, they built a Socratic agent to generate 3 million first-principles questions across 200 different scientific courses.
Then, they forced multiple independent AI models to generate "Long Chains-of-Thought" (LCoT) to answer them.
They didn't just ask for the answer. They demanded the full logical scaffolding.
Here is the part that changes how we think about knowledge:
They built a search engine that doesn't look for keywords. It performs "Inverse Knowledge Search."
If you query a concept, it doesn't give you a summary. It retrieves the diverse, verified reasoning paths from physics, chemistry, and biology that all culminate in that single point.
It reveals the hidden connections between disciplines that have been siloed for decades.
The results are a direct hit to the current "vibes-based" AI era:
- Articles synthesized from these verified chains have significantly higher "knowledge density."
- Factual error rates plummeted compared to standard models.
- The AI no longer just "believes" a fact because it saw it in training; it proves it from first principles.
We’ve spent years training AI to mimic how humans talk about science.
But talking about science is just repeating conclusions.
This paper proves that the future of intelligence is about reconstructing the logic that built it in the first place.
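A minimal sketch of what "inverse knowledge search" could look like, assuming the generated reasoning chains are indexed by the conclusion they derive rather than by surface keywords (all class and method names here are hypothetical, not from the paper):

```python
# Hypothetical sketch of inverse knowledge search: instead of matching
# keywords in articles, index full reasoning chains by the conclusion
# they culminate in, then retrieve every chain that derives the concept.
from collections import defaultdict

class InverseKnowledgeIndex:
    def __init__(self):
        # conclusion -> list of (discipline, reasoning steps)
        self._by_conclusion = defaultdict(list)

    def add_chain(self, discipline, steps, conclusion):
        self._by_conclusion[conclusion.lower()].append((discipline, steps))

    def search(self, concept):
        # Return every verified derivation that ends at this concept,
        # across disciplines -- the "hidden connections" the post describes.
        return self._by_conclusion.get(concept.lower(), [])

index = InverseKnowledgeIndex()
index.add_chain("physics", ["kinetic theory", "ideal gas law"],
                "Temperature is mean kinetic energy")
index.add_chain("chemistry", ["reaction rates", "Arrhenius equation"],
                "Temperature is mean kinetic energy")

chains = index.search("temperature is mean kinetic energy")
```

The point of the inversion: a keyword search would return one summary, while this returns two independent derivations of the same claim, one from physics and one from chemistry.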


Founders: would AI that pushes back on your ideas, based on your own thinking over time, be helpful?
Building a system that connects all of your intelligence in one evolving workspace.
But I'm interested to hear: what would you want it to do for you?
Drop your comments below. Actively building the agents and the workspace, and your input will shape the product!

@Jbm_dev Nope, solo founder. Though it's harder, I understand how to build teams and set a vision, so I'm trying this route first!
20+ years as a designer, ex-CCO at Rivian, worked with many founders through my own design consultancy.
Love the community, btw. That’s a lot of work!

Agree 100%. In lieu of that person being around all of the time...
I'm building AI for Founders that tells you what you need to know. Not just what you want to hear. All based on your personal evidence.
I believe this is the next step in human + AI interaction.
The graph is the evidence; the LLM reasons from it, not just from general knowledge. The workspace is where it comes to life.
I've been building for a year. It's time to share. Follow along as I share more.

@veggie_eric Time to bring it back. Ardent user as well.

@jmwind Ha, I'm a designer by trade; apologies for the migraine. The UI needs work (basic CC output) and there's a lot going on in the video, but I'm moving fast to see if the idea resonates.
Resonated with your onboarding experience. Thanks for sharing. The details matter.

@newolddlowen got a headache looking at that viz. but yes, feel like you're connecting into a brain and up to you to decide how to use it. It can be distracting, but I had a plan and it just went faster because of access to everything.

Joined a new AI-native company this week and it’s kind of wild how different it feels already.
The laptop arrived, I logged in, and an agent basically took over from there. It set up my dev env, pulled repos, fixed dependency issues, got permissions approved, pointed me at the backlog, linked the architecture docs, and surfaced the Slack debates I actually needed to read before touching production.
When I needed context on something, I asked the agent and it found the exact thread from months ago explaining why a decision was made, who owned it, the related Linear issues, and the PRs connected to it.
I’ve only been here 3 days but it honestly feels like I’ve worked here for a year because the usual friction and scavenger hunt for context just isn’t there anymore.
We should probably stop calling this “onboarding” and rename it to “mounting” because this feels a lot more like mounting a distributed filesystem called “institutional memory” than slowly getting drip-fed context over 6 months.

Google has quietly dropped what researchers are calling "Attention Is All You Need V2."
And it signals the end of the Transformer era as we know it.
In 2017, the original "Attention Is All You Need" paper changed the world by proving that AI doesn't need recurrence, it just needs to pay attention.
But today, even the most advanced models like GPT and Gemini suffer from a massive, structural flaw: Catastrophic Forgetting.
The moment an AI learns something new, it starts losing what it learned before. It’s why AI "hallucinates" or loses the thread in long conversations.
This paper, titled "Nested Learning: The Illusion of Deep Learning Architectures," completely replaces the way AI stores information.
The researchers have introduced a paradigm shift called Nested Learning (NL).
Here is why this is "V2":
For the last decade, we treated AI models as one giant, flat mathematical function. NL proves that a model is actually a set of thousands of smaller, "nested" optimization problems running in parallel.
Instead of one giant "memory," each layer has its own internal "context flow." This allows the model to learn new tasks at test-time without overwriting its core intelligence.
It moves us past the static Transformer. The new architecture (HOPE) demonstrated 100% stability in long-context memory and "post-training adaptation" that was previously impossible.
The technical takeaway is brutal for the competition:
Existing deep learning works by compressing information until it breaks. Nested Learning works by organizing information so it can grow forever.
We’ve spent 7 years trying to make Transformers bigger. Google figured out how to make them "Nested."
The Transformer replaced the RNN in 2017.
Nested Learning is here to replace the Transformer in 2026.
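To make the nested-optimizer intuition concrete, here is a toy illustration of my own (not the paper's HOPE architecture, and the function names are made up): a slow "core" weight that is frozen at test time, plus a fast per-context weight that is re-fit for each new context, so adaptation never overwrites what the core has learned.

```python
# Toy illustration of the nested-learning intuition (not the paper's
# algorithm): the model is two nested optimization problems. The slow
# core weight is frozen at test time; a fast per-context offset is
# re-fit for each context, so adapting never overwrites the core.
def adapt(core, targets, fast_lr=0.5, steps=10):
    fast = 0.0  # inner problem: fit an offset to this context only
    for _ in range(steps):
        for y in targets:
            err = (core + fast) - y
            fast -= fast_lr * err  # gradient step on the fast weight alone
    return fast

core = 1.0                    # outer problem: learned once, then frozen
fast_a = adapt(core, [5.0])   # context A learns an offset near 4.0
fast_b = adapt(core, [-2.0])  # context B learns an offset near -3.0
# core is still 1.0 after both contexts: test-time adaptation
# without catastrophic forgetting of the shared weight.
```

In a flat model, both contexts would pull on the same weight and the second would partially erase the first; here each context gets its own inner optimization while the outer one stays intact.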


@rohit4verse Building something similar but very different. This is the graph. Will share the workspace soon. Follow me if you're interested in watching this unfold.

this is how the founder of obsidian actually takes notes in his own app.
most users get this wrong.
barely any folders. heavy internal linking. categories live as properties on the note itself.
the article below teaches how to build a personal knowledge base. the video above is the underrated masterclass.
CyrilXBT@cyrilXBT

@DoctorYev Much appreciated. Solo founder for now.
A 15-person team is serious firepower from where I sit!
Love the angle you're taking with @Uare_ai , and I'm sure with what you've built as the foundation, and the new tools available, this is going to become a powerhouse.
Excited to see where the creators take this.

@newolddlowen Amazing.
You are 1 person on the team?
We are building something "similar".
With a team of 15+.
Great to see others capturing the vision.
Interface and IP infrastructure need to be disrupted, and made instantly valuable.



Vector databases are a scam.
Not technically; they do exactly what they say: return the most cosine-similar string to your query. The scam is the entire industry pretending that's the same thing as relevance.
It isn't.
Search "Apple." You get the fruit, the company, the watch, and a recipe blog. Your agent picks one at random and calls it retrieval. Your customer calls it broken.
Most AI agents shipping right now are duct-taped on top of this. They demo well because demos are easy. They die in production because production is real.
@Hydra_db's Founder Nish (@contextkingceo) said the quiet part out loud — "vector databases suck, similarity is not relevance" — and the demo signups haven't stopped since.
He raised $6.5M because he was the first to name what everyone in the room already knew.
If your retrieval layer is a flat embedding index, you're not building infrastructure. You're building a liability with a prettier name.
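The "Apple" failure mode is easy to reproduce. A toy sketch with made-up 3-d embeddings (the vectors and document names are invented for illustration): every sense of "Apple" lands within a few hundredths of cosine similarity, so the top hit is decided by tiny margins in the embedding, not by what the user meant.

```python
# Toy demo of "similarity is not relevance": with an ambiguous query,
# all senses score nearly the same cosine similarity, and the winner
# is an accident of the embedding, not the user's intent.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up embeddings: axis 0 ~ "food", axis 1 ~ "tech", axis 2 ~ "hardware".
docs = {
    "apple pie recipe":         [0.9, 0.1, 0.0],
    "Apple quarterly earnings": [0.1, 0.9, 0.0],
    "Apple Watch teardown":     [0.2, 0.8, 0.1],
}
query = [0.6, 0.55, 0.0]  # "Apple": nearly equidistant from both senses

sims = {name: cosine(query, vec) for name, vec in docs.items()}
ranked = sorted(sims, key=sims.get, reverse=True)
# All three scores cluster within ~0.08 of each other; a flat index
# returns the narrow winner and calls it retrieval.
```

A retrieval layer that knows context (which sense of "Apple" this user or agent cares about) would separate these; a flat embedding index cannot.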
TIMESTAMPS
(00:00) AI Needs Context
(01:30) HydraDB Explained
(07:41) Vector Search Breaks
(09:32) Messaging That Converts
(13:41) Writing the Viral Tweet
(16:07) Similarity Not Relevance
(20:46) POC to Production Gap
(35:35) Raising 6.5 Million Fast
(39:33) Founder Lesson on Messaging
This is a @Composio "Agents at Work" podcast, where I chat with founders building the next leap of AI.
Follow for more:)

@signulll Ha - encountered a few of those while building this - an LLM that writes from your own personal data, taste and judgement, not from scratch or a flat .md file.







