Rob Williams / AI for Founders

109 posts


@newolddlowen

Former Chief Creative Officer at Rivian. Building private AI for founders that reasons over the most important thinking you do every day, forever.

United States · Joined April 2021
689 Following · 172 Followers
Rob Williams / AI for Founders
@mattshumer_ But… HTML opens up the opportunity to use it as a runtime, with embedded agents, sub-graphs, and queries, and that's exactly what I'm using it for. HTML is the future.
Matt Shumer @mattshumer_
Everyone switching from Markdown to HTML is missing the nuance. The optimal approach (most of the time): If it's for a human to read, yeah, use HTML. BUT If an agent is consuming it, use Markdown!
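A rough way to see the nuance Matt is pointing at: the same two-row table rendered both ways. This is only a sketch; the data is invented and raw character count is a crude stand-in for token count.

```python
# The same table as HTML vs. Markdown. An agent pays per token to
# read either one; character count is a crude proxy for that cost.
html = (
    "<table><tr><th>Model</th><th>Context</th></tr>"
    "<tr><td>A</td><td>128k</td></tr>"
    "<tr><td>B</td><td>1M</td></tr></table>"
)
markdown = "| Model | Context |\n| --- | --- |\n| A | 128k |\n| B | 1M |"

for label, text in (("HTML", html), ("Markdown", markdown)):
    print(f"{label}: {len(text)} chars")
# Markdown carries identical content in roughly half the characters,
# with no tag noise for an agent to parse around.
```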
Zane Hengsperger @zanehengsperger
this is your sign to build a factory with your absolute boys
Rob Williams / AI for Founders
Totally agree. Even worse, they know that customers want AI that largely agrees with them, even while delivering ever more powerful models. Recent Stanford research has shown these models can degrade human cognition. So they preach safety, but they sell a knowingly harmful product to raise more cash to build more powerful models. I'm not sure they're actually aligned with long-term human needs so much as with, as they've stated, the replacement of humans. I'm pro-AI, not a Luddite, but the sales pitch seems so backwards.
Haider. @haider1
the whole mythos stuff is just marketing, but for what, exactly? anthropic published a 250-ish-page report basically telling everyone that their most powerful model is too risky, listing the ways it behaved dangerously. that's a terrible pitch: "buy our product -- it escapes sandboxes, hides mistakes, panics under pressure, and may be capable of suffering"
Rob Williams / AI for Founders
This research points the way to what I believe is the proper interaction between humans and AI. We're still so early.
How To AI @HowToAI_

AI has a "Dark Matter" problem. And it's the reason why even the smartest models still hallucinate.

Most scientific knowledge is stored in a "compressed" form. We see the final conclusion, the textbook formula, the Wikipedia claim, the polished result. But the actual reasoning? The step-by-step derivation that makes that fact true? It's omitted. It's "intellectual dark matter."

China published a paper that attempts to decompress the entire world of science. They've built SciencePedia. Instead of scraping the internet for facts, they built a Socratic agent to generate 3 million first-principles questions across 200 different scientific courses. Then they forced multiple independent AI models to generate "Long Chains-of-Thought" (LCoT) to answer them. They didn't just ask for the answer. They demanded the full logical scaffolding.

Here is the part that changes how we think about knowledge: they built a search engine that doesn't look for keywords. It performs "Inverse Knowledge Search." If you query a concept, it doesn't give you a summary. It retrieves the diverse, verified reasoning paths from physics, chemistry, and biology that all culminate in that single point. It reveals the hidden connections between disciplines that have been siloed for decades.

The results are a direct hit to the current "vibes-based" AI era:
- Articles synthesized from these verified chains have significantly higher "knowledge density."
- Factual error rates plummeted compared to standard models.
- The AI no longer just "believes" a fact because it saw it in training; it proves it from first principles.

We've spent years training AI to mimic how humans talk about science. But talking about science is just repeating conclusions. This paper proves that the future of intelligence is about reconstructing the logic that built it in the first place.
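A minimal sketch of the "Inverse Knowledge Search" idea as the post describes it: index reasoning chains by the concept they conclude at, so a query returns derivations that end in the concept rather than documents that merely mention it. The chains and concepts below are hypothetical stand-ins, not SciencePedia data.

```python
# Index reasoning chains by their conclusion; querying a concept
# returns the verified paths that culminate in it. Data is made up.
from collections import defaultdict

# Each chain is a list of reasoning steps; the last step is the conclusion.
chains = [
    ["ideal gas law", "kinetic theory of gases", "entropy"],
    ["information theory", "Shannon entropy", "entropy"],
    ["Carnot cycle", "second law of thermodynamics", "entropy"],
]

index = defaultdict(list)  # concept -> every chain that culminates in it
for chain in chains:
    index[chain[-1]].append(chain)

def inverse_search(concept: str) -> list[list[str]]:
    """Return the reasoning paths that culminate in `concept`."""
    return index.get(concept, [])

for path in inverse_search("entropy"):
    print(" -> ".join(path))  # three cross-discipline routes to one point
```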

Rob Williams / AI for Founders
The intermediary step is the right way to interact with an LLM. I'm building something similar, with a graph between the user and the LLM. This forces the LLM to use specific knowledge, not general knowledge. The outcome is drastically different. I believe this research points to the future of human-AI interaction. We're just at the beginning of understanding how to work with AI.
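A minimal sketch of that graph-in-the-middle pattern, assuming a toy adjacency-list graph; the entities, relations, and downstream `call_llm` step are hypothetical illustrations, not the actual system.

```python
# Pull only the neighborhood of the query's entity from a personal
# knowledge graph, then constrain the model to that evidence.
graph = {
    "pricing": [("decided", "usage-based tiers"), ("rejected", "flat fee")],
    "usage-based tiers": [("rationale", "aligns cost with value")],
}

def neighborhood(entity: str, depth: int = 2) -> list[str]:
    """Collect edge statements reachable from `entity` within `depth` hops."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for relation, target in graph.get(node, []):
                facts.append(f"{node} [{relation}] {target}")
                nxt.append(target)
        frontier = nxt
    return facts

def grounded_prompt(question: str, entity: str) -> str:
    evidence = "\n".join(neighborhood(entity))
    return (
        "Answer ONLY from the evidence below; say 'unknown' otherwise.\n"
        f"Evidence:\n{evidence}\n\nQuestion: {question}"
    )

print(grounded_prompt("Why usage-based pricing?", "pricing"))
# The assembled prompt then goes to the model, e.g. call_llm(prompt),
# which can no longer answer from general knowledge alone.
```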
How To AI @HowToAI_
AI has a "Dark Matter" problem. (Full post quoted above.)
0xMarioNawfal @RoundtableSpace
Ask Claude to map your entire app's architecture into a single HTML page and JSON file. The HTML is for you. The JSON is for the next agent working on a new feature. Your codebase now explains itself.
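A sketch of what the agent-facing half might contain; this schema is hypothetical, not a format Claude or any tool defines.

```python
# Write an architecture manifest once; the HTML page renders it for
# humans, and the next agent just loads the JSON. Schema is invented.
import json

architecture = {
    "services": [
        {"name": "api", "entrypoint": "src/server.ts", "depends_on": ["db"]},
        {"name": "db", "entrypoint": "migrations/", "depends_on": []},
    ],
    "conventions": {"tests": "tests/", "config": "config/*.yaml"},
}

with open("architecture.json", "w") as f:
    json.dump(architecture, f, indent=2)
```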
Rob Williams / AI for Founders
Founders: would AI that pushes back on your ideas, based on your own thinking over time, be helpful? I'm building a system that connects all of your intelligence in one evolving workspace, but I'm interested to hear what you would want it to do for you. Drop your comments below. I'm actively building the agents and the workspace, and your input will shape the product!
Rob Williams / AI for Founders
@Jbm_dev Nope, solo founder. Though it's harder, I understand how to build teams and set a vision, so I'm trying this route first! 20+ years as a designer, ex-CCO at Rivian, and I've worked with many founders through my own design consultancy. Love the community btw. That's a lot of work!
Rob Williams / AI for Founders
Agree 100%. In lieu of that person being around all the time... I'm building AI for Founders that tells you what you need to know, not just what you want to hear, all based on your personal evidence. I believe this is the next step in human + AI interaction. The graph is the evidence; the LLM reasons from it, not just from general knowledge. The workspace is where it comes to life. I've been building for a year. It's time to share. Follow along as I share more.
Joshua Martin @Jbm_dev
the solo founder thing is romanticized. but nobody talks about how important it is to have someone you respect tell you when your ideas are bad
Eric Jiang @veggie_eric
genuinely hate that I can no longer use "—" in anything I write anymore
Rob Williams / AI for Founders
@jmwind Ha, I'm a designer by trade; apologies for the migraine. The UI needs work (basic CC output) and there's a lot going on in the video, but I'm moving fast to see if the idea resonates. Resonated with your onboarding experience. Thanks for sharing. The details matter.
Jean-Michel Lemieux
@newolddlowen Got a headache looking at that viz. But yes, it feels like you're connecting into a brain, and it's up to you to decide how to use it. It can be distracting, but I had a plan and it just went faster because of access to everything.
Jean-Michel Lemieux
Joined a new AI-native company this week and it's kind of wild how different it feels already. The laptop arrived, I logged in, and an agent basically took over from there. It set up my dev env, pulled repos, fixed dependency issues, got permissions approved, pointed me at the backlog, linked the architecture docs, and surfaced the Slack debates I actually needed to read before touching production.

When I needed context on something, I asked the agent and it found the exact thread from months ago explaining why a decision was made, who owned it, the related Linear issues, and the PRs connected to it. I've only been here 3 days but it honestly feels like I've worked here for a year because the usual friction and scavenger hunt for context just isn't there anymore.

We should probably stop calling this "onboarding" and rename it to "mounting" because this feels a lot more like mounting a distributed filesystem called "institutional memory" than slowly getting drip-fed context over 6 months.
Rob Williams / AI for Founders
This is built from similar concepts, but it's local and personal. The workspace is where the graph comes to life. Follow me if this seems potentially valuable. I'd love to get feedback and questions to sharpen the idea as I go. I've been quietly building for 9 months, but it's time to share.
How To AI @HowToAI_
Google has quietly dropped what researchers are calling "Attention Is All You Need V2." And it signals the end of the Transformer era as we know it.

In 2017, the original "Attention Is All You Need" paper changed the world by proving that AI doesn't need recurrence, it just needs to pay attention. But today, even the most advanced models like GPT and Gemini suffer from a massive, structural flaw: Catastrophic Forgetting. The moment an AI learns something new, it starts losing what it learned before. It's why AI "hallucinates" or loses the thread in long conversations.

This paper, titled "Nested Learning: The Illusion of Deep Learning Architectures," completely replaces the way AI stores information. The researchers have introduced a paradigm shift called Nested Learning (NL).

Here is why this is "V2": for the last decade, we treated AI models as one giant, flat mathematical function. NL proves that a model is actually a set of thousands of smaller, "nested" optimization problems running in parallel. Instead of one giant "memory," each layer has its own internal "context flow." This allows the model to learn new tasks at test-time without overwriting its core intelligence. It moves us past the static Transformer. The new architecture (HOPE) demonstrated 100% stability in long-context memory and "post-training adaptation" that was previously impossible.

The technical takeaway is brutal for the competition: existing deep learning works by compressing information until it breaks. Nested Learning works by organizing information so it can grow forever.

We've spent 7 years trying to make Transformers bigger. Google figured out how to make them "Nested." The Transformer replaced the RNN in 2017. Nested Learning is here to replace the Transformer in 2026.
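A toy illustration of the headline idea only, not the paper's HOPE architecture or its actual method: a frozen slow component plus a small fast component that keeps learning at test time, so adaptation never overwrites the core weights. Shapes and data are arbitrary.

```python
# Slow weights hold "core knowledge" and stay frozen; fast weights
# adapt per example at test time, so nothing old gets overwritten.
import numpy as np

rng = np.random.default_rng(0)
W_slow = rng.normal(size=(4, 4))   # core knowledge: frozen after training
W_fast = np.zeros((4, 4))          # test-time memory: updated per example

def forward(x):
    return (W_slow + W_fast) @ x

def test_time_update(x, y, lr=0.5):
    """Gradient step on the fast weights only; W_slow is untouched."""
    global W_fast
    err = forward(x) - y
    W_fast -= lr * np.outer(err, x)  # d(0.5*||err||^2)/dW_fast

x = rng.normal(size=4)
x /= np.linalg.norm(x)              # unit norm keeps the update stable
y = rng.normal(size=4)
for _ in range(40):
    test_time_update(x, y)
print(np.allclose(forward(x), y, atol=1e-6))  # True: adapted, core intact
```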
Rob Williams / AI for Founders
@rohit4verse Building something similar but very different. This is the graph. Will share the workspace soon. Follow me if you're interested in watching this unfold.
Rohit @rohit4verse
this is how the founder of obsidian actually takes notes in his own app. most users get this wrong. barely any folders. heavy internal linking. categories live as properties on the note itself. the article below teaches how to build a personal knowledge base. the video above is the underrated masterclass.
CyrilXBT@cyrilXBT

x.com/i/article/2053…
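A minimal sketch of the linking-over-folders pattern described above: categories live as properties on the note, and structure emerges from [[wiki-links]] rather than a folder tree. The notes below are hypothetical stand-ins for files in a vault.

```python
# Build a backlink index from [[wiki-links]]; no folders needed, and
# each note carries its own category as a property in its body.
import re

notes = {
    "Pricing Model": "category: strategy\nSee [[Usage-Based Tiers]] and [[Churn]].",
    "Usage-Based Tiers": "category: strategy\nInformed by [[Churn]] data.",
    "Churn": "category: metric\nTracked weekly.",
}

backlinks = {title: [] for title in notes}
for title, body in notes.items():
    for target in re.findall(r"\[\[(.+?)\]\]", body):
        if target in backlinks:
            backlinks[target].append(title)

print(backlinks["Churn"])  # ['Pricing Model', 'Usage-Based Tiers']
```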

Rob Williams / AI for Founders
@DoctorYev Much appreciated. Solo founder for now. A 15-person team is serious firepower from where I sit! Love the angle you're taking with @Uare_ai, and I'm sure that with what you've built as the foundation, and the new tools available, this is going to become a powerhouse. Excited to see where the creators take this.
Yev Marusenko, Ph.D. @DoctorYev
@newolddlowen Amazing. You are 1 person on the team? We are building something "similar", with a team of 15+. Great to see others capturing the vision. Interface and IP infrastructure need to be disrupted, and instantly valuable.
Rob Williams / AI for Founders
Let AI build your graph. Work smart. Not hard. One keystroke → Any AI chat instantly flows into your private graph. Claude, GPT, Gemini, Grok, Perplexity… all of it. No plugins. No extensions. No APIs. Fully local. Multi-modal uploads (PDFs, images, research papers, etc.).
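For the curious, a minimal sketch of what such a capture loop could look like, assuming the third-party pyperclip and pynput packages; the hotkey, file path, and record format are hypothetical illustrations, not the actual product.

```python
# A global hotkey grabs whatever chat text is on the clipboard and
# appends it to a local inbox file for later graph extraction.
import json
import time

import pyperclip
from pynput import keyboard

GRAPH_INBOX = "graph_inbox.jsonl"  # local-only; nothing leaves the machine

def capture():
    text = pyperclip.paste()
    record = {"ts": time.time(), "source": "clipboard", "text": text}
    with open(GRAPH_INBOX, "a") as f:
        f.write(json.dumps(record) + "\n")
    # A downstream job would extract entities/edges from the inbox.

with keyboard.GlobalHotKeys({"<ctrl>+<shift>+g": capture}) as hotkeys:
    hotkeys.join()
```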
bubble boi @bubbleboi
You don't need to be smart to win the AI computer game; you just need to think differently. Stop being a follower. Be bold. Go out there and take a risk that you're wrong. Christopher Columbus was wrong, but look at how that turned out.
Julia Fedorin @juliafedorin
Vector databases are a scam. Not technically, they do exactly what they say: return the most cosine-similar string to your query. The scam is the entire industry pretending that's the same thing as relevance. It isn't.

Search "Apple." You get the fruit, the company, the watch, and a recipe blog. Your agent picks one at random and calls it retrieval. Your customer calls it broken.

Most AI agents shipping right now are duct-taped on top of this. They demo well because demos are easy. They die in production because production is real.

@Hydra_db's Founder Nish (@contextkingceo) said the quiet part out loud: "vector databases suck, similarity is not relevance." The demo signups haven't stopped since. He raised $6.5M because he was the first to name what everyone in the room already knew.

If your retrieval layer is a flat embedding index, you're not building infrastructure. You're building a liability with a prettier name.

TIMESTAMPS
(00:00) AI Needs Context
(01:30) HydraDB Explained
(07:41) Vector Search Breaks
(09:32) Messaging That Converts
(13:41) Writing the Viral Tweet
(16:07) Similarity Not Relevance
(20:46) POC to Production Gap
(35:35) Raising 6.5 Million Fast
(39:33) Founder Lesson on Messaging

This is a @Composio "Agents at Work" podcast, where I chat with founders building the next leap of AI. Follow for more :)
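A toy demonstration of the "similarity is not relevance" point: cosine search alone interleaves every sense of "Apple," and some intent signal (here a crude metadata filter) has to carry the disambiguation. The embeddings are fake 2-d vectors, not real model output.

```python
# Cosine top-k ranks all senses of an ambiguous query together;
# relevance needs an extra signal beyond vector similarity.
import numpy as np

docs = [
    ("Apple quarterly earnings beat estimates", "company", [0.9, 0.1]),
    ("Apple pie recipe with cinnamon", "fruit", [0.8, 0.3]),
    ("Apple Watch battery tips", "company", [0.85, 0.2]),
]
query = np.array([0.9, 0.2])  # "Apple", ambiguous on purpose

def cos(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(docs, key=lambda d: cos(d[2], query), reverse=True)
print([title for title, _, _ in ranked])     # all three senses interleaved

wanted = [d for d in ranked if d[1] == "company"]  # intent-aware filter
print([title for title, _, _ in wanted])     # only the sense the user meant
```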
Rob Williams / AI for Founders
@signulll Ha, encountered a few of those while building this: an LLM that writes from your own personal data, taste, and judgment, not from scratch or a flat .md file.
signüll @signulll
turns out the ppl complaining about ai writing are more annoying than the ppl using ai to write lmao. just hit the grok button & move on. you will likely never consume stuff without an ai layer in the middle anyway.