
Angelo Valentino
103 posts

Angelo Valentino
@AI__Angelo
AI and the people shaping what comes next. Host, AI Future Circle (London). Most things are better discussed over a good dinner.




Buddhism teaches many things that chime with AI in these "interesting times". One idea that seems analogous to right now: they believe that between one life ending and the next reincarnation you enter an ungrounded state called the Bardo, where you can still remember your old life but your next life has not yet come clearly into view… much of their teaching is about helping you make that crossing without trauma. open.substack.com/pub/buddhismai…








London AI Weekly Reality Check 🔥

Just wrapped another back-to-back night marathon: AI demos, hack nights, and impromptu pubs full of AI-powered nerds with whiteboards, if you know you know!

Here's the thing I tell everyone: if you skip the events, you miss the pulse. But if you hit too many, in six months you've:
• Met your future cofounders
• Accidentally pitched a half-baked startup idea
• Landed a job offer over tacos at 11pm
• Stayed up till 3am arguing whether agentic systems will take your job… or create a better one

This city doesn't just talk about AI. It builds it — whiteboards, late-night debates, and all.

What was your best London AI moment this week? Drop it below 👇 could be a demo that blew your mind, a random connection, or that one wild 3am conversation!! #LondonAI #AgenticAI #GenAI #AIEngineerEurope #LondonmaxxingAI #AISecurity




If your idea makes immediate sense to everyone, you're probably late.

the agi pitch of 'it will solve cancer' is unfortunately weak, because i would gladly keep the risk of cancer over me and all my descendants losing all economic utility until the end of time. obviously an agi world needs to be just miraculously great to counter losing all labor value

Good London VC hearing an AI pitch:
"this is very interesting" = no
"let's stay close" = no
"we'd like to move forward" = still no
"I am happy to tell you..." = delayed no
"take a look at these contracts" = we have backed something similar

The CEO of @brexHQ runs his company through a custom AI he built and named Lemon Pie. And I think it's the future of CEO productivity.

Think about the day job of a Fintech CEO.
- Thousands of Slack channels.
- Hundreds of emails a day.
- Google Docs, WhatsApp, meeting notes.

What should you pay attention to right now?

---

Pedro Franceschi built an autopilot on OpenClaw that answers that for him. It screens everything and organizes his world around two concepts: people and programs.
- 25 key people he cares about.
- A handful of critical projects.
- Everything else is noise.

It even automates the stuff he'd do by habit on seeing a message — like reviewing product docs by asking the five questions Pedro always asks.

---

And it works in his personal life too.
- He sends it voice notes on Telegram while driving.
- It once bought him movie tickets — found seats next to his friends, paid with a Brex virtual card.

And I sort of love that he's named every bot he's built "Lemon Pie" since he was 12.

---

Most CEOs treat being overwhelmed as a discipline problem. Wake up earlier, batch the inbox, hire a chief of staff. Pedro treated it as an engineering challenge. OpenClaw was the unlock that let him scale himself. His system is built around how *he* thinks.

And if you think AI is a lot of noise and hype, chances are you're not taking that engineering approach. It's an unfair advantage right now — as a CEO or anyone else — to have that skill set. OpenClaw is letting millions of people remove the daily papercuts without a dev team. Builders are compounding their Return on Effort with AI.
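The people-and-programs triage described in the post could look something like this in spirit. A minimal sketch, not the actual OpenClaw setup (which isn't shown): the `key_people` set and keyword-matched `projects` list are my own hypothetical stand-ins for however the real system identifies the 25 people and the critical projects.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str

def triage(messages, key_people, projects):
    """Split an inbox into 'attend now' (from a key person, or touching
    a tracked project) and 'noise' (everything else)."""
    attend, noise = [], []
    for m in messages:
        hits_person = m.sender in key_people
        hits_project = any(p.lower() in m.text.lower() for p in projects)
        (attend if hits_person or hits_project else noise).append(m)
    return attend, noise
```

The point of the sketch is the framing, not the code: everything that doesn't hit a person or a project falls straight into the noise bucket by default.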

Most AI conversations aren't about AI. They're about who gets believed...




For the Builders:

People show up in deserts for fashion weeks. They should see how crowded London AI hackathons are. Hardcore engineers and amazingly weird deep tech startups are everywhere!!! Robots, brain models, agentic stacks, eval frameworks. Not just that tho, there's this incredible vibe of figure‑it‑out‑together energy.

If London AI had a sound, it'd be clacking keyboards + pints in Shoreditch XD.

Btw, where's your favourite London AI hangout? I'll be checking the comments!! #LondonAI #AgenticAI #AIEngineerEurope #LondonmaxxingAI

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian.
You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI) and, more often, hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
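The ingest-then-compile loop above can be sketched in a few lines of Python. Everything here is an assumed layout, not the author's actual tooling: `summarize` is a placeholder for the LLM call, and the incremental step is simply "skip raw files that already have a wiki page."

```python
from pathlib import Path

def summarize(text: str) -> str:
    """Placeholder for an LLM call that would return a markdown
    summary of the source document (hypothetical)."""
    return text[:200]

def compile_wiki(raw_dir: Path, wiki_dir: Path) -> list[Path]:
    """Incrementally 'compile' a wiki: one summary page per raw source
    that does not yet have one, plus an index page with backlinks."""
    wiki_dir.mkdir(exist_ok=True)
    written = []
    for src in sorted(raw_dir.glob("*.md")):
        page = wiki_dir / src.name
        if page.exists():  # already compiled; skip on later runs
            continue
        summary = summarize(src.read_text())
        page.write_text(f"# {src.stem}\n\n{summary}\n\n[[index]]\n")
        written.append(page)
    # Rebuild the index with a link to every wiki page.
    links = "\n".join(f"- [[{p.stem}]]"
                      for p in sorted(wiki_dir.glob("*.md"))
                      if p.stem != "index")
    (wiki_dir / "index.md").write_text(f"# Index\n\n{links}\n")
    return written
```

Running it twice over the same raw/ directory writes nothing new the second time, which is the "incremental" property the post relies on; the `[[...]]` links render as backlinks in Obsidian.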
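The "small and naive search engine" could be as simple as term-frequency scoring over the pages. The post doesn't show its implementation, so this is a guess at that shape, using an in-memory dict of page name to text instead of a directory for brevity.

```python
import re
from collections import Counter

def build_index(pages: dict[str, str]) -> dict[str, Counter]:
    """Tokenize each page into lowercase words and count term frequencies."""
    return {name: Counter(re.findall(r"[a-z0-9]+", text.lower()))
            for name, text in pages.items()}

def search(index: dict[str, Counter], query: str, k: int = 3) -> list[str]:
    """Rank pages by total frequency of the query terms; drop non-matches."""
    terms = re.findall(r"[a-z0-9]+", query.lower())
    scored = [(sum(tf[t] for t in terms), name) for name, tf in index.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0][:k]
```

Exposing `search` both behind a web UI and as a CLI for the LLM is then just two thin wrappers over the same function, which matches the dual use described above.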




Anthropic shipped 120+ features in 90 days across Claude Code, Cowork, and Claude. I ranked every single one. S tier, A tier, B tier, C tier, D tier. What to adopt now, what to skip, and 4 workflows that chain them together: 🔗 news.aakashg.com/p/anthropic-q1…

I take photos of interesting smart people that I meet (mostly in AI). ❤️ like if you know them or like the picture!




OpenAI acquiring @tbpn makes zero sense to me (an M&A professor).




