Angelo Valentino

103 posts


@AI__Angelo

AI and the people shaping what comes next. Host, AI Future Circle (London). Most things are better discussed over a good dinner.

London · Joined January 2024
43 Following · 21 Followers
Angelo Valentino
Angelo Valentino@AI__Angelo·
@kunalb11 Exactly, what used to scale through headcount now scales through computation and intelligence. Companies that can leverage AI to multiply output per person will pull away, creating value in ways traditional metrics never captured.
English
0
0
0
35
Kunal Shah
Kunal Shah@kunalb11·
The next decade’s dominant companies will show rising revenue per employee and expanding margins. In the past it showed in energy and tech. In the future it may be in compute and AI. When output is decoupled from human labor, large outcomes emerge.
English
58
19
403
12.1K
Angelo Valentino
Angelo Valentino@AI__Angelo·
The real disruption isn’t what the flashy vendors are selling, it’s the power to take the same AI and bend it to your workflow, your documents, your cases. Solo lawyers don’t need permission or enterprise contracts anymore; they need the curiosity and confidence to build tools that actually work for them.
English
0
0
2
609
Ann Srivastava
Ann Srivastava@helloparalegal·
The legal tech industry spent the last 3 years telling solo lawyers and small firms that AI would level the playing field. Then they priced it so only BigLaw could afford it.

CoCounsel. $900 a month per seat. For a single user. That is $10,800 a year for one attorney to use an AI research tool that hallucinates 17% of the time according to Stanford's own testing.

Lexis+ AI. Integrated into plans that already cost $270 a month whether you use them or not. Locked into annual contracts you cannot pause. And their AI hallucinated more than 17% of the time in the same study.

Westlaw's AI-Assisted Research. Hallucination rate above 34%. More than one in three queries returning something that is not real. At premium pricing.

Harvey. Raised $300 million. Serves elite firms. If you are a solo doing PI cases in suburban Ohio, Harvey does not know you exist and does not want to.

The pattern is the same one legal tech has followed for 20 years. Build for the firms that can write six-figure checks. Let everyone else figure it out. And everyone else is 75% of the profession.

There are 1.3 million licensed attorneys in the United States. Roughly 49% are in solo practice. Another 15% are in firms of 2 to 5. That is nearly two thirds of all practicing lawyers in firms where $900 a month per seat is not a rounding error. It is a decision between a tool and a paralegal.

These lawyers chose the paralegal. Every time. Not because they do not understand AI. Because the math does not work. If you bill 150 hours a month at $300 an hour and your utilization rate is the national average of 37%, you are collecting maybe $16,000 a month before overhead. You are not spending $900 of that on an AI tool that might make up a case and get you sanctioned.

So the state of the art for most American lawyers in 2026 is the same as it was in 2019. Westlaw. Word. A yellow legal pad. Maybe Clio for billing if they are progressive.

The AmLaw 100 firms have AI teams. They have prompt libraries.
They have custom-trained models for their practice areas. They are running, in 3 days, document review that used to take 200 associate hours. The solo in Tampa is still copying and pasting from a brief template he wrote in 2017. That is the playing field legal tech "leveled."

Here is what nobody in legal tech is talking about because it threatens their entire business model. A solo practitioner with a laptop can now build most of what those $900/month tools do. In a weekend. For the cost of a Claude subscription.

I am not being provocative. I am being specific.

Take the thing lawyers actually need most. Not a chatbot to ask legal questions to. That is what got people sanctioned. Lawyers need tools that work with their actual files. Their actual cases. Their actual documents.

Claude Code runs on your machine. It reads every file in a folder you point it to. It does not go to the internet and generate case law from memory. It reads the documents you already have.

Here is what a solo practitioner can build in a single weekend.

Client intake processing. Right now you get an email or a phone call, you take notes, you type everything into Clio manually, you send a retainer letter, you open a file. Every step is manual. Set up a folder structure. Put your retainer template in it. Put your conflict check list in it. Tell Claude Code what your intake process looks like and have it build you a system where you paste in the client's details and it generates the retainer letter, the conflict check memo, the new matter checklist, and the initial filing deadlines. All in the format you already use. Not some vendor's format. Your format. Your letterhead. Your retainer language.

Or take deadline tracking. You are paying for a calendaring system, or worse, you are using Outlook reminders and hoping. Pull your active case list. Feed it to Claude Code with every relevant deadline type for your practice area. Have it build a tracker that flags deadlines at 30, 14, 7, and 3 days out.
Output to a spreadsheet you already know how to use. Or to your calendar. Or to a daily email. A developer would charge you $5,000 to $15,000 for this. You can build it Saturday morning.

Or take the thing that actually moves the needle in litigation. Preparing for a judge you have never appeared before. Download 30 of this judge's orders from PACER. Put them in a folder with your motion and the opposing brief. Have Claude Code read all of it and tell you how this judge has ruled on the exact issues in your case. What arguments she finds persuasive. What she raises sua sponte. How she structures her analysis. Now have it draft your brief to match how this specific judge reads and reasons. Lex Machina charges thousands a year for judge analytics that give you bar charts. You just built a judge-specific brief preparation system in an afternoon using the actual orders instead of summarized data.

Or document review. You have 2,000 documents in discovery. A vendor wants $15,000 to run them through their review platform. Put them in a folder. Have Claude Code read them and flag the 200 that are responsive to the RFPs. Have it draft a privilege log for the ones that are privileged. Review its work the way you would review a first-year associate's work. Correct where it gets it wrong. Run it again.

This is not hypothetical. Lawyers are doing this right now.

The reason the legal tech industry does not want you to know this is because their entire model depends on you believing that you cannot build these tools yourself. That AI is too complicated. That you need their proprietary wrapper around the same foundation models you can access directly.

CoCounsel is a wrapper around GPT-4. Lexis+ AI is a wrapper around proprietary models. Harvey is a wrapper around Claude and GPT. You are paying $900 a month for a user interface and a brand name sitting on top of models you can access for $20 to $200 a month.

I am not saying these tools are worthless.
If you are a 500-lawyer firm with compliance requirements and you need enterprise deployment with audit trails and SSO, you should buy enterprise software. But if you are a solo. Or a 3-person shop. Or a legal aid lawyer who has never had access to any of this. You can build it yourself now.

The foundation models are the same ones the expensive tools use. Claude Code gives you direct access. It reads your files, it understands your practice, and it does not lock you into an annual contract.

The most expensive legal tech is no longer the best legal tech. The best legal tech is the one you build yourself, because it does exactly what you need and nothing you do not.

The playing field did not get leveled by the companies that promised to level it. It got leveled by the same AI they are reselling to you at a 40x markup.
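The "paste in the client's details and it generates the retainer letter" step can be sketched in a few lines. This is a hypothetical illustration, not the system the post describes: the template text, firm name, and field names below are made up, and in practice the template would be your own letterhead file and Claude Code would generate the surrounding workflow.

```python
from string import Template

# Hypothetical retainer template; in practice this would be your own
# retainer language stored as a file in the intake folder.
# Note: $$ renders a literal dollar sign, ${name} is a placeholder.
RETAINER_TEMPLATE = Template(
    "Dear $client_name,\n\n"
    "This letter confirms that $firm_name will represent you in the "
    "matter of $matter_description. Our hourly rate is $$${rate}, and "
    "we require a retainer of $$${retainer}.\n"
)

def generate_retainer(details: dict) -> str:
    """Fill the retainer template from a dict of intake details."""
    return RETAINER_TEMPLATE.substitute(details)

letter = generate_retainer({
    "client_name": "Jane Smith",
    "firm_name": "Valentino Law",          # hypothetical firm
    "matter_description": "Smith v. Jones",
    "rate": "300",
    "retainer": "2,500",
})
print(letter)
```

The same dict of intake details can feed several templates at once (conflict check memo, new matter checklist), which is the whole point of the folder-based setup the post describes.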
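The deadline tracker flagging "at 30, 14, 7, and 3 days out" reduces to simple date arithmetic. A minimal sketch, assuming the active case list is a plain list of (matter, due date) pairs; the matter names and dates are invented, and a real version would read a case management export such as a Clio CSV.

```python
from datetime import date

# Alert thresholds from the post: 30, 14, 7, and 3 days out.
THRESHOLDS = (30, 14, 7, 3)

def flag_deadlines(deadlines, today):
    """Return (matter, due_date, days_out) for every matter whose
    deadline falls within the widest alert threshold, soonest first."""
    flagged = []
    for matter, due in deadlines:
        days_out = (due - today).days
        if 0 <= days_out <= max(THRESHOLDS):
            flagged.append((matter, due, days_out))
    return sorted(flagged, key=lambda row: row[2])

# Hypothetical active-case list.
cases = [
    ("Smith v. Jones - discovery cutoff", date(2026, 3, 20)),
    ("Doe intake - statute of limitations", date(2026, 9, 1)),
    ("Lee matter - motion response", date(2026, 3, 3)),
]

for matter, due, days in flag_deadlines(cases, today=date(2026, 3, 1)):
    print(f"{days:>3}d  {due}  {matter}")
```

Writing the flagged rows to a spreadsheet or a daily email, as the post suggests, is a few more lines with `csv` or `smtplib`.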
English
45
72
455
76.5K
Robert Scoble
Robert Scoble@Scobleizer·
The next two months in AI agents. At Pokee's hackathon in San Francisco today. It makes a very powerful agentic platform. More on that later tonight when the hackathon concludes. Founder/CEO @ZheqingZhu (Bill) Zhu told me the next move the industry will make (and he'll be driving that) is to get everything working on mobile. He says to expect that in the next month or two. He told me a few other things coming. This industry isn't gonna stand still anytime soon. Oh, and @iruletheworldmo my AI just read thousands of posts and decided to feature yours: alignednews.com/ai If I can have AI read all of the AI community here on X and write a report, and it does it all in a few minutes, what's your excuse? Get started. Even non-technical idiots like me can create amazing things with AI now.
English
8
6
48
4.4K
Angelo Valentino
Angelo Valentino@AI__Angelo·
@fortelabs AI is a mirror, not a compass. The people who thrive are those who know themselves well enough to steer it, rather than letting it steer them.
English
0
0
0
20
Tiago Forte
Tiago Forte@fortelabs·
Most people think the way to get better at AI is to learn more about AI. I think that's backwards.

The people getting the most value from AI aren't the ones who know the most about it. They're the ones who know the most about themselves. Their psychology, their blind spots, how they communicate, what fills them with energy, their hard-won principles and values. The stuff that takes years of living to accumulate and that no model can guess.

Without that, AI just gives you the average of the internet. Polished, plausible, completely generic. You accept it because you don't have a clear enough internal compass to push back.

I learned this the hard way. The more I leaned on AI, the more I lost touch with my own point of view. I couldn't remember what I believed because I'd stopped doing the thinking required to know.

If you can't articulate what you value, how you make decisions, and what your quality bar is, no amount of computing power will fill that gap. That's not a philosophical point. It's a practical one I run into every single day.

It's also why the first thing we build in The AI Second Brain isn't a prompt. It's clarity about who you are and what you're trying to do.

Founding cohort: April 15 to May 1
English
23
5
51
4.3K
Angelo Valentino
Angelo Valentino@AI__Angelo·
@TheChowdhary That’s a killer approach. Roles like that won’t just exist, they’ll define the next generation of high-leverage talent. People who can systematically remove themselves from the equation are effectively scaling themselves across the entire organization.
English
0
0
0
19
Abhilash Chowdhary
Abhilash Chowdhary@TheChowdhary·
was at an event yesterday and a founder said this about hiring:

"we're hiring an ops person whose job is to eliminate their own job"

the role:
> find inefficiencies across the company
> build the process or automation
> make themselves unnecessary
> move to the next problem

obviously there will always be more problems to solve but this feels like a very real role emerging right now: someone who goes into every department, maps what can be automated, automates it, and keeps repeating

very good skill to learn right now if you want to be valuable at almost any company
English
1
0
10
533
Angelo Valentino
Angelo Valentino@AI__Angelo·
@mcuban Spot on. The dilemma for incumbents isn’t hypothetical; it’s already on the horizon. Those who can’t reason through AI-native pivots will get caught between doing too much and doing too little.
English
0
0
0
386
Mark Cuban
Mark Cuban@mcuban·
Every entrepreneur that knows how to use AI is trying to find ways to build AI native companies that completely displace incumbents. For the incumbents, it’s the “Innovator’s AI Dilemma.”

If those startups get traction, and they can’t buy them, the CEOs will face multiple huge dilemmas:

1. Do they tear down their companies and reinvent them as native AI?
2. How do they explain it to public shareholders?

You will know AI is having a huge impact on public companies when there are two types of lawsuits:

- Shareholders that sue the company for tearing down the company and crushing the stock price
- Shareholders that sue the company for NOT tearing down the company and crushing the stock price

I think most CEOs don’t come close to understanding AI in enough detail to even begin to consider these decisions.

Hint: Asking your AI models the best paths from where you are now to being an AI native version that can achieve the same economics has to be one of your initial steps. If asking your models questions doesn’t make sense to you, you are in deep shit
English
173
126
1.1K
233.1K
Chubby♨️
Chubby♨️@kimmonismus·
Two thoughts:

1) It's only a matter of time before the major streaming services introduce AI content; Seedance has demonstrated how good the quality already is.

2) Netflix is taking the lead in making its own models available and thus bringing people into its upcoming ecosystem.

I think we'll see more of this soon.
English
30
18
332
20.3K
Angelo Valentino
Angelo Valentino@AI__Angelo·
@alexkehr Totally. AI can spit out something functional, but it doesn’t capture the nuance of what actually feels right for your product or audience.
English
0
0
0
19
Alex Kehr
Alex Kehr@alexkehr·
A lot of AI design feels lifeless. Not because the models can’t generate good UI. Because they generate baseline-good according to the internet, not according to you.
English
9
0
20
1.5K
Boaz Barak
Boaz Barak@boazbaraktcs·
Throughout history, people worked because they needed to survive. If a technology both cures diseases and reduces the need to work, that is two positives and not one. The risk with AI is not that it will replace jobs, but that it will lead to less material welfare or political power. It doesn’t have to be this way, but it is something we need to watch out for.
will depue@willdepue

the agi pitch of ‘it will solve cancer’ is unfortunately weak because i would gladly trade having to risk cancer vs me and all my descendants losing all economic utility until the end of time, obviously agi world needs to be just miraculously great to counter losing all labor value

English
19
12
155
23.3K
Angelo Valentino
Angelo Valentino@AI__Angelo·
@tomfgoodwin This really highlights how context matters. AI shines in some workflows, but for many roles, human judgment, experience, and the right nudge at the right moment remain irreplaceable. I made a post exploring this conversation
Angelo Valentino@AI__Angelo

Good London VC hearing an AI pitch: “this is very interesting” = no “let’s stay close” = no “we’d like to move forward”= still no "I am happy to tell you..." = delayed no "take a look at these contracts" = we have backed something similar

English
0
0
0
13
Tom Goodwin
Tom Goodwin@tomfgoodwin·
I find it really, really, really interesting to see how people are using AI to change how they work. Something it makes very clear to me is how different people's jobs are. We do have to recognize that a lot of the people that we hear from have pretty atypical jobs. I also know of a lot of people who run big companies that are rather happy to employ an executive assistant. I'm not quite sure why tech folk are so opposed to this. AI is incredible, but quite often it makes me realize just how invaluable human beings are for helping us make the most of our time. The average company is still using American Express travel to do their bookings. It would be a leap for them to do it themselves using Kayak. The idea they're going to use agentic AI to do it soon is quite a stretch for me.
Simon Taylor@sytaylor

The CEO of @brexHQ runs his company through a custom AI he built and named Lemon Pie. And I think it's the future of CEO productivity.

Think about the day job of a Fintech CEO:
- Thousands of Slack channels.
- Hundreds of emails a day.
- Google Docs, WhatsApp, meeting notes.

What should you pay attention to right now?

Pedro Franceschi built an autopilot on OpenClaw that answers that for him. It screens everything and organizes his world around two concepts: people and programs.
- 25 key people he cares about.
- A handful of critical projects.
- Everything else is noise.

It even automates the stuff he'd do by habit on seeing a message, like reviewing product docs by asking the five questions Pedro always asks.

And it works in his personal life too.
- He sends it voice notes on Telegram while driving.
- It once bought him movie tickets: found seats next to his friends, paid with a Brex virtual card.

And I sort of love that he's named every bot he's built "Lemon Pie" since he was 12.

Most CEOs treat being overwhelmed as a discipline problem. Wake up earlier, batch the inbox, hire a chief of staff. Pedro treated it as an engineering challenge. OpenClaw was the unlock that let him scale himself. His system is built around how *he* thinks.

And if you think AI is a lot of noise and hype, chances are you're not taking that engineering approach. It's an unfair advantage right now, as a CEO or anyone else, to have that skill set. OpenClaw is letting millions of people remove the daily papercuts without a dev team. Builders are compounding their Return on Effort with AI.

English
2
2
9
3.3K
Bo Wang
Bo Wang@BoWang87·
AI × bio is the only field where: — you're too slow for AI people and too fast for bio people — you're too computational for biologists and too biological for ML folks — tech VCs overprice you and bio VCs underprice you You are permanently miscalibrated in every direction. It's great actually.
English
27
38
411
16.5K
Boring_Business
Boring_Business@BoringBiz_·
Everyone is talking about AI. Nobody is talking about the small businesses that will continue to cash flow regardless of where AI is headed.

They sell products and services that will keep seeing demand regardless of whether OpenAI or Claude wins the race. They will spit out more cash flow next year regardless of whether AI automates desk jobs or not.

I am talking about the local laundromats, plumbing services, HVAC and blue collar trades. Those are still the most underrated opportunities today, in my opinion.

In an era where the terminal value of software and internet companies is unclear, these real world trades and service businesses will attract more capital than ever before.
English
24
9
181
12.7K
GeniusThinking
GeniusThinking@GeniusGTX·
Andrej Karpathy just mass-shifted his entire AI workflow. Less code. More knowledge.

The man who taught the world to build neural networks is now using LLMs to build something nobody expected. This is where it stops being a tech story.

Karpathy says a large fraction of his recent token throughput is going less into manipulating code and more into manipulating knowledge. Building personal knowledge bases for various topics of research interest.

Think about what that means: one of the most capable engineers alive is choosing to use AI not to write better software. He is using it to think better.

The shift is subtle but the implications are enormous. For years, the AI conversation has been about automation. Replace the coder. Replace the designer. Replace the writer. Karpathy is pointing somewhere different entirely.

The next phase of AI is not about replacing human work. It is about augmenting human understanding. Using LLMs as thinking partners, not task executors.

When the person who literally built the training infrastructure for Tesla's self-driving AI tells you the real value of LLMs is knowledge organization, not code generation, that is a signal worth reading carefully.

The tools are the same. The use case just changed.

That's a wrap. @GeniusGTX is a gallery for the greatest minds in economics, psychology, and history. Follow if that interests you. We are ONE genius away.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
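The "small and naive search engine over the wiki" Karpathy mentions can be sketched as simple term-frequency scoring. A minimal illustration under stated assumptions: the wiki is treated as a mapping of file names to markdown text, and the sample pages below are invented; his actual tool is not public, and a real version would load the .md files from the wiki directory.

```python
import re
from collections import Counter

def search_wiki(pages: dict, query: str, top_k: int = 3):
    """Naive term-frequency search over a {name: markdown_text} wiki.
    Scores each page by how often the query's words appear in it and
    returns the names of the top-scoring pages."""
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    scores = {}
    for name, text in pages.items():
        words = Counter(re.findall(r"\w+", text.lower()))
        score = sum(words[t] for t in terms)  # Counter returns 0 for misses
        if score:
            scores[name] = score
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Tiny in-memory stand-in for the .md wiki; in practice you would read
# the files with pathlib.Path(wiki_dir).glob("**/*.md").
wiki = {
    "attention.md": "Attention lets a model weight tokens by relevance. Attention is quadratic.",
    "rag.md": "Retrieval augmented generation fetches documents before answering.",
    "finetune.md": "Finetuning bakes knowledge into the weights instead of the context window.",
}

print(search_wiki(wiki, "attention tokens"))
```

Exposing a function like this as a CLI is what lets an LLM agent call it as a tool for larger queries, which matches the handoff pattern described in the post.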

English
7
8
83
21K
CG
CG@cgtwts·
Andrej Karpathy is showing a surprisingly simple AI setup that actually works.

instead of using llms just for code or chat he uses them to build a personal knowledge base:
> he collects articles, papers, repos, datasets, and images in one place
> then an llm turns everything into a clean markdown wiki
> it writes summaries, connects ideas, and organizes it all

he uses Obsidian to explore it, while the llm handles all the writing and updates.

as the wiki grows, he starts asking questions and the llm goes through the entire knowledge base to find answers. it also creates notes, slides, and visuals which get added back and make the system better. he runs checks to fix mistakes, fill gaps, and suggest new ideas. so over time it compounds, every question improves the system itself.

it sounds simple, but it becomes a self improving knowledge system. raw information turns into structured understanding. this feels like a new way to think and work.
Andrej Karpathy@karpathy

[Quoted post: Andrej Karpathy, "LLM Knowledge Bases", reproduced in full above.]

English
6
10
21
3.2K
Angelo Valentino
Angelo Valentino@AI__Angelo·
@aakashgupta That’s wild. The real power isn’t in writing code faster. It’s in running parallel experiments, iterating, and ruthlessly killing what doesn’t work. That’s how 120+ features in 90 days actually happens.
English
0
0
0
59
Aakash Gupta
Aakash Gupta@aakashgupta·
Four engineers built Cowork in ten days. They told agent swarms to create a spec, spin up an Asana board, split into tasks, and implement. The swarms spawned a couple hundred agents, made 100 tasks, and built close to the version that shipped. Agent Teams went through hundreds of versions before it shipped. The condensed file view had ~30 prototypes. The terminal spinner saw 50-100 iterations. 80% never shipped. That kill rate is the entire quality process. Anthropic doesn't write better specs. They build more prototypes and kill more of them. The 80% that get thrown away are the reason the 20% that ship feel polished. 120+ features in 90 days stops being surprising when you realize one team is running 5 parallel terminal tabs and round-robining between them. The bottleneck was never "how fast can we code." The bottleneck was "how fast can we evaluate and kill."
Aakash Gupta@aakashgupta

Anthropic shipped 120+ features in 90 days across Claude Code, Cowork, and Claude. I ranked every single one. S tier, A tier, B tier, C tier, D tier. What to adopt now, what to skip, and 4 workflows that chain them together: 🔗 news.aakashg.com/p/anthropic-q1…

English
19
15
146
25.3K
Philipp Keller
Philipp Keller@philkellr·
I'm tired of OpenClaw.

Every 2-3 days I have a major moment which I show to my friends: "look what AI agents can do", but then the other 90% is pure frustration and me cursing at my own AI agent, for which I spent hours choosing a beautiful name and profile picture.

It was fun at the start: I added Telegram, added voice input, then adding skills even with just voice prompting was bliss. Then dementia hit. Facts from 48+ hours ago were forgotten. I installed a 3 level memory system. It felt like a huge hack, it barely works, I encounter bugs every day which I'm fixing. Then it breaks with every update. Not all of it, but little things.

The WhatsApp integration is just insanity. After putting in every markdown file that it shouldn't reply to my friends (IN CAPITAL LETTERS) it happily started to chat with my wife and my goddaughter. And from today even my Claude subscription stops working.

I feel like a failure! I see all the success stories left and right, YT vids and blog posts "I got it to work and here's what the AI agent does for me", and for me I'm still spending 5x the time fixing my agent than just doing the stuff by hand 🤷
English
263
16
515
65K
Angelo Valentino
Angelo Valentino@AI__Angelo·
Absolutely. OpenClaw’s strength isn’t just in the code, it’s in the community and the ecosystem rallying around it. History shows that open platforms often outlast closed alternatives when the network effect and shared innovation are strong. I made a post exploring other conversations; you can check it out.
Angelo Valentino@AI__Angelo

I take photos of interesting smart people that I meet (mostly in AI), ❤️ like if you know them or like the picture !

English
0
0
0
6
Garry Tan
Garry Tan@garrytan·
A lot of the decacorn AI agent cos and labs other than OpenAI are trying to kill OpenClaw or replace it However: I think community and open source is too strong and the Apple II moment will actually happen for OpenClaw itself, not for some corporate closed source solution
English
86
21
349
34.4K
Angelo Valentino
Angelo Valentino@AI__Angelo·
Absolutely. OpenClaw’s strength isn’t just in the code, it’s in the community and the ecosystem rallying around it. History shows that open platforms often outlast closed alternatives when the network effect and shared innovation are strong. I made a post exploring other conversations; you can check it out.
Angelo Valentino@AI__Angelo

I take photos of interesting smart people that I meet (mostly in AI), ❤️ like if you know them or like the picture !

English
0
0
0
10
Angelo Valentino
Angelo Valentino@AI__Angelo·
@SebJohnsonUK @tbpn Makes total sense. With AI becoming a crowded space, narrative and perception are just as critical as the tech itself. Smart move to bring in talent that can shape that story.
English
0
0
0
5
Seb Johnson
Seb Johnson@SebJohnsonUK·
It's an acquihire. Anthropic's marketing has been 10x better than OpenAI's. So in response OpenAI has acquihired two of the best marketing minds in tech. OpenAI doesn't need @tbpn's revenue or distribution. It needs better marketing, and paying $100m-$200m for it after raising $122bn makes a lot of sense.
Paul Nary@ProfPaulNary

OpenAI acquiring @tbpn makes zero sense to me (an M&A professor).

English
19
9
159
21.4K
Angelo Valentino
Angelo Valentino@AI__Angelo·
@TukiFromKL If AI can implement ideas at scale, the real advantage moves to clarity of vision and understanding what actually matters. Execution becomes almost automatic.
English
0
0
3
898
Tuki
Tuki@TukiFromKL·
🚨 do you understand what andrej karpathy just quietly published.. karpathy.. founding team at openai, former head of AI at tesla.. just said something that breaks the entire software industry in one paragraph.. in the LLM agent era.. there's less need to share specific code or apps.. instead you share the IDEA.. and the other person's agent customises and builds it for their specific needs.. let me show you why this is the most important thing posted online today.. the entire software industry is built on one assumption: building software is hard.. that's why you pay $49/month for notion.. $99/month for salesforce.. $299/month for whatever SaaS is sitting in your company's tab right now.. the scarcity of building = the value of the product.. it's been that way since 1995.. karpathy invented "vibe coding" in 2025.. the idea that you stop writing code and start describing what you want.. tools like cursor, claude code, and openclaw turned that into reality.. you talk to your computer.. it builds.. it ships.. it runs your workflows while you sleep.. and now he's saying even THAT is the old way.. now you don't share the app.. you share the IDEA FILE.. a document describing what you want to build and why.. and every person's AI agent reads it.. builds their own custom version.. tuned to their exact needs.. for free.. in minutes.. the scarcity of building just hit zero. every SaaS company built for "normal users" is now competing against a blank text file and an agent with 4 hours to spare.. the winners of the next decade won't be the best builders.. they'll be the best thinkers.. the people who know what to build, why it matters, and how it should feel.. that's how paradigm shifts actually arrive.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries, so my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), and find interesting connections for new article candidates, to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data. E.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning, so that your LLM "knows" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
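Karpathy mentions vibe-coding "a small and naive search engine over the wiki" that an LLM can call as a CLI tool. He doesn't show the code, so as a hedged sketch (function names and the TF-IDF scoring are my own assumptions, not his implementation), a keyword ranker over a directory of .md files could look like:

```python
import math
import re
from collections import Counter
from pathlib import Path

def tokenize(text):
    """Lowercase alphanumeric tokens; deliberately naive."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(wiki_dir):
    """Index every .md file under wiki_dir: per-document term
    counts plus document frequencies for IDF weighting."""
    docs = {}
    for path in Path(wiki_dir).rglob("*.md"):
        docs[str(path)] = Counter(tokenize(path.read_text(encoding="utf-8")))
    df = Counter()
    for counts in docs.values():
        df.update(counts.keys())
    return docs, df

def search(query, docs, df, top_k=5):
    """Rank documents by a naive TF-IDF sum over query terms."""
    n = len(docs) or 1
    terms = tokenize(query)
    scores = {}
    for name, counts in docs.items():
        total = sum(counts.values()) or 1
        score = sum(
            (counts[t] / total) * math.log(1 + n / (1 + df[t]))
            for t in terms
        )
        if score > 0:
            scores[name] = score
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]
```

The point of keeping it this naive is that the LLM, not the ranker, does the real reading: the tool only needs to hand back a few candidate files for the agent to open, which matches the post's observation that fancy RAG wasn't needed at ~small scale.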

English
54
132
1.2K
326.2K