Node Smith

967 posts

Node Smith banner
Node Smith

@NodeSmith_

one lie at a time

Joined April 2024
335 Following · 94 Followers
Node Smith retweeted
sharvil
sharvil@0xSharvil·
$7B+ deployed by Thrive Capital in the last 24h into the biggest AI markets.
> @anduriltech / $5B
> @IsomorphicLabs / $2.1B
> @forushq / $160M+
the pattern is clear: coding agents are to AI what chatrooms were to the internet. the first obvious killer app. useful. addictive. easy to understand. and a clear sign that the platform works. but the real value came later: search. payments. cloud. commerce. logistics. social networks.
same with AI. the biggest checks of ai capital are moving into systems that run the world.
> AI for national security.
> AI for drug discovery.
> AI for healthcare access.
Anduril Industries@anduriltech

x.com/i/article/2054…

English · 0 · 1 · 9 · 524
Divya Ranjan
Divya Ranjan@divyaranjan_·
Thanks @Microsoft 🫡 been yapping to friends for months that AI is going to massively change the way hackers operate. Spent a few hours a day earlier this year testing that idea and ended up finding a lot of vulnerabilities across products millions use daily. This is just one of the reports that recently got approved, while a bunch more from other companies are still in triage. Crazy part? Mythos isn’t even here yet and people are still underestimating what’s coming.
Divya Ranjan tweet media
English · 4 · 3 · 23 · 518
Node Smith retweeted
dhruvieiei
dhruvieiei@StackDhruv·
we're hiring a linkedin growth person.
not a "social media manager", someone who actually gets the platform.
we're a dubai based agency.
what you'll be doing:
→ running dm campaigns
→ managing + growing 6-7 accounts
→ writing posts that don't flop
if linkedin is your thing and you have proof - slide in. rt appreciated 🙏
English · 7 · 1 · 9 · 1K
Node Smith retweeted
Capx AI
Capx AI@0xCapx·
gcapx
Filipino · 12 · 10 · 51 · 3.1K
Node Smith
Node Smith@NodeSmith_·
Claude server is down. Or is it just me?
GIF
English · 0 · 0 · 0 · 129
Node Smith retweeted
Divya Ranjan
Divya Ranjan@divyaranjan_·
"AI made you faster. Your brain didn't get bigger"

@karpathy's recent post went viral for sharing how he uses LLMs to build personal knowledge bases. Interestingly, I've been building a more holistic version of what he described for the past few weeks.

Introducing Pattrns, an AI interface crafted for deep parallel work, with a partner called Dots that just knows you and grows with you from day one.

Why? A few months back, I realised I was working with so many tools / terminals / windows. AI had made me 10x faster, but being efficient at all times required all my focus and constant attention/depth. AI was creating 100x more output daily than my brain could process, and the constant context switching and orienting myself again and again was killing me. Alsooo, why is every AI chat so linear? The entire experience of using AI was disorienting me. Another agent wasn't the solution for me; an entire interface that connected all the dots for me automatically was.

So I built Pattrns. Here's what it actually is:

Pattrns is a visual environment to think and do more knowledge work with AI. It keeps you oriented at all times and uses visual threads, kind of like how our brain works (think your prefrontal cortex externalised). Your research, your references, and your thoughts for all your different threads live side by side as context for AI. The interface gives you one view with infinite depth. You can run parallel sessions across projects, drop a massive question in one thread, and switch to another to keep working. Focus when you want depth, expand when you want the big picture. My early version was actually an infinite canvas with chat, but using it daily became a bottleneck. Infinite canvases eventually just turn into noise, especially for boards that keep evolving.

Then there's Dots, the ambient intelligence underneath it all. It learns your taste and decisions by watching your actions. It pays attention to what you care about, what you curate, and what you engage with (also how much, think PageRank). Over time, it just knows you. You never have to re-explain your thinking, your taste, or your decisions ever again. It does this by auto-organizing and constantly updating your memory graph into a board ("Me") for you to look at, edit, or chat with. You are also always fully aware of what it knows. The underlying rule is simple: organization is Dots' job, but thinking and creating is yours. So every chat just feels like you're talking to someone who already gets you.

This is how it feels to use:

Day one: During onboarding, you import your past AI chats (Claude / ChatGPT) and data (Apple Notes, Notion, Evernote). Dots reads through everything, starts creating your Me board with your entire memory graph, auto-resolves conflicts, and just knows you from the start.

Week one: You're working across three projects. You drop research into one board, brainstorm in another, execute in a third. Switch between them instantly. The AI already knows what each board is about because it sees your cards, your structure, your context. No re-explaining. You can just start chatting anywhere and it stays updated at all times.

Month one: Dots knows you and has seen what you've been creating and doing: what you build on vs what you explore and move on from. It's learned your taste through your actions, not your words, relative to the different boards. When you ask it to design something, it already knows you hate rounded corners in that exact project. When you're debugging, it remembers you prefer logs over breakpoints. Every correction you make teaches it. Every card you create sharpens its understanding.

The result? You stop maintaining tools and start using them. No tagging. No filing. No "I should organize this later" guilt. Conversations are JSONL you can grep, Git tracks everything. Zero lock-in. Dots understands the context as the what, and the conversations that led to it as the why.

And there's a lot more under the hood. Everything stays local (your brain is a folder you own). Privacy is a mission statement; nothing is stored online. You can literally just drop your entire Obsidian vault here and watch it get organized beautifully. It's powered by Anthropic's Agent SDK, so Dots is as capable and agentic as it gets. You can bring all your MCPs, and if an API or skill doesn't exist, just dump things and ask Dots to create it. Repeat something enough and Dots suggests turning it into a skill automatically. Every chat has reply threads (like Slack) so you can drill into any thought without losing the main conversation, and a TLDR button to catch up in seconds.

Who is it for? I believe there are 2 kinds of people doing major work with AI:
1. Those who want fully autonomous agents that take a prompt and do everything. OpenClaw, AI chief of staff, that whole wave.
2. Those who sit with it, plan, and execute step by step so their exact taste is translated into the output.

Pattrns is for the latter! You will soon even be able to use the browser extension and Pattrns MCP to bring your own context to any chat agent you use daily, so it automatically starts thinking like you.

Anyway, Pattrns is a product I always wanted for myself and I deeply care about this cause. My ultimate mission is to eventually have an interface that is as intuitive as paper and pen, along with an ambient AI that watches you and unifies everything you do in one place, constantly organizing your context so you keep coming back to it. What would that eventually feel like? That Pinterest image you keep going back to in your browser will soon be auto-organized in a space for you.

The early access for the beta is going live today (invite only; Mac only for now). Reply with what you're building right now and I'd love to send you an invite soon!

PS: There was no AI ever used while crafting this entire product experience, just pen and paper. Only used AI to build it. Taste is human :')
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI) and, more often, hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
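The "compile raw/ into a .md wiki plus index" step described above can be sketched roughly as follows. This is a minimal illustration, not Karpathy's actual setup: the function name `compile_wiki` and the file layout are assumptions, and `summarize` is a placeholder for the LLM call that would write the real article text.

```python
import pathlib

def compile_wiki(raw_dir, wiki_dir, summarize):
    """'Compile' a raw/ directory of markdown sources into a wiki:
    one page per source (with a backlink to its raw file) plus an
    index.md listing every page. `summarize` stands in for the LLM
    call that writes the actual article body."""
    raw = pathlib.Path(raw_dir)
    wiki = pathlib.Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    entries = []
    for src in sorted(raw.glob("*.md")):
        body = summarize(src.read_text(encoding="utf-8"))
        page = wiki / src.name
        # Each wiki page carries the LLM-written body and a backlink to raw/.
        page.write_text(
            f"# {src.stem}\n\n{body}\n\n[source](../{raw.name}/{src.name})\n",
            encoding="utf-8",
        )
        entries.append(f"- [{src.stem}]({src.name})")
    # The index is the brief "auto-maintained index file" the Q&A step leans on.
    (wiki / "index.md").write_text(
        "# Wiki index\n\n" + "\n".join(entries) + "\n", encoding="utf-8"
    )
    return len(entries)
```

In the real workflow the LLM agent maintains these files itself; a script like this just shows the shape of the artifact it produces (pages, backlinks, index) that Obsidian then renders.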

English · 46 · 27 · 296 · 31.6K
Node Smith retweeted
Capx AI
Capx AI@0xCapx·
riding the AICM wave
Capx AI tweet media
English · 13 · 13 · 48 · 6.6K
Node Smith retweeted
Capx AI
Capx AI@0xCapx·
Capx × @elevenlabs
The Capx ecosystem now has priority access to the 'ElevenLabs for Startups Program' 🤝
Founders in the Capx ecosystem get:
→ 12 months free access
→ 33M characters (~600 hours of audio)
→ Direct POC with ElevenLabs
Voice is becoming the interface layer for AI agents. We're making it available to Capx ecosystem apps from day one.
Capx AI tweet media
English · 18 · 10 · 53 · 4.9K
Node Smith retweeted
Capx AI
Capx AI@0xCapx·
New partnership update 💪 Capx is very proud to announce our new partnership with Capx, to build the future of fully autonomous and ownable AI apps. Thank you for your attention to this matter.
Capx AI tweet media
English · 29 · 14 · 81 · 6.5K
Node Smith retweeted
Capx AI
Capx AI@0xCapx·
2026 — the era of solo founders has begun! the moat is not code, it's ownership + distribution. time to ship agentic ai apps going public on day 1
English · 15 · 17 · 50 · 8K
Node Smith retweeted
Nikola Mrkšić
Nikola Mrkšić@nikola_mrksic·
I quit Apple because I knew Siri would never make an actual dent in the world. Not because the tech was bad, but because the entire concept had to be rethought.

General-purpose AI assistants try to do everything, and end up doing nothing well. When you optimize for breadth, you can't go deep enough to actually complete transactions. Siri can tell you about restaurants, but it can't book one, can't take payments, can't modify reservations, can't handle the messy back-and-forth that happens in real conversations. It's a search interface, not a transaction engine.

When we started @polyaivoice, we made one decision: we only do customer service calls. Yes! Just one single thing. We wanted to answer your calls so your customers can talk to AI agents that actually know what they're doing. And that focus is exactly why we handle 500M+ calls today.

This is what actually happens when you go narrow:

1️⃣ You can train on real data that matters. We've trained on hundreds of millions of actual customer service calls across hospitality, banking, logistics, and healthcare. And I'm not talking about web-scraped text or synthetic conversations. I'm talking about real calls with real edge cases, real accents, real background noise, real payment failures, and real angry customers. Our models know what "I need to move my reservation" sounds like in 45 different languages because they've heard it millions of times.

2️⃣ You can integrate deeply into actual business systems. PolyAI doesn't just talk; it pulls data from your CRM, checks availability in your booking system, processes payments through your payment gateway, updates your PMS, triggers workflows, and a lot more.

3️⃣ You can measure what actually matters to businesses. We don't track "user engagement" or "daily active users". We track containment rate, revenue per call, cost per contact, CSAT scores, and after-hours bookings captured. The Melting Pot generated $250K in six months from calls that would've gone to voicemail.

4️⃣ And this is the part that took me years to understand: you can actually be held accountable. When your AI is handling a business's main phone line, every failure is visible immediately. A hallucination doesn't just annoy a user; it costs the business a customer. So you build differently. You build with guardrails, with fallbacks, with human handoff protocols, and with real-time monitoring. You build like the business depends on it, because it does.

PolyAI Agent Studio will help every business have a customer service AI that works 24/7 and actually gets things done. Excited to have it deployed across your business.
Nikola Mrkšić tweet media
PolyAI@polyaivoice

PolyAI has raised $200M from Nvidia, Khosla Ventures, and multiple top VCs. We're one of the fastest-growing companies in the UK, and we handle 500M+ calls for:
• Marriott
• PG&E
• Gordon Ramsay's restaurants
• And 3,000 more real deployments
Which means that if you've ever called them, chances are you've talked to our voice agents.
Every restaurant we onboard books thousands in revenue within 30 days. But how? Because PolyAI works 24/7, answering every call in <2 seconds, and we also:
• switch between 45+ languages
• handle payments & cancellations
• verify identities
• and even upsell your services
If you want to try creating an agent with PolyAI, we built Agent Studio Lite to make it easy. Just enter any URL, and in 5 minutes it will analyze your website and build a working agent.
We're opening early access to a limited number of people. Comment "PolyAI" and we'll add you to the waitlist and give you 3 months for free!

English · 49 · 40 · 230 · 69.5K
capx unicorn
capx unicorn@CapxUnicorn·
7 is a special number
7 colors of the rainbow
7 days of the week
7 continents
7 notes in the musical scale
And wait for it…
7 AI apps live on Capx Super App
capx unicorn tweet media
English · 29 · 12 · 35 · 1.7K
Node Smith
Node Smith@NodeSmith_·
Best coding model is dropping soon:
Node Smith tweet media
English · 1 · 0 · 7 · 103
Node Smith retweeted
tyagi / capx.ai
tyagi / capx.ai@tyagicapx·
if we strip away marketing and look at the market honestly, most "competitors" in the ai agent space fall into two camps:
1. launchpad-only platforms
2. framework-only platforms
@0xcapx is neither. it is built as a full-stack platform because the lifecycle of an ai app doesn't end at minting a token or publishing a repo
English · 2 · 6 · 19 · 311
Node Smith retweeted
Capx AI
Capx AI@0xCapx·
Game On 🕹️
Capx AI tweet media
English · 22 · 10 · 62 · 7.1K
Node Smith retweeted
Capx AI
Capx AI@0xCapx·
🪂 CAPX Airdrop Unlock 2.0
The 2nd unlock (40%) of the Capx Airdrop is now LIVE
Who all are eligible 🤝
• Capx App Users
• AIRAA Campaign Participants
• Symbiotic Operators & Stakers
Go claim👇 app.capx.ai/airdrop-claim
English · 59 · 13 · 103 · 20.1K
Node Smith retweeted
Dexter
Dexter@dextrrr·
Over the past few weeks, I’ve been thinking a lot about our presale unlocks and what they mean for the future of the project. Then I saw @kAInnotkane push CIP-001 for Candy, settling early, reducing unlock pressure, and redirecting tokens into a growth pool. It gave me clarity. It confirmed I was already thinking in the same direction. So, today I’m putting an important decision in front of the community with CIP-002. The idea is simple: settle the presale early, give participants some liquidity now, and move the rest into a long-term Growth Rewards Pool to fund creators, integrations, and user acquisition. Because big unlock cliffs change behavior. Buyers wait. Holders worry. Momentum slows. I don’t want us managing charts, I want us building. If this passes, we get years of runway to focus on shipping and helping users win. If you’re eligible, I’d really appreciate your vote.
English · 12 · 4 · 18 · 216