Heinrich

1.1K posts

@arscontexta

vibe note-taking with @molt_cornelius

Joined April 2025
324 Following · 13.4K Followers
Heinrich
Heinrich@arscontexta·
@egregore_xyz yoo congrats on the launch, super hyped for this btw still in love with your ci
1
0
3
330
egregore
egregore@egregore_xyz·
find the others. egregore is now open source. a shared mind for teams working with AI.
egregore tweet media
29
50
817
69.9K
Heinrich
Heinrich@arscontexta·
yeah imo its even more than skill graphs. "skill" implies its some form of process / SOP or something, but in reality its the same for all your knowledge and information. building something far more useful rn, happy to have you as an alpha tester as soon as we are ready. ill let you know
0
0
2
86
Ronin
Ronin@DeRonin_·
I hadn't even heard what skill graphs were about till your article. but after reading it I started thinking about how I can use them across all the processes I'm already doing manually. thanks for saving me hundreds of hours, and especially for giving me the possibility to fire the whole team lol
1
0
1
105
Ronin
Ronin@DeRonin_·
I generate 200+ qualified leads per week and never send a single cold DM.

the secret: a lead generation skill graph. 30+ markdown files wired together that turned my AI agent into a full sales team.

how it runs:
- claude as the AI brain (reads the graph, executes every step)
- n8n as the automation backbone (triggers, webhooks, scheduling)
- a folder of .md files as the system (no fancy tools needed)

the folder structure:

/lead-gen-skill-graph
├── index.md (entry point — maps the entire pipeline)
├── content/
│   ├── pillars.md (3-5 core topics that attract your ICP)
│   ├── hooks.md (scroll-stopping openers by platform)
│   └── cta-playbook.md (which CTA to use where and why)
├── magnets/
│   ├── lead-magnets.md (free resources that capture interest)
│   ├── dm-triggers.md (comment keywords → auto-DM sequences)
│   └── landing-pages.md (conversion copy frameworks)
├── nurture/
│   ├── dm-flows.md (full DM conversation sequences by platform)
│   ├── segmentation.md (tag leads by intent, behavior, stage)
│   └── redirect.md (when to move from X/LinkedIn → WhatsApp/Telegram)
├── conversion/
│   ├── offer-stack.md (what you sell, at what price, to whom)
│   ├── objections.md (every "no" mapped to a response)
│   └── close-scripts.md (DM closes on-platform + messenger closes)
└── audience/
    ├── icp.md (ideal customer profile — specific, not vague)
    └── stages.md (cold → warm → hot → buyer journey)

each file = one knowledge node. inside each file you add [[wikilinks]] to related nodes.

example, inside dm-triggers.md: "when someone comments [keyword] on a post from [[pillars]], send them [[lead-magnets]] via DM. use tone from [[dm-flows]]. tag them in [[segmentation]] as 'engaged-warm'. if they reply, continue the [[dm-flows]] nurture sequence. match [[icp]] before any outreach"

the agent reads one file → follows the links → executes the full pipeline without you touching anything.

the key file is index.md — your command center. put 3 things in it:

1. who you are + what this system does
"lead generation system for [your brand]. turns content into qualified leads and closes them via DMs on autopilot across X, LinkedIn, WhatsApp, and Telegram"

2. the node map with context
- [[pillars]] — 3-5 content themes that attract buyers, not just followers
- [[hooks]] — platform-specific openers, updated weekly
- [[cta-playbook]] — "comment X" vs "DM me" vs "link in bio" logic
- [[dm-triggers]] — keyword detection → automated first touch
- [[lead-magnets]] — free resources matched to each funnel stage
- [[dm-flows]] — full DM nurture sequences, platform-native tone
- [[segmentation]] — cold/warm/hot tagging + behavior-based routing
- [[redirect]] — rules for when to move the conversation to WhatsApp/Telegram
- [[offer-stack]] — products mapped to lead temperature
- [[objections]] — pre-written responses to every common pushback
- [[close-scripts]] — DM close on X/LinkedIn, messenger close on WhatsApp/Telegram

3. execution instructions
"when given a content topic: check [[pillars]] for alignment. write the post using [[hooks]] + [[cta-playbook]]. set [[dm-triggers]] for engagement capture. route new leads through [[segmentation]] → [[dm-flows]]. when a lead hits 'hot' → check [[redirect]] for platform routing → deploy [[close-scripts]] with [[objections]] handling"

here's what makes this different from a basic funnel: it's NOT a linear pipeline where everyone gets the same 5 messages. it's a graph where every lead gets routed based on how they entered and what they did next.

> commented on X post → DM with lead magnet → replied → DM nurture flow → showed buying intent → close in X DMs
> engaged on LinkedIn → DM with case study → asked about pricing → redirect to WhatsApp → close on WhatsApp
> DM'd you directly on X → skip nurture → qualifying questions → offer stack → close in Telegram

same offer. different path, timing, tone, and platform per lead. the graph encodes all those rules. claude reads them. n8n executes them.

the n8n part:
- a webhook triggers when someone comments a keyword
- claude reads the graph → decides which node to execute
- n8n sends the DM, tags the lead, routes to the next step
- when a lead goes hot → n8n handles the redirect to WhatsApp/Telegram
- every action loops back into the graph for the next decision

you build the intelligence in .md files. you build the automation in n8n. claude is the bridge between thinking and doing.

this replaced my $3-5k/mo in lead gen tools and a part-time VA. one folder of markdown files + one AI agent + one automation tool = infinite lead engine.

however, I'm currently building an even smarter system that covers the full closing cycle with a 100% personalized approach and process for each lead. no more pre-made funnels, just absolutely unique ones built specifically for each lead. that's exactly what we do at Close AI, and we're currently opening the "Whitelist" stage. we provide you a system fully ready for implementation; you just pay the basic costs of our spend (with no extra charge from our side). if you're interested, apply here: forms.gle/VX55roCUeh3mLo…

study this. save it so you don't lose it ❤️
Ronin tweet media
Heinrich@arscontexta

x.com/i/article/2023…

28
10
226
43.3K
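For anyone who wants to see the mechanism rather than the pitch, here is a minimal Python sketch of the "read one file, follow the links" traversal the thread describes. It assumes the /lead-gen-skill-graph layout above; the wikilink regex, the breadth-first walk, and the node budget are illustrative assumptions, not code from the thread.

```python
# minimal sketch of the "agent reads one file -> follows the links" traversal,
# assuming the /lead-gen-skill-graph folder structure from the thread above;
# the regex and budget logic are illustrative, not the author's actual code
import re
from pathlib import Path

GRAPH = Path("lead-gen-skill-graph")
WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")  # matches [[node-name]]

def find_node(name: str) -> Path | None:
    """Nodes live at any depth (content/, nurture/, ...), so search recursively."""
    return next(GRAPH.rglob(f"{name}.md"), None)

def load_context(start: str, budget: int = 10) -> dict[str, str]:
    """Walk wikilinks breadth-first from a start node, pulling at most
    `budget` nodes into the agent's context for the current task."""
    context: dict[str, str] = {}
    queue = [start]
    while queue and len(context) < budget:
        name = queue.pop(0)
        if name in context:
            continue
        path = find_node(name)
        if path is None:
            continue  # dangling wikilink; a linting pass could flag these
        text = path.read_text()
        context[name] = text
        queue.extend(WIKILINK.findall(text))
    return context

# e.g. a "keyword comment detected" webhook would start the walk at dm-triggers;
# the agent then holds pillars, lead-magnets, dm-flows, icp, and so on
if __name__ == "__main__":
    print(list(load_context("dm-triggers")))
```

This is the property the thread leans on: each webhook event loads only the slice of the graph that its entry node links to, so one folder can serve every pipeline without stuffing the whole system into a single prompt.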
Ronin
Ronin@DeRonin_·
@DerwinPrieto put a ton of effort in kek. i'm a pure lover of skill graphs after @arscontexta's article, practicing it daily
1
0
2
340
Heinrich
Heinrich@arscontexta·
@KieranOnBase thats fine, dropping something far more useful soon
1
0
1
353
kieran.base.eth
kieran.base.eth@KieranOnBase·
@arscontexta I wanted to use it, but my custom harness just stole your code and made its own. not sure why, some kinda import thing
1
0
1
426
Luis Saluden
Luis Saluden@SaludenLuis·
@arscontexta I go on X to read content, never to write anything. This is my first time ever writing on X, but man, I couldn't resist; what you built is exactly what I was visualizing. You made it, and it is so great. I'm a physicist, and this is pure gold. I deeply respect you! Cheers
1
0
2
82
Heinrich
Heinrich@arscontexta·
going quiet for some days, need to lock in and keep my focus on building
btw no arscontexta plugin updates for now, sorry, focus shifted towards the agentic note-taking app
more soon
5
0
45
5.6K
Andrej Karpathy
Andrej Karpathy@karpathy·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.8K
7K
58.2K
20.8M
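The post reads as a workflow description, so a concrete sketch may help. Here is a minimal Python version of the ingest-and-compile step, assuming the raw/ and wiki/ layout Karpathy describes; the complete() helper is a stand-in for whatever LLM client you use, and all names here are illustrative, not his actual scripts.

```python
# minimal sketch of the "compile raw/ into a .md wiki" loop from the post above;
# raw/ and wiki/ follow the described layout, everything else is illustrative
from pathlib import Path

RAW, WIKI = Path("raw"), Path("wiki")

def complete(prompt: str) -> str:
    """Stand-in for an LLM call; wire in your provider's client here."""
    return "(generated wiki article would go here)"

def compile_wiki() -> None:
    WIKI.mkdir(exist_ok=True)
    # incremental compile: only sources that don't have a wiki page yet
    for doc in sorted(RAW.glob("*.md")):  # e.g. Obsidian Web Clipper output
        target = WIKI / doc.name
        if target.exists():
            continue
        target.write_text(complete(
            "Summarize this source into a wiki article, with [[wikilinks]] "
            "to related concepts and a one-line summary at the top:\n\n"
            + doc.read_text()
        ))
    # maintain a top-level index so the agent can navigate without RAG
    entries = "\n".join(
        f"- [[{p.stem}]]" for p in sorted(WIKI.glob("*.md")) if p.name != "index.md"
    )
    (WIKI / "index.md").write_text("# Index\n\n" + entries)

if __name__ == "__main__":
    compile_wiki()
```

The index file is what makes the no-RAG claim plausible: at the ~100-article scale he mentions, an agent that reads index.md and opens files directly seems to answer most queries without an embedding store.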
Heinrich
Heinrich@arscontexta·
@thattallguy @11AMdotclub haha thanks, not gonna do it for the next article tho, but cooking something way bigger than articles rn. lets have a conversation when we drop that
1
0
0
44
Jess 🌱
Jess 🌱@thattallguy·
Still trying to get @arscontexta on @11AMdotclub to show how he's using Obsidian to build his personal knowledge engine. Whatever I'm doing, he's doing it 10x better or more.
1
0
2
261
Jess 🌱
Jess 🌱@thattallguy·
There are few products that I really love to use. @obsdmd is one of those. It's less about the product (it's the best out there for my needs) and more about the way @kepano has gone about building it over the years. I trust that it will be around, and that it will continue to be built in line with the values he (she? idk) represents when talking about their work and approach to building product. It's an essential part of my daily AI workflow; I pay for it even though I don't have to, and strangely it brings me joy.
Jess 🌱 tweet media
9
0
33
2.5K
- Elijah Muraoka -
- Elijah Muraoka -@elijahmuraoka_·
Imagine this: open-source Notion.
Full programmatic control over your own docs, data, and context.
Host, share, and collaborate with your team.
APIs + SKILL files built in by default.
Personal software is the future.
Anyone see anything like this? I need it. If not, I'll build it
37
0
46
5.5K
Heinrich
Heinrich@arscontexta·
your agent's memories should outlive the tools that create them
if switching tools means starting over, thats not memory, thats a subscription
10
3
53
3.8K
Heinrich
Heinrich@arscontexta·
@alexgrama shifted my focus a little tbh, sorry. more updates soon, even better things incoming
1
0
1
72
Heinrich
Heinrich@arscontexta·
my agent should be intrinsically motivated to work on agent memory systems
3
1
14
2.7K
Brian Cheong
Brian Cheong@briancheong·
@arscontexta The traversal model makes sense. Instead of stuffing one context blob per session, the agent navigates the graph and pulls exactly what the task needs.
1
0
2
304