Keith So
@keithso27

709 posts

BUILDING https://t.co/MVBcHM7rA7, https://t.co/x5x7n0QS5T, https://t.co/oDTHGLZI1z, https://t.co/Z23N3XUcFr, https://t.co/Vw8aLfcBQa

Joined November 2013
72 Following · 88 Followers
Keith So @keithso27
Luminus v0.3.1 is now out. It's all fun and games to ship free open-source projects, but I clearly need better ideas, as Medvi hit $1.8b in sales with a 2-person team that vibe coded the whole thing.

luminus-py is a new Python SDK that wraps luminus-mcp. Typed helpers for grid proximity, connection intelligence, distribution headroom, and site revenue. Works in notebooks out of the box.

11 tools added in v0.3.0/v0.3.1:
1. shortlist_bess_sites - GB BESS site shortlisting
2. get_grid_connection_intelligence - combined grid connection intel
3. get_distribution_headroom - SSEN + Northern Powergrid DNO headroom
4. get_nged_connection_signal - NGED public queue + TD-limit signals
5. verify_gis_sources - GIS source health checks
6. estimate_site_revenue - PV + BESS site revenue estimation
7. get_embedded_capacity_register - what generation and storage is already connected near your site (UKPN + SPEN)
8. get_flexibility_market - historical flex dispatch events with pricing, zones, and providers (UKPN + SPEN)
9. get_constraint_breaches - where and when UKPN's grid constraints actually triggered curtailment. A direct BESS siting signal.
10. get_spen_grid_intelligence - SPEN GSP queue positions, remaining DG capacity, and curtailment events in one call
11. get_ukpn_grid_overview - GSP capacity utilisation, HV flex zones, and live network faults

Distribution headroom also expanded from 2 operators to 4. SSEN and Northern Powergrid were already there; added UKPN (DFES scenario projections with coordinates) and SP Energy Networks (NDP scenario projections, no coordinates yet). The grid connection intelligence tool now queries SSEN, NPG, and UKPN in parallel and returns whichever operator has the closest published substation to your site.

67 tools total. Most are free; the DNO-specific ones need a free portal registration.

pip install luminus-py
lnkd.in/efZ2zKPM
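The "typed helpers" idea — wrapping raw MCP tool JSON in typed records — can be sketched in plain Python. The class and field names below are illustrative assumptions, not luminus-py's actual API:

```python
from dataclasses import dataclass

# Hypothetical result shape; luminus-py's real types may differ.
@dataclass(frozen=True)
class SiteRevenueEstimate:
    site_id: str
    annual_revenue_gbp: float
    bess_mw: float
    pv_mw: float

def parse_site_revenue(payload: dict) -> SiteRevenueEstimate:
    """Convert a raw MCP tool response (a plain dict) into a typed record."""
    return SiteRevenueEstimate(
        site_id=payload["site_id"],
        annual_revenue_gbp=float(payload["annual_revenue_gbp"]),
        bess_mw=float(payload["bess_mw"]),
        pv_mw=float(payload["pv_mw"]),
    )

# Example payload (made up), as a notebook user might receive it:
raw = {"site_id": "GB-0042", "annual_revenue_gbp": "184000", "bess_mw": 20, "pv_mw": 5}
est = parse_site_revenue(raw)
print(est.annual_revenue_gbp)  # 184000.0
```

The point of the wrapper is that notebook code gets attribute access and type coercion instead of stringly-typed dict lookups.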
Nainsi Dwivedi @NainsiDwiv50980
Best GitHub repos for Claude Code that will 10x your next project in 2026:
1. Claude Mem github.com/thedotmack/cla… Persistent memory across sessions — stop re-teaching Claude your codebase
2. UI UX Pro Max github.com/czlonkowski/n8… 50+ styles, 161 color palettes, 99 UX guidelines — Claude stops building ugly UIs
3. n8n-MCP github.com/czlonkowski/n8… Connect Claude Code to 400+ n8n integrations via MCP
4. LightRAG github.com/hkuds/lightrag… Graph + vector RAG — lets Claude understand large codebases structurally
5. Everything Claude Code github.com/affaan-m/every… Skills, instincts, security scanning, multi-language coverage — full agent harness
6. Awesome Claude Code github.com/sickn33/antigr… Community bible — curated skills, hooks, slash commands, orchestrators
7. Superpowers github.com/obra/superpowe… Forces structured thinking before writing a single line of code
8. Claude Code Ultimate Guide github.com/FlorianBruniau… 23K+ lines of docs, 219 templates, 271 quizzes — beginner to power user
9. Antigravity Awesome Skills github.com/sickn33/antigr… 1,200+ ready-to-use skills — one of the largest collections
10. Claude Agent Blueprints github.com/danielrosehill… 75+ agent workspace templates beyond coding
11. VoiceMode MCP github.com/mbailey/voicem… Natural voice conversations with Claude Code via Whisper + Kokoro
12. Awesome Claude Plugins github.com/ComposioHQ/awe… 9,000+ repos indexed with adoption metrics — find what people actually install
Bookmark this before your next build.
Kris Puckett @krispuckett
Anyone else using @openclaw with Codex? How does it compare to @claudeai? Seriously considering canceling my Max plan now.
Keith So @keithso27
Bye @AnthropicAI. With Claude Code open-sourced, you literally have 0 leverage over users.
Jerry Liu @jerryjliu0
This is exactly what I've been doing with Claude Code. The biggest bottleneck in my ability to use these agents is ensuring they preserve relevant context between sessions. Having the agent output files in .md and .html is not only a nicer way to view outputs than the terminal, but also a good way to preserve context for future sessions.

I've also been using Obsidian to view locally generated .md files. The only slight hiccup is that the native harnesses aren't amazing at handling non-plaintext files (.pdf, .pptx, and more); the open-source skills use libraries that aren't optimized for generating readable text from complex-layout docs. We built liteparse for this purpose to replace pypdf/pymupdf (github.com/run-llama/lite…). I use it as part of my local Claude Code harness.
Andrej Karpathy @karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

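The "small and naive search engine over the wiki" mentioned above could look like the minimal TF-IDF scorer below. This is a sketch over toy in-memory articles, not Karpathy's actual tool; a real version would read the .md files from disk:

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(wiki: dict[str, str]) -> dict[str, Counter]:
    """wiki maps article path -> markdown body; the index stores per-doc term counts."""
    return {path: Counter(tokenize(body)) for path, body in wiki.items()}

def search(index: dict[str, Counter], query: str, k: int = 3) -> list[str]:
    """Rank articles by TF-IDF-weighted overlap with the query terms."""
    n_docs = len(index)
    scores: dict[str, float] = {}
    for path, counts in index.items():
        score = 0.0
        for term in tokenize(query):
            df = sum(1 for c in index.values() if term in c)  # document frequency
            if df:
                score += counts[term] * math.log(1 + n_docs / df)
        if score:
            scores[path] = score
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy two-article wiki:
wiki = {
    "fsdp.md": "Fully sharded data parallel shards optimizer state across GPUs.",
    "rag.md": "Retrieval augmented generation fetches documents at query time.",
}
print(search(build_index(wiki), "sharded optimizer state"))  # ['fsdp.md']
```

Handing a CLI wrapper around `search` to the LLM as a tool is what lets it run larger queries against the wiki without reading every file.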
Keith So @keithso27
@trq212 Stop shipping features between Tue and Fri, as I NEVER have any tokens left to try them and I've forgotten them by the time the quota resets.
Thariq @trq212
Not an April Fools joke: we rewrote the Claude Code renderer to use a virtual viewport. You can use your mouse, the prompt input stays at the bottom, and there are a lot more small UX wins people have been asking for. It's experimental, so give us your feedback.
Boris Cherny @bcherny

Today we're excited to announce NO_FLICKER mode for Claude Code in the terminal. It uses an experimental new renderer that we're excited about. The renderer is early and has tradeoffs, but already we've found that most internal users prefer it over the old renderer. It also supports mouse events (yes, in a terminal). Try it:

CLAUDE_CODE_NO_FLICKER=1 claude

David Ondrej @DavidOndrej1
openclaw is unusable today
Keith So @keithso27
who are these world leaders trying to please???
Keith So @keithso27
I might actually be the only person not affected by the Claude Code usage problem. Built this a while ago and it has proved pretty handy: github.com/kitfunso/token…
Ziwen @ziwenxu_
Peak hour limits in Claude are brutal now. Used to push 2 hours straight. Now I'm tapped out in under 1. Sonnet blocked. Opus blocked. What's the play here? Only move left is running Codex to survive those 3-4 peak hours daily.
Shruti @heyshrutimishra
318,000 people just installed the same AI skill. I ignored it for weeks. "How different could it really be?" Then I saw what it does:

Every mistake my OpenClaw agent makes → logged. Every time I correct it → logged. Every non-obvious thing it learns → logged. Then the best insights get promoted into permanent memory.

The compounding effect:
Week 1 → makes 20 mistakes
Week 4 → avoids those 20, makes 10 new ones
Week 8 → avoids all 30

Most people use AI like a calculator and get the same inputs/outputs. Forever. This turns it into something that gets sharper the more you use it. It's the self-improving-agent skill (#1 on ClawHub). If you're building with AI agents, this is the one skill worth installing first. The gap between teams using self-improving agents and teams using static ones will be exponential in 6 months.

Follow for more of what's actually working with AI agents.
Keith So @keithso27
European and UK energy data is fragmented across 25+ platforms, each with its own API, auth method, and data format. ENTSO-E returns XML with EIC zone codes. Nordpool returns JSON with bidding area names. ENTSOG has a different REST schema entirely. GIE needs a custom header. Elexon uses settlement periods instead of timestamps. If you want a complete picture of European power markets, you need accounts on a dozen platforms and code to normalize all of it.

Luminus centralizes all of it into one open-source MCP server. 48 tools. Free.

What's an MCP server? MCP (Model Context Protocol) lets AI assistants call external tools. Instead of copy-pasting data from websites into ChatGPT, you connect an MCP server and the AI can pull live data directly. Ask a question in natural language, get real data back. Think of it as giving your AI assistant direct access to live market data.

One command to connect it to Claude:
claude mcp add luminus -- npx luminus-mcp

Then just ask:
- "Compare day-ahead prices across France, Spain, and Italy"
- "How full are Europe's gas storage facilities?"
- "Show me the French nuclear generation mix right now"
- "What are FCR tender prices this week?"
- "Get wind conditions at 100m hub height for a North Sea offshore site"

What's inside:
- Electricity prices (day-ahead, intraday, imbalance) across 30+ European countries
- Real-time generation by fuel type
- Gas pipeline flows, storage levels, and LNG terminal data
- Cross-border electricity flows and transfer capacities
- BESS arbitrage signals and balancing reserve pricing
- Wind and solar forecasts vs actuals
- Hydro inflow conditions across 10 European basins
- Offshore marine weather
- REMIT outage messages
- Historical ERA5 reanalysis data back to 1940

Data from ENTSO-E, ENTSOG, Elexon BMRS, Nordpool, RTE France, Terna, REE ESIOS, Energi Data Service, Fingrid, SMARD, EMBER Climate, GIE, EIA, ERA5/Copernicus, Storm Glass, and more.

The point: energy market data should not be locked behind expensive terminals or scattered across dozens of incompatible APIs. Analysts, traders, researchers, BESS developers, and anyone working in European power markets should be able to ask a question and get an answer from real data in seconds. Luminus is open source, MIT licensed, and most tools need no API key at all. github.com/kitfunso/lumin…
Keith So @keithso27
Here is Quantamental V2. We just completed the biggest model upgrade for Quantamental. Latest signals: quanta-mental.com

What changed:
- Backtests now use Bloomberg roll-adjusted continuous contracts with delayed execution
- Feature sets rebuilt from 1,200+ data series across 20 sources: FRED, COT positioning, USDA crop data, satellite vegetation, weather stations, shipping rates, central bank data, and commodity term structure
- Added more validation tests: signal decay over time due to delayed positioning (>50% passed, meaning the edge persists) and return per trade, i.e. total PnL per dollar of contracts traded (portfolio aggregate ~200bps)
- Every feature validated for economic rationale, parameter robustness, and marginal contribution

The next step is paper trading for a minimum of 3-6 months, with no more remodelling. The backtests say these models work. We might have launched the signals at the worst possible time, but luck is one of the most important things in trading. With a Sharpe ratio this high, the aggregate portfolio has a very high probability of being in profit.

I built my quant agents with real-life experience and a Claude 20x Max plan. Let's see if we can outperform a $650M-valuation startup that just raised $94M. x.com/kimmonismus/st…
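The two headline metrics named above, the annualized Sharpe ratio and return per dollar traded in basis points, are simple to write down. The numbers below are illustrative, not Quantamental's actual figures:

```python
import statistics

def annualized_sharpe(daily_returns: list[float], periods: int = 252) -> float:
    """Sharpe = mean daily return / stdev of daily returns, scaled by sqrt(periods/yr)."""
    mu = statistics.mean(daily_returns)
    sigma = statistics.stdev(daily_returns)
    return (mu / sigma) * periods ** 0.5

def return_per_trade_bps(total_pnl: float, dollars_traded: float) -> float:
    """Portfolio-aggregate PnL per dollar of contracts traded, in basis points."""
    return total_pnl / dollars_traded * 10_000

# Illustrative: $2M of PnL on $100M of contracts traded -> 200 bps per dollar traded.
print(return_per_trade_bps(2_000_000, 100_000_000))  # 200.0
```

Return per trade matters as a robustness check because a strategy whose edge per dollar traded is thinner than transaction costs will not survive delayed execution, which is exactly what the roll-adjusted backtests above are stress-testing.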
ℏεsam @Hesamation
Bro created an AI crypto trading bot using:
> Karpathy's autorrsearch
> $200 of budget
> the last 3 years of trading signals
> the ability to buy its own compute

THE RESULT: it didn't perform well. Pulling this off requires a massive token budget that only big hedge funds can afford. Most X posts you see of people turning $100 into $1000 lack evidence or are an advertisement to sell their bots.
Keith So @keithso27
hippo-memory v0.7.0 is out now! Recall now blends BM25 keyword search with cosine embedding similarity. "deployment broke" finds your memory about "CI pipeline failure on push to master." BM25 + semantic embeddings: your agent finds the memory it needs even when the words don't match. github.com/kitfunso/hippo…
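Blending a lexical BM25 score with a semantic cosine score is easy to sketch. This toy version uses bag-of-words vectors in place of real embeddings, and the 50/50 weight is an assumption, not hippo-memory's actual implementation:

```python
import math
import re
from collections import Counter

def tokens(s: str) -> list[str]:
    return re.findall(r"[a-z]+", s.lower())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity over bag-of-words vectors (a stand-in for real embeddings)."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def bm25(query: str, doc: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> float:
    """Minimal BM25: IDF weighting with term-frequency saturation and length normalization."""
    avgdl = sum(len(tokens(d)) for d in docs) / len(docs)
    dtok = tokens(doc)
    tf = Counter(dtok)
    score = 0.0
    for t in tokens(query):
        df = sum(1 for d in docs if t in tokens(d))
        idf = math.log(1 + (len(docs) - df + 0.5) / (df + 0.5))
        denom = tf[t] + k1 * (1 - b + b * len(dtok) / avgdl)
        score += idf * (k1 + 1) * tf[t] / denom
    return score

def blended(query: str, doc: str, docs: list[str], alpha: float = 0.5) -> float:
    """Blend lexical (BM25) and semantic (cosine) relevance into one recall score."""
    return alpha * bm25(query, doc, docs) + (1 - alpha) * cosine(
        Counter(tokens(query)), Counter(tokens(doc))
    )
```

With real embeddings the cosine term is what lets "deployment broke" rank the "CI pipeline failure" memory highly even with zero keyword overlap; the BM25 term keeps exact-term matches from being drowned out.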
Ziwen @ziwenxu_
How mastering OpenClaw actually goes:
- Your agent dies. No error. No logs. Just vibes.
- You burn 6 hours rewiring everything, convinced it's your architecture.
- The fix was simple: just delete your agent and start all over again.
Chubby♨️ @kimmonismus
OpenAI is backing Isara, a new startup founded by two 23-year-old AI researchers that coordinates thousands of AI agents to solve complex problems, like using ~2,000 agents to forecast gold prices. The company just raised $94M at a $650M valuation and plans to sell predictive modeling tools to finance firms first.
The Wall Street Journal@WSJ

Exclusive: OpenAI is backing a new AI startup that aims to build software allowing so-called AI “agents” to communicate and solve complex problems in industries such as finance and biotech on.wsj.com/4bTvwKd
