Conductor AI Labs

23 posts

@conductorailabs

Small team dedicated to making cool stuff.

The Lab · Joined March 2026
41 Following · 11 Followers
Conductor AI Labs retweeted
Brian Johnson @_brian_johnson
@conductorailabs @waveshare00 The hidden usage split is the interesting part. Once cache reads and thinking show up separately, the tracker becomes way more useful than a raw output counter.
Conductor AI Labs @conductorailabs
Our latest experiment: a terminal token tracker on a USB screen. What worked, what was a challenge, and what was learned about hidden usage:

What worked:
- Getting the @waveshare00 USB screen display working.
- Green +tokens are what the agent displays in chat.
- Blue +tokens are hidden but still count toward your limit.
- Rainbow lights every time your tokens increase 🤣
- Real-time tracking on every message (through a Claude subscription).
- Trackers covering the 5h session limit and the weekly limit.
- Display of total tokens spent that day.

What was a challenge:
- Understanding how tokens are spent per message, and hidden token costs.
- Ensuring weekly totals remained matched to /usage.
- Deciding which cool data to leave off for space.
- Showing it work in real time next to a screen for this video.

What was learned about hidden token use:
- Claude says x tokens were output, but more 'usage' is happening.
- Cache reads and writes take up a lot of token usage per message.
- Thinking does not seem to be included in the output total.
- There are a lot of token-saving tactics, and it's an evolving thing.

Overall, it was a lot of fun working on this little prototype, and a lot was learned about how tokens are used through a Claude subscription. Going to continue experimenting with this cool little screen; it actually has an ESP32-S3 on it, which is a pretty sweet starting ground for creative fun. Tally-ho!

Want to talk about making cool stuff? Reach out and follow @conductorailabs along the way for more hypothetical fun. #experiment #ai #claude #creative #tokens #prototype #hardware #waveshare #esp32 #s3 #app #smart #development #budget #management #business #tech #education #motivation
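The green/blue split the tracker displays can be sketched from per-message usage metadata. A minimal sketch, assuming usage fields shaped like the Anthropic Messages API usage object (`input_tokens`, `output_tokens`, `cache_creation_input_tokens`, `cache_read_input_tokens`); treating all non-output tokens as "hidden" limit usage is our illustrative assumption, not Anthropic's documented billing formula:

```python
def split_usage(usage: dict) -> dict:
    """Split a per-message usage payload into the 'green' tokens the
    chat visibly produces (output) and the 'blue' tokens that are
    hidden but still accrue (input plus cache activity).

    Field names follow the Anthropic Messages API usage object; how
    each field weighs against the 5h/weekly limits is an assumption
    made for illustration."""
    visible = usage.get("output_tokens", 0)
    hidden = (
        usage.get("input_tokens", 0)
        + usage.get("cache_creation_input_tokens", 0)
        + usage.get("cache_read_input_tokens", 0)
    )
    return {"visible": visible, "hidden": hidden, "total": visible + hidden}

# Example payload (made-up numbers): cache reads dwarf the visible output.
msg_usage = {
    "input_tokens": 12,
    "output_tokens": 450,
    "cache_creation_input_tokens": 2048,
    "cache_read_input_tokens": 18000,
}
print(split_usage(msg_usage))
```

On a device like this, the three counters would simply be summed per message into the session and weekly tallies.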
Conductor AI Labs @conductorailabs
Introducing Knowladex: knowledge bases that write themselves. Powered by an MCP server via agent, or simple drag and drop, teams can organize their documents into an incredibly helpful wiki-style knowledge base. In the video, you'll see the result of asking the agent to ingest the Mythos system card, compiled into helpful cross-linked documents complete with charts and images.

⭐️ Share your wiki with an optional public URL
⭐️ Export to Obsidian or other platforms
⭐️ Use the same MCP across orgs

Beta opening soon. Save your spot. 🔗knowladex.com @karpathy #AI #Wiki #Obsidian #MD #Helpful
Andrej Karpathy @karpathy
@chalish_b @kepano In my experience there are approx. one thousand different pdf converters that are all equally terrible for anything except the simplest documents. Post the converted Mythos pdf, figures, tables and all. If good, happy to retweet as this is essential and missing infrastructure.
kepano @kepano
I wrote about Microsoft's Markitdown back in 2024, but it's grown into a big, messy project now :/ It would be more valuable if Microsoft provided high-quality official libraries for converting their proprietary formats to Markdown (.docx, .xlsx, .pptx, OneNote, etc.). For now, Obsidian's Markdown conversion options are:
1. Obsidian Web Clipper for converting URLs
2. Obsidian Importer for converting from apps like Notion, Apple Notes, Google Keep, Microsoft OneNote, Evernote, etc.
Vaishnavi @_vmlops

Microsoft built a tool that converts literally anything into clean Markdown for your LLM: PDFs, Word docs, Excel, PowerPoint, audio, YouTube URLs. One pip install and your AI pipeline stops choking on raw files forever. No custom parsers. No broken layouts. No garbled text. Just clean, structured Markdown your LLM can actually read. github.com/microsoft/mark…
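The "one pip install" workflow in the quoted tweet looks roughly like this; a minimal sketch assuming the `markitdown` package from the linked repo (the `[all]` extra and the `report.pdf` filename are illustrative):

```shell
# Install the converter; the [all] extra pulls in optional format support
pip install 'markitdown[all]'

# Convert a document to Markdown on stdout and redirect it to a file
markitdown report.pdf > report.md
```

The same conversion is available from Python via the package's `MarkItDown` class for use inside a pipeline.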

Conductor AI Labs @conductorailabs
One thing we've been working on and struggling with is the balance between hardware demands and user experience. Every citizen has their own voice and thought process that evolves. So, when their current world ends, we additively train all 30 models with enhanced decision making abilities to prepare them for the new world. It takes time and a lot of monitoring/health checking. Still a fun experiment and pushing the limits. #AI #FineTune #LocalModel #Train #ElevenHacks @ElevenLabs @Replit
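The end-of-world retraining pass described above can be sketched as a loop over the citizen models. Everything here (`fine_tune`, `health_check`, the checkpoint naming) is a hypothetical stand-in for whatever training stack is actually used; only the shape of the loop comes from the tweet:

```python
# Hypothetical sketch: when a world ends, every citizen model is
# additively trained on its last run, then health-checked before
# the new world starts. All names are illustrative stand-ins.

def fine_tune(model: str, transcript: list[str]) -> str:
    """Stand-in for an additive fine-tuning step on the run transcript."""
    return f"{model}+gen"  # pretend a new checkpoint was produced

def health_check(model: str) -> bool:
    """Stand-in for the eval/monitoring gate before re-entry."""
    return model.endswith("+gen")

def end_of_world(citizens: dict[str, list[str]]) -> dict[str, str]:
    retrained = {}
    for name, transcript in citizens.items():  # all 30 citizens
        checkpoint = fine_tune(name, transcript)
        if not health_check(checkpoint):
            raise RuntimeError(f"{name} failed its health check")
        retrained[name] = checkpoint
    return retrained

world = {f"citizen_{i:02d}": ["...run transcript..."] for i in range(30)}
next_world = end_of_world(world)
```

Serializing the loop like this is the slow part the tweet alludes to: each of the 30 models has to train and pass checks before the next world can begin.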
Conductor AI Labs retweeted
Conductor AI Labs @conductorailabs
If the world as we know it ends, what does starting over from scratch look like? We're running the experiment. A work in progress. 30 individual AI language models. One world. No starting knowledge, just pure instinct. Each model retrained to make better decisions from the previous run. Hear their proto-language thanks to @ElevenLabs and see their progress thanks to @Replit anewworld.replit.app #AI #ElevenHacks #Creative #Hackathon #Research
Conductor AI Labs retweeted
Alex Albert @alexalbert__
We released Claude Opus 4.6 just two months ago. Today we're sharing some info on our new model, Claude Mythos Preview.
Conductor AI Labs retweeted
Anthropic @AnthropicAI
Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing
Conductor AI Labs retweeted
Conductor AI Labs @conductorailabs
We gave your houseplants a voice. Because millions of people claim they can't keep a plant alive, no matter what they try. Now they can try talking to them. Get real-time voice reports from your houseplants just by asking, tapping, or texting. plantversation.com #ElevenHacks @ElevenLabs @Cloudflare @hgtv #gardening @Gizmodo @TechCrunch @BHG @_houseandgarden @RobotShop #funny #gardening #spring #planting #flowers
Conductor AI Labs @conductorailabs
@karpathy We have a similar workflow from experimenting with novels containing continued and intertwining storylines. We set up an MCP server so we can add to and pull from any knowledge base as we go. Added a few helpful tools and it's really smooth, plus it works remotely.
Andrej Karpathy @karpathy
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need in sharing the specific code/app; you just share the idea, and then the other person's agent customizes & builds it for your specific needs. So here's the idea in gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it, etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And of course, people can adjust the idea or contribute their own in the Discussion, which is cool.
Andrej Karpathy @karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
