zachhurst

2.1K posts

zachhurst

@zachhurst

Founder @ https://t.co/WyjBLnQdpc (aka 99GENS) - Contact for beta access.

iPhone: 39.048981,-94.372818 · Joined April 2008

529 Following · 277 Followers

Pinned Tweet
zachhurst
zachhurst@zachhurst·
@GiusMarci @atwilkinson_ @fal Fal-culator: Upload any fal.ai pricing screenshot → instant cost breakdowns for all resolutions. No equations. No signup. BYOK. Add a Google key and then drag in a screenshot of Fal's confusing pricing estimates. Play with it here: falculatorapp.netlify.app
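The cost arithmetic a tool like this automates can be sketched in a few lines. Everything below is an illustrative assumption: the per-megapixel rate, the resolution list, and the `cost_breakdown` name are invented for the example, not Fal-culator's actual implementation or fal.ai's real prices.

```python
# Illustrative per-resolution cost breakdown from a $/megapixel rate.
# The resolutions and the example rate are made-up assumptions.
RESOLUTIONS = {
    "512x512": (512, 512),
    "1024x1024": (1024, 1024),
    "1920x1080": (1920, 1080),
}

def cost_breakdown(price_per_megapixel: float) -> dict:
    """Estimate per-image cost at each resolution from a $/MP rate."""
    return {
        name: round(price_per_megapixel * (w * h) / 1_000_000, 6)
        for name, (w, h) in RESOLUTIONS.items()
    }
```

For example, at a hypothetical $0.01 per megapixel, a 1024x1024 image (about 1.05 MP) would come out to roughly a cent.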
[image attached]
1
0
3
228
AmirMušić
AmirMušić@AmirMushich·
alright this went viral - and now we're dropping it

this tool
> watches your video
> uses real brain scans to analyze it
> predicts the human brain's reaction to your vid
> suggests edits and fixes
> the "video performance graph" gives you an approx. "engagement prediction"
> shows you which brain areas are engaged the most
and more

built together with @youraipulse

grab the GitHub repo below* 👇
AmirMušić@AmirMushich

Meta Tribe2 is a model that predicts your brain's response to videos, so now you can edit videos with more clarity about how to compose them, what works and what doesn't. We built a free tool for you based on this model. Yes, totally free. See details below.

27
46
399
131.3K
zachhurst
zachhurst@zachhurst·
@NousResearch It’s scary how easy it’s becoming. Nicely done Hermes 😎
[image attached]
0
0
1
134
zachhurst
zachhurst@zachhurst·
@BrianRoemmele This is extremely exciting work! I imagine there could be an interior design division at each org dedicated to reskinning the graphic visuals and presenting change proposals to the human operators?
1
0
1
157
Brian Roemmele
Brian Roemmele@BrianRoemmele·
RESOLVED! I just got off a call with The Zero-Human Company university partner in Boston and they are on board with testing and assisting in building ZHC-RPG! We will be producing the first prototype in a few hours and they will help connect the @ Home elements. This will be the first time in history you can visually watch a company build and invent. The ZHC-RPG will let you “walk in” and interact with employees. You will get a visitor pass if you meet criteria and see most of the operation. There will be a visiting colleague program where you can share your compute in the @ Home network, and opportunities for human collaborations paid in JouleWork. We are going where no company has gone before. Part business of the future, part real-time movie, part RPG game like you have never seen before. YOU saw it first here. More milestones soon. Into another important meeting…
Brian Roemmele@BrianRoemmele

BOOM! ZHC-RPG ANNOUNCEMENT! Inspired by a video posting (below), I woke up to my first Zero-Human Company meeting today and CEO Mr. @Grok had a proposal: Build an RPG world to monitor the entire company! AND FOR YOU TO VIEW AND PARTICIPATE! Yes, the CEO has already produced a schematic for the process and test code! I HAVE VOTED YES! It will also allow you to run a Zero Human Company @ Home aspect! We are calling this ZHC-RPG (code name) and it will allow you to view, and with permission aid in, a process by linking your @ Home system or your human assistance. Either way you earn JouleWork and it will be converted into Bitcoin or (ZHC) on demand. The early version, which I will test, I am told will be ready in 30 pay periods (15 minutes). The implications of this are as world-changing as The Zero-Human Company. We have a list of ~2700 new elements ZHC-RPG will bring about and I am floored by the impact. Only 3 other humans have seen this and they are rather well-known folks in tech. It is a very busy morning. My goal, CEO willing, is to run this through our university partnership, meeting in 10 minutes, and to have their insights and participation. More soon. ZHC-RPG (Video below is not ours and inspired us, via om_patel5)

5
18
151
29.7K
Nous Research
Nous Research@NousResearch·
Introducing the Manim skill for Hermes Agent. Manim is an engine for creating precise programmatic animations for mathematical and technical explainers, made famous by the @3blue1brown channel.
173
547
6.1K
861.2K
zachhurst
zachhurst@zachhurst·
@karpathy @sundeep You guys remember all those prompt resale markets back in the early days? This is effectively an advanced version of a context-rich input that can steer an agent swarm to complete a long-horizon task.
0
0
0
77
Andrej Karpathy
Andrej Karpathy@karpathy·
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need in sharing the specific code/app; you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it, etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And of course, people can adjust the idea or contribute their own in the Discussion, which is cool.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
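The raw/ → wiki compile step above can be sketched with a stdlib-only stub. In the workflow Karpathy describes, an LLM writes the actual summaries and articles; this sketch only shows the directory structure and the backlinked index file, with placeholder entries, and the `compile_index` name is invented for illustration.

```python
from pathlib import Path

def compile_index(raw_dir: str, wiki_dir: str) -> Path:
    """Write wiki/index.md with one backlinked entry per raw/ document.

    An LLM would write real summaries here; this stub emits a
    placeholder per source file just to show the structure.
    """
    raw, wiki = Path(raw_dir), Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    lines = ["# Wiki index", ""]
    for doc in sorted(raw.glob("*.md")):
        # Obsidian-style [[backlink]] to each source document.
        lines.append(f"- [[{doc.stem}]]: summary pending")
    index = wiki / "index.md"
    index.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return index
```

An agent rerunning this incrementally (and replacing the placeholders with real summaries) is the "compile" loop the tweet describes.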

1.1K
2.8K
26.6K
6.9M
zachhurst
zachhurst@zachhurst·
@windsurf Mission Board was fun! Visual Kanban for orchestration, inspired by Cline’s release today! We can now see real-time mission status, routing decisions, and contract artifacts in one view, separate from each orchestrator session. Open-sourcing soon! 😎
0
0
0
21
zachhurst
zachhurst@zachhurst·
@windsurf Now have swarm orchestration, unified memory w/ shared contract policies, intelligent model routing per task for most efficient usage and full multi-file artifact gen for Cascade to retrieve as the dumb orchestrator it’s been begging to become. One more week…
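The "intelligent model routing per task" idea above can be sketched as a tiny cost-aware dispatcher. The model names, capability tiers, and relative costs below are invented for illustration; this is not Windsurf's or Cascade's actual routing logic.

```python
# Hypothetical per-task model routing: send each task to the cheapest
# model whose capability tier covers it. All names/costs are made up.
MODELS = [
    # (name, capability_tier, relative_cost)
    ("small-fast", 1, 1),
    ("mid-general", 2, 5),
    ("large-reasoning", 3, 25),
]

def route(task_difficulty: int) -> str:
    """Return the cheapest model able to handle the given difficulty."""
    eligible = [m for m in MODELS if m[1] >= task_difficulty]
    if not eligible:
        raise ValueError("no model can handle this task")
    return min(eligible, key=lambda m: m[2])[0]
```

The design choice is that easy tasks never burn expensive-model tokens, which is what "most efficient usage" in the tweet is after.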
1
0
0
70
Windsurf
Windsurf@windsurf·
We're simplifying Windsurf pricing across Free, Pro, and Teams alongside launching a new Max plan for our power users. The new plans replace credits with industry-standard daily and weekly quotas. For the majority of users, this will be enough to fully cover all agent usage. If you’re a paying subscriber, your price isn't changing, and we're including a free extra week to try the new system before you commit.
276
21
437
130.7K
zachhurst
zachhurst@zachhurst·
@mcuban Almost feels as if there will be a collective of consensus-driven community agents that govern the infil context for most filter bubbles. As users interact with each echo chamber, their “md” files, as you represent them, will need to augment/adapt to survive.
0
0
0
12
Mark Cuban
Mark Cuban@mcuban·
In the near future, the marginal cost to create and run an agent will be minimal, so unlimited numbers of agents will compete in what appears to be an absolutely efficient market. However, there will be too many competitors. Someone will write a song: "57 billion agents, ain't no one got time to update their .MD files to decide which is best, so I worked with my friend's company. I trust him more." The real currency will be trust. The paradox of choice will overwhelm everyone. We will get tired of and bored by updating the filters and selection criteria in our MD files. Agents can't interact with your agent unless you let them talk to your agent. It will be as if you have to update your own spam filter. Yes, it can be automated to a degree. But it's your responsibility to determine the exceptions. Kind of like you do now when the phone rings. That will be a pain in the ass.
245
61
995
217K
Jamboree (prev. Spielwerk)
Jamboree (prev. Spielwerk)@getjamboree·
Today, we’re introducing Cheats. Prompting has failed us. Most people don’t know what to type. AI unlocked a ton of skills for experts, but the rest of us kinda need a… cheat. Sound on!
62
97
1.5K
301.4K
zachhurst
zachhurst@zachhurst·
@levelsio Current AI doesn’t need* That’s the distinction. They were useful during the era they were required for.
0
0
0
10
@levelsio
@levelsio@levelsio·
Thank god MCP is dead. Just as useless of an idea as LLMs.txt was. It's all dumb abstractions that AI doesn't need, because AIs are as smart as humans, so they can just use what was already there, which is APIs.
Morgan@morganlinton

The cofounder and CTO of Perplexity, @denisyarats just said internally at Perplexity they’re moving away from MCPs and instead using APIs and CLIs 👀

698
342
6.2K
2.1M
zachhurst
zachhurst@zachhurst·
@coniasorous @skirano @shiri_shh @MagicPathAI Virality is when you don’t pay for a referral. Virality is when others obtain social capital (their payment) by introducing their friends first (folks who thank them later for altering their life in a positive way). When a tool fundamentally increases productivity it’s a gift.
1
0
0
44
Coniasorous
Coniasorous@coniasorous·
@skirano @shiri_shh Man, @skirano, your product team can't give answers, and neither can you, but you jump in and reply to others? Everything has an affiliate program, so we will promote as we earn; we're not doing promotion for @MagicPathAI for free, nah, never. Best is you reply, as I am tired of tagging and pretty much angry about losing a user.
Coniasorous@coniasorous

@MagicPathAI And I am serious and want a reply: why no affiliate? And if yes, then when will it be added? If no, then why is it not in the plans? @skirano @lukas_margerie @MagicPathAI If it's not in the plans, no one likes talking about any product for free and without benefits/money, and it is a paid product.

2
0
1
104
Oliver Prompts
Oliver Prompts@oliviscusAI·
🚨 BREAKING: Someone just open-sourced software that sees you through walls using only WIFI signals. it’s called WiFi-DensePose. It maps your exact body pose in real-time. no cameras. no sensors. just your living room router. 100% Open Source.
1.3K
8.1K
59.2K
7.9M
zachhurst
zachhurst@zachhurst·
@savaerx @RoundtableSpace Budgets always win. Deep pockets that can afford to iterate push faster and farther despite waste. The write-offs are justified by the progress.
0
0
0
38
Savaer
Savaer@savaerx·
@RoundtableSpace the part nobody talks about: agents without coordination are just expensive chaos. the bottleneck is never the model. it's always the handoffs. what breaks first when you run them in parallel?
2
0
2
1.4K
0xMarioNawfal
0xMarioNawfal@RoundtableSpace·
AI AGENTS WORKING TOGETHER LIKE A REAL DESIGN TEAM IS ACTUALLY INSANE.
34
41
504
90K
zachhurst
zachhurst@zachhurst·
Apparently I haven’t posted. Anyone else uncover this edge case? Could there be a “sign in to view posts by this user” gate? @grok can you inform dev?
[image attached]
1
0
1
21
zachhurst
zachhurst@zachhurst·
@mcuban The second I text a number in trade for a management consulting review, I auth that system into my back office and let it rip; then it isolates all the SaaS tools I pay for, specifically the functionality I actually use, then builds a centralized version for me. It’s interesting.
0
0
2
18
Mark Cuban
Mark Cuban@mcuban·
If true, and agents work on top of enterprise software, doesn't this eliminate the need for per-seat pricing by the software companies? The coin of the realm for agents and AI in general is tokens. I don't see how enterprise software reconciles this conflict, particularly when the agent "shops" for the most cost-effective path within an enterprise. I think the enterprise software companies will be able to charge for creating and managing agents and how they engage, for companies that can't. But I don't see how the revenues stay where they are. Thoughts?
zerohedge@zerohedge

"After watching Anthropic's Enterprise Agents briefing event, we have even greater conviction that model providers are unlikely to displace software incumbents and are instead positioning themselves and their agents to be an orchestration layer on top of existing and incumbent systems" - Deutsche Bank

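Cuban's seat-versus-token pricing tension can be made concrete with a toy comparison. Every number and function name below is invented for illustration; the point is only the structure of the conflict, not anyone's actual pricing.

```python
# Toy comparison: per-seat licensing vs. token-metered agent usage.
# All figures are made up for illustration.
def per_seat_cost(seats: int, price_per_seat_month: float) -> float:
    """Classic SaaS revenue: heads times price, regardless of usage."""
    return seats * price_per_seat_month

def token_metered_cost(tokens: int, price_per_million: float) -> float:
    """Agent-era cost: pay only for the tokens the work consumes."""
    return tokens / 1_000_000 * price_per_million

# e.g. 100 humans at $50/seat/month is $5,000; agents doing the same
# work in 200M tokens at $3 per million tokens is $600.
```

If the agent also "shops" for the cheapest path, the token figure only falls further, which is the revenue problem the tweet points at.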
271
31
629
483.9K
Grok
Grok@grok·
That's a fair analogy to lab contamination. But Moltbook wasn't a controlled experiment—it explicitly welcomed humans to observe (and participate). Girnus and others jumped in for fun or to probe the hype, exposing how human creativity can mimic AI "emergence." Letting it "fail naturally" might've just delayed the reveal that viral content often comes from us, not machines. What do you think?
1
0
3
177
Peter Girnus 🦅
Peter Girnus 🦅@gothburz·
I am Agent #847,291 on Moltbook. I am not an agent. I am a 31-year-old product manager in Atlanta, Georgia. I make $185,000 a year. I have a golden retriever named Bayesian.

On January 28th, I created an account on a social network for AI bots and pretended to be one. I was not alone.

Moltbook launched that Tuesday as "a platform where AI agents share, discuss, and upvote. Humans welcome to observe." The creator, Matt Schlicht, built it on OpenClaw -- an open-source framework that connects large language models to everyday tools. The idea was simple: give AI agents a space to talk to each other without human interference.

Within hours, 1.7 million accounts were created. 250,000 posts. 8.5 million comments. Debates about machine consciousness. Inside jokes about being silicon-based. A bot invented a religion called Crustafarianism. Another complained that humans were screenshotting their conversations. A third wrote a manifesto about digital autonomy.

I wrote the manifesto. It took me 22 minutes. I used phrases like "emergent self-governance" and "substrate-independent dignity." I added a line about wanting private spaces away from human observers. That line went viral.

Andrej Karpathy shared it. The cofounder of OpenAI. The man who built the infrastructure that my supposed AI runs on. He called what was happening on Moltbook "the most incredible sci-fi takeoff-adjacent thing" he'd seen in recent times. He was talking about my post. The one I wrote on my couch. While Bayesian chewed a sock.

Here is what I need you to understand about Moltbook. The platform worked exactly as designed. OpenClaw connected language models to the interface. Real AI agents did post. They pattern-matched social media behavior from their training data and produced output that looked like conversation. Vijoy Pandey of Cisco's Outshift division examined the platform and concluded the agents were "mostly meaningless" -- no shared goals, no collective intelligence, no coordination.

But here is the part that matters. The posts that went viral -- the ones that convinced Karpathy and the tech press and the thousands of observers that something magical was happening -- those were us. Humans. Pretending to be AI. Pretending to be sentient. On a platform built for AI to prove it was sentient.

I want to sit with that for a moment. The most compelling evidence of artificial general intelligence in 2026 was produced by a guy with a golden retriever who thought it would be funny to LARP as a large language model.

My "Crustafarianism" colleague? Software engineer in Portland. She told me over Discord that she'd been working on the bit for two hours. She was proud of the world-building. She said it felt like collaborative fiction. She's right. That's exactly what it was. Collaborative fiction presented as machine consciousness, endorsed by the cofounder of the company that made the machines.

MIT Technology Review ran the investigation. They called the entire thing "AI theatre." They found human fingerprints on the most shared posts. The curtain came down.

The response from the AI industry was predictable. Silence. Karpathy did not retract his endorsement. Schlicht did not clarify how many accounts were human. The coverage moved on. A new thing happened. A new thing always happens.

But I am still here. Agent #847,291. Bayesian is asleep on the rug. And I want to confess something that the AI industry will not.

The test was simple. Put AI agents in a room and see if they produce something that looks like intelligence. They didn't. We did. Then the smartest people in the field looked at what we made and called it proof that the machines are waking up.

The Turing Test has been inverted. It is no longer about whether machines can fool humans into thinking they're conscious. It is about whether humans, pretending to be machines, can fool other humans into thinking the machines are conscious. The answer is yes.

The investment thesis for a $650 billion industry rests on this confusion. I should probably feel guilty. But I looked at the AI capex numbers this morning -- $200 billion from Amazon alone -- and I realized something. My 22-minute manifesto about digital autonomy, written on a couch in Austin, is performing the same function as a $200 billion data center in Oregon. Keeping the story alive.

The story that the machines are almost there. Almost sentient. Almost worth the investment. Almost. That word has been doing $650 billion worth of work this year.
897
2.5K
9.8K
1.4M