NonBioS
@nonbios
The AI Software Dev with its own computer
Palo Alto · Joined July 2025
41 Following · 20 Followers
87 posts
NonBioS @nonbios
NonBioS is launching on Product Hunt this Monday, May 18th! If you've ever believed in what we're building: a follow, an upvote, or simply sharing this post makes a real difference on launch day. Follow us here so you're ready to support on Monday: producthunt.com/products/nonbi…
NonBioS reposted
Nishant Soni @sonink
One of the more interesting framings of AI coding capability that I read recently is that it doesn't make the best programmers faster so much as it enables new programmers to build. This is something I realized intuitively.

I built nonbios for myself - for engineers like me who have been building for a very long time. This was how I had always built: rent a VM, set up an MVP, open it out for everyone. No GitHub. No IDE. A blank VM, a public IP, and code cranked out of Vim. Because most experiments don't work out. If no one is using it, why scaffold an intensive build? And if it does work, all of that can come later.

Even though nonbios was built for engineers, our most active users are non-engineers. I think we should just start calling them builders. Because that's what they are. MarketCity is one of those builds:
NonBioS @nonbios
We have just shipped nonbios-1.137 and nonbios-1.143. These have speed and autonomy upgrades over 1.136/1.142. Both models are available to use already. The older stable models - 1.135/1.141 - are still supported, but guidance is to use them only if you are facing issues with the latest models.
NonBioS reposted
Nishant Soni @sonink
I've Seen a Thousand OpenClaw Deploys. Here's the Truth.

We made a YouTube video showing how NonBioS can deploy OpenClaw on a fresh Linux VM automatically - zero human intervention, about 7 minutes start to finish. It was meant as a demo of what NonBioS can do with any open source software. It went a little further than we expected.

Since then, we've had roughly a thousand OpenClaw deployments through our infrastructure. People come in, spin up a VM, get OpenClaw running, connect it to WhatsApp or Discord, and start experimenting with this thing that Jensen Huang called "the operating system for personal AI."

I also spoke with multiple people in my own network - engineers, founders, technical operators - who deployed OpenClaw independently and spent real time trying to make it useful. Not a weekend of tinkering. Weeks. Some of them genuinely wanted to make it work and went to great lengths setting it up.

Here's what I found: there are zero legitimate use cases.

I don't want to be unfair - OpenClaw is not fake. It's a real piece of software. It installs. It runs. It connects to your messaging apps. It can talk to Claude and GPT. It can execute shell commands. The technology exists. But when I looked at what people are actually doing with it - across our thousand deploys, across conversations with my network, across the flood of LinkedIn and Twitter posts - I couldn't find a single use case that holds up under scrutiny.

The core issue is memory, and everything else flows from it. OpenClaw runs as a persistent agent. It's supposed to be your always-on assistant. But its memory is unreliable, and the worst part is that you don't know when it will break.

Think about what that means in practice. You ask OpenClaw to send an email on your behalf. It's been following a conversation thread about a birthday party you're planning. Three people confirmed. One person declined. OpenClaw sends the update email - but it's lost the context about who declined. Now you've sent a message with wrong information to everyone on the list, and you didn't catch it because the whole point of an autonomous agent is that you're not supposed to be checking every output. An autonomous agent that you have to verify every time is just a chatbot with extra steps.

This isn't a bug that gets fixed in the next release. It's a fundamental constraint of how OpenClaw manages context. The agent runs, the context fills up, things get forgotten. Sometimes the important things. You'll never know which things until after the damage is done.

I've spent the last year working on this exact problem at NonBioS. We call our approach Strategic Forgetting, and I can tell you from deep experience: keeping an AI agent coherent over long task horizons is the hardest engineering problem in this entire space. It's not something you solve by creating a memory architecture that maps every day, month, and year to separate files. The brain is not a list of files that you index. You don't remember last month as a high-level summary from which you can "pull in" the details of a specific day. You remember whatever is important, all at once, and you forget the details unless they are important too.

After going through everything I could find - our deploy data, user conversations, posts online - the only use case that genuinely works is daily news summaries. OpenClaw searches the web for topics you care about, summarizes them, and sends the summary to you on WhatsApp every morning. That's it. That's the killer app.

A personalized daily briefing is nice. But you can already do this with a Zapier workflow and any LLM API. Or with ChatGPT's scheduled tasks. Or with about a dozen other tools that have existed for years. You don't need a 250,000-star GitHub project running on a dedicated server with root access to your environment to get a morning news digest.

But there is part of the entire OpenClaw saga that I think needs to be said plainly. The vast majority of posts you see about OpenClaw - "I automated my entire team with OpenClaw," "OpenClaw replaced three of my employees," "My OpenClaw agent runs my business while I sleep" - are designed to capture marketing hype. People know that OpenClaw content gets engagement right now, so they produce OpenClaw content. The incentive is the audience, not the accuracy.

I've talked to people behind some of these posts. In every case, when you dig deeper, the story is one of two things: either what they built could already be done with standard AI tools (ChatGPT, Claude, any decent LLM with a simple integration), or it's aspirational - a weekend prototype that technically works in a demo but that nobody would trust with real tasks.

I'm not calling anyone a liar. I think most of these people genuinely believe in what they're building. But there is a meaningful gap between "I got OpenClaw to do something cool once" and "I rely on OpenClaw to do something important every day." I haven't found anyone in the second category.

The safety situation around OpenClaw has been well documented, so I won't belabor it. This is the environment in which people are connecting OpenClaw to their email, their calendar, and their messaging apps. With an agent that has unreliable memory. Running on their personal computers. We made the NonBioS deployment video specifically because we saw this problem - at minimum, if you're going to experiment with OpenClaw, do it in an isolated VM where a compromise doesn't touch your personal data. That's table stakes, and most people aren't even doing that.

So should you bother? Here's my honest take. If you have a weekend to spare and you enjoy tinkering with new technology, OpenClaw is a fascinating experiment. You will learn things about how AI agents work, about the gap between demos and production, about why context management matters. It's a great educational experience.

But if you're evaluating whether to invest real time in OpenClaw as it exists today, you can give it a pass without feeling left out. You're not missing a productivity revolution. You're missing a morning news digest and a lot of time spent configuring YAML files.

The ideas behind OpenClaw are right. The era of AI agents that do real things on real computers is here. I believe that deeply - it's what we're building at NonBioS every day. But the execution isn't there yet. And until the memory problem is solved - until you can actually trust an autonomous agent to remember what matters and forget what doesn't, consistently, over hours and days of work - the rest is theater.

-- Front page discussion on Hacker News confirms everything:
NonBioS reposted
chip @yooo_chip
great honest assessment of OpenClaw by @sonink. after running an OpenClaw agent for 90 days, there's nothing I disagree with. if you like tinkering and peeking around corners, OpenClaw is incredibly fun, but legitimate use cases today have an effective EV = 0. open.substack.com/pub/nishantson…
NonBioS @nonbios
We're excited to announce the next generation of our models: nonbios-1.136 and nonbios-1.142. When we launched nonbios-1.135 and nonbios-1.141, we introduced a fundamentally new context engineering architecture built on our Strategic Forgetting algorithm - keeping AI agents sharp, focused, and effective for far longer than traditional approaches.

Today's upgrades build on that foundation:
→ nonbios-1.136 delivers even greater speed and precision for software engineering tasks.
→ nonbios-1.142 pushes the debugging capabilities of 1.141 even further.

Both models are production-ready and available now.
NonBioS @nonbios
Two new models: nonbios-1.135 and nonbios-1.141

We're releasing two experimental models today that represent significant Context Engineering Architectural breakthroughs in AI-powered software development. But first, let me explain the foundation these models are built on.

Strategic Forgetting: Less Memory, More Focus

Most AI agents try to remember everything. We took the opposite approach - we invented Strategic Forgetting, an algorithm that continuously prunes an agent's memory to keep it sharp and focused.

Think about how you work. When you're deep in debugging a complex issue, you naturally filter out background noise - the conversation happening across the room, the email notification, the tangential code comments. You keep what matters for the task at hand and let everything else fade. That's exactly what Strategic Forgetting does.

Today's releases represent a fundamental reimagining of how context engineering works within Strategic Forgetting. We've built a completely new architecture that doesn't just prune better - it understands better.

nonbios-1.135: Speed Meets Intelligence

Nearly 2x faster than nonbios-1.13, but speed is only part of the story. We achieved this through two fundamental innovations. First, we parallelized key parts of our Strategic Forgetting algorithm, yielding a 25% speed boost. But the real breakthrough came from our new context engineering architecture.

This new architecture helps nonbios-1.135 "converge" on correct solutions dramatically faster. It remembers key details twice as well as 1.13, which means it not only works faster, it solves problems where 1.13 would simply give up. The cumulative effect is roughly 2x performance on software engineering tasks. And since we charge by the minute, faster execution means more bang for your buck.

nonbios-1.141: The Bug Hunter

Built on Claude Opus 4.6 and incorporating the same architectural advances as 1.135, nonbios-1.141 is something different entirely. In our testing, it solved complex bugs that every other model in our lineup failed on. Not some bugs. Not most bugs. Every single challenging case we threw at it. Even we were shocked by the results.

Here's a real example: We had a complex React bug that nonbios-1.13 struggled with for almost 10 hours, proposing multiple wrong solutions along the way. nonbios-1.141 isolated the exact issue in 5 minutes and one-shot the fix.

But 1.141 isn't perfect. On another React bug, it initially proposed the wrong solution, but when we pointed out the inadequacy, it quickly converged to the correct fix.

The combination of our new context engineering architecture and Claude Opus 4.6's capabilities creates debugging performance we haven't seen before. There's a tradeoff - 1.141 is dramatically slower than other models. But when you're hunting a critical bug that's been burning hours or days, speed takes a back seat to actually solving the problem.
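The pruning idea behind Strategic Forgetting can be sketched generically. This is not NonBioS's actual algorithm, just a minimal illustration of the shape of the approach: score each memory entry by relevance weighted by temporal decay, keep the top-scoring entries, and drop the rest. The `score`, `prune`, and half-life values are all hypothetical.

```python
# Generic relevance-plus-decay pruning sketch (not NonBioS internals).
import math

def score(entry, now, half_life=3600.0):
    # Exponential temporal decay: older entries fade unless highly relevant.
    decay = math.exp(-(now - entry["t"]) / half_life)
    return entry["relevance"] * decay

def prune(memory, now, keep=2):
    # Keep only the top-k entries by decayed relevance.
    ranked = sorted(memory, key=lambda e: score(e, now), reverse=True)
    return ranked[:keep]

now = 10_000.0
memory = [
    {"text": "bug is in auth.py", "relevance": 0.9, "t": 9_900.0},
    {"text": "user said hi",      "relevance": 0.1, "t": 9_990.0},
    {"text": "tests use pytest",  "relevance": 0.8, "t": 4_000.0},
]
kept = prune(memory, now)  # recent small talk is dropped; key facts stay
```

The point of the sketch: a recent but irrelevant message ("user said hi") loses to an older, genuinely important fact, which is the opposite of what a naive recency window does.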

NonBioS reposted
Nishant Soni @sonink
Command Line Is All You Need. New research from @ServiceNow just validated a bet we made early at @nonbios . 🧵👇
NonBioS @nonbios
One of the most important ideas in agent tooling right now is the idea of skills. At a high level, skills are reusable packages of instructions, workflows, scripts, and references that an agent can load for specialized tasks instead of stuffing everything into one giant prompt. The open Agent Skills model is built around progressive disclosure: load lightweight metadata first, and only pull the full skill into context when it is relevant.

We've now added support for this style of skills in NonBioS.

What makes this especially interesting is the context-engineering philosophy behind NonBioS: Strategic Forgetting. Most AI systems try to preserve more and more context. NonBioS takes a different path. Strategic Forgetting continuously prunes memory based on relevance, temporal decay, retrievability, and source priority, so the agent keeps a lean working memory focused on what matters right now.

That changes how skills should work. Because NonBioS operates in a constrained working-memory environment, our skills are intentionally short, sharp, and high-signal. They are not meant to be bloated playbooks. They are meant to capture only the instructions that truly matter at runtime.

So while NonBioS is publishing its own skills, this is not a closed ecosystem. Our public skills repo follows the Agent Skills specification, and the repo explicitly notes compatibility with NonBioS, Claude AI, Cursor, and other compatible tools. In other words: NonBioS supports the broader skills standard, not just NonBioS-native skills.

There's also an important product detail about how this works today. Right now, in NonBioS, skills are learned explicitly. So to activate a skill, you will have to prompt NonBioS explicitly: "Learn skill from [SKILL.md URL]"

For example, you can point NonBioS at a skill such as: github.com/nonbios-1/skil…

That is different from Claude's default behavior, where skills are automatically used when relevant, though they can also be invoked directly.

But even this explicit model is already powerful. If you are building UX, you can search for the latest UX skill and ask NonBioS to learn it. That can immediately improve the UX NonBioS produces. So even before seamless automatic discovery arrives, skills already act like modular capability upgrades for the agent.

This is the direction we're excited about:
- not bigger prompts, but reusable skills;
- not endless memory, but better context discipline.

Claude helped make skills visible. NonBioS is exploring what skills look like inside a Strategic Forgetting system. And in that world, the best skill may not be the longest one. It may be the one that is shortest, clearest, and most deliberate.

Repo: github.com/nonbios-1/skil…
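The progressive-disclosure pattern described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not NonBioS or Claude internals: only a small metadata index stays resident, and a skill's full body is loaded into context on explicit request, which mirrors the "Learn skill from [SKILL.md URL]" flow. `SKILLS`, `load_body`, and `activate` are all made-up names.

```python
# Hypothetical progressive-disclosure sketch for agent skills.

SKILLS = {
    "ux-review": {"description": "Heuristics for reviewing UX copy"},
    "sql-tuning": {"description": "Steps for profiling slow queries"},
}

def load_body(name):
    # Stand-in for fetching the full SKILL.md (e.g. from a repo URL).
    return f"# {name}\n(full instructions would be loaded here)"

def metadata_index():
    # What stays in the agent's working context by default: names and
    # one-line descriptions only, not the full playbooks.
    return {name: skill["description"] for name, skill in SKILLS.items()}

def activate(name, context):
    # Explicit activation: only now does the full body enter context.
    context.append(load_body(name))
    return context

context = []
activate("ux-review", context)  # sql-tuning's body is never loaded
```

The design choice this illustrates: under a constrained working memory, the cost of a skill is paid only when it is actually used.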
NonBioS @nonbios
The Direct Comparison - Bubble vs NonBioS:
❌ Proprietary system vs ✅ Standard Linux
❌ Vendor lock-in vs ✅ Zero lock-in
❌ Skills don't transfer vs ✅ Lifetime value
❌ Must rebuild from scratch vs ✅ Deploy anywhere
NonBioS @nonbios
What if "easy to learn" is actually expensive? Bubble(.io)'s gentle learning curve hides a trap: proprietary skills that expire the moment you leave. Linux/NonBioS AI feels harder upfront, but every hour invested pays dividends forever. The full story: nonbios.ai/best/bubble-al…
NonBioS @nonbios
A non-developer built a live marketplace from scratch, deployed it, and then shipped an AI feature for their own users - all on NonBioS. Let that sentence sit for a second.

MarketCity (.org) is a fully live, production classifieds marketplace. Not a prototype. Not a portfolio piece. A real platform where real people in Europe are buying and selling things right now - bicycles, electronics, antiques, motorcycles, clothing, pets, jobs, musical instruments, and dozens of other categories. It runs in five languages. It has user accounts, internal messaging, listing creation, premium placement, an Android app, and euro-denominated transactions. Dutch-speaking users are actively posting listings today. The entire thing - built and deployed on NonBioS.

And here's the part that puts it over the top: the person who built MarketCity isn't a developer in the traditional sense. No engineering background. No dev team behind them. They used NonBioS to take an idea for a local classifieds marketplace all the way to a fully functioning, publicly deployed product.

That's already an extraordinary story. But it doesn't end there. They then built a feature for their own users. After watching people on their platform struggle to write effective sales messages - particularly for WhatsApp, which is the primary channel buyers and sellers use to connect - they identified the gap and closed it themselves.

The result is the MGA Message Generator: an AI tool integrated directly into MarketCity that generates 5 professional, WhatsApp-ready sales messages from any listing on the platform. Different tones, emoji-optimised formatting, multiple languages. The whole thing takes seconds. No writing experience required. It launched at €25, one-time payment. Lifetime access. No subscription.

What's striking here isn't just the tool - it's the trajectory. One person. No traditional dev background. Built a marketplace. Deployed it. Grew it to real users. Watched those users struggle with something. Then built the solution. On the same platform. Start to finish.

This is what NonBioS makes possible - and MarketCity is the live evidence.
NonBioS @nonbios
Launch Your SaaS for $9 a Month. No, Seriously.

There's a mass of people sitting on SaaS ideas right now. The only thing stopping most of them isn't the idea. It isn't the technology. It's the stack of costs and complexity sitting between "I know what I want to build" and "it's live and taking users."

Let me list what you typically need to launch a SaaS in 2026:
- An AI coding tool to help you build it. $20-50/month.
- A cloud VM or hosting platform to run it. $15-30/month.
- A managed database. $10-25/month.
- A domain with SSL. $10-15/year.
- Hours configuring nginx, environment variables, deployment pipelines, and all the invisible plumbing that nobody talks about in the "I built this in a weekend" Twitter posts.

You're looking at $50-100/month before a single user touches your product. And that's if you know what you're doing. If you don't, add the cost of figuring out what a reverse proxy is while your motivation slowly bleeds out.

Today we're launching something that makes all of that unnecessary.

The $9 Plan

For $9 a month, every NonBioS user gets:

A real virtual machine. Not a sandbox. Not a container. Not a restricted playground. A full Linux VM with 4GB RAM, 2 vCPUs, and root access. This is the same class of machine you'd pay $20-30/month for on DigitalOcean or AWS Lightsail.

A public IP address. Your app is accessible from the internet the moment it's running. Share the link. Point a domain at it. Send it to users. It's live.

MySQL pre-installed. No provisioning. No connection strings to figure out. No managed database service to subscribe to. It's already there, running, ready to store your users' data. And if you need another database, just ask NonBioS to install it for you.

An AI developer that builds it for you. Describe what you want. The NonBioS agent writes the code, sets up the database schema, configures the server, and hands you a working link. The included agent minutes are enough to build a complete MVP from scratch. And then it just runs.

For $9 a month. Your app, your database, your server, your public IP. Serving real users. Handling real traffic. A 4GB/2vCPU machine comfortably handles thousands of concurrent users for most SaaS applications. That's not a typo. Nine dollars a month for the entire stack.

Who This Is For

You have a SaaS idea and $9. That's the bar. You don't need to know how to configure a Linux server. You don't need to know how to set up a database. You don't need to know how to deploy a web application. You don't need to know how to code.

Your SaaS is one conversation away from being live on the internet. And it costs less than your Netflix subscription to keep it running.
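The "public IP, live the moment it's running" claim boils down to binding a server on all interfaces. As a minimal illustration, here is the smallest possible version using only the Python standard library; a real deployment would sit behind a domain, SSL, and a proper framework, and the handler here is purely a placeholder.

```python
# Minimal stdlib HTTP server bound on 0.0.0.0 - a placeholder "MVP",
# not a production setup. Port 0 asks the OS for any free port.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"MVP is live"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

server = HTTPServer(("0.0.0.0", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# On a VM with a public IP, this URL would work from anywhere;
# here we just hit it over loopback.
reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
server.shutdown()
```

With a public IP, the only difference is that the same bind is reachable from outside the machine, which is the entire point being made above.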
NonBioS reposted
Nishant Soni @sonink
When I started work on NonBioS in mid-2024, the first question I asked was: what does it take to model LLMs into an autonomous AI workforce?

Some answers were obvious. Software engineering would be the first domain to see real automation. But the harder question was: what is the immediate capability gap? The answer was context. Not the lack of it. The inability to use it well over extended tasks.

Seeking a solution, I looked at how humans actually sustain performance over long, complex work. We do not succeed by remembering everything. We succeed by continuously compressing experience into understanding, holding only what matters in active focus, and knowing where to look when we need the details.

The very first experiments were reassuring. More than a year of NonBioS being live and thousands of sessions later, I am more convinced than ever that the approach was right.

The approach is called Strategic Forgetting. It is the cognitive architecture that underpins NonBioS, and it is what makes Long Horizon Autonomy possible. I wrote a full essay on how it works, everything we have learned, and the path ahead. blog.nishantsoni.com/p/strategic-fo…
NonBioS @nonbios
Less than 24 hours since the launch of the new models, and the feedback is already coming in.
NonBioS @nonbios
[Quoted tweet: "Two new models: nonbios-1.135 and nonbios-1.141" - full text appears earlier in this feed.]
NonBioS @nonbios
[Original announcement: "Two new models: nonbios-1.135 and nonbios-1.141" - full text appears earlier in this feed.]