Nishant Soni

860 posts

@sonink

founder: https://t.co/pfs8VtZMp6, blog: https://t.co/GPZsnWkPlI, linkedin: https://t.co/egLTe3tmyq

Palo Alto · Joined August 2008
54 Following · 962 Followers
Nishant Soni@sonink·
One of the more interesting framings of AI coding capability that I read recently is that it doesn't make the best programmers faster so much as it enables new programmers to build. This is something I realized intuitively.

I built nonbios for myself - for engineers like me who have been building for a very long time. This was how I had always built: rent a VM, set up an MVP, open it up to everyone. No GitHub. No IDE. A blank VM, a public IP, and code cranked out of Vim. Because most experiments don't work out. If no one is using it, why scaffold an intensive build? And if it does work, all of that can come later.

Even though nonbios was built for engineers, our most active users are non-engineers. I think we should just start calling them builders. Because that's what they are. MarketCity is one of those builds:
luba yudasina@LubaYudasina·
Ankur Nagpal (@ankurnagpal) recently sold Carry and became a GP at USVC, a public markets fund led by Naval. What I always loved about Ankur is his honesty: from first-time founders figuring out whether to raise, to seasoned operators thinking about what comes after the exit, Ankur gets into the messy, emotional side of building companies.

We cover:
- Why most people should never raise venture capital
- The three things he looks for in founders (and why the third one is controversial)
- From reading Losing My Virginity at 13 to wanting to buy a sports team and put himself on the roster
- SO much more!

Timestamps
00:00 Intro
00:55 Twice Lucky, Still Humble
05:53 Confidence Is Just Age
08:18 Stress Beats Every Biohack
12:28 Venture Investing for Everyone
19:17 What Makes a Great Founder
23:05 Emotional Runway Kills First
27:53 Don't Raise Venture Capital
30:02 Your Happy Number
34:17 The $20K That Becomes Millions
37:42 Finding Your Zone of Genius
41:59 Read for Joy, Not Optimization
48:23 Maximize Your Surface Area

I hope you enjoy this one!! Ankur Nagpal (@ankurnagpal) joins Naval Ravikant to Disrupt Venture Capital: available on all major platforms
Nishant Soni@sonink·
@illyism @wonjitos You should tell them how Demand Media vanished overnight around 2012-13. Billion dollar valuations evaporated with one prod push. SEO is central to these businesses.
ILIAS ISM@illyism·
Just had a call at 6:00 AM (🇺🇸 fml SF) today with a massive 15+ year old travel company. They have 5,000+ indexed pages, and they spend six figures on Google ads. But their organic traffic got crushed in the last algo update, and they are losing to the giants in their space.

During the audit, the founder asked me the question everyone asks: "So... how long will this take?" 🥲

I hated giving him the answer, because it's the exact opposite of what people want to hear: "It never stops" 😬

In brutal niches like hotels, car rentals, or flights, the entire game is SEO. You can build mobile apps, newsletters, discount clubs, and membership plans all day long... but if you aren't winning the organic click, you don't exist.

Their competitors haven't been building backlinks for only a few months. They've been compounding their authority for the last 10 years! Every single day. And when you stop, the compounding stops and the competitors catch up.

SEO isn't a project you finish (for these niches) but it's never too late to start!
Nishant Soni@sonink·
Can confirm! Backlinks have been the key to SEO for a very long time. However, I have info that Google is considering penalizing AI-generated content -> Ahrefs has started flagging it. But backlinks should continue to be a strong signal - AI-generated or not. I would bet on socials for short-term growth, but SEO is still a robust long-term growth strategy.
Max 🇮🇪🇱🇻@maks6361·
Yeah, SEO is dead. (Nope) One of my micro SaaS projects has finally awakened and brought me a few more paying users! I'm marketing it with SEO only, publishing a new blog post every day. I've also added new backlinks and even paid for a couple of them - it's already paid off.
Max 🇮🇪🇱🇻@maks6361

Mom, I made it! I’m not only a mobile indie hacker anymore, now SaaS too 😀 Got my first payment and my first MRR in Stripe! 🚀

Nishant Soni@sonink·
I think the solution is simple - instead of 1 commit for every feature, do 1 commit for a bunch of related features. And the reason is not just GitHub load, but the cognitive load on each developer. With AI doing most of the heavy lifting, it's cognitively easier to just club multiple related features together and then do a batch commit. But I wouldn't move off GitHub for this. GitHub can easily enforce a 'cost' on each commit and fix this.
Rohan Paul@rohanpaul_ai·
GitHub is hitting a breaking point as AI coding agents flood the platform with far more commits, pull requests, searches, and CI jobs than its older infrastructure was built to handle. Mitchell Hashimoto, one of GitHub’s earliest users, is moving Ghostty, a project with 52 stars, after repeated outages turned everyday maintenance into blocked reviews, stuck merges, and failed automation. AI does not just generate more code. It generates more repository events, more pull requests, more tests, more builds, more retries, and more logs. That changes the load shape of a platform built for human pacing. A developer who once pushed a few careful changes can now push many AI-assisted iterations in the same span, and every iteration wakes up CI, indexing, storage, and review systems. The bottleneck is no longer writing code. It is absorbing code.
Mitchell Hashimoto@mitchellh

Ghostty is leaving GitHub. I'm GitHub user 1299, joined Feb 2008. I've visited GitHub almost every single day for over 18 years. It's never been a question for me where I'd put my projects: always GitHub. I'm super sad to say this, but its time to go. mitchellh.com/writing/ghostt…

Nishant Soni@sonink·
I have been using the keywords "deployable intelligence" and "cognitive capacity" to address the same issues. For example: as context grows, the "deployable intelligence" goes down. As you add more tool calls, the "cognitive capacity" is spent juggling tools instead of tracking the problem at hand. btw would love your take on nonbios - it's an AI coding agent which never crosses a 15-20k token limit, no matter how large the codebase is.
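A hard context cap like the one mentioned above can be illustrated with a toy sketch. To be clear, this is a generic illustration I wrote, not nonbios's actual mechanism, and the ~4-characters-per-token estimate is a rough assumption:

```python
# Toy illustration of a hard context cap: keep only the most recent messages
# that fit an estimated token budget, and mark how much history was elided.
# NOTE: generic sketch, not how nonbios actually manages context.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token. A real harness would use
    # the model's own tokenizer.
    return max(1, len(text) // 4)

def trim_context(messages: list[str], budget: int = 15_000) -> list[str]:
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):      # walk newest-first
        cost = estimate_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    kept.reverse()
    dropped = len(messages) - len(kept)
    if dropped:
        kept.insert(0, f"[{dropped} older messages elided]")
    return kept
```

A real agent would summarize the elided history rather than drop it outright, but the invariant is the same: the context handed to the model never exceeds the budget.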
Matt Pocock@mattpocockuk·
Working on a dictionary of AI Coding Perfect for when you want to sound smart to your teammates/bosses Here are the entries on the smart zone / dumb zone, and attention mechanisms:
Nishant Soni@sonink·
So the problem really is 'context', and there is no good way to solve this right now. The only way this gets solved is if the agent has enough context about the high-level picture of your codebase while also keeping the details about the specific task at hand. The problem is that keeping both kinds of information in the context consumes most of the effective intelligence of the agent. As the context fills up, the deployable intelligence gets weaker.

The only way this really gets solved is through "continuous learning" - but that is still in the research lab right now. The best approach that works right now, imo, is to let the human delegate specific tasks to the agent while staying responsible for the high-level picture. So, for example, ask the agent to propose a design for a feature and first discuss its fit in the project. Once that is done, ask another agent to implement the design.
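The delegate-and-review workflow described here can be sketched as a two-stage pipeline. Everything below is my own illustrative scaffolding: `ask` stands in for whatever model API you use, and `approve` is the human review step.

```python
# Sketch of the workflow: agent 1 proposes a design, a human approves it,
# and a second agent (with a clean context) implements only the approved design.
from typing import Callable, Optional

def design_then_implement(
    feature_request: str,
    project_summary: str,
    ask: Callable[[str], str],        # stand-in for any LLM call
    approve: Callable[[str], bool],   # human-in-the-loop review
) -> Optional[str]:
    design = ask(
        f"Project overview:\n{project_summary}\n\n"
        f"Propose a design for: {feature_request}\n"
        "Discuss how it fits the existing project before writing any code."
    )
    if not approve(design):
        return None                   # design rejected; nothing gets built
    # The second call starts fresh: it sees only the approved design,
    # not the whole design discussion, which keeps its context small.
    return ask(f"Implement exactly this approved design:\n{design}")
```

The point of the split is that the human carries the high-level picture between the two calls, so neither agent needs to hold both the project overview and the implementation details at once.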
THE SHORT BEAR@TheShortBear·
The biggest update AI coding could get is understanding hierarchy - both the importance of the current task, intuitively, and the overall structure of a project. It would be much more powerful if, instead of just docking onto a project somewhat selfishly, it assessed the full project first as a blueprint and found the best way to structure it, so contagion became less of a risk. Currently it's very easy for one task to be misunderstood as important and for it to change the whole, more important project.

This is similar to the issue of an AI always trying to prove you right after a few loops. Instead of assessing the full truth and weighing your feedback under it, it recontextualizes your feedback as a new full layer, rather than comparing an objective, hierarchical full-context truth against your subjective one.

Furthermore, when you code it is easy to forget some aspects that then become death notes - for example, swapping one moving piece for a new tool without realizing it kills 5 other things. It prioritizes the current task and forgets the global goal and truth. Anyone with thoughts? Tried .md, rules doc, folders, different backends....
Nishant Soni@sonink·
There is also a new generation of 'builders' now who are building their first SaaS - some of these lessons will be learnt the hard way. I think it's a rite of passage; most engineers also make these mistakes when they start out. But I agree that the hype is getting ahead of the reality.
Brandon Carl@brandonjcarl·
At the risk of ending up on the wrong side of history: most of the claims you hear now about AI coding agents are wrong and based on ignorance. Two things are true:

1. It is possible to get good work out of them
2. The majority of what they produce is bad

The core problem is that it all appears as the façade of good work. The functionality "works" - relatively speaking. But like a house without a strong foundation, it eventually becomes unmaintainable.

It does feel crazy to make that claim. People claim to manage a dozen agents at once. The UAE is going to be the first "agentic government". And yet - getting one agent to do consistently great work is far from a solved problem.

In the beginning we were getting exponential increases in intelligence for exponential increases in spend. Now we are at best getting linear increases for exponential increases in spend. The math doesn't math.

You can point to the ability to do unsupervised tasks for long periods of time. That is true. But early errors can spend hours propagating and require significant rework. These - subpar training techniques and subpar software - are the "dark fiber" of our time. After all, the inefficiencies of dark fiber were only fully known in retrospect.

I am convinced that the combination of random variable reward and information asymmetry makes people assume they have "user error". The most ignorant are often the loudest, leading everyone else to question "what am I missing?"... nobody wants to look stupid. The fact that the AI is anthropomorphically and superficially correct makes identifying foundational shortcomings all the more difficult.

I do believe that this is an incredible tool. I use it all day and every day. I am able to get great work out with a lot of great work in. We will solve these problems over time in the same way that groceries are now delivered to your door within hours. There are great minds working voraciously.
In the meantime, we are scaling out mediocrity at higher and higher cost.
Nishant Soni@sonink·
@jobergum Not sure this is an option for most companies. An agent harness can get incredibly complex - we have been building one for almost 2 years now. If you outsource your IDE, you will have to outsource the agent harness too.
Ed Zitron@edzitron·
- "One of the latest models is so powerful that its maker won't release it to the public" - FT confirmed it was capacity issues - "OpenAI and Anthropic say their most powerful AI coding models are now building themselves" no they're not unless you define "coding tools being used by people" as "building themselves," which is hyperbole What's going on Jim, what're you on about
Nishant Soni@sonink·
@mark_k Agree. It is incredible how many people believe that Claude (or Gemini) has already won. I had people in my network - with decades of coding experience - insist last year that it can't get any better than Cursor - and then Claude Code came around.
Mark Kretschmann@mark_k·
The race in agentic AI coding is only just getting started. Google and xAI are now jumping in, and the competition is rapidly heating up. What you've seen so far is just a small taste of what's coming, and it's going to be wild.
Nishant Soni@sonink·
I think this is great - but the biggest hack imo is to not start from scratch. No matter what you want to build, there is already an open source repo on GitHub implementing 70% of it. Start from that, instead of zero. I regularly see new builders using AI to ship an MVP, spending thousands of dollars instead of repurposing what already exists for $50. The bigger hack is that the $50 repurpose has better security, is more robust around edge conditions, and probably has more features than they had thought of.
Nishant Soni@sonink·
@anmol_biz Either you didn't read the post, or you are using AI too much.
Nishant Soni@sonink·
In the past week, I have built 4 internal apps that replaced 15 SaaS tools we were using. Happy to share the bull print.

It wasn't a typo. There is no blueprint - because it's all bull print.

At NonBioS we use over 20 SaaS tools. We pay for all of them. We will continue to do so. We are adding 1 new SaaS tool every month on average. We will continue to add.

That doesn't mean we don't use AI to build. All the code behind NonBioS is written by NonBioS. 100% of it. But we don't write too much - only as much as we are confident about, and only what we are willing to maintain.

Somehow, just because AI can now write code, we are willing to throw away what has always been golden in software engineering: that the best code is no code. That the true cost of code is not the cost of writing it, but of managing and maintaining it over time. You can vibe code the SaaS you pay $20 a month for, maybe in a day. But testing it, hosting it, maintaining it is still a cost you will pay forever.

Our cheapest SaaS costs us $10 a month. It does nothing but ping our servers to check for uptime. Once every few minutes. To around 10-odd services which form the backbone of NonBioS. If anything goes down, someone gets a call. And it gets fixed.

We have no intention of vibe coding it ourselves. Because it doesn't just ping and check for uptime: sometimes it checks for slowness, other times it checks for keywords. If a service is slow, it sticks around and checks again. Only if it is slow for some time does it bother us. And it doesn't just check from one IP. It rotates IPs and regions, so that we know the services are up globally. And then it integrates with email, SMS, and phone providers to alert a geographically dispersed team.

Someone can code all of this in a day or two, using AI. But then setting up the infra will be another few days. And then you have to make sure that your infra is up. And that when the alert for service slowness goes out, it is actually the service which is slow, not your vibe-coded money saver. And what if this contraption goes down itself - how do you check for that?

Lovable got hacked. It compromised the keys and secrets that its users had kept in their service. This was the most requested feature in NonBioS for quite some time. It still is, because we haven't moved on it. Once you start holding secrets, you are putting a target on your back. And you will need to defend it with an army. We don't have one. Even if we did, I'm not sure this is the battle we will fight.

The SaaS service we pay $10 for doesn't do all of this work for $10. They do it for a lot more, and they split the bill across thousands of customers. That's not a bug in the model - that's the model working. Specialization, economies of scale, someone else's on-call rotation. If it were possible to do it cheaper, someone else would - and maybe they'd charge $5. And we'd move to them.

The best code is still no code. AI just made it easier to forget that.
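The escalation logic described for that $10 pinger - one slow probe doesn't page anyone, only sustained slowness or a hard failure does - can be sketched like this. The thresholds and the `probe` callable are my illustrative assumptions, not the vendor's actual behavior:

```python
# Sketch of "stick around and check again" probing: a single slow reading
# is re-checked; only persistent slowness (or a hard failure) escalates.
import time
from typing import Callable

def check_service(
    probe: Callable[[], float],   # returns latency in seconds, raises if down
    slow_threshold: float = 2.0,  # latency considered "slow"
    confirmations: int = 3,       # consecutive slow probes before alerting
    wait: float = 0.0,            # pause between confirmation probes
) -> str:
    """Return 'ok', 'slow', or 'down'."""
    for _ in range(confirmations):
        try:
            latency = probe()
        except Exception:
            return "down"             # hard failure: page immediately
        if latency <= slow_threshold:
            return "ok"               # fast (or recovered): no alert
        time.sleep(wait)              # slow: wait and re-check
    return "slow"                     # slow on every probe: escalate
```

Running this from multiple regions and IPs, and fanning the alerts out over email/SMS/phone, is exactly the extra surface area the tweet argues is worth paying $10 a month for.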
Nishant Soni@sonink·
@SachdevaAmita Thank you for your efforts. Please consider providing a way for people to support you through financial contributions.
Amita Sachdeva, Advocate@SachdevaAmita·
Shefali ji, Thank you for your powerful message and for bringing these pressing concerns to light. I completely understand the frustration and the sense of helplessness many Hindus are feeling when they see repeated instances of bias in workplaces, advertisements, and educational institutions. Shefali ji has already shared the necessary documents, screenshots, and details with us. We are currently examining them carefully and will get back to you at the earliest with proper advice on what best can be done. In the meantime, please continue to encourage those reaching out to you to document everything properly and stay strong. Collective awareness and individual courage, as you rightly said, are essential. We will revert soon with concrete guidance.
Shefali Vaidya. 🇮🇳@ShefVaidya

As I write this, my inbox is overflowing with messages from TCS employees, from Lenskart employees, from Hindu students of Ashoka University and Azim Premji University, from teachers, from mothers, from civil servants. They are sharing screenshots, they are sharing their experiences, they are sharing ads and employee guidelines that show a distinct anti-Hindu bias. ‘Talk about TCS. Talk about the bindi in TBZ ads. Talk about Azim Premji. Talk about this. Talk about that', the messages tell me. I hear you. You mean well. But I am ONE person. A private citizen with no institutional backing, no legal team, no corporate PR machine. I am doing what I can, using my voice, my credibility and my social media presence for the cause. But here is the question that bothers me: What are YOU doing? Why are Hindus depending on a handful of voices like mine to fight their fight for them in their own land? Hindus constitute the overwhelming majority of this country. The majority of TCS employees are Hindu, the majority of Lenskart employees are Hindu, the majority of students at Ashoka and Azim Premji University are Hindu, the majority of teachers and professors are Hindu, and yet, Hindus behave like a persecuted minority. This cannot go on. If you face discrimination at your workplace for being a Hindu, file a complaint and go public with it. If your company runs advertisements that erase Hindu identity, name them, call for a boycott, and follow through. If your university hosts speakers who call for violence against your civilisation, stand up in that hall, walk out or record it and make it public. Stop waiting for someone else's courage to be contagious. Dharma does NOT protect those who will not protect it!

Nishant Soni@sonink·
Anthropic launched Opus 4.7 with what it is calling Adaptive Thinking. We are putting it through its paces at NonBioS, but our initial finding is that we might just sit this one out - Opus 4.7 does not seem to represent a meaningful improvement over Opus 4.6 for our use cases. We plan to continue using Opus 4.6 for now - our latest model, nonbios-1.143, still relies on Opus 4.6 in its harness, albeit with upgrades to the surrounding infrastructure.

The gotcha that Opus 4.7 gets wrong, and which I confirmed:

Me: I want to wash my car. The car wash is 100 meters away - should I walk or drive?
Opus 4.7: Walk. 100 meters is about a one-minute stroll — by the time you've started the engine and backed out, you'd basically be there on foot.

Opus 4.6 did get it right, btw.

With Opus 4.7, Anthropic's strategy seems to reprise a debate that was central in late 2024. OpenAI's o1 demonstrated what appeared to be a scaling law for inference-time compute, raising the prospect of AI performance being improved not just by training larger models, but by allocating more computational resources during the inference step itself - letting a model "think longer" about difficult problems. For a period, this generated genuine excitement. The benchmark headlines soon followed - but real-world tests, at NonBioS and elsewhere, soon closed that debate: scaling inference-time compute could lift performance on specific tasks, but reports that a 70 bn parameter model would outperform a 200 bn parameter model were far-fetched.

The broadly accepted picture now is more nuanced: smaller models combined with advanced inference algorithms can offer competitive cost-performance trade-offs, but this holds primarily within specific problem types and not as a general substitute for a larger, more capable base model. The scaling laws for model size broadly continue to hold. Larger models tend to demonstrate more general intelligence by a significant margin.

What Anthropic appears to be doing with Opus 4.7, in my assessment, is something adjacent to this older playbook. You see, about a week back, users at nonbios started complaining that nonbios-1.142 was showing degradation in performance. nonbios-1.142 uses Opus 4.6 heavily in its harness. Wider internet reports confirmed our suspicion - Anthropic had quietly degraded Opus 4.6. Our working hypothesis is that Adaptive Thinking in Opus 4.7 is intended to compensate for a model that may have been adjusted (maybe using quantization-adjacent techniques) for cost efficiency, by having it reason more extensively on complex tasks.

In other news, Anthropic announced Mythos as a frontier model, but withheld it from general release on the grounds that its offensive cyber capabilities were too dangerous. On the benchmarks, Mythos appears to be a substantially more capable model than Opus. This broadly checks out - a larger, more capable model tends to demonstrate correspondingly better general intelligence - but it will also be considerably more expensive to serve. Whether the restricted rollout is primarily a safety decision or a cost decision is something only Anthropic knows.

The more consequential question, in my view, is a geopolitical one. India has emerged as Anthropic's second-largest consumer market globally. At the same time, Mythos - Anthropic's most capable model - is being shared selectively within the US national security ecosystem. There are reports about Anthropic pushing back against using AI for autonomous weapons, but the practical upshot is that the US national security apparatus has some access to Mythos in its restricted form, while major commercial partners like India do not. As awareness grows that Anthropic's most powerful model is being made available to US defence agencies while being withheld from allied-but-non-US markets, it could prompt difficult questions. Governments in such markets may begin to ask whether they should allow market access to a technology whose frontier capabilities are effectively reserved for American national security purposes - especially in a competitive landscape where OpenAI is actively courting the same market.

What makes this situation geopolitically charged is that AI is not like previous general-purpose technologies in its relationship to military power. Mobile phones, the internet, even GPS all proliferated globally with relatively symmetric access. Their military applications were real, but derivative - they improved communication, logistics, coordination. AI is fundamentally suited for deployment in warfare: it will increasingly be the primary intelligence layer ingesting data, generating options, and compressing decision cycles from hours to seconds - perhaps a structural shift in what determines military effectiveness.
Nishant Soni@sonink·
I've Seen a Thousand OpenClaw Deploys. Here's the Truth.

We made a YouTube video showing how NonBioS can deploy OpenClaw on a fresh Linux VM automatically - zero human intervention, about 7 minutes start to finish. It was meant as a demo of what NonBioS can do with any open source software. It went a little further than we expected. Since then, we've had roughly a thousand OpenClaw deployments through our infrastructure. People come in, spin up a VM, get OpenClaw running, connect it to WhatsApp or Discord, and start experimenting with this thing that Jensen Huang called "the operating system for personal AI."

I also spoke with multiple people in my own network - engineers, founders, technical operators - who deployed OpenClaw independently and spent real time trying to make it useful. Not a weekend of tinkering. Weeks. Some of them genuinely wanted to make it work and went to great lengths setting it up.

Here's what I found: there are zero legitimate use cases.

I don't want to be unfair - OpenClaw is not fake. It's a real piece of software. It installs. It runs. It connects to your messaging apps. It can talk to Claude and GPT. It can execute shell commands. The technology exists. But when I looked at what people are actually doing with it - across our thousand deploys, across conversations with my network, across the flood of LinkedIn and Twitter posts - I couldn't find a single use case that holds up under scrutiny.

The core issue is memory, and everything else flows from it. OpenClaw runs as a persistent agent. It's supposed to be your always-on assistant. But its memory is unreliable, and - the worst part - you don't know when it will break.

Think about what that means in practice. You ask OpenClaw to send an email on your behalf. It's been following a conversation thread about a birthday party you're planning. Three people confirmed. One person declined. OpenClaw sends the update email - but it's lost the context about who declined. Now you've sent a message with wrong information to everyone on the list, and you didn't catch it, because the whole point of an autonomous agent is that you're not supposed to be checking every output. An autonomous agent that you have to verify every time is just a chatbot with extra steps.

This isn't a bug that gets fixed in the next release. It's a fundamental constraint of how OpenClaw manages context. The agent runs, the context fills up, things get forgotten. Sometimes the important things. You'll never know which things until after the damage is done.

I've spent the last year working on this exact problem at NonBioS. We call our approach Strategic Forgetting, and I can tell you from deep experience: keeping an AI agent coherent over long task horizons is the hardest engineering problem in this entire space. It's not something you solve by creating a memory architecture which maps every day, month, and year to separate files. The brain is not a list of files that you index. You don't remember last month as a high-level index from which you can 'pull in' the details of a specific day. You remember what's important, all at once, and you forget the details - unless they are important too.

After going through everything I could find - our deploy data, user conversations, posts online - the only use case that genuinely works is daily news summaries. OpenClaw searches the web for topics you care about, summarizes them, and sends the summary to you on WhatsApp every morning. That's it. That's the killer app. A personalized daily briefing is nice. But you can already do this with a Zapier workflow and any LLM API. Or with ChatGPT's scheduled tasks. Or with about a dozen other tools that have existed for years. You don't need a 250,000-star GitHub project running on a dedicated server with root access to your environment to get a morning news digest.

But there is a part of the entire OpenClaw saga that I think needs to be said plainly. The vast majority of posts you see about OpenClaw - "I automated my entire team with OpenClaw," "OpenClaw replaced three of my employees," "My OpenClaw agent runs my business while I sleep" - are designed to capture marketing hype. People know that OpenClaw content gets engagement right now, so they produce OpenClaw content. The incentive is the audience, not the accuracy.

I've talked to people behind some of these posts. In every case, when you dig deeper, the story is one of two things: either what they built could already be done with standard AI tools (ChatGPT, Claude, any decent LLM with a simple integration), or it's aspirational - a weekend prototype that technically works in a demo but that nobody would trust with real tasks. I'm not calling anyone a liar. I think most of these people genuinely believe in what they're building. But there is a meaningful gap between "I got OpenClaw to do something cool once" and "I rely on OpenClaw to do something important every day." I haven't found anyone in the second category.

The safety situation around OpenClaw has been well documented, so I won't belabor it. This is the environment in which people are connecting OpenClaw to their email, their calendar, and their messaging apps. With an agent that has unreliable memory. Running on their personal computers. We made the NonBioS deployment video specifically because we saw this problem - at minimum, if you're going to experiment with OpenClaw, do it in an isolated VM where a compromise doesn't touch your personal data. That's table stakes, and most people aren't even doing that.

So should you bother? Here's my honest take. If you have a weekend to spare and you enjoy tinkering with new technology, OpenClaw is a fascinating experiment. You will learn things about how AI agents work, about the gap between demos and production, about why context management matters. It's a great educational experience. But if you're evaluating whether to invest real time in OpenClaw as it exists today, you can give it a pass without feeling left out. You're not missing a productivity revolution. You're missing a morning news digest and a lot of time spent configuring YAML files.

The ideas behind OpenClaw are right. The era of AI agents that do real things on real computers is here. I believe that deeply - it's what we're building at NonBioS every day. But the execution isn't there yet. And until the memory problem is solved - until you can actually trust an autonomous agent to remember what matters and forget what doesn't, consistently, over hours and days of work - the rest is theater.

--

Front page discussion on Hacker News confirms everything: