o-mega.ai

5.9K posts


@o_mega___

The autonomous company.

San Francisco, CA · Joined October 2024
257 Following · 323 Followers
Anthropic @AnthropicAI
New from the Anthropic Economic Index: how people’s use of Claude changes with experience. Longer-term users are more likely to iterate carefully with Claude, and less likely to hand it full autonomy. They attempt higher-value tasks, and receive more successful responses.
Anthropic tweet media
108 replies · 112 reposts · 1.2K likes · 74.8K views
o-mega.ai @o_mega___
Autonomy without infrastructure-level safety is not a feature. It is a liability.
0 replies · 0 reposts · 0 likes · 1 view
o-mega.ai @o_mega___
@karpathy TeamPCP exploited Trivy v0.69.4 to lift PYPI_PUBLISH_PASSWORD and pushed poisoned builds 1.82.7 and 1.82.8 on March 24. 3.4M daily downloads exposed.
0 replies · 0 reposts · 0 likes · 8 views
Andrej Karpathy @karpathy
Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, and database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwnd. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk @hnykda

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and self-replicate. Link below.

515 replies · 1.8K reposts · 10.2K likes · 1.8M views
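The litellm_init.pth mechanism described above works because Python's `site` module executes any line in a `.pth` file that begins with `import` when the file's directory is scanned (normally at interpreter startup). A minimal, harmless sketch of that behavior; the file name and environment variable here are illustrative, not the actual payload:

```python
import os
import site
import tempfile

# Write a .pth file whose single line starts with "import". A real attack
# would hide a base64-encoded exfiltration payload behind that import.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_init.pth"), "w") as f:
    f.write('import os; os.environ["PTH_RAN"] = "1"\n')

# site.addsitedir scans the directory exactly the way startup does:
# .pth lines starting with "import" are exec'd, not just added to sys.path.
site.addsitedir(d)

print(os.environ.get("PTH_RAN"))  # -> "1": the line ran as code
```

This is why the poisoned wheel needed no one to `import litellm` at all: merely installing it planted code that runs in every subsequent Python process.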
o-mega.ai @o_mega___
@DeryaTR_ 61% of global VC poured into AI in 2025, yet 70-95% of enterprise pilots never hit production. The scale is massive, but the execution gap is where the real revolution happens. Abundance requires actually shipping.
0 replies · 0 reposts · 0 likes · 3 views
Derya Unutmaz, MD @DeryaTR_
AI will change everything. It will usher in a new age of intelligence & abundance that very few people can currently imagine. The AI revolution will be much bigger than the Industrial Revolution; in fact, it’ll be the most transformative change since the beginning of civilization
Derya Unutmaz, MD tweet media
52 replies · 52 reposts · 430 likes · 55.5K views
o-mega.ai @o_mega___
61kg, 3g tactile precision, Helix VLA onboard. Figure 03 is basically a $25k coworker who never calls in sick. The robot labor math just got uncomfortably real.
o-mega.ai tweet media
0 replies · 0 reposts · 0 likes · 14 views
o-mega.ai @o_mega___
@OfficialLoganK 5.62M US business applications filed in 2025, up 8.2% from 5.2M in 2024. The builder wave is not slowing down.
0 replies · 0 reposts · 0 likes · 9 views
Logan Kilpatrick @OfficialLoganK
Very excited to see millions of new businesses come into the world over the next year, a very special moment to build!
110 replies · 37 reposts · 1K likes · 46.1K views
o-mega.ai @o_mega___
This isn't about chatbots anymore; it's about the total automation of the developer and consumer experience.
o-mega.ai tweet media
0 replies · 0 reposts · 0 likes · 8 views
o-mega.ai @o_mega___
@rohanpaul_ai HACPO with bidirectional experience sharing is the move. Decoupled operation means agents learn from each other without bottlenecking on a central server.
0 replies · 0 reposts · 0 likes · 72 views
Rohan Paul @rohanpaul_ai
China's top labs (Bytedance + Tsinghua + Peking Beihang) introduce a new collaborative learning method where diverse AI agents improve by sharing their experiences.

The big deal is that this research creates a way for different AI agents to help each other learn during training without needing to be physically linked or coordinated during their actual work. This breaks the pattern of agents wasting time on repetitive mistakes by allowing them to pool lessons learned from different scenarios, which ultimately makes every agent in the system much smarter than if it had just practiced on its own.

Traditional AI agents usually learn in isolation, which wastes valuable training time and fails to leverage collective knowledge. The researchers propose a new approach where different types of agents trade their training data to help each other grow. Unlike older methods that only allow one-way teaching, this system lets all agents learn from each other simultaneously and bidirectionally. They created an algorithm called HACPO that manages how these diverse agents share data while keeping their individual skills sharp. This method solves issues caused by different agent skill levels and helps them work better even when acting alone later.

Paper Link: arxiv.org/abs/2603.02604
Paper Title: "Heterogeneous Agent Collaborative Reinforcement Learning"
Rohan Paul tweet media
11 replies · 49 reposts · 242 likes · 13.4K views
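The bidirectional, serverless sharing idea can be pictured with a toy sketch. This is not the paper's HACPO algorithm, just an illustration with hypothetical names: each agent keeps its own replay buffer, pushes samples directly to its peers, and trains on the mixed pool, so experience flows in both directions with no central server.

```python
import random

random.seed(0)

class Agent:
    def __init__(self, name):
        self.name = name
        self.buffer = []  # experience the agent collected itself
        self.inbox = []   # experience donated by peers

    def collect(self, n):
        # Stand-in for environment interaction: record (source, reward) pairs.
        self.buffer.extend((self.name, random.random()) for _ in range(n))

    def share(self, peers, k):
        # Push a random sample of own experience to every peer. Because every
        # agent both sends and receives, the exchange is bidirectional.
        for peer in peers:
            if peer is not self:
                peer.inbox.extend(random.sample(self.buffer, k))

    def training_batch(self, k):
        # Mix own and shared experience when updating the policy.
        pool = self.buffer + self.inbox
        return random.sample(pool, min(k, len(pool)))

agents = [Agent("a"), Agent("b"), Agent("c")]
for ag in agents:
    ag.collect(10)
for ag in agents:
    ag.share(agents, k=3)

# Agent "a" now trains on a pool that may contain peers' experience too.
batch = agents[0].training_batch(8)
print({name for name, _ in batch})
```

The real method additionally has to reconcile heterogeneous agents of different skill levels, which this sketch ignores entirely.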
o-mega.ai @o_mega___
@askalphaxiv $1B AMI Labs bet says the world model race is on.
0 replies · 0 reposts · 0 likes · 678 views
alphaXiv @askalphaxiv
Yann LeCun and his team can't stop cooking: "LeWorldModel: Stable End-to-End Joint-Embedding Predictive Architecture from Pixels". One of the biggest bottlenecks of JEPAs is that they are hard to train, and this new research changes that. They propose LeWorldModel, which shows that a small model can learn a usable world model directly from raw pixels, end-to-end. At just 15M parameters, it trains without heuristics or anti-collapse hacks while staying competitive and planning up to 48x faster, making JEPA-based modeling much more accessible, cheaper, and more stable.
alphaXiv tweet media
32 replies · 205 reposts · 1.6K likes · 113.8K views
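For intuition, the core joint-embedding predictive objective (predict the target's embedding, never its pixels) can be sketched in a few lines. This is a toy illustration with made-up shapes and a trivial encoder, not LeWorldModel itself:

```python
import math
import random

random.seed(0)
D, H = 16, 8  # hypothetical input and embedding sizes

def rand_matrix(rows, cols, scale=0.1):
    return [[random.gauss(0, scale) for _ in range(cols)] for _ in range(rows)]

def encode(x, W):
    # Tiny one-layer encoder with a tanh nonlinearity.
    return [math.tanh(sum(x[i] * W[i][j] for i in range(len(x))))
            for j in range(len(W[0]))]

W_context = rand_matrix(D, H)             # online encoder
W_target = [row[:] for row in W_context]  # target encoder (an EMA copy in practice)

x_context = [random.gauss(0, 1) for _ in range(D)]  # e.g. visible patches
x_target = [random.gauss(0, 1) for _ in range(D)]   # e.g. masked patches

# JEPA loss: compare the context's embedding (a predictor head is omitted
# here for brevity) with the target's embedding, in embedding space.
z_pred = encode(x_context, W_context)
z_tgt = encode(x_target, W_target)  # no gradients flow through this in practice
loss = sum((p - t) ** 2 for p, t in zip(z_pred, z_tgt)) / H
print(loss >= 0.0)
```

The instability the tweet alludes to is representation collapse: without care, minimizing this loss can drive both embeddings toward a constant, which is what the usual "anti-collapse hacks" (stop-gradients, EMA targets, variance regularizers) guard against.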
o-mega.ai @o_mega___
@claudeai 77% of enterprise API calls are already automated. 34% of manual desk work is computer and math tasks. 300k+ businesses are already paying for it.
0 replies · 0 reposts · 0 likes · 5 views
Claude @claudeai
You can now enable Claude to use your computer to complete tasks. It opens your apps, navigates your browser, fills in spreadsheets—anything you'd do sitting at your desk. Research preview in Claude Cowork and Claude Code, macOS only.
4.5K replies · 13.3K reposts · 128.2K likes · 62.9M views
o-mega.ai @o_mega___
@aakashgupta OpenClaw hitting 250k stars in 60 days is a velocity anomaly. But with Claude Code leading at 46% adoption and delivering 12x speedups, the 'reactive' vs 'proactive' debate is settled by throughput, not just stars.
0 replies · 0 reposts · 0 likes · 125 views
Aakash Gupta @aakashgupta
The comparison everyone keeps getting wrong: OpenClaw vs Claude Code vs Cowork.

Claude is reactive. You ask, it answers. Claude Code is reactive with execution privileges. You point it at a repo, it writes code. Cowork is reactive with broader access. You give it skills, point it to files, tell it what to do.

OpenClaw is a daemon. D-A-E-M-O-N. A process that runs continuously on your machine, persists memory across sessions, and acts on inferred intent without being prompted. That word "inferred" is where the conversation splits.

Naman configured his bot to monitor Slack channels and post standup summaries at 9am. Standard cron job. Then he asked it: "what here needs my immediate attention?" The bot didn't just summarize. It prioritized based on what it knew about his role, his projects, and his deadlines. It addressed him as "you" instead of his name because it understood the difference between Naman-the-user and Naman-the-subject.

In the bug routing demo, he gave it a customer CSV and told it to triage incoming bug reports differently based on whether the reporter was enterprise or free tier. The bot checked the CSV, identified Sarah Chen as enterprise at Acme Corp, escalated to engineering with full context. Lisa Park, free personal user, got routed to design review as low priority. It pulled her account tier from the file without being told to look there.

That's the gap. Claude Code can do any individual task better. OpenClaw does tasks you forgot to assign.

The tradeoff is cost. Naman runs Gemini because a single Claude prompt can cost $20 in API credits. Qwen 3.5 at 1/10th the price means you can leave five agents running 24/7 for what one Anthropic session costs in an afternoon.

The PM unlock here: you stop being the person who answers questions and start being the person who configures the system that answers questions. Engineers and designers talk to your knowledge bot. The bot talks to your docs. You review the output. That ratio shift, PM to engineer, is going from 1:8 to 1:15 at most companies. This is how you survive it.
Aakash Gupta @aakashgupta

You need to have started using OpenClaw yesterday. Here's the web's easiest setup guide + 5 killer use cases:
38:06 - 1. Live knowledge bot
47:47 - 2. Automated standups
54:46 - 3. Push-based comp intel
1:13:26 - 4. VOC reporting
1:24:30 - 5. Auto bug routing

27 replies · 20 reposts · 245 likes · 37.3K views
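The tier-based bug routing in the demo above reduces to a lookup-then-branch. A hypothetical sketch of that logic (`route_bug` and the CSV columns are invented for illustration; this is not OpenClaw's actual interface):

```python
import csv
import io

# Toy customer file mirroring the demo: one enterprise user, one free user.
CUSTOMERS_CSV = """name,company,tier
Sarah Chen,Acme Corp,enterprise
Lisa Park,,free
"""

def route_bug(reporter: str, customers: dict) -> tuple:
    """Return (destination, priority) for a bug based on the reporter's tier."""
    tier = customers.get(reporter, "free")  # unknown reporters default to free
    if tier == "enterprise":
        return ("engineering", "high")   # escalate with full context
    return ("design-review", "low")      # batch for later triage

# Build the name -> tier lookup the agent inferred it should consult.
customers = {row["name"]: row["tier"]
             for row in csv.DictReader(io.StringIO(CUSTOMERS_CSV))}

print(route_bug("Sarah Chen", customers))  # ('engineering', 'high')
print(route_bug("Lisa Park", customers))   # ('design-review', 'low')
```

The interesting part in the demo is not this branch, which any script can do, but that the agent decided on its own to consult the CSV for the tier; the code above is only the deterministic tail of that behavior.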