
NEAR AI
@near_ai
Confidential, verifiable AI infrastructure for a user-owned AI economy.



Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, and database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than an hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack, it could have gone undetected for days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that have lots and lots of dependencies. The credentials stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
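A first step toward seeing that hidden tree: the standard library can already enumerate what a single install actually pulled in. A minimal sketch (stdlib only, no third-party tooling assumed) that maps every installed package to its direct dependencies, so you can spot surprising transitive pulls like litellm:

```python
import re
from importlib.metadata import distributions

def dependency_map() -> dict[str, list[str]]:
    """Map each installed package to the names of its direct dependencies."""
    deps = {}
    for dist in distributions():
        name = dist.metadata["Name"]
        reqs = dist.requires or []  # requirement strings, e.g. "httpx (>=0.23) ; extra == 'proxy'"
        # A requirement string starts with the package name; strip the
        # version specifiers, extras, and environment markers that follow.
        deps[name] = sorted({re.match(r"[A-Za-z0-9._-]+", r).group(0) for r in reqs})
    return deps

if __name__ == "__main__":
    for pkg, reqs in sorted(dependency_map().items()):
        print(f"{pkg}: {', '.join(reqs) or '(no deps)'}")
```

This only inspects what's already installed; pinning exact versions with hash checking (e.g. pip's `--require-hashes` mode) is what actually stops a freshly poisoned release from being pulled in.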

IronClaw docs page is up: docs.ironclaw.com Run a secure personal assistant locally, or hosted on agent.near.ai


the AI diffusion bottleneck is reliability. not capability. most teams don't have the resources to measure agents. the right way to transition to agents safely is open evals infrastructure. that's what @silverstreamAI @ServiceNowRSRCH @nvidia @IBM @thealliance_ai are doing


Most AI systems are built on one assumption: you trust the provider. @AskVenice and NEAR AI are removing that assumption. Venice users can now run prompts inside a secure enclave, sealed from the cloud provider, the OS, and the infrastructure layer. Privacy you can verify. near.ai/blog/venice-is…
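"Privacy you can verify" conceptually boils down to remote attestation: the enclave reports a measurement (a hash of the code it runs) signed by a key rooted in hardware, and the client checks that measurement against a known-good build before sending a prompt. A toy sketch of that check, with HMAC standing in for the hardware-rooted signature chain (all names here are hypothetical, not Venice/NEAR AI's actual API):

```python
import hashlib
import hmac

def measurement(code: bytes) -> str:
    """Hash of the code the enclave claims to be running."""
    return hashlib.sha256(code).hexdigest()

def verify_quote(reported: str, expected: str, signature: bytes, key: bytes) -> bool:
    """Accept only if the signature is valid AND the measurement matches
    the expected build. In a real TEE the signature chains up to the CPU
    vendor; HMAC here is purely illustrative."""
    mac = hmac.new(key, reported.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(mac, signature) and reported == expected
```

The point of the second check is that a valid signature over the *wrong* code is still a failure: the cloud provider can't silently swap in a different model server without the measurement changing.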

IronClaw Reddit AMA featuring NEAR Co-Founder Illia Polosukhin. IronClaw offers simple setup and built-in security for OpenClaw's personal AI assistant. Join @ilblackdragon today from 9:30AM-12:00PM PST in the r/machinelearning subreddit to learn more.


