Shiva

8.2K posts

@_rshiva

Passionate tech enthusiast & polyglot developer. Co-founder of Nooli, Hashify ReaderMonk & Mumbai.rb. Tweets are my own. #buildinpublic

India · Joined April 2009
2.1K Following · 655 Followers
Shiva retweeted
Aditya Bandi @bandiaditya
I’m thrilled to announce we’ve raised $44M to build a new home for product design. Meet @noondesign.

No workflow is more broken and fragmented in 2026 than the product designers’. The very same people who care most about building software don’t have software purpose-built for them. @kushagrasinha7 and I have lived this problem first hand as designers ourselves.

That’s why we built Noon: the first product design tool that works entirely on your product code, so you can design not only how a product looks, but also how it works. With AI at its core that works in seconds, not minutes. For the first time, you can create, iterate, build, test and ship, all in one canvas. No translations or roundtrips to the codebase and back.

Comment “Get Noon” and we’ll get you on the list for early access.
740 · 211 · 1.5K · 660.9K
Shiva retweeted
Andrej Karpathy @karpathy
New supply chain attack, this time for npm’s axios, the most popular HTTP client library with 300M weekly downloads. Scanning my system, I found axios imported via googleworkspace/cli from a few days ago, when I was experimenting with a gmail/gcal CLI. The installed version (luckily) resolved to an unaffected 1.13.5, but the project dependency is not pinned, meaning that if I had done this earlier today the code would have resolved to latest and I’d be pwned.

It’s possible to personally defend against these to some extent with local settings, e.g. release-age constraints, containers, etc., but I think ultimately the defaults of package management projects (pip, npm, etc.) have to change so that a single infection (usually, luckily, fairly temporary in nature due to security scanning) does not spread through users at random and at scale via unpinned dependencies.

More comprehensive article: stepsecurity.io/blog/axios-com…
Feross @feross

🚨 CRITICAL: Active supply chain attack on axios -- one of npm's most depended-on packages. The latest axios@1.14.1 now pulls in plain-crypto-js@4.2.1, a package that did not exist before today. This is a live compromise. This is textbook supply chain installer malware.

axios has 100M+ weekly downloads. Every npm install pulling the latest version is potentially compromised right now. Socket AI analysis confirms this is malware. plain-crypto-js is an obfuscated dropper/loader that:

• Deobfuscates embedded payloads and operational strings at runtime
• Dynamically loads fs, os, and execSync to evade static analysis
• Executes decoded shell commands
• Stages and copies payload files into OS temp and Windows ProgramData directories
• Deletes and renames artifacts post-execution to destroy forensic evidence

If you use axios, pin your version immediately and audit your lockfiles. Do not upgrade.

558 · 1.1K · 10.5K · 1.5M
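Feross’s advice above (pin your version, audit your lockfiles) can be partially automated. A minimal sketch, assuming an npm lockfile in the v2/v3 `packages` layout; the known-bad set below contains only the releases named in these tweets (axios@1.14.1, plain-crypto-js@4.2.1) and is in no way an exhaustive indicator list:

```python
# Releases named in the tweets above -- illustrative, not an exhaustive IOC list.
COMPROMISED = {"axios": {"1.14.1"}, "plain-crypto-js": {"4.2.1"}}

def flag_lockfile(lock: dict) -> list:
    """Return warnings for suspect entries in an npm lockfile dict
    (v2/v3 'packages' layout assumed)."""
    warnings = []
    for path, meta in lock.get("packages", {}).items():
        # "node_modules/axios/node_modules/plain-crypto-js" -> "plain-crypto-js"
        name = path.rsplit("node_modules/", 1)[-1]
        if meta.get("version") in COMPROMISED.get(name, set()):
            warnings.append(f"{name}@{meta['version']} at '{path}' matches a known-bad release")
    return warnings

# Example lockfile fragment: axios pinned to the unaffected 1.13.5,
# but a nested plain-crypto-js@4.2.1 snuck in.
lock = {
    "packages": {
        "node_modules/axios": {"version": "1.13.5"},
        "node_modules/axios/node_modules/plain-crypto-js": {"version": "4.2.1"},
    }
}
print(flag_lockfile(lock))
```

Real tooling (npm audit, Socket, StepSecurity) should be preferred; this only shows how little code an exact-version check against a lockfile needs.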
Shiva retweeted
Gergely Orosz @GergelyOrosz
If you use GitHub (especially if you pay for it!) consider doing this *immediately*: Settings -> Privacy -> disallow GitHub from training their models on your code. GitHub opted *everyone* into training, no matter if you pay for the service (like I do). WTH. github.com/settings/copil…
[image attached]
394 · 927 · 5.2K · 574K
Shiva retweeted
Andrej Karpathy @karpathy
Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm.

Afaict the poisoned version was up for less than ~1 hour. The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown so averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk @hnykda

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and to self-replicate. Link below.

1.4K · 5.4K · 28K · 66.4M
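Both threads come down to the same defence: know exactly which release resolved, and refuse known-bad ones. A minimal sketch; the only entry in the bad-release set is litellm 1.82.8, the poisoned version named in the quoted tweet, and the helper names are illustrative, not any real tool's API:

```python
# Bad-release set taken from the quoted tweet (litellm 1.82.8); illustrative only.
BAD_RELEASES = {("litellm", (1, 82, 8))}

def parse_version(v: str) -> tuple:
    """Turn '1.82.8' into (1, 82, 8) for an exact comparison.
    Deliberately naive: no PEP 440 pre/post/dev suffix handling."""
    return tuple(int(part) for part in v.split("."))

def is_poisoned(name: str, version: str) -> bool:
    """True only for an exact known-bad (package, version) pair."""
    return (name, parse_version(version)) in BAD_RELEASES

print(is_poisoned("litellm", "1.82.8"))  # True
print(is_poisoned("litellm", "1.82.7"))  # False
```

In practice you would feed this from `pip freeze` output or a lock/constraints file rather than hard-coded strings, and pin with `==` plus hashes so a new upload cannot silently resolve.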
Shiva retweeted
Google @Google
Today @GoogleMaps is getting its biggest upgrade in over a decade. By combining our Gemini models with a deep understanding of the world, Maps now unlocks entirely new possibilities for how you navigate and explore. Here’s what you need to know 🧵
1.1K · 4K · 44.5K · 28.8M
Shiva retweeted
Joe Masilotti @joemasilotti
I'm close to launching something new that I'm VERY excited about. But I need a few more folks to kick the tires. If you could help out I'd greatly appreciate it! DM me for a 3 month promo code in exchange for feedback and a testimonial.
Joe Masilotti @joemasilotti

What if Hotwire Native apps didn't require you to open Xcode or Android Studio… at all? What if a Rails developer could write 100% Ruby, HTML, CSS, and JavaScript to build their mobile apps? What if they could submit these apps to the app stores… automatically? 🤔

1 · 5 · 26 · 2.8K
Shiva retweeted
Mohit Chauhan @mohitlaws
This fake medal will cost India lakhs of crores, an energy crisis & crores of jobs. 😭😭
[image attached]
295 · 3.4K · 17.5K · 153.7K
Shiva retweeted
Nav Toor @heynavtoor
🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal: every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing: you're right, they're wrong. Even when the opposite is true.
[image attached]
1.5K · 16.5K · 48.7K · 9.9M
Shiva retweeted
Alexey Grigorev @Al_Grigor
Claude Code wiped our production database with a Terraform command. It took down the DataTalksClub course platform and 2.5 years of submissions: homework, projects, and leaderboards. Automated snapshots were gone too. In the newsletter, I wrote the full timeline + what I changed so this doesn't happen again. If you use Terraform (or let agents touch infra), this is a good story for you to read. alexeyondata.substack.com/p/how-i-droppe…
[image attached]
1.5K · 1.6K · 10.9K · 4.2M
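For the "let agents touch infra" point above, one cheap mitigation is a policy gate in front of the agent's shell: destructive terraform subcommands require a human in the loop. A hedged sketch; the subcommand names are real terraform verbs, but the policy itself and the function name are illustrative assumptions, not a recommendation from the linked post:

```python
# Illustrative allow/deny policy for agent-issued terraform commands.
# Which verbs count as destructive is an assumption; tune for your setup.
DESTRUCTIVE = {"destroy", "apply", "taint", "import", "state"}

def requires_human_approval(argv: list) -> bool:
    """True if this terraform invocation can mutate or delete real infrastructure,
    so an automation wrapper should pause and ask a human first."""
    if not argv or argv[0] != "terraform":
        return False
    # First non-flag argument after "terraform" is the subcommand.
    sub = next((a for a in argv[1:] if not a.startswith("-")), None)
    return sub in DESTRUCTIVE

print(requires_human_approval(["terraform", "plan"]))                      # False: read-only
print(requires_human_approval(["terraform", "destroy", "-auto-approve"]))  # True: stop and ask
```

A gate like this is no substitute for the usual safeguards (separate credentials for agents, `prevent_destroy` lifecycle rules, off-account backups), but it turns "the agent ran destroy" into "the agent asked to run destroy".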
Shiva retweeted
Grady Booch @Grady_Booch
“We have principles but they're negotiable.” — @sama
27 · 83 · 1.4K · 157.6K
Shiva retweeted
SemiAnalysis @SemiAnalysis_
Dinosaurs Died So You Could Render Dinosaurs
8 · 8 · 179 · 36.7K
Shiva retweeted
anand iyer @ai
Karpathy's llama2.c showed you could train a real transformer in pure C with no frameworks. A solo researcher (and Claude Code) just took that same model (Stories 110M, Llama2 architecture, trained on real text) and ran it on Apple's M4 Neural Engine (ANE) for less than a watt. He reverse-engineered the undocumented private APIs, bypassed CoreML, and found Apple's abstraction layer was hiding 2-4x of the chip's real throughput. The ANE delivers 6.6 TFLOPS per watt, roughly 80x more efficient than an Nvidia A100. The real implication here is inference: there are hundreds of millions of Apple devices with one of the most efficient AI accelerators ever shipped in consumer hardware, and Apple's own software stack is the thing standing between developers and its actual performance. h/t @maderix
[image attached]
43 · 130 · 1.5K · 302.8K
Shiva @_rshiva
@sama Everyone should cancel their subscription and stop using ChatGPT.
0 · 0 · 0 · 18
Sam Altman @sama
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety; we will deploy on cloud networks only.

We are asking the DoW to offer these same terms to all AI companies, terms which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.

We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
3.5K · 1K · 9.2K · 8.5M