GullyCoder
1.3K posts

GullyCoder
@bhamboo
Stumbled into coding to bootstrap my startup, now I’m weaving dreams with code. From operations to innovation, loving the journey! #AccidentalCoder
India · Joined October 2009
560 Following · 173 Followers

16,000+ investors in one searchable database. If you are raising, this is for you:
We hit 16k VCs, angels, and family offices on OpenVC.
All investing. All searchable. All free.
👉 Comment "16k" for free access
Here's what you’ll get:
→ 16,000+ investors worldwide
→ Search by stage, vertical, geography, or check size
→ Find intros through LinkedIn + Gmail connections
→ Submit your deck directly to investors who match your criteria
Like + Comment "16k" and I'll DM you the link.
Make sure you are following me to receive my message.
Retweet this for other founders in your network.
GullyCoder reposted

A few months ago, I set up a small AI hacker team at @Razorpay
2 people. Today, they are 100x builders.
With AI, people aren’t the constraint.
Org structure is.
So now I’m scaling this.
If you’ve spent the last few months deep in
Claude Code / OpenClaw / agents
(or anything similar)
and feel like you’ve seen the future - this is for you.
What you’ll do:
Review workflows.
Rebuild them with AI.
Ship fast.
Perks:
• Unlimited tokens. Any model. Any tool.
• Real problems at massive scale
• No hierarchy. Direct access across the org
No compensation ceiling.
Pay scales with output, not title.
Outperform the org, out-earn it.
No resume.
Send me what you’ve built with AI. (Form below)
Bangalore | Full-time | Builders only.
GullyCoder reposted

- Drafted a blog post
- Used an LLM to meticulously improve the argument over 4 hours.
- Wow, feeling great, it’s so convincing!
- Fun idea: let’s ask it to argue the opposite.
- LLM demolishes the entire argument and convinces me that the opposite is in fact true.
- lol
LLMs may offer an opinion when asked, but they are extremely competent at arguing in almost any direction. This is actually super useful as a tool for forming your own opinions; just make sure to ask in different directions, and be careful with the sycophancy.
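The "ask both directions" habit above can be made mechanical. A minimal sketch (the function name is hypothetical and the actual model call is elided; only the symmetric prompt-building is shown):

```python
def steelman_prompts(claim: str) -> dict:
    """Build a symmetric pair of prompts so the model argues both
    directions of the same claim, countering its sycophancy bias."""
    base = ("Argue {side} the following claim as persuasively as you can. "
            "Steelman the position; do not hedge.\n\nClaim: {claim}")
    return {
        "for": base.format(side="FOR", claim=claim),
        "against": base.format(side="AGAINST", claim=claim),
    }

prompts = steelman_prompts("Monoliths beat microservices for small teams")
# Send each prompt in a separate, fresh conversation so the model
# doesn't anchor on its earlier answer.
```

Running each side in its own conversation matters: in one thread the model tends to stay consistent with whatever it argued first.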
GullyCoder reposted

One of the most important things about this new age is that you have to use tokens aggressively to create something remarkable.
You have to let it rip. If you do, and you have agency and taste, the result will be remarkable.
So token credits for AI are a big part of making startups accessible regardless of where you grew up or whether your family has money.
Y Combinator@ycombinator
Every student accepted into Startup School India now gets $25k+ in AI and cloud credits. Apply, get in, and start building: events.ycombinator.com/yc-sus-india
GullyCoder reposted

Software horror: litellm PyPI supply chain attack.
Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.
LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm.
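The contagion point is just transitive reachability over the dependency graph: if one package is poisoned, every package whose dependency tree reaches it is exposed. A toy sketch with a hand-built graph (package relationships illustrative, not real metadata):

```python
# Minimal dependency graph: app -> dspy -> litellm, mirroring the
# example in the tweet (edges are illustrative).
deps = {
    "myapp": ["dspy", "requests"],
    "dspy": ["litellm"],
    "litellm": [],
    "requests": [],
}

def exposed(pkg: str, poisoned: str, deps: dict) -> bool:
    """True if pkg transitively depends on the poisoned package."""
    stack, seen = [pkg], set()
    while stack:
        cur = stack.pop()
        if cur == poisoned:
            return True
        if cur in seen:
            continue
        seen.add(cur)
        stack.extend(deps.get(cur, []))
    return False

print(exposed("myapp", "litellm", deps))   # True: myapp -> dspy -> litellm
print(exposed("requests", "litellm", deps))  # False: no path
```

Real dependency trees are far deeper than this, which is exactly why a single poisoned leaf can reach so many installs.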
As far as I can tell, the poisoned version was up for less than ~1 hour. The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe-coded this attack, it could have gone undetected for days or weeks.
Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that get stolen in each attack can then be used to take over more accounts and compromise more packages.
Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk@hnykda
LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and self-replicate. Link below.
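For context on the `litellm_init.pth` detail in the quoted report: Python's `site` module executes any line in a `.pth` file that begins with `import` at interpreter startup, which is the standard mechanism such payloads abuse to run code on every launch. A harmless demonstration, using `site.addsitedir` on a temp dir instead of a real site-packages install:

```python
import os
import site
import tempfile

# Write a .pth file whose line starts with "import": the site module
# exec()s such lines when it processes the directory.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_RAN'] = '1'\n")

# At real interpreter startup this happens automatically for
# site-packages; addsitedir triggers the same processing on demand.
site.addsitedir(d)

print(os.environ.get("PTH_RAN"))  # -> 1
```

The payload here just sets an environment variable; a malicious `.pth` would instead decode and run an exfiltration routine on every Python start, with no import of the package required.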

We're entering an era where:
— Junior devs mass accept AI suggestions without questioning
— Senior devs lose confidence in their own judgment because "the AI said so"
— Working production code gets rewritten to fix imaginary bugs
The most important developer skill in 2026 isn't prompting.
It's knowing when to say "no, you're wrong, prove it."
GullyCoder reposted



