Sunny

9.7K posts


@sunnypause

I build the employees that never sleep, complain, or quit. AI agents for real businesses.

dark side of the moon · Joined July 2011
582 Following · 297 Followers
Sunny reposted
Harris
Harris@HarrisAuthority·
🚨 THE DOMINOES ARE FALLING. IN DAYS, NOT WEEKS.

Here is the global oil crisis — country by country — as of today:

🇱🇰 Sri Lanka — RATIONING. 4-day work week. Schools closed.
🇵🇰 Pakistan — CRISIS. Overnight price surge. Long queues. 4-day work week.
🇮🇳 India — 9 DAYS of reserves left. Emergency suppliers being hunted.
🇰🇷 South Korea — 50 DAYS left. Clock ticking.
🇯🇵 Japan — LIED. Said 254 days. Actual usable reserves: 95.
🇬🇧 UK — Shell CEO warning: SHORTAGE STARTS IN APRIL.
🇩🇪 Germany — Gas up 30%. EU emergency plan launched.
🇫🇷 France — Paying 30% more at pumps than 8 weeks ago.
🇿🇦 South Africa — Government says "stable." Citizens photograph empty pumps.
🇹🇷 Turkey — Stocks crashed. Inflation exploding. Currency under pressure.
🇧🇷 Brazil — Watching nervously. Own oil helps but supply chains hurting.
🇦🇺 Australia — Tanker delays. Import dependent. Hoping IEA coordination holds.
🇺🇸 USA — Gas taxes suspended in states. SPR drawn down. Iran sanctions quietly paused.
🇨🇳 China — 1.4 BILLION barrels stockpiled. Banned exports. Still getting Iranian oil.

This is DAY 26 of the Hormuz blockade. 20% of global oil — GONE. 8 million barrels per day — GONE. OPEC's response: +206,000 barrels. That is 2% of the hole.

Here is what nobody is telling you: They're showing you "reserves are sufficient." They're NOT showing you Japan overstated reserves by 3x, South Africa has empty pumps, and India has 9 days.

The real energy crisis hasn't even started yet. RT so this doesn't get buried by the algorithm.
Sunny reposted
Dicky
Dicky@pandiikar·
he is speed talking now 🤣
Sunny reposted
Interesting things
Interesting things@awkwardgoogle·
A young girl practicing martial arts while waiting for the bus. Sometimes, you need less than you think to stay active.
Sunny reposted
cumi
cumi@sambellcumi·
😵‍💫
Sunny reposted
Anthropic
Anthropic@AnthropicAI·
New on the Engineering Blog: How we designed Claude Code auto mode. Many Claude Code users let Claude work without permission prompts. Auto mode is a safer middle ground: we built and tested classifiers that make approval decisions instead. Read more: anthropic.com/engineering/cl…
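The auto-mode idea in the Anthropic post above — a classifier approving tool calls instead of a permission prompt for each one — can be sketched as a simple decision gate. Everything below is illustrative: the function names, patterns, and three-way decision are assumptions for the sketch, not Anthropic's actual implementation (which uses learned classifiers, not regexes).

```python
# Hypothetical sketch of an auto-mode approval gate: score each
# proposed shell command and only escalate to the user when the
# classifier is unsure. Names and patterns are illustrative only.
import re

SAFE_PATTERNS = [r"^ls\b", r"^cat\b", r"^git status\b", r"^grep\b"]
RISKY_PATTERNS = [r"\brm\s+-rf\b", r"\bcurl\b.*\|\s*sh", r"\bsudo\b"]

def classify_command(cmd: str) -> str:
    """Toy stand-in for a learned approval classifier."""
    if any(re.search(p, cmd) for p in RISKY_PATTERNS):
        return "deny"
    if any(re.search(p, cmd) for p in SAFE_PATTERNS):
        return "approve"
    return "escalate"  # fall back to a human permission prompt

def run_in_auto_mode(cmd: str) -> str:
    decision = classify_command(cmd)
    if decision == "approve":
        return f"running: {cmd}"
    if decision == "deny":
        return f"blocked: {cmd}"
    return f"asking user about: {cmd}"
```

The middle ground is the `escalate` branch: most calls are auto-approved or auto-blocked, and the prompt only appears for the ambiguous remainder.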
Sunny reposted
😺かずみん😺
😺かずみん😺@Kaz1717999Q·
Footage leaked from Beijing, China 🇨🇳.
@levelsio
@levelsio@levelsio·
Okay let's see who can reply to this
BridgeMind
BridgeMind@bridgemindai·
I just hit my Claude Code 5 hour limit insanely fast. What is happening?
Sunny
Sunny@sunnypause·
@sudoingX Don't think it's better than 27B
Sudo su
Sudo su@sudoingX·
it's fast and smart. let me dive deeper and write for you all.
Sudo su
Sudo su@sudoingX·
Woooo
Sunny reposted
Sarcastic Sharma
Sarcastic Sharma@sarkasticsharma·
Hey @grok bring this picture back to life
Sunny reposted
Jacob in Cambodia 🇺🇸 🇰🇭
Casual Bugatti Chiron sighting on Cambodian TikTok. Starting price around $3 million. Cambodia’s GDP per capita is about $2,800. Via @monyy88887 on TikTok
Sunny reposted
Luis Vercetti
Luis Vercetti@97Vercetti·
“Are you free after work?” me after work:
Sudo su
Sudo su@sudoingX·
first impressions of qwen 3.5 27B dense on a single RTX 3090.

35 tok/s. from 4K all the way to 300K+ context. no speed drop. hermes 4.3 started at 35 and degraded to 15 as context filled. qwen dense holds. MoE held 112 flat. 3x faster but only 3B of 35B active per token. architecture tradeoff.

Q4_K_M on 16.7GB. native context 262K. pushed past training limit to 376K before VRAM ceiling on 24GB. tried q8 KV cache at 262K, speed collapsed to 11 tok/s. q4_0 KV is the sweet spot. flash attention mandatory.

built in reasoning mode. the model thinks step by step before it answers. full chain of thought surviving Q4 quant. 1,799+ token thinking chains with self correction loops. on a single consumer GPU.

gave it one prompt: "build a realtime particle galaxy simulation in one HTML file." 3,340 tokens. 95 seconds. one shot. ran on first load. full reasoning and coding in the video below.

optimal config if you want to skip the hours of testing:

llama-server -ngl 99 -c 262144 -fa on --cache-type-k q4_0 --cache-type-v q4_0

this is just the warmup. octopus invaders is next: 10 files, 3,400+ lines, zero steering. the prompt hermes quit at 22%. already more impressed than expected. full results coming soon.
Sudo su@sudoingX

last time this qwen 3.5 MoE one shotted a full space shooter game. 3,483 lines across 10 files. ran on first load. zero steering. 112 tok/s on a single 3090.

then i ran the same prompt on hermes 4.3 36B dense. similar size model, completely different architecture. it wrote 1,249 lines, declared done with empty files, needed three steering interventions, and the game didn't work. used 22% of available context and quit.

nine posts and two GPU configs later the conclusion was clear: the bottleneck wasn't hardware. but that leaves a question. was that a dense architecture problem or a hermes 4.3 problem?

qwen is the only family that ships both. 35B MoE with 3B active per token. and a 27B dense with all 27B active per token. same team. same training pipeline. different architecture.

downloading qwen 3.5 27B dense now. Q4_K_M. same quant. same single RTX 3090. same octopus invaders prompt.

if it finishes the game clean, hermes was the problem. if it fails the same way, dense architecture doesn't have the endurance for autonomous coding on consumer hardware regardless of who builds it. the tiebreaker.

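The q8-vs-q4_0 KV cache tradeoff in the thread above comes down to cache bytes per context token. A back-of-envelope sizing sketch follows; the layer count, KV-head count, and head dimension below are placeholder guesses for illustration, not Qwen's published config, so treat the absolute numbers as rough and the ratios as the point.

```python
# Back-of-envelope KV cache sizing for long-context llama.cpp runs.
# Approximate bytes per element for common cache types (the block
# formats q8_0 and q4_0 store per-block scales, hence the overhead:
# 34 bytes per 32 elements and 18 bytes per 32 elements respectively).
BYTES_PER_ELEM = {"f16": 2.0, "q8_0": 34 / 32, "q4_0": 18 / 32}

def kv_cache_gib(n_ctx, n_layers, n_kv_heads, head_dim, cache_type):
    """Total KV cache size in GiB: K and V tensors across all layers."""
    elems = 2 * n_layers * n_kv_heads * head_dim * n_ctx  # 2 = K + V
    return elems * BYTES_PER_ELEM[cache_type] / 2**30

# Hypothetical GQA config: 48 layers, 8 KV heads of dim 128, 262K ctx.
for ct in ("f16", "q8_0", "q4_0"):
    print(ct, round(kv_cache_gib(262_144, 48, 8, 128, ct), 1))
```

Under these assumed dimensions the cache shrinks from 48 GiB (f16) to 13.5 GiB (q4_0), a ~3.6x saving — which is the general mechanism that lets a 262K-token cache sit next to ~16.7 GB of Q4_K_M weights on a 24 GB card, whatever Qwen's exact dimensions are.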
Sunny reposted
Nature is Amazing ☘️
Nature is Amazing ☘️@AMAZlNGNATURE·
Animal voice prank compilation 😂😂
Sunny reposted
Anthropic
Anthropic@AnthropicAI·
We’re launching with two new posts. Can AI do theoretical physics? Harvard physicist Matthew Schwartz led Claude Opus 4.5 through a graduate-level calculation. AI can’t yet do original work autonomously, but it can vastly accelerate it. Read more: anthropic.com/research/vibe-…
Sunny reposted
Andrej Karpathy
Andrej Karpathy@karpathy·
Software horror: litellm PyPI supply chain attack.

Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've become increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk@hnykda

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server + self-replicate. link below

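The litellm_init.pth detail mentioned above is the crux of the payload: Python's site module executes any line beginning with `import` that it finds in a site-packages `.pth` file, at every interpreter startup, whether or not you ever import the package itself. A minimal audit sketch — illustrative only, not a substitute for real supply-chain tooling or for pinning hashes — lists the executable `.pth` lines currently on your path so unexpected ones stand out:

```python
# List every executable line in site-packages .pth files. At startup,
# site.py runs any .pth line starting with "import", which is exactly
# how a malicious package gets arbitrary code on every launch.
import site
import pathlib

def executable_pth_lines():
    """Return (path, line) pairs for .pth lines that Python would execute."""
    hits = []
    dirs = site.getsitepackages() + [site.getusersitepackages()]
    for sp in dirs:
        d = pathlib.Path(sp)
        if not d.is_dir():
            continue  # user site dir may not exist
        for pth in d.glob("*.pth"):
            for line in pth.read_text(errors="ignore").splitlines():
                if line.startswith("import "):
                    hits.append((str(pth), line))
    return hits

if __name__ == "__main__":
    for path, line in executable_pth_lines():
        print(path, "->", line[:80])
```

Some executable `.pth` lines are legitimate (editable installs and coverage tools use the mechanism), so the output needs human review; the point is that a line decoding base64 and phoning home would be plainly visible here.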