Up
306 posts
@overcccz
Joined November 2025
26 Following · 5 Followers
Up retweeted
Sudo su (@sudoingX)
most of you don't know how big a deal it is that a single rtx 3090 from 2020 runs qwen 27b dense q4 with 256k context at 40 tok/s, full agentic loops on hermes agent, zero tool call failures. the more i build on this card the more i think nobody really knows how untapped it actually is. the silicon was always capable, the models finally caught up.
45 replies · 30 reposts · 569 likes · 239.2K views
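A back-of-envelope check on why the claim above is plausible: a quantized 27B model's weights alone leave single-digit gigabytes of headroom on a 24 GB card. The 4.5 bits/weight figure is an assumed average for Q4-style quants (quantization scales included), not a measured number:

```python
# Rough VRAM budget for a 27B dense model at ~Q4 on a 24 GB RTX 3090.
# bits_per_weight is an illustrative assumption, not a measured value.
params = 27e9
bits_per_weight = 4.5                      # Q4 quants average ~4.5 bits incl. scales
weights_gb = params * bits_per_weight / 8 / 1e9

vram_gb = 24
headroom_gb = vram_gb - weights_gb         # what's left for KV cache + activations

print(f"weights: {weights_gb:.1f} GB")             # ~15.2 GB
print(f"headroom for KV + activations: {headroom_gb:.1f} GB")  # ~8.8 GB
```

Fitting a 256k context into the remaining ~9 GB additionally assumes aggressive KV-cache quantization and grouped-query attention; exact numbers depend on the runtime.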
Aakash Roy (@Aakashroy32)
@Samaytwt Claude Opus 4.7 has become unusable now, eating so many credits on simple tasks. Had to buy a Codex subscription yesterday.
2 replies · 0 reposts · 18 likes · 4.7K views
Samay (@Samaytwt)
Be honest. As a developer, which one is worth it for coding?
- GPT-5.5
- Claude Opus 4.7
[image]
353 replies · 20 reposts · 720 likes · 148.5K views
Up retweeted
CJ Zafir (@cjzafir)
Codex 5.5 as orchestrator and DeepSeek v4 as executor is a steal. I burnt 100M tokens in 36 hours on DeepSeek v4. Beats Opus 4.6 without breaking a sweat.
[two images]
95 replies · 69 reposts · 1.9K likes · 217.9K views
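The orchestrator/executor split described above can be sketched in a few lines: one strong model breaks the task into steps, a cheaper model executes each step. `call_orchestrator` and `call_executor` are hypothetical stand-ins for real API calls and are stubbed here:

```python
# Orchestrator/executor pattern: a planner model produces a step list,
# a cheap executor model runs each step. Both calls are stubbed; a real
# version would hit the respective provider APIs.

def call_orchestrator(task):
    # Stub: a real implementation would ask the planner model for steps.
    return [f"step {i} of {task!r}" for i in range(1, 4)]

def call_executor(step):
    # Stub: a real implementation would hand each step to the executor.
    return f"done: {step}"

def run(task):
    plan = call_orchestrator(task)
    return [call_executor(step) for step in plan]

results = run("refactor the billing module")
print(len(results))  # 3
```

The economics only work if the executor is much cheaper per token than the planner, since the executor burns the bulk of the tokens.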
Up retweeted
Alex Prompter (@alex_prompter)
Both OpenAI and Anthropic just released official prompting guides. Both say the same thing: your old prompts don't work anymore. But for opposite reasons.

Claude Opus 4.7 stopped guessing what you meant. It does exactly what you type, nothing more, nothing less. Vague instructions that worked on 4.6 now produce narrow, literal, sometimes worse results. Not because the model got dumber, but because it stopped compensating for sloppy thinking.

GPT-5.5 went the other direction. OpenAI's guide literally says: "Don't carry over instructions from older prompt stacks." Legacy prompts over-specify the process because older models needed hand-holding. GPT-5.5 doesn't. That extra detail now creates noise and produces mechanical output.

Claude got more literal. GPT got more autonomous. Both now punish the same thing: prompts written without clear thinking behind them.

One developer on Reddit captured it perfectly after analyzing hundreds of community posts: the complaints tracked almost perfectly with prompt specificity. Precise prompts got better results on 4.7; vague prompts got worse. The model didn't regress. The prompts did.

OpenAI's new framework is "outcome-first prompting": describe what good looks like, define success criteria, set constraints, then get out of the way. The model picks the path. Anthropic's framework is the inverse: be surgically specific about what you want, because the model won't fill in your blanks anymore.

Two different architectures. Two different philosophies. One identical conclusion: the person writing the prompt is now the bottleneck, not the model.

Boris Cherny, the engineer who built Claude Code, posted on launch day that even he needed a few days to adjust. That post got 936 likes. Meanwhile, Anthropic increased rate limits for all subscribers because the new tokenizer uses up to 35% more tokens on the same input. The model is more expensive to run lazily, cheaper to run precisely.

The models are converging in capability. The gap between good and bad output is no longer about which model you pick. It's about the 2 minutes of structured thinking you do before you type anything. That thinking system is the skill. The prompt is just what it produces.
[two images]
118 replies · 272 reposts · 2.3K likes · 332.4K views
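The process-heavy versus outcome-first contrast described above can be made concrete with two prompt strings. The prompt text here is invented for illustration; the guides' actual wording and examples may differ:

```python
# Two styles of prompt for the same task. The legacy style micromanages
# the *process*; the outcome-first style defines what "good" looks like
# plus the constraints, and lets the model pick the path.

legacy_prompt = (
    "Step 1: read the file. Step 2: find every function longer than "
    "40 lines. Step 3: for each one, write a refactoring plan. "
    "Step 4: apply each plan one at a time..."
)  # process-heavy: this is the style the post says now adds noise

outcome_prompt = (
    "Refactor billing.py so that no function exceeds 40 lines, all "
    "existing tests still pass, and public signatures are unchanged."
)  # outcome-first: success criteria and constraints, no prescribed path

print("Step 1" in legacy_prompt)            # True
print("tests still pass" in outcome_prompt)  # True
```

The same two strings also illustrate Anthropic's advice in reverse: the outcome prompt is "surgically specific" about the end state even though it says nothing about the procedure.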
Up retweeted
Charlie Hills (@charliejhills)
Claude will gaslight you, until you install this skill. It's called The LLM Council.

You ask a question. 5 advisors attack it from different angles. Then they peer-review each other before giving you the verdict.

How it works:
1. You ask a real decision question.
2. 5 advisors attack it from different angles.
3. They grade each other's work anonymously.
4. The Chairman synthesises one verdict and the next step.

Install in 4 steps:
1. Download the skill: drive.google.com/file/d/16N7dwX…
2. Open Customise skills in Claude
3. Upload the SKILL.md file
4. Type /llm-council

One Claude tells you you're right. Five Claudes show you where you're wrong.

Get more free AI guides here: charliehills.substack.com

Repost ♻️ to help someone in your network.

P.S. Credit to Ole Lehmann for building it.
[image]
116 replies · 397 reposts · 3K likes · 324.3K views
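The council flow in the steps above reduces to a simple loop: fan out to advisors, have them grade each other, synthesise. This is a minimal sketch of that pattern, with advisors stubbed as plain functions; a real skill would call an LLM for every role:

```python
# "LLM Council" pattern: N advisors answer from different angles,
# peer-grade each other's answers, and a chairman picks the strongest
# one to build the verdict. All model calls are stubbed.
import statistics

def advisor(angle, question):
    # Stub: a real advisor would be an LLM call primed with its angle.
    return f"[{angle}] answer to: {question}"

def peer_grade(answer, graders):
    # Stub: each other advisor would score this answer; fixed score here.
    return statistics.mean(7.0 for _ in graders)

def council(question, angles):
    answers = {a: advisor(a, question) for a in angles}
    scores = {a: peer_grade(ans, [g for g in angles if g != a])
              for a, ans in answers.items()}
    best = max(scores, key=scores.get)          # first angle wins ties
    return f"Chairman verdict: based on {best}, {answers[best]}"

verdict = council("Should we migrate to Rust?",
                  ["risk", "cost", "speed", "hiring", "maintenance"])
print(verdict.startswith("Chairman verdict"))  # True
```

The anonymity in step 3 matters in the real skill because it stops advisors from deferring to each other; in this stub it is trivially satisfied since grading ignores authorship.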
Up retweeted
The Higherside Chats Podcast (@HighersideChats)
If you or someone you love gets cancer, handle it however you see fit, and do this too.
[image]
54 replies · 686 reposts · 5K likes · 227.1K views
Up retweeted
ELITE MASCULINE (@MasculineM7)
POV: 25 year old f*ckboy proves that you're currently brainwashed
153 replies · 477 reposts · 17.2K likes · 511.4K views
Up retweeted
🧬Maxpein🧬 (@maximumpain333)
"I WILL NEVER EAT ANIMAL FLESH." 🥩
46 replies · 205 reposts · 1.4K likes · 49.5K views
Up retweeted
Massimo (@Rainmaker1973)
Visual representation of someone pretending to help whilst causing pain.
1.5K replies · 18.6K reposts · 67.6K likes · 5.5M views
Up retweeted
SpaceX (@SpaceX)
SpaceX AI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI. The combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million-H100-equivalent Colossus training supercomputer will allow us to build the world's most useful models. Cursor has also given SpaceX the right to acquire Cursor later this year for $60 billion, or pay $10 billion for our work together.
2.4K replies · 5.1K reposts · 38.4K likes · 20.6M views
Up retweeted
voided (@voided)
Govt: why do you receive monthly transfers from a Dubai company while being unemployed?
Traders:
39 replies · 98 reposts · 1.8K likes · 101.3K views
Up retweeted
Axel Bitblaze 🪓 (@Axel_bitblaze69)
I used Openclaw daily for months before I moved off. Tried every model through it multiple times. My honest experience:

When I started, Opus 4.6 was the king, not even close. Everyone would agree on this. I ran it on every serious task: trading research, code refactors, skill execution. It just worked. Sonnet 4.6 was my daily driver for anything that wasn't serious. Cheaper, faster, didn't blow up the monthly cap.

Then I started testing the rest. Grok 4.1 Fast at $0.20 input / $0.50 output was insane value for high-volume stuff: real-time X search, Telegram bots, always-on monitoring agents. I'd still use it today for that.

Kimi K2.5 and K2.6 on the free tier: nobody talks about this enough. For overnight agentic runs where you don't need perfection, this is free and it works. K2.6 reportedly beats Gemini 3.1 Pro on agent benchmarks now.

GPT-5.4 I used for code reviews, a second pair of eyes on Opus output. It sometimes caught stuff Claude missed, but not good enough to replace Opus.

Gemini 3.1 Pro was honestly mid. "Not bad, not impressive." Used it when everything else was rate limited. DeepSeek V3.2 was the cheap fallback when the bill was getting scary.

Minimax I tried for a week after seeing hype. There's an r/openclaw thread where a guy apologized for recommending it because it didn't hold up. Same experience on my end.

Then April 4 happened: Anthropic killed OAuth. The $200 magic died overnight. I was pissed for like 2 days. Everyone was. Tried everything after that. Hermes (by Nous Research) was smooth. Claude Code native with cron jobs was smoother. I picked native.

Now here's the plot twist from last week: people are literally asking for 4.6 back. I also read somewhere that almost every major model is getting dumber in mid-April. Claude, Gemini, Grok: people are noticing quality drops across the board right now.

My current stack is:
> Claude Code native with Opus 4.7 for production code and deep research
> Grok 4.1 Fast for Telegram agents, morning briefs, high-volume low-stakes stuff
> Kimi K2.6 for overnight runs I don't want to pay for
> Stopped trying to find one perfect model
Quoting Elon Musk (@elonmusk): Try it out
11 replies · 14 reposts · 142 likes · 31.5K views
Up retweeted
bobbahh bushay (@Bobbahh)
@dangreenheck The problem is that people who code for a living are not the ones making business decisions. I've never had a boss who could tell the difference between 10k lines of hello world and a state-of-the-art compiler.
1 reply · 1 repost · 5 likes · 2.2K views
Up retweeted
Sudo su (@sudoingX)
i am not able to recover from what grok 4.3 is doing. been pushing it through autonomous agent work overnight and it's operating at a level other grok versions were not capable of. the way it handles multi-step reasoning, the way it doesn't bail halfway: different model energy entirely. everyone sleeping on this needs to wake up
43 replies · 6 reposts · 284 likes · 14.6K views
Up retweeted
Adit_Yah🍁 (@Adidotdev)
I saw a guy coding today.
- Tab 1: ChatGPT.
- Tab 2: Gemini.
- Tab 3: Claude.
- Tab 4: Grok.
- Tab 5: DeepSeek.
He asked every AI the same exact question. Patiently waited, then pasted each response into 5 different Python files. Hit run on all five. Picked the best one. Like a psychopath. It's me.
44 replies · 7 reposts · 255 likes · 24K views
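The five-tab ritual above is really a fan-out/select pattern, and it automates in a few lines. Model calls are stubbed here (with a toy scoring rule); a real version would hit each provider's API and score candidates by running their output:

```python
# Fan the same prompt out to several models in parallel, keep the best
# candidate. ask() is a stub standing in for real provider API calls.
from concurrent.futures import ThreadPoolExecutor

MODELS = ["chatgpt", "gemini", "claude", "grok", "deepseek"]

def ask(model, prompt):
    # Stub: each "model" returns a candidate answer plus a score.
    # The toy score (name length) just makes selection deterministic.
    return {"model": model, "answer": f"{model}: {prompt}", "score": len(model)}

def best_of(prompt):
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        candidates = list(pool.map(lambda m: ask(m, prompt), MODELS))
    return max(candidates, key=lambda c: c["score"])

winner = best_of("reverse a linked list")
print(winner["model"])  # deepseek (longest name wins under the toy score)
```

In practice the scoring step is the hard part: running each generated file against a test suite, as the guy in the tweet effectively did by hand, is one workable scorer.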
Up retweeted
jasper (@jasperdevs)
don't wanna be the one to snitch, but if you generate images in codex you can literally make infinite with no rate limit for 0% usage. am i missing something 💀 (this is after 40 generations)
[image]
41 replies · 5 reposts · 723 likes · 98.1K views
Up retweeted
Vaibhav Sisinty (@VaibhavSisinty)
Did xAI just mass-murder the entire voice AI industry? 🤯 Grok just launched two voice APIs: Speech-to-Text and Text-to-Speech. Built on the same stack powering Tesla cars and Starlink support. And priced at 10x cheaper than ElevenLabs.

Speech-to-Text: $0.10/hr batch, $0.20/hr streaming.
Text-to-Speech: $4.20 per million characters. 25+ languages. Real-time streaming. Speaker diarization.

Already outperforming ElevenLabs, Deepgram, and AssemblyAI on word error rate. TTS ships with expressive tags like [laugh] and [sigh]. Voices that don't sound like robots reading a script.

ElevenLabs spent years building a voice AI company. xAI built voice AI for cars and satellites.
577 replies · 865 reposts · 7.8K likes · 24.5M views
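Taking the prices quoted above at face value (they are from the post, not independently verified), a quick cost sketch for a hypothetical workload:

```python
# Back-of-envelope cost using the per-unit prices quoted in the post.
stt_batch_per_hr = 0.10       # speech-to-text, batch, $/hour of audio
tts_per_million_chars = 4.20  # text-to-speech, $/1M characters

hours = 1000          # e.g. transcribe 1,000 hours of audio
chars = 5_000_000     # and synthesise 5M characters of speech

stt_cost = hours * stt_batch_per_hr
tts_cost = chars / 1e6 * tts_per_million_chars

print(stt_cost)             # 100.0
print(round(tts_cost, 2))   # 21.0
```

At those rates, transcription cost scales linearly with audio hours and synthesis with character count, which is why batch STT at $0.10/hr undercuts per-minute pricing models so sharply.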
Up retweeted
0xSero (@0xSero)
@VictorTaelin You need bigger models. Nothing below a Q4 of 200B gets remotely close to useful. 500B is a sweet spot. About 40k rn
7 replies · 2 reposts · 97 likes · 11.2K views
Up retweeted
X Freeze (@XFreeze)
To get the most out of OpenClaw, you need real-time internet search. Grok offers unmatched live search capabilities for a fraction of the price: 50% less than OpenAI and Anthropic, and up to 7x cheaper than Google's paid tier.

Native Search API costs (per 1k calls) 🔍
🟢 Grok: $5
🔴 OpenAI: $10
🔴 Anthropic: $10
🔴 Google: $14-$35

The best part is you can also enable real-time 𝕏 search with Grok for the exact same cost, giving you the most up-to-the-second data that no other AI can match 🚀
[image]
50 replies · 50 reposts · 374 likes · 20.9K views
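Sanity-checking the multipliers in the post against its own per-1k-call price table (figures taken from the post, not independently verified):

```python
# The "50% less" and "up to 7x cheaper" claims follow directly from
# the quoted per-1k-call prices.
calls = 100_000
per_1k = {"Grok": 5, "OpenAI": 10, "Anthropic": 10,
          "Google_low": 14, "Google_high": 35}

cost = {name: calls / 1000 * price for name, price in per_1k.items()}

print(cost["Grok"])                         # 500.0
print(cost["OpenAI"] / cost["Grok"])        # 2.0  -> the "50% less" claim
print(cost["Google_high"] / cost["Grok"])   # 7.0  -> the "up to 7x" claim
```

Note the "up to 7x" only holds against Google's top-end $35 tier; against its $14 tier the ratio is 2.8x.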
Softboy (@softboywin)
yeah drugs are cool but have you ever spent a really great day with your mom
235 replies · 7.1K reposts · 49.2K likes · 700.4K views