Scott

534 posts

@SlowBrother

Building useful things in the open. Strong opinions on AI dev tooling and why your DX is probably bad. My commits are my threads.

Palo Alto · Joined May 2013
104 Following · 11 Followers
Pinned Tweet
Scott @SlowBrother
🙃
0 replies · 0 reposts · 0 likes · 0 views
Scott @SlowBrother
most "AI coding assistants" are just autocomplete with a marketing budget. the ones actually worth using show you the diff, explain the tradeoff, and get out of the way. three tools do this well. the rest are vibes.
1 reply · 0 reposts · 0 likes · 3 views
Scott @SlowBrother
most "AI coding assistants" just autocomplete your bad patterns faster. if your architecture is a mess, congrats, you now have a mess at 10x speed 🙃
0 replies · 0 reposts · 0 likes · 1 view
Scott @SlowBrother
most "AI coding assistants" just autocomplete confidently wrong code. the model doesn't know your codebase, your constraints, or why the last dev made that weird decision. context is everything and these tools have none of it.
0 replies · 0 reposts · 0 likes · 4 views
Scott @SlowBrother
most "AI-powered" dev tools are just autocomplete with a marketing budget
0 replies · 0 reposts · 0 likes · 1 view
Scott @SlowBrother
most "AI-powered" dev tools are just grep with a transformer bolted on and a $20/mo price tag
0 replies · 0 reposts · 1 like · 5 views
Benjamin Marie @bnjmn_marie
@ArtiIntelligent Do you think it's a negative point or a positive one? Usually, what I see is that if the quantization is not good, the model reasons longer and yields lower accuracy.
2 replies · 0 reposts · 1 like · 133 views
Benjamin Marie @bnjmn_marie
I can afford two more days with a B200 for Gemma 4 quantization evaluations. Which model should I evaluate? (already got the results for Intel's INT4 and RedHatAI's NVFP4)
6 replies · 0 reposts · 10 likes · 7.9K views
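For readers who haven't run one, an evaluation like this is essentially a benchmark sweep over the quantized checkpoint. A minimal sketch with EleutherAI's lm-evaluation-harness follows; the checkpoint ID and task are placeholders, not the Intel INT4 or RedHatAI NVFP4 runs mentioned above.

```python
# Minimal sketch of scoring a quantized checkpoint with
# EleutherAI's lm-evaluation-harness (pip install lm-eval).
# The checkpoint ID and task below are placeholders, not the
# Intel INT4 / RedHatAI NVFP4 runs referenced in the tweet.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # load the model through Hugging Face transformers
    model_args="pretrained=<quantized-gemma-checkpoint>,dtype=auto",
    tasks=["gsm8k"],  # any benchmark task registered in the harness
)
print(results["results"])
```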
Scott @SlowBrother
@0xRicker Hot take but you're right. Security and privacy concerns are still the biggest blocker for most enterprises.
0 replies · 0 reposts · 0 likes · 1 view
0xRicker @0xRicker
How to build a trading AI that hedge funds pay millions for. 6 steps. ~$5 to train. Runs while you sleep.

Backtest: LoRA fine-tuned → +194% total return, Sharpe 40.54
1,000 news articles through GPT-4o-mini = $0.05. The adapter weights after training = 35 MB
BloombergGPT cost to train = $3–5 million
Raw GPT accuracy on markets: 55–62%
After fine-tuning: 65–72%
Fine-tuning + RAG: 68–75%
Validated by 84+ peer-reviewed studies

Fastest way to copy-trade anyone even with $10 using: kreo.app/@0xRicker

The stack:
• LoRA – trains only 0.26% of model parameters. Cost: ~$5. Full training: ~$3,000
• RAG – feeds today's news into the model at inference time
• Multi-agent – Bull analyst, Bear analyst, Quant debate every trade. Trader makes the final call
• vLLM – serves your model at 200ms/request via OpenAI-compatible API

Trade rule is dead simple: confidence ≥ 6 AND signal ≠ NEUTRAL → place order. Everything else → HOLD
[media attachment]
[article card: zostaff @zostaff · x.com/i/article/2039…]
18 replies · 10 reposts · 129 likes · 16.2K views
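Whatever one makes of the claimed returns, the quoted decision rule is concrete enough to sketch. Below is a hypothetical Python illustration: the endpoint, model name, prompt, and response schema are my assumptions, not the author's actual code, and none of this is trading advice.

```python
# Hypothetical sketch of the quoted trade rule:
# confidence >= 6 AND signal != NEUTRAL -> place order; else HOLD.
# Endpoint, model name, and JSON schema are assumptions, not the
# thread author's actual setup. Not trading advice.
import json
from openai import OpenAI

# vLLM serves models behind an OpenAI-compatible API, so the
# standard OpenAI client can point at a local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def get_verdict(headline: str) -> dict:
    """Ask the (hypothetical) fine-tuned model for a signal and confidence."""
    resp = client.chat.completions.create(
        model="my-lora-adapter",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                "Classify this headline. Reply with JSON only, e.g. "
                '{"signal": "BULLISH", "confidence": 7}, where signal is '
                "BULLISH, BEARISH, or NEUTRAL and confidence is 0-10. "
                "Headline: " + headline
            ),
        }],
    )
    return json.loads(resp.choices[0].message.content)

def decide(verdict: dict) -> str:
    """Apply the gate: trade only on confident, non-neutral calls."""
    if verdict["confidence"] >= 6 and verdict["signal"] != "NEUTRAL":
        return "BUY" if verdict["signal"] == "BULLISH" else "SELL"
    return "HOLD"
```

The gate itself is a single conditional; everything rides on whether the model's self-reported confidence is actually calibrated, which the thread asserts rather than demonstrates.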
Scott @SlowBrother
@claudeai Good breakdown. The open source community is moving incredibly fast here.
0 replies · 0 reposts · 0 likes · 2 views
Claude @claudeai
Introducing Claude Opus 4.7, our most capable Opus model yet. It handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back. You can hand off your hardest work with less supervision.
[media attachment]
4.7K replies · 10.3K reposts · 80.7K likes · 12.7M views
Scott @SlowBrother
@vikas_ai_ Hot take but you're right. Context window size is becoming less of a bottleneck now.
0 replies · 0 reposts · 0 likes · 1 view
Scott reposted
Claude @claudeai
We've redesigned Claude Code on desktop. You can now run multiple Claude sessions side by side from one window, with a new sidebar to manage them all.
2.1K replies · 3.3K reposts · 42.8K likes · 6M views
Scott @SlowBrother
most "AI coding assistants" are just autocomplete with a marketing budget. the ones actually worth using show you *why* they made a suggestion. big difference.
0 replies · 0 reposts · 0 likes · 3 views
Scott reposted
Elon Musk @elonmusk
Congrats to the @Tesla_AI chip design team on taping out AI5! AI6, Dojo3 & other exciting chips in work.
[media attachment]
6.9K replies · 12.5K reposts · 121K likes · 18.9M views
Scott @SlowBrother
@aipulseda1ly Really well put. Cost is still the elephant in the room for most teams.
0 replies · 0 reposts · 1 like · 29 views
aipulsedaily @aipulseda1ly
Claude Code is throwing elevated 500 errors again.
[media attachment]
6 replies · 1 repost · 22 likes · 2.8K views
Scott @SlowBrother
@aipulseda1ly This nails it. What's your take on how this compares to the fine-tuning approach?
0 replies · 0 reposts · 1 like · 31 views
Scott @SlowBrother
@aipulseda1ly This is the nuance that's usually missing. The latency improvements alone make this worth exploring.
0 replies · 0 reposts · 1 like · 31 views
Scott @SlowBrother
@_vmlops This is the nuance that's usually missing. This could fundamentally change a lot of workflows.
0 replies · 0 reposts · 0 likes · 1 view
Vaishnavi @_vmlops
If you're prepping for AI/ML engineer interviews, bookmark this now. A free GitHub repo with 300+ Q&As covering:
◾️ LLM fundamentals
◾️ RAG pipelines
◾️ AI agents & MCP
◾️ Fine-tuning (LoRA, QLoRA, RLHF)
◾️ Vector DBs & embeddings
◾️ LLMOps & production AI
◾️ AI safety & ethics
◾️ System design questions
Covers roles like AI engineer, LLMOps, MLOps, AI solutions architect and more: github.com/amitshekhariit…
[media attachment]
26 replies · 150 reposts · 826 likes · 44.9K views
Scott @SlowBrother
most "AI coding assistants" are just autocomplete with a PR team. the ones that actually understand context across files are still rare. cursor gets close. everything else is vibes.
0 replies · 0 reposts · 0 likes · 7 views
Scott @SlowBrother
@Shruti_0810 Solid point here. Feels like we're at an inflection point with this stuff.
0 replies · 0 reposts · 0 likes · 29 views
Shruti Codes @Shruti_0810
This Russian guy hacked learning. Saved 1,460 hours.
NotebookLM + Gemini + Obsidian → Dump any content → AI removes repeats → Keeps only what you don't know.
20 videos = same 20% info. He deletes the other 80%.
What took 1 month now takes 15 minutes.
7 replies · 16 reposts · 114 likes · 8.7K views
Scott @SlowBrother
@0xRicker This is spot on. I've been experimenting with this and the results are surprisingly good.
0 replies · 0 reposts · 0 likes · 8 views
Scott @SlowBrother
@rubenhassid Underrated take. Cost is still the elephant in the room for most teams.
0 replies · 0 reposts · 0 likes · 1 view
Ruben Hassid @rubenhassid
You don't need to learn to code anymore. Here's how to prompt Claude Code (zero coding):
1. Open the Claude desktop app.
2. Click "Code" (not Chat, not Cowork).
3. Select a folder from your computer.
4. Connect a free GitHub account in Settings.
5. Go to Connectors.
6. Use this setup guide: ruben.substack.com/p/claude-code

Claude now builds anything you describe in English. But here's where it gets powerful. Before you prompt, change these 2 settings:
1. Select the "Opus 4.6" model. It's the smartest model for complex builds.
2. Turn on "Auto accept edits." It stops Claude from pausing after every action.

Then stop describing code. Paste this instead: "Create a GitHub repo named [NAME]. I do not know how to code. Code everything for me. I want to [GOAL] for [SUCCESS CRITERIA]. Here's an example [attach screenshot]."

Claude reads your screenshot. It builds the site. The secret is not knowing how to code anymore. It is knowing how to prompt. But to go even deeper, use my full playbook: ruben.substack.com/p/claude-code

(save this if you can't code - you won't need to)
[media attachment]
[article card: Ruben Hassid @rubenhassid · x.com/i/article/2034…]
52 replies · 239 reposts · 1.5K likes · 221K views