stridell
@emilstridell

96 posts
I like bmws, AI, and OSS. Disgusting. Building ai agents, but not very good ones.

Joined April 2023
66 Following · 5 Followers
trish
trish@_trish_xD·
first time you wrote hello world - what language did you use?
2.1K replies · 178 reposts · 2.9K likes · 220.6K views
stridell retweeted
Bleap
Bleap@BleapApp·
Giveaway time! Win a Claude Max 20x subscription for one month.
To enter:
> follow @BleapApp
> RT and like this post
Winner will be selected in 48 hours.
(fyi, you get 20% cashback on your Claude, ChatGPT and Gemini subscriptions when using a Bleap card)
Download the app via the link in our bio today and activate your virtual card in minutes.
Bleap tweet media
144 replies · 439 reposts · 549 likes · 26.5K views
Eth
Eth@EtherCoins·
@QuixiAI @__tinygrad__ The board is useless. No CUDA, no competitive bandwidth, no adoption. You're welcome.
4 replies · 0 reposts · 7 likes · 919 views
Eric Hartford
Eric Hartford@QuixiAI·
Intel B70 finally makes a truly competitive move: 32 GB VRAM for < $1,000. No matter how bad the software stack is, the sheer VRAM-per-dollar ratio will drive the community to fill in the gaps. @__tinygrad__ Intel tinybox?
15 replies · 6 reposts · 246 likes · 23.7K views
stridell
stridell@emilstridell·
@kimmonismus Yeah, no. It absolutely sucks. Tried it but literally couldn’t even finish basic tasks.
0 replies · 0 reposts · 2 likes · 184 views
定
@de3dsoul·
What game is this?
定 tweet media
1.8K replies · 508 reposts · 21.4K likes · 2.8M views
Celebi
Celebi@callmecelebi·
@TeksEdge @intel It impacts my heart 😢 Joking aside, hope they will consider CUDA for next line or model.
2 replies · 0 reposts · 1 like · 345 views
David Hendrickson
David Hendrickson@TeksEdge·
🚨 Exciting Local Inferencing News! @intel just dropped the Arc Pro B70, a serious new opportunity for local AI inference! 🔥💥
💰 Price: $949
📆 Available: starting today (March 25, 2026)
Key Specs:
- 32GB GDDR6 VRAM 📦 (608 GB/s bandwidth ⚡)
- 32 Xe2 cores + 256 XMX engines 🧠
- Up to 367 peak TOPS 🚀
- TDP: 160–290W (Intel version ~230W) ⚡
Intel claims massive gains:
- 2.2x larger context windows 📏
- 85% higher token throughput ⚡
- 6.2x faster Time-to-First-Token 🏎️
- Better tokens-per-dollar vs RTX Pro 4000 💵
Equivalent to? 🧩 A strong competitor to the NVIDIA RTX 5000 Ada / Blackwell (32GB class) for local LLM inference, especially on Linux setups. Excellent value under $1k for AI/agent workloads.
👀 Worth it over used 3090s? Or still sticking with NVIDIA? Drop your thoughts below 👇
David Hendrickson tweet media
VideoCardz.com@VideoCardz

Intel launches Arc Pro B70 at $949 with 32GB GDDR6 memory videocardz.com/newz/intel-lau…

28 replies · 29 reposts · 295 likes · 47.8K views
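A quick back-of-envelope on the specs above: single-stream LLM decoding is usually memory-bandwidth-bound, so a rough ceiling is bandwidth divided by the bytes read per token (approximately the model size). A minimal sketch, assuming the 608 GB/s figure from the tweet and a hypothetical ~18 GB quantized model (my number, not Intel's):

```python
# Rough decode-speed ceiling for a bandwidth-bound LLM: each generated token
# reads (approximately) every weight once, so tokens/s <= bandwidth / model size.

def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound tokens/s for single-stream decoding."""
    return bandwidth_gb_s / model_size_gb

# Arc Pro B70 (608 GB/s) with a hypothetical ~18 GB quantized model:
print(round(decode_tokens_per_sec(608, 18), 1))  # ~33.8 tokens/s ceiling
```

Real throughput lands below this ceiling (KV-cache reads, kernel overhead), while batching can push aggregate tokens/s well above it.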
TRAE
TRAE@Trae_ai·
GPT-5.4 is now available in TRAE. With upgraded reasoning abilities and enhanced image comprehension, @OpenAI GPT-5.4 delivers more precise and context-aware outputs. Elevate your development with improved efficiency and deeper web search capabilities in TRAE now.
TRAE tweet media
14 replies · 5 reposts · 150 likes · 15.7K views
stridell
stridell@emilstridell·
@zeddotdev Did you also generate the marketing with it? We didn’t just x. We y.
0 replies · 0 reposts · 0 likes · 143 views
Zed
Zed@zeddotdev·
Zeta2 is here. 30% better acceptance rate than Zeta1. 200x more training data, LSP-powered context, faster predictions, open weights. Try it now in Zed. We didn't just improve the model. We rebuilt the entire data pipeline behind it: zed.dev/blog/zeta2
32 replies · 58 reposts · 1.1K likes · 54.3K views
stridell
stridell@emilstridell·
@testerlabor Grok 4 has 3 trillion params? why is it so stupid then
0 replies · 0 reposts · 1 like · 79 views
Testlabor
Testlabor@testerlabor·
Grok 5 is training on Colossus 2, the world's largest supercluster, and is expected to have 6 trillion parameters, roughly double that of Grok 4. The most exciting and most powerful outcome is most likely.
35 replies · 44 reposts · 546 likes · 26.6K views
Revealing Skunk
Revealing Skunk@RevealingSkunk·
@SOSOHAJALAB Nah it's unbelievably stupid, I asked it to replace a refactored method call in tests, it simply substituted the name without inferring parameters and updating mocks. Real claude did all of that easily
3 replies · 0 reposts · 27 likes · 7.7K views
stridell retweeted
Daniel Hnyk
Daniel Hnyk@hnykda·
LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains a litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and self-replicate. Link below.
301 replies · 2.3K reposts · 9.4K likes · 5.5M views
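For context on the mechanism named in the tweet: Python executes any line in a site-packages `.pth` file that starts with `import` at interpreter startup, which is why a planted `litellm_init.pth` can run payload code without the package ever being imported. A minimal audit sketch (the directory handling and heuristic are my own, not from the report):

```python
# List .pth lines in site-packages that execute code at interpreter startup.
# The .pth format runs any line beginning with "import " - legitimate tools
# (e.g. editable installs) use this too, so hits need manual review.
import site
from pathlib import Path

def startup_pth_lines(site_dirs=None):
    """Return (filename, line) pairs for code-executing .pth lines."""
    dirs = site_dirs if site_dirs is not None else site.getsitepackages()
    hits = []
    for d in dirs:
        for pth in Path(d).glob("*.pth"):
            for line in pth.read_text(errors="ignore").splitlines():
                if line.startswith("import "):
                    hits.append((pth.name, line))
    return hits

for name, line in startup_pth_lines():
    print(f"{name}: {line}")
```

Anything unexpected in that listing, especially a long base64 blob, deserves a close look.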
stridell
stridell@emilstridell·
Great
stridell tweet media
0 replies · 0 reposts · 0 likes · 6 views
stridell
stridell@emilstridell·
@DavidOndrej1 it’s a bug, unless you’re trying to dictate a message >10 min long
0 replies · 0 reposts · 0 likes · 39 views
David Ondrej
David Ondrej@DavidOndrej1·
i’m paying $200 a month for ChatGPT pro yet I have 10 minutes of dictation only?! what the fuck
David Ondrej tweet media
81 replies · 8 reposts · 537 likes · 74.7K views
stupid tech takes
stupid tech takes@stupidtechtakes·
do people really believe this lol
stupid tech takes tweet media
79 replies · 71 reposts · 5.3K likes · 151.2K views
Yam Peleg
Yam Peleg@Yampeleg·
Current stack:
- Pi for everything.
- GPT-5.4 for everything code.
- Gemini-3.1 for design/brainstorming.
- Sonnet 4.5-no-thinking for openclaw.
- GLM-5 for parallel swarms.
- Opus 4.6 for everything else.
Currently testing: Minimax-2.7.
69 replies · 33 reposts · 805 likes · 54.5K views
Kartik
Kartik@1kartikkabadi1·
@0xSero Shit. I might just have to cancel yt premium finally
3 replies · 1 repost · 26 likes · 3.1K views
0xSero
0xSero@0xSero·
First they removed dislikes from YouTube, now they removed likes. No more public signal on what matters. Making another braindead prediction: in 12 months we won't see views anymore. In 24 months we won't see subscriber counts anymore.
0xSero tweet media
86 replies · 38 reposts · 1K likes · 54.7K views
stridell
stridell@emilstridell·
@Amank1412 Seen so many people disliking the chart. What’s wrong with it?
0 replies · 0 reposts · 0 likes · 63 views
Aman
Aman@Amank1412·
>be cursor
>fork vscode
>cross $1B ARR faster than ever
>openai tried to replace you with codex (and bought astral just in case)
>drop your own model anyway
>make your own benchmark
>rank yourself no.1 on it
>make a criminally bad chart (and we still don't know what cursorbench is)
Aman tweet media
23 replies · 3 reposts · 115 likes · 14.2K views
BentoBoi
BentoBoi@BentoBoiNFT·
Why would anyone choose OpenClaw vs Claude Code? Claude now has:
• Discord/Telegram integration
• Cron Jobs (/loop)
• 1M token memory
• Webhooks to phone
• Can run 24/7 on any Computer or Mac Mini
This covers 95% of what people actually use OpenClaw for, with better security and easier setup. The only reason to stick with OpenClaw is if you want a multi-agent setup. That's the only difference I could think of. Going to stick with OpenClaw for now because of this, but the gap is almost at zero.
BentoBoi tweet media
310 replies · 40 reposts · 673 likes · 96K views