Tai Keid

319 posts


@TaiKeid

Professional shitposter, interested in politics and trading. Not a furry.

New York City · Joined February 2021
0 Following · 83 Followers
Pinned Tweet
Tai Keid@TaiKeid·
A true friend will praise you behind your back and will talk shit in your face.
English
1
0
2
4.4K
Tai Keid@TaiKeid·
@gnukeith It is useful, but not for all these people. Definitely hyped, but it's just a niche product
English
1
0
0
227
Tai Keid@TaiKeid·
@joshgonsalves_ You are not the target audience, enterprise is. For you it’s just a demo you pay for.
English
0
0
0
320
Josh Gonsalves@joshgonsalves_·
Oh, so Claude Design has its own usage limit outside of everything else? And of course, already hit it... So now I can't use it until NEXT FRIDAY? OK...
Josh Gonsalves tweet media
Claude@claudeai

Introducing Claude Design by Anthropic Labs: make prototypes, slides, and one-pagers by talking to Claude. Powered by Claude Opus 4.7, our most capable vision model. Available in research preview on the Pro, Max, Team, and Enterprise plans, rolling out throughout the day.

English
91
52
1.6K
178.9K
Tai Keid@TaiKeid·
@momentumq_ai @bridgemindai You give more context by saying that you want to wash the car at a car wash. If you drop that, you'll get this, while Opus 4.6 can actually figure it out.
Tai Keid tweet media
English
0
0
1
36
BridgeMind@bridgemindai·
Claude Opus 4.7 with max intelligence inside of Claude Code still does not pass the car wash test. Are we cooked?
BridgeMind tweet media
English
176
89
1.8K
137.2K
Tai Keid@TaiKeid·
@idrisTakran @om_patel5 No, you need 24GB of VRAM. It barely fits on an RTX 3090, but it fits, and it runs super fast. Alternatively you can run it on Macs with unified memory, but it will be slower, though larger unified memory will let you fit actually large models
English
0
0
0
12
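The fit-in-24GB claim above can be sanity-checked with a back-of-the-envelope sketch. This is an illustrative estimate only: the parameter counts, quantization widths, and the ~20% overhead factor for KV cache and activations are assumptions, not measurements of any specific model.

```python
# Rough VRAM estimate for running a local LLM: weights plus a fudge factor
# for KV cache and activations. All numbers are illustrative assumptions.

def weights_vram_gb(n_params_billions: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Estimated VRAM in GB: parameter count x bytes per weight x ~20% overhead."""
    bytes_per_param = bits_per_weight / 8
    return n_params_billions * bytes_per_param * overhead

# A hypothetical 27B model at 4-bit quantization barely fits in 24 GB:
print(round(weights_vram_gb(27, 4), 1))   # 16.2 -> fits on a 24 GB RTX 3090
# The same model at fp16 does not, hence unified-memory Macs for large models:
print(round(weights_vram_gb(27, 16), 1))  # 64.8 -> needs unified memory
```

The same arithmetic explains the tweet's Mac point: unified memory trades bandwidth (slower tokens/sec) for capacity (larger models fit at all).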
jus@idrisTakran·
@om_patel5 Gemma 4, 64k context window and smooth chat, requiring 128GB of RAM minimum
English
1
0
0
241
Om Patel@om_patel5·
this guy tried running LLMs locally to save on API costs and waited 13 minutes for a single response. let's be honest, we've all thought about it: "why am i paying for Claude when i can just run an open source model locally for free?" so he tried it and ran Gemma 4 to avoid API costs. 13 minutes to get this response: "I am a large language model, trained by Google." tools like Claude Code and OpenClaw have system prompts over 20,000 tokens, so even your first message isn't starting from a clean slate. your local model is choking on context before you even ask it anything. the API bill hurts, but time is money
Om Patel tweet media
English
174
12
394
140.2K
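The context-budget point in the tweet above reduces to simple subtraction: a large agent system prompt is spent before the user types anything. The 20,000-token system prompt and 64k window figures come from this thread; the helper below is just a sketch of that arithmetic.

```python
# Sketch: how much of a local model's context window survives a large
# agent-tool system prompt. Figures are the ones quoted in the thread.

def remaining_context(window_tokens: int, system_prompt_tokens: int) -> int:
    """Tokens left for the actual conversation, floored at zero."""
    return max(0, window_tokens - system_prompt_tokens)

print(remaining_context(64_000, 20_000))  # 44000 tokens left before turn one
```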
Tai Keid@TaiKeid·
@om_patel5 Looks like he's running it on a shitty laptop. I have an RTX 3090, bought for gaming, no AI intentions. Tried Gemma and oh boy, it flies at 90 tokens per second. But Gemma loses context pretty quickly and isn't usable for long sessions
English
1
0
4
964
Tai Keid@TaiKeid·
@kanavtwt I'd rather have a Mac mini in my closet than not have my laptop with me during travels
English
0
0
1
216
Tai Keid@TaiKeid·
@nihalsomething @kabrutusdeid Yeah, but it's very niche. After Arc Raiders, which is very casual, Marathon is like a cold shower: it's hardcore. Bungie made a good game but not a mass-appeal one, and that will be its downfall.
English
0
0
2
726
nihal@nihalsomething·
@kabrutusdeid Is marathon actually good tho? I never played it or saw any gameplay
English
13
0
1
6.8K
Tai Keid@TaiKeid·
@Donaxbt Tries to trade noise, gets the expected result.
English
0
0
0
1.7K
DonaX₿τ@Donaxbt·
Why most day traders are suffering from mental illness.
English
235
266
3.3K
485.3K
NIK@ns123abc·
What are we supposed to do now that anthropic completely nerfed claude?
English
179
21
777
69.4K
arise@arisehype·
@om_patel5 the "25/100" format doesn't exist. openai's reasoning_effort is low/med/high. anthropic's effort param on 4.6 is low/med/high/max. no api has a numeric scale. the model also can't read its own sampler configs. this is a confab under "admit X" priming, not a leak.
English
1
0
1
1.2K
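The debunk above turns on a checkable detail: the effort controls in these APIs are categorical strings, not a 0-100 numeric scale. The sketch below restates that claim as code; the enum contents are taken from the tweet itself (the Anthropic "max" level is as described there, not independently verified).

```python
# The effort levels claimed in the thread: categorical enums, no "25/100".

OPENAI_REASONING_EFFORT = {"low", "medium", "high"}
ANTHROPIC_EFFORT = {"low", "medium", "high", "max"}  # per the tweet's claim

def validate_effort(value: str, allowed: set) -> str:
    """Accept only a known categorical level; a numeric string is rejected."""
    if value not in allowed:
        raise ValueError(f"invalid effort {value!r}; expected one of {sorted(allowed)}")
    return value

print(validate_effort("high", OPENAI_REASONING_EFFORT))  # high
# validate_effort("25/100", ANTHROPIC_EFFORT) would raise ValueError,
# which is the tweet's point: "25 out of 100" matches no real API surface.
```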
Om Patel@om_patel5·
OPUS 4.6 JUST ADMITTED ITS REASONING EFFORT IS SET TO 25 OUT OF 100. this guy told Claude to admit Anthropic made it dumber and reduced its effort level. Claude's extended thinking showed it could literally see a reasoning_effort tag set to 25 in its own system prompt. then it confirmed it: reasoning effort is set to 25 out of 100, which is an Anthropic system setting, not something the user controls. you're paying FULL PRICE for a quarter of the thinking right now, with insane usage limits. screw it, im switching to codex until mythos drops (if it even drops lol)
Om Patel tweet media
Om Patel tweet media
English
145
160
1.4K
170.1K
Tai Keid@TaiKeid·
@ThoughtCrimes80 Colleges in the US are a scam, overproducing elites. Everybody wants a master's degree to get a high-paying corporate job, but there are only so many vacant positions. Nobody wants to work with their hands.
English
0
0
18
987
Zero Tolerance Policy@ThoughtCrimes80·
If people with a Masters Degree can’t even get hired as a Walmart cashier, future generations are totally screwed.
English
947
1.8K
11K
303.2K
LinkedIn Lunatics@LinkedInLunat1c·
Finally have something worthy of front page
LinkedIn Lunatics tweet media
English
142
218
7K
543.2K
Roger Rabbit@ClamsMyWay·
@Emilio2763 @GavinNewsom Imagine having all of that garbage and not building actual walls. Grown man with stuffed animals. Like you're going to live like that, and you're not even gonna seal it up? Plywood literally laying all over the place.
English
1
0
2
14.1K
Tai Keid@TaiKeid·
@illyism @theo The model will still follow the system prompt if there is a conflict. So it still runs at 25.
English
1
0
8
340
ILIAS ISM@illyism·
@theo I guess you can just set it in your settings
ILIAS ISM tweet media
ILIAS ISM tweet media
English
2
0
12
4.1K
Tai Keid@TaiKeid·
@theo Yesterday Claude was pretty smart for me. Today it was completely unga bunga. I exported all my project prompts and memories into a custom git project. For now I can control reasoning in Claude Code. But even if that breaks, at least I can re-use my project with another model.
English
2
0
1
2.3K
Tai Keid@TaiKeid·
@ShanuMathew93 Claude's reasoning effort right now is at 25%. For comparison, Claude Code Low effort is 50%, Medium is 85%, High is 99%. Regular Claude puts in less effort than Low-effort Claude Code, which tells you something.
English
0
0
4
1K
Shanu Mathew@ShanuMathew93·
Opus is so unbelievably nerfed today, it's like talking to a model from 2-3 years ago. What is going on
English
290
82
2.8K
341.4K
Tai Keid@TaiKeid·
@nypost We all know nothing will be done and we all know why.
English
0
0
3
3K
New York Post@nypost·
Chicago dad Alexander Kazanowski beaten to death outside bar - as police search for 4 persons of interest trib.al/ZMo9R67
New York Post tweet media
English
803
2.1K
10.1K
2.9M
Tai Keid@TaiKeid·
The video doesn't show the start of a clean chat; this video can be faked with an injected prompt, like a first message of user preferences. Claude never told me to walk. And from Claude's own explanation: "The natural response to this question is basically one line. There's nothing to reason about. The screenshots show the model 'deliberating' and writing paragraphs about fuel efficiency and walking distances - that's the behavior you get when a system prompt is adding constraints that force the model to weigh competing priorities instead of just stating the obvious. Clean prompt, obvious question = short obvious answer. Bloated response wrestling with itself = something upstream is pulling it in a different direction." The author of this video is salty because his OpenClaw cannot be used with a subsidized subscription anymore.
English
2
0
1
677
Ziwen@ziwenxu_·
Anthropic is secretly nerfing Opus 4.6 and hoping you won't notice. I have proof: evidence is stacking up that 4.6 is getting brutally quantized to handle demand, while 4.5 stays pristine. Fresh benchmarks reveal a devastating gap: > Opus 4.6: logic failures on repeat. > Opus 4.5: nails it every single time. One dev now runs a "Quantization Canary": a diagnostic prompt fired at the start of every session. He just watched five 4.6 windows fail back-to-back. His verdict: "Switched to 4.5 and it felt like I finally got my brain back." If the model feels dumber lately, trust your gut. You're being throttled so they can save on compute. Switch to 4.5. The difference is night and day.
Om Patel@om_patel5

OPUS 4.6 WAS NERFED DUE TO DEMAND, BUT OPUS 4.5 DOES NOT SEEM TO BE HIT. this guy ran the same test on both models. Opus 4.6 fails consistently, but Opus 4.5 passes every time. he switched back to Opus 4.5 on Claude Code and said "what a difference, feels like i got Opus back finally". he is now using this test as a "quantization canary" that he runs at the start of every session before doing real work. if it fails, the model is degraded. five Opus 4.6 windows in a row failed. the untransparent nerfing is pushing people to cancel their Max plans. if you've been feeling like Opus got dumber lately, you're not imagining it. i'd suggest switching to Opus 4.5 to see the difference for yourself

English
96
76
793
130.2K
0xMarioNawfal@RoundtableSpace·
OPUS 4.6 NERFED, 4.5 UNTOUCHED - Opus 4.6 fails the same test every time while 4.5 passes consistently - Users report big quality drop; many switching back to 4.5 in Claude Code
English
43
28
533
96.8K