erik@try.works

@trydotworks

https://t.co/IX65PyRGXS & https://t.co/ugILkj5ZYu. 🇸🇪 pm and indie dev in 🇨🇳 Shanghai. Prev: @dji @oppo @oneplus.

Shanghai · Joined March 2026
447 Following · 169 Followers
Pinned Tweet
erik@try.works
erik@try.works @trydotworks·
recursive-mode is an installable skill package for coding agents. It gives your agent a file-backed workflow for requirements, planning, implementation, testing, review, closeout, and memory, instead of leaving the whole process scattered in context.
锐汉街评
锐汉街评@ruihanmeimei01·
A 32-year-old delivery rider, getting off work after midnight, bought himself a duck leg he usually can't bear to spend money on, to celebrate his birthday: "Three years without daring to take a single day off, 290,000 yuan saved, and I've finally paid off my debts"!! Do you have any friends like this around you?
Psychohistorian
Psychohistorian@Z_Rex2017·
A month from now, Trump will have finished his visit to China too. "The East rises while the West declines" is settled; there aren't many variables left. China accurately saw that it only had to endure, and the US would blow itself up. From the US outlasting the Soviet Union, to China enduring until the US crippled itself, a new paradigm of great-power competition has emerged: outlasting your rival is the correct strategy. If later great-power politicians aren't blind, they will probably follow this same path.
Yoni Braslaver
Yoni Braslaver@YoniBraslaver·
Cloudflare's Code Mode post argued agents are more efficient with code than a menu of MCP tools. We ran the experiment on monday.com's GraphQL API. SDK: 1 step, 15k tokens. Real MCP server: 4 steps, 158k tokens. 8.4× the cost, same output. yonibraslaver.pages.dev/posts/code-mod…
Jintao Zhang 张晋涛
Jintao Zhang 张晋涛@aiandcloud·
The hottest topic today is probably this: the English-speaking internet has discovered Xianyu and Taobao, where you can buy Claude and GPT access at rock-bottom prices 🤣 They discovered this far too late. That said, if you want stable and reliable quality, and care about privacy and security, the official channels are still the better option.
aditya@adxtyahq

Chinese students are buying GPT-5.4/5.5 and Claude API access from Xianyu/Taobao proxy sellers for almost 96-97% cheaper. People are apparently burning 100M+ tokens a day for like $1 and vibecoding nonstop.

erik@try.works
erik@try.works @trydotworks·
It's so annoying seeing these types of posts because it's the wrong way to collect user feedback. Unstructured, out of context, completely subjective and with unknowable environment variables. Run proper user research.
Toven@pingToven

When do you reach for other gateways instead of OpenRouter? What can we do better? Hit me with all of your frustrations. dms open. If you can give me detail (e.g. specifics/transcripts) - it'll help a lot in finding out exactly what we need to do to improve the API

Tierra Partners
Tierra Partners@tierrapartners·
Call centers are so walking dead
mark bissell
mark bissell@MarkMBissell·
insane fact from @collision: "I feel like Booking.com is a very underappreciated success story in tech... If you invested a dollar in Booking.com and a dollar in Google 20 years ago, you made much more money as a Booking.com shareholder."
Milkyray🪽🥛DOKOMI
i just found out my American friend doesn't know what hotdog sauce is. It literally has an American flag on the bottle of hotdog sauce, wdym they don't have it and never heard of it? Germany just made that shit up!???😭😭 HUHHH
market participant
market participant@undrvalue·
What’s the bear case for Upwork at 5x forward PE, growing earnings with a stable topline? Is it not free money here? $UPWK
Behnam
Behnam@OrganicGPT·
DeepSeek v4 Pro is a huge letdown. Maybe V4 Flash is better, but Pro keeps forgetting things, re-edits files it just edited, gaslights the user, has the most erratic reasoning, skips planning, etc. @Kimi_Moonshot (K2.6) has been better in that regard! Both in the Claude Code harness
Sid
Sid@chatsidhartha·
@trydotworks I’ll get a studio M5 ultra when it comes out. Hopefully with more memory.
Sid
Sid@chatsidhartha·
I just ordered a maxed out MacBook Pro M5 Max with 128 GB of memory. Chat, am I going to regret this
Michael Guo
Michael Guo@Michaelzsguo·
1d 13h 20m, 3,596,831 tokens. Goal achieved? Not quite. It was a hard problem. The agent tried its best and went through 20 full model/eval rounds. In the end, the agent talked itself out of the original contract and declared the goal achieved. I probably would have stopped it anyway, since I could also see from the sidecar that it was struggling. Still, it was a good experiment.

My 14" MacBook Pro held up well under a sustained run, with no throttling or heating issues. Qwen3.6 35B A3B OptiQ 4-bit, running locally on MLX, also held up well: it generated thousands of training data samples, averaging around 50 tps with reasonably good quality. Very impressive. DeepSeek 4 Pro was a good teacher for the training, though there are still areas for improvement.

The end result: we trained a LoRA expert model on Qwen3-4B-Instruct-2507 with MLX LoRA. The compact 56 MB adapter on the 4B Qwen base reaches ~59% three-way decision agreement on the original eval slice, ~91% violation recall, and ~98% valid JSON, but with a high false-positive rate. It is deployable, but probably not quite usable yet. Still, it gives me a clear direction for where to go next. I’ll write more about the whole process later. Stay tuned.