0
@floAngel0
4.3K posts
Joined March 2014
1K Following · 79 Followers
0
0@floAngel0·
@FanSince09 His arms move like whips or tentacles, deeply disturbing
0 replies · 0 reposts · 0 likes · 280 views
0
0@floAngel0·
@BillDA @tszzl I wonder how this sort of thing translates into the workplace. I think it could wind up increasing employment for the same reason: one extra person can help you accelerate so much more than before
0 replies · 0 reposts · 1 like · 16 views
roon
roon@tszzl·
“fake work” and “bullshit jobs” have been fantastically wrong and misleading frames for understanding the modern world. a much better understanding is of a global economy where minor skill differences and improvements lead to monumentally different outcomes, and the marginal hour of work has never been more measurable or useful.

after the advent of even moderately effective talent-allocation systems, and the variability of reward based on effort and skill, people have engaged much harder in a red queen rat race across the world. this is why the Chinese ‘cram schools’ exist, why ‘yuppie striverism’ is a thing, and why people so often trade off later family formation for working more. while overall work hours are slightly down, they are actually up for high earners (nber.org/digest/jul06/w…)

I see it in the marginal effect with my friends now, after the advent of Claude and Codex: they are actually working harder than they ever have before. this is due to a personal Jevons paradox: they see that the value of their time has increased dramatically, and that they can get a lot more visible work done toward goals they care about than they used to.

after requests from their customers, the labs are doing things like inventing dispatch, which lets you monitor work and manipulate your computer from your phone, on top of prior changes like always-on communications (Slack). You hear about people launching Codex jobs from their phone the moment they have an idea and reviewing them later.

no clue how long this lasts, but the most immediate impact of co-existing with the machine state is higher productivity and higher visibility, which leads to more work hours
130 replies · 150 reposts · 2.5K likes · 305.6K views
Dave
Dave@GamewithDave·
For those who used a computer between 1995 and 2001, what's the computer game from that time that sticks with you the most, and why?
12.2K replies · 148 reposts · 3.9K likes · 2.1M views
BuccoCapital Bloke
BuccoCapital Bloke@buccocapital·
FWIW, “accelerate or die” applies to your career, too.

I wouldn’t have said that a year ago. But I am watching people be 50-100-200% more productive than their peers by using AI.

I’m not trying to be a doomer. Anyone can do it. But you have to start shipping. Now
78 replies · 42 reposts · 1.6K likes · 170.3K views
0
0@floAngel0·
@buccocapital @TheStalwart Stalking Facebook Marketplace is also way more fun and convenient than before. The research and specific-knowledge gap is much smaller
0 replies · 0 reposts · 0 likes · 94 views
BuccoCapital Bloke
BuccoCapital Bloke@buccocapital·
Here’s a very simple example: All-Clad has four lines of pans: D3, D5, Copper Core, and Graphite. Figuring out what the hell is going on here would have taken forever in another lifetime. This prompt basically answered all my questions, with a couple of follow-ups:

“I’d like to better understand the difference between the All-Clad cookware lines. I don’t really care about price. Help me think through and understand the pros and cons of Copper Core vs Graphite vs their other options. What tradeoffs should I consider? How much do these materials impact performance and usage? Can a home cook even tell, or is this ultimately just marketing? Please supplement your insights with real consumer feedback and be mindful of integrating paid advertising, which is worthless to me”

Plenty of other examples, but that’s a concrete one where I was like “oh, this is just so much easier to understand than it was beforehand”
32 replies · 1 repost · 294 likes · 33.4K views
BuccoCapital Bloke
BuccoCapital Bloke@buccocapital·
One of the most positive changes from AI: I feel more informed as a consumer than I ever have.

Shopping is SO much better. The best products are easier to find, and marketing is easier to debunk.

I think AI will ultimately force more companies to compete on product quality
50 replies · 20 reposts · 841 likes · 105.6K views
0
0@floAngel0·
@matt_slotnick Hire 40-50 researchers and let them cook
0 replies · 1 repost · 1 like · 384 views
Matt Slotnick
Matt Slotnick@matt_slotnick·
if you raise a billion dollar seed you're instantly very cash flow positive on interest alone. it's an infinite money glitch. why don't more people do this?
10 replies · 2 reposts · 171 likes · 23.6K views
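The arithmetic behind the quip is easy to sketch; the 4% T-bill yield and $5M/yr burn below are illustrative assumptions, not figures from the tweet:

```python
# Park a $1B raise in T-bills and compare interest income to a seed-stage burn.
# Both the yield and the burn rate are assumed for illustration.
raise_usd = 1_000_000_000
tbill_yield = 0.04           # assumed short-term Treasury yield
annual_burn = 5_000_000      # assumed burn for a small seed-stage team

interest = raise_usd * tbill_yield
print(f"interest ${interest:,.0f}/yr vs burn ${annual_burn:,.0f}/yr "
      f"-> net +${interest - annual_burn:,.0f}/yr")
# interest $40,000,000/yr vs burn $5,000,000/yr -> net +$35,000,000/yr
```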
0
0@floAngel0·
@tbpn He’s describing Google no?
0 replies · 0 reposts · 0 likes · 188 views
TBPN
TBPN@tbpn·
John: why aren't LLMs caching answers to common questions to cut response times?

"I've asked ChatGPT, ‘When was OpenAI founded?’ three different times. It's the exact same query."

"It doesn't need to light the GPUs on fire for that question. So cache those results and give them to the user instantly."

"LLMs felt slow for a really long time. They actually got slower once the reasoning models came out. It was like: close your phone and come back in 20 minutes. That doesn't have to be the end state."
77 replies · 5 reposts · 360 likes · 301K views
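A minimal sketch of the exact-match caching John describes; `call_model` is a hypothetical stub standing in for a real LLM API call, and a production version would also need TTLs and invalidation so cached answers don't go stale:

```python
import hashlib

_cache: dict[str, str] = {}

def call_model(prompt: str) -> str:
    # Hypothetical stub standing in for a real LLM API call.
    return f"(model answer to: {prompt})"

def normalize(prompt: str) -> str:
    # Collapse case and whitespace so trivially identical queries share a key.
    return " ".join(prompt.lower().split())

def cached_answer(prompt: str) -> str:
    key = hashlib.sha256(normalize(prompt).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)  # only light the GPUs on a miss
    return _cache[key]                    # repeat queries return instantly

# Three identical queries, one model call:
for _ in range(3):
    cached_answer("When was OpenAI founded?")
```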
Jesse Livermore
Jesse Livermore@Jesse_Livermore·
The irony in all this is that bullshit jobs may actually be the hardest ones for AI to supplant.
13 replies · 8 reposts · 140 likes · 20.9K views
0
0@floAngel0·
@aakashgupta If they’re serving it profitably, what’s the difference
0 replies · 0 reposts · 0 likes · 33 views
Aakash Gupta
Aakash Gupta@aakashgupta·
This data doesn’t say “Kimi is the best model.”

OpenClaw burns through tokens like nothing else in the AI ecosystem. The platform sends your entire conversation history with every single API call. Users report hitting 200,000+ tokens of cached context on routine queries. One developer burned $500 in a weekend. Another watched a single cron job consume $128/month in tokens.

So what happened? OpenClaw users did what any rational economic actor does when the meter is running at 5.7 million tokens overnight: they switched to the cheapest model that still works.

Kimi K2.5 costs $0.60 per million input tokens. Claude Opus 4.5 costs $15. That’s 25x cheaper. When your AI agent is re-sending 200K tokens of session history on every heartbeat check, every status ping, every “what’s on my calendar” query, that 25x multiplier is the only number that matters.

The OpenRouter leaderboard doesn’t measure which model developers love most. It measures which model developers can afford to leave running 24/7 inside the most token-hungry platform ever built. Kimi is winning the “my agent won’t bankrupt me” war. Quality is secondary when your platform re-sends 200K tokens on every ping.

The entire OpenClaw community knows it. The top thread in their GitHub discussions is literally titled “Burning through tokens”, with users begging each other for cheaper alternatives.

Moonshot AI is celebrating a vanity metric. The real story is that OpenClaw created a token-burning architecture so aggressive that it single-handedly reshuffled the model leaderboard by cost, and Kimi happened to be standing in the right spot.
Kimi.ai@Kimi_Moonshot

Kimi is now the #1 used model on OpenClaw (via OpenRouter) 🏆 Real usage data doesn't lie. Developers are voting with their tokens.

113 replies · 35 reposts · 575 likes · 98.1K views
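The cost claim is easy to check against the tweet's own figures. A back-of-envelope sketch (the five-minute heartbeat cadence is an assumption for illustration, and prompt-caching discounts are ignored):

```python
# An agent re-sending ~200K tokens of context on every check, priced per 1M input tokens.
CONTEXT_TOKENS = 200_000
CALLS_PER_DAY = 24 * 12                                  # assume a check every 5 minutes
PRICES = {"Kimi K2.5": 0.60, "Claude Opus 4.5": 15.00}   # $ per 1M input tokens

for model, price in PRICES.items():
    tokens_per_month = CONTEXT_TOKENS * CALLS_PER_DAY * 30
    cost = tokens_per_month / 1_000_000 * price
    print(f"{model}: ~${cost:,.0f}/month")
# Kimi K2.5: ~$1,037/month vs Claude Opus 4.5: ~$25,920/month -- the 25x gap in practice
```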
0
0@floAngel0·
@TheMindScourge This works when labor costs are less than the cost of the food
0 replies · 0 reposts · 0 likes · 26 views
The Mind Scourge
The Mind Scourge@TheMindScourge·
Note how small the portions are. Even the “big boy plate” on the menu is very small by modern standards. The primary explanation for why obesity was nearly non-existent mid-century is that people just ate less. One way to cut down on food prices in this country would be to reduce portion sizes, but people don’t seem to want to do this
120 replies · 56 reposts · 2K likes · 746.8K views
0
0@floAngel0·
@TheStalwart Either these models are very high-margin to serve, or there are subsidies within subsidies for the open-source Chinese models
0 replies · 0 reposts · 0 likes · 44 views
Joe Weisenthal
Joe Weisenthal@TheStalwart·
"Kimi K2.5 lands at $371 in Cost to Run Artificial Analysis Intelligence Index, more than 4x cheaper than Claude Opus 4.5 and GPT-5.2,"
Artificial Analysis@ArtificialAnlys

Moonshot’s Kimi K2.5 is the new leading open weights model, now closer than ever to the frontier, with only OpenAI, Anthropic and Google models ahead.

Key takeaways:

➤ Impressive performance on agentic tasks: @Kimi_Moonshot's Kimi K2.5 achieves an Elo of 1309 on our GDPval-AA evaluation, behind only OpenAI and Anthropic models. Kimi K2.5 leaps ahead of GLM-4.7, DeepSeek V3.2 and Gemini 3 Pro. GDPval-AA is our leading metric for general agentic performance, measuring the performance of models on realistic knowledge work tasks such as preparing presentations and analysis. Models are given shell access and web browsing capabilities in an agentic loop via our reference agentic harness called Stirrup.

➤ Native multimodality for the first time: Kimi K2.5 is the first flagship model from Moonshot to support multimodal (image and video) inputs. This is the first time that the leading open weights model has supported image input, removing a critical barrier to the adoption of open weights models compared to proprietary models from the frontier labs. It represents significant differentiation for Kimi K2.5 compared to other open weights leaders including DeepSeek V3.2, GLM-4.7, MiniMax M2.1 and MiMo-V2-Flash. Kimi K2.5 scores 75% on the MMMU Pro visual reasoning benchmark, slightly behind Gemini 3 Pro but in line with GPT-5.2 and Claude Opus 4.5.

➤ Moderate cost to run Artificial Analysis Intelligence Index: Kimi K2.5 lands at $371 in Cost to Run Artificial Analysis Intelligence Index, more than 4x cheaper than Claude Opus 4.5 and GPT-5.2, but more than 5x more expensive than DeepSeek V3.2 and gpt-oss-120b.

➤ Moderate token usage: Kimi K2.5 demonstrates token usage comparable to other models in the same intelligence tier, using ~82M reasoning tokens across the Artificial Analysis Intelligence Index evaluation suite. This is slightly lower than Kimi K2 Thinking (~95M reasoning tokens) and much lower than GLM 4.7 (~160M reasoning tokens).

➤ Open weights: Kimi K2.5 is an MoE model with 1T total parameters and 32B active. Similar to Kimi K2 Thinking, Kimi K2.5 has been released in native INT4 precision rather than FP8/BF16. This means the model is only ~595GB.

➤ Hybrid reasoning: Kimi K2.5 unifies Moonshot’s reasoning and non-reasoning models into a single model. We have evaluated K2.5 with reasoning on (and will share results soon with reasoning off).

➤ Low hallucination rate: Kimi K2.5 scores -11 on the AA-Omniscience Index, our knowledge evaluation measuring both accuracy and hallucination rate. This score is primarily driven by a comparatively low hallucination rate of 64% (reduced from Kimi K2 Thinking’s 74%), indicating a slightly greater tendency to abstain rather than fabricate knowledge when the model is uncertain.

10 replies · 16 reposts · 153 likes · 28.5K views
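To see where the ~595GB figure comes from, the raw parameter arithmetic is below; attributing the gap above the 4-bit floor to tensors kept at higher precision is an assumption on our part, not something the thread states:

```python
# Checkpoint size for a 1T-parameter model at different weight precisions.
TOTAL_PARAMS = 1e12  # 1T total parameters (MoE, 32B active per token)

for name, bits in [("INT4", 4), ("FP8", 8), ("BF16", 16)]:
    gib = TOTAL_PARAMS * bits / 8 / 1024**3
    print(f"{name}: ~{gib:,.0f} GiB")
# INT4: ~466 GiB, FP8: ~931 GiB, BF16: ~1,863 GiB
# Native INT4 is why the released weights sit near ~595GB instead of 1-2TB.
```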
0
0@floAngel0·
@TheStalwart Tech bro Joe has hot takes
0 replies · 0 reposts · 1 like · 90 views
Joe Weisenthal
Joe Weisenthal@TheStalwart·
The primary reason the tech boom contributes to financial measures of inequality is that these companies reach extraordinary valuations with relatively few workers, and those few workers are extremely well paid. The public/private distinction is a sideshow IMO.
Vlad Tenev@vladtenev

Thanks for the thought-provoking piece. My main critique is that you are overemphasizing flashy but low-probability events like “left-handed bacteria,” while merely giving lip service to the risk of extreme economic concentration of power, which is very real and materializing as we speak.

Anthropic is reportedly raising funds at a $350B valuation, and the wealth created thus far has been concentrated into a few hundred (perhaps more like dozens of) high-net-worth individuals / institutions. It’s looking increasingly likely to me that none of the leading AI labs will IPO until they reach valuations in the trillions, at which point retail investors will finally be able to get shares. In order for retail to get a 100x return on these investments, which was achievable for Apple, Microsoft, Amazon, and Google, the valuations of the AI labs would need to reach hundreds of trillions of dollars, meaning it’s likely too late for a more equitable redistribution of wealth. Simply put, you are currently exacerbating the problem.

The consequences of this are that voters may take matters into their own hands and push for either or both of 1) more aggressive / nonsensical forms of redistribution (the CA Founders’ Tax is just the beginning) or 2) a drastic knee-capping of the AI industry in America, which makes the CCP-dominance scenario more likely.

The solution is to enable retail ownership now, increasing the number of Americans with economic exposure to Anthropic and other AI labs from hundreds of people to millions.

25 replies · 33 reposts · 442 likes · 132K views
0
0@floAngel0·
@probaaron @TheStalwart By then the Chinese models will have caught up and a whole new subsidy scheme will take over
0 replies · 0 reposts · 1 like · 16 views
aaron
aaron@probaaron·
@TheStalwart nahh, you'll get sucked into another rabbit hole sooner or later
2 replies · 0 reposts · 22 likes · 5.7K views
Joe Weisenthal
Joe Weisenthal@TheStalwart·
I don't really care if the subsidy goes away, and vibecoding becomes unsustainable. I'm almost done with my project, and I have zero other ideas or ambitions.
44 replies · 11 reposts · 1.1K likes · 227.2K views
0
0@floAngel0·
@eliasstravik Any reason it can’t be an old Intel Mac?
0 replies · 0 reposts · 0 likes · 766 views
Elias Stråvik
Elias Stråvik@eliasstravik·
setting up claude code on a mac mini feels like you just got $1M in funding and 10 employees. scary
38 replies · 34 reposts · 1.5K likes · 90.3K views
Pink Polo Shorts
Pink Polo Shorts@PinkPoloShorts·
@resetbasis @mattmfm Tbf an allocator who invested with him told me “sounded super smart in meetings, couldn’t make a dollar for LPs to save his life.”
6 replies · 0 reposts · 22 likes · 2.6K views
Jon Tweets Sports
Jon Tweets Sports@jontweetssports·
“The agent communicated that if we were to add a 2nd year at $1M to the already agreed upon deal with Luke they would gladly give us whatever we need in order to turn Ole Miss in.” Watch all 8.5 min. It’s a wild ride.
172 replies · 412 reposts · 6.6K likes · 1.5M views
0
0@floAngel0·
@round Skills feels like a really hacky way to jump-start context. I just write a prompt to “teach” the AI
0 replies · 0 reposts · 1 like · 62 views
Maxim Leyzerovich
Maxim Leyzerovich@round·
subagents & skills won’t matter in 6 months
160 replies · 24 reposts · 788 likes · 140.1K views
0
0@floAngel0·
@bubbleboi I wonder how this plays out, if it’s an opening for AI-first incumbents that won’t need the staff, and thus won’t need the margin and lock-in these vendors look for.
0 replies · 0 reposts · 0 likes · 16 views
bubble boi
bubble boi@bubbleboi·
SaaS is cooked. AI reduces the demand for any bullshit SaaS; a team of 3-4 engineers can recreate most of these products in less than 3 months lol.
Connor Bates@ConnorJBates_

Software!

262 replies · 273 reposts · 2.6K likes · 749K views
Aaron Makelky
Aaron Makelky@theaaron·
@tristanbob @antigravity I’m on phase 4: hit Gemini 3 Pro limit.
Result: many users realize Gemini 3 Flash is faster and sometimes better than the premium models, so they just use it for everything and don’t have to worry about rate limits
13 replies · 2 reposts · 135 likes · 13.6K views
Tristan Rhodes
Tristan Rhodes@tristanbob·
I just realized that Google @antigravity has an unbeatable customer acquisition strategy.

Phase 1:
Strategy: Unlimited Claude for FREE
Result: Get people to install and use Antigravity

Phase 2 (current phase):
Strategy: Limited Claude for FREE
Result: Users hit those limits and are forced to try Gemini 3 Pro. They discover it's quite good.

Phase 3:
Strategy: No Claude Opus for FREE (but Gemini still is)
Result: Many users just use Gemini 3 Pro
165 replies · 48 reposts · 1.8K likes · 188.1K views
0
0@floAngel0·
@AIWarper Why is he wearing a vest
0 replies · 0 reposts · 0 likes · 364 views
A.I.Warper
A.I.Warper@AIWarper·
This is absolutely insane! 30s video length with very decent stability. Performance on the bottom by JulianoMass on IG. How? Foolishly simple 👇
282 replies · 1.2K reposts · 23K likes · 4.1M views