Juniper

19.8K posts

@JuniperViews

Let's embrace the future together.

San Francisco, CA · Joined January 2025
397 Following · 751 Followers
Mark Kretschmann@mark_k·
News: @elonmusk says the latest X algorithm has been published to GitHub. This is huge for transparency. People constantly speculate about how reach, ranking, recommendations, and engagement work on X. Now the code is out there for everyone to inspect!
20 · 9 · 117 · 5.4K
unusual_whales@unusual_whales·
US farm bankruptcies increased 46% year-over-year in 2025 to 315 filings. The Midwest and Southeast were the hardest hit, with filings up roughly 70% in each region, per the American Farm Bureau Federation.
90 · 319 · 1.1K · 86.9K
Juniper@JuniperViews·
An old Tesla Model Y's back seat is the worst place to be. It's so stiff and jerky
0 · 0 · 1 · 10
T Wolf 🌁@Twolfrecovery·
The Supervisor for D9 in SF is out on mental health leave but showed up at a protest on May 1 at the airport. She's clearly not up for the challenge. This is her district. ktvu.com/news/sf-missio…
10 · 15 · 109 · 22.8K
Ptuomov@ptuomov·
At least that’s what you end up thinking if you spend a lot of time reading these market observers’ writings: Hyman Minsky, George Soros, Charles P. Kindleberger, Robert Shiller, John Kenneth Galbraith, Owen Lamont, John Maynard Keynes, Irving Fisher, Joseph Schumpeter, Benoit Mandelbrot, Andrei Shleifer, Robert Vishny, Rudi Dornbusch, Edward Chancellor, Howard Marks, Jeremy Grantham
4 · 0 · 16 · 2K
Ptuomov@ptuomov·
THE LIFE CYCLE OF A BUBBLE
1. A genuine advancement creates real productivity gains. A real technological or economic improvement increases productivity and leads to genuine revenue and earnings growth.
2. Stock prices leak into reported profitability. Rising stock prices improve reported earnings, financing conditions, collateral values, and perceived business performance.
3. Reported profitability drives real investment. Companies increase hiring, capital spending, construction, expansion, and speculative investment because of their own or their customers' reported profitability.
4. Bubble beliefs and abandonment of present-value discipline. Investors stop focusing on discounted cash flows and begin relying on continuing gains from the greater-fool theory, believing they can sell later at a higher price.
5. Inflows from sideline investors. Previously cautious investors enter the market in large numbers. New money from existing and new participants drives prices higher.
6. Extreme overvaluation. Prices rise far above historically normal multiples of reported fundamentals, even ignoring the fact that reported fundamentals have themselves been inflated by rising stock prices.
7. Issuance. Companies take advantage of high valuations through IPOs, secondary offerings, stock-based acquisitions, SPACs, and insider selling.
8. Exhaustion of inflows. The flow of new investors starts shrinking while existing investors approach their risk and leverage limits. Volatility and dispersion grow, and gains become less uniform across stocks.
9. Earnings disappointments from slowing price appreciation. As stock prices stop rising rapidly, the earlier boost from higher valuations to earnings weakens or reverses. Companies begin missing expectations.
10. Stock-price collapse with high volatility. Confidence in both the fundamental growth story and the greater-fool theory breaks down, and prices fall sharply. Volatility rises further as leverage unwinds.
11. Bear-market rallies and progressively greater exhaustion. Bargain hunters and frustrated latecomers repeatedly buy the dips, creating violent temporary rallies that fail. Markets make lower highs and lower lows.
12. Capitulation, abandonment, and normalization. Bubble participants eventually give up in disgust or exhaustion. Volatility falls, valuations normalize, and the market returns to more ordinary behavior.
18 · 24 · 124 · 20.9K
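The feedback loop in steps 2-3 and 9 of the list above (prices leaking into reported earnings, then flat prices producing earnings misses) can be sketched as a toy iteration. Everything here, the `leak` parameter, the fixed multiple, and the dollar figures, is invented for illustration, not taken from the thread:

```python
# Toy illustration of steps 2-3 and 9: reported earnings partly reflect
# last period's stock price, and the price is a multiple of reported
# earnings. All numbers and parameters here are made up.
def reported_earnings(fundamental, prev_price, leak):
    """Real fundamental plus a 'leak' from past price appreciation."""
    return fundamental + leak * prev_price

price = 100.0
for _ in range(50):
    # price = multiple (1.0) x reported earnings
    price = 1.0 * reported_earnings(100.0, price, leak=0.3)

print(round(price, 1))  # 142.9: prices inflate their own earnings

# If the feedback stops (leak -> 0), reported earnings fall back to the
# 100.0 fundamental even though nothing "real" deteriorated (step 9).
print(reported_earnings(100.0, price, leak=0.0))  # 100.0
```

With a constant fundamental of 100, the loop converges to fundamental / (1 - leak) ≈ 142.9, and merely removing the feedback drops reported earnings back to 100: the "disappointment" needs no real deterioration.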
Juniper@JuniperViews·
@Austen The question is why the F are we giving them free shit
0 · 0 · 1 · 81
Juniper@JuniperViews·
@JeffNylen @Austen There's also no way for a single person to spend infinite tokens. The idea is to not set a limit and have people do the max they can.
1 · 0 · 1 · 117
Jeff Nylen@JeffNylen·
@JuniperViews @Austen There is no way this scales infinitely - if you have $1B of token spend per day - you cannot generate another $1B in profit per day by spending another $500M per day in tokens
1 · 0 · 0 · 91
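Jeff's scaling objection can be made concrete with a toy concave value curve. The square-root function, the constant `k`, and the dollar figures below are all hypothetical, chosen only to show that a 2x average return on token spend does not imply the marginal dollar returns anything:

```python
import math

def work_value(spend, k=2000.0):
    """Hypothetical concave value-of-work curve: value = k * sqrt(spend).
    Early dollars of token spend buy a lot; later dollars buy less."""
    return k * math.sqrt(spend)

def profit(spend, k=2000.0):
    """Value produced minus tokens paid for."""
    return work_value(spend, k) - spend

# Marginal value v'(s) = k / (2 * sqrt(s)) equals 1 at s = (k/2)**2,
# i.e. $1,000,000/day here. Below that point an extra token dollar
# earns more than a dollar; above it, extra spend destroys profit,
# even though the first dollars returned far more than 2x.
for s in (250_000.0, 1_000_000.0, 4_000_000.0):
    print(f"spend ${s:>12,.0f} -> profit ${profit(s):>12,.0f}")
```

Under these made-up numbers, profit peaks at $1M/day of spend and is wiped out entirely at $4M/day, which is the thread's point: a high average return does not mean the next $500M of tokens yields anything.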
Austen Allred@Austen·
This feels solvable with like 10 minutes of tooling? Am I missing something?
Laura Bratton@LauraBratton5

New: @ServiceNow is the latest major public company to say it’s blown through its full year budget for AI coding tools from Anthropic in the first few months of 2026, just like @Uber CTO @praveenTweets said abt his company. “It’s a really hard problem,” CIO Kellie Romack said.

15 · 0 · 32 · 16K
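For a sense of what "10 minutes of tooling" might look like, here is a minimal per-user daily token budget gate. It is a sketch with entirely hypothetical names and limits, not anything ServiceNow, Uber, or any vendor actually ships:

```python
from collections import defaultdict

class TokenBudget:
    """Per-user daily token cap. A hypothetical sketch of the kind of
    lightweight gate the tweet suggests, not any real product's API."""

    def __init__(self, daily_limit):
        self.daily_limit = daily_limit
        self.used = defaultdict(int)  # (user, day) -> tokens spent

    def try_spend(self, user, day, tokens):
        """Record the spend and allow it, or deny if it would exceed
        the user's budget for that day."""
        key = (user, day)
        if self.used[key] + tokens > self.daily_limit:
            return False  # deny: request would blow today's budget
        self.used[key] += tokens
        return True

budget = TokenBudget(daily_limit=1_000_000)
print(budget.try_spend("alice", "2026-03-01", 900_000))  # True
print(budget.try_spend("alice", "2026-03-01", 200_000))  # False
```

The denial path is where the thread's disagreement lives: a hard cap is trivial to build, but as the later replies note, it forces employees to think about token math mid-task.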
Juniper@JuniperViews·
@JeffNylen @Austen Well the question is why? If for every dollar of tokens spent you get two dollars' worth of work, wouldn't you want to spend more? If some superstar can 10x their productivity, wouldn't you lay off some regular employees to pay for the superstar's token usage?
1 · 0 · 1 · 124
Jeff Nylen@JeffNylen·
@JuniperViews @Austen Can’t they just get every employee either the $200 or $20 plan depending on role - which builds in a natural token budget - or is this against TOS at enterprise levels?
1 · 0 · 0 · 121
Juniper@JuniperViews·
@Austen Imagine telling your employee that every time they type something to an agent they'll have to think about token math. Layoffs are easier
1 · 0 · 1 · 282
The Kobeissi Letter@KobeissiLetter·
BREAKING: Google and SpaceX are in talks to launch data centers into orbit amid surging AI demand, per WSJ.
485 · 732 · 8.8K · 1.1M
Juniper@JuniperViews·
@kimmonismus Turned 225M into 5.5B by raising money from investors...
0 · 0 · 3 · 241
Chubby♨️@kimmonismus·
OpenAI fired Leopold Aschenbrenner. Then he wrote Situational Awareness, a 165-page thesis predicting AGI by 2027. Then he reportedly turned $225M into $5.5B in 12 months. Not by buying Nvidia, Microsoft, Google, or Amazon. But by buying what AI actually runs on: Energy. Bandwidth. Storage. Compute. Bloom Energy. Lumentum. Sandisk. CoreWeave. Iris Energy. Everyone bought the AI companies. He bought the bottlenecks underneath them. Genius.
53 · 69 · 1.1K · 67.8K
Crémieux@cremieuxrecueil·
The battery revolution is amazing. Batteries have almost completely displaced gas in Queensland and all it took was two short years!
114 · 531 · 2.7K · 227.8K
Juniper@JuniperViews·
This correlates
Artificial Analysis@ArtificialAnlys

Announcing the Artificial Analysis Coding Agent Index! Our new coding agent benchmarks measure how combinations of agent harnesses and models perform across 3 leading benchmarks, along with token usage, cost, and more. When developers use AI to code they're choosing a model, but also pairing it with a specific harness. It makes sense to benchmark that combination to understand and compare performance. The Artificial Analysis Coding Agent Index includes 3 leading benchmarks that represent a broad spectrum of coding agent use:
➤ SWE-Bench-Pro-Hard-AA: 150 realistic coding tasks that frontier models struggle with, sampled from Scale AI's SWE-Bench Pro
➤ Terminal-Bench v2: 84 agentic terminal tasks from the Laude Institute that range from system administration and cryptography to machine learning. 5 tasks were filtered out due to environment incompatibility
➤ SWE-Atlas-QnA: 124 technical questions developed by Scale AI about how code behaves, root causes of issues, and more, requiring agents to explore codebases and give text answers
Analysis of results:
➤ Opus 4.7 and GPT-5.5 lead the Index: Opus 4.7 in Cursor CLI scores 61, followed closely by GPT-5.5 in Codex and Opus 4.7 in Claude Code at 60. GPT-5.5 in Cursor CLI follows at 58.
➤ Open-weights models are competitive, but still trail the leaders: GLM-5.1 in Claude Code is the top open-weights result at 53, followed by Kimi K2.6 and DeepSeek V4 Pro in Claude Code at 50. These are strong results, but still meaningfully behind the top proprietary models.
➤ Gemini 3.1 Pro in Gemini CLI underperforms: Gemini 3.1 Pro in Gemini CLI scores 43, well below where Gemini 3.1 Pro sits on our Intelligence Index, highlighting that Gemini's performance in Gemini CLI remains a relative weak spot for Google's offering.
➤ Cost per task (API token pricing) varies >30x: Composer 2 in Cursor CLI is cheapest at $0.07/task, followed by DeepSeek V4 Pro in Claude Code at $0.35/task and Kimi K2.6 in Claude Code at $0.76/task. At the high end, GPT-5.5 in Codex costs $2.21/task, while GLM-5.1 in Claude Code costs $2.26/task. For both models high token usage contributed to this, and in GPT-5.5's case so did a relatively higher per-token cost.
➤ Token usage varies >3x: GLM-5.1 in Claude Code uses the most tokens at 4.8M/task, followed by Kimi K2.6 at 3.7M/task and DeepSeek V4 Pro at 3.5M/task. GPT-5.5 in Codex uses 2.8M tokens/task, substantially more than Opus 4.7 in Claude Code at 1.7M/task. In GLM-5.1's case, higher token usage, cost, and execution time were partly driven by the model entering loops on some tasks.
➤ Cache hit rates remain high but vary materially: Cache hit rates range from 80% to 96% across combinations. Provider routing, harness prompt structure, and cache behavior can materially change the economics of running the same model, given cached inputs are typically <50% of the API price of regular input tokens.
➤ Time per task varies >7x: Opus 4.7 in Claude Code is fastest at ~6 minutes/task, while Kimi K2.6 in Claude Code is slowest at ~40 minutes/task. This is driven by differences in average turns per task, token usage, and API serving speed. Opus 4.7 needed materially fewer turns to complete a task than all other models, while Kimi K2.6 needed the most.
➤ Cursor made real progress with Composer 2: Composer 2 in Cursor CLI scores 48, near the leading open-weights results, while being the cheapest combination measured at $0.07/task. Cursor has stated Composer 2 is built from Kimi K2.5, showing they have made substantial post-training gains.
This is just the start. We are planning to add additional agents (both harnesses and models). Let us know what you would like to see added next.

0 · 0 · 1 · 18
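The cache-pricing point in the quoted thread (cached input typically under 50% of the regular input price, with hit rates from 80% to 96%) can be turned into a blended-cost sketch. The token counts and prices below are hypothetical, not the thread's actual per-model figures:

```python
def effective_input_cost(tokens_m, price_per_m, cache_hit_rate,
                         cached_discount=0.5):
    """Blended input-token cost for one task.

    cached_discount is the cached price as a fraction of the regular
    input price (the thread says cached input is typically <50%).
    """
    cached = tokens_m * cache_hit_rate * price_per_m * cached_discount
    uncached = tokens_m * (1.0 - cache_hit_rate) * price_per_m
    return cached + uncached

# Hypothetical task: 3M input tokens at $2 per 1M tokens. Moving the
# cache hit rate from 80% to 96% cuts the blended input cost:
print(round(effective_input_cost(3.0, 2.0, 0.80), 2))  # 3.6
print(round(effective_input_cost(3.0, 2.0, 0.96), 2))  # 3.12
```

This is why the thread flags cache hit rate as an economic variable in its own right: with the same model, tokens, and list price, routing and prompt structure alone move the bill.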
Juniper@JuniperViews·
@kimmonismus They said make-believe shareholders are not entitled to anything
0 · 0 · 2 · 394
Taelin@VictorTaelin·
@sama not sure if anyone will say this one but for real:
- smarter
- faster
what do you think
12 · 1 · 240 · 8.9K
Sam Altman@sama·
what would you most like to see improve in our next model?
8.3K · 307 · 9K · 1.4M