im not a cat
@RadenJake

6.1K posts · Personal Account
Los Angeles, CA · Joined September 2018
974 Following · 431 Followers
im not a cat@RadenJake·
If you're smart and do research, in this case in AI, you're still safe when Mythos drops. smol comfort
[image attached]
0 replies · 0 reposts · 0 likes · 14 views
im not a cat@RadenJake·
The irony of a frontier lab that trained its world-beating model on a million copyrighted sources (without any pecuniary reciprocation) getting the source code for its most popular product leaked and replicated in a million repositories should not be lost on anyone
0 replies · 0 reposts · 0 likes · 28 views
im not a cat@RadenJake·
@datingbyblaine If you find someone that wealthy a SPOUSE, I would think they would be willing to pay a great deal more than $50k
0 replies · 0 reposts · 1 like · 645 views
Blaine Anderson@datingbyblaine·
Why is matchmaking expensive? To illustrate, here’s how I’ll lose money on a client’s $49,000 package. Client is 46, 6’2, exited tech founder. He’s looking for a woman 27-33, very specific criteria around match personality, appearance, and profession. Without diving into specifics, she: • Isn’t easily searchable online... • Isn’t likely to reply when we find her… • Isn’t likely to be single… • Often has a deal-breaker trait we can’t screen for without a phone call… • Isn’t necessarily interested in my client… I was expecting this to be a difficult search, so I quoted $49,000. I wasn’t expecting ~100 hours of labor to find each match, not including communication with the client! To date I’ve spent $45,000 on salaries for the women staffed on his search, plus $2,750 on styling and photos, and we still owe the client 2 matches... Before considering overhead (let alone opportunity cost) this will be a huge L financially. Things balance out though. Most engagements are profitable. Some engagements are quite profitable. For example, a new client in NYC paid $30,000 and paused after his first match, because he’s 99% sure we found his wife. That's still a new relationship, and engagements last 9 months (6 months of active matching + up to 3 months of pause), so we could be on the hook for more work in coming months. But you get the point 🙏
415 replies · 36 reposts · 2.8K likes · 5.5M views
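The economics in the thread reduce to simple arithmetic. A quick check of the figures quoted above (overhead and opportunity cost excluded, as the author notes):

```python
# Rough P&L check for the $49,000 engagement described above.
# Figures come from the tweet; overhead and opportunity cost are excluded.
package_price = 49_000
salaries_spent = 45_000   # staff salaries to date
styling_photos = 2_750    # styling and photography

costs_to_date = salaries_spent + styling_photos
remaining_margin = package_price - costs_to_date

print(costs_to_date)      # 47750
print(remaining_margin)   # 1250, with 2 matches still owed
```

So the engagement is already down to $1,250 of headroom before the remaining 2 matches, which is why the author calls it a loss.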
im not a cat@RadenJake·
@buccocapital @TheStalwart What was the answer? Instead of burning a bunch of tokens answering the same question, there should be an easy way to discover someone else asking this and to see the answer. Where's the product for inference output search?
0 replies · 0 reposts · 0 likes · 63 views
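A minimal sketch of what such an "inference output search" layer could look like: before spending tokens, check whether a semantically identical question was already answered. Real products would match on embedding similarity; this stand-in keys on a crude normalization (lowercase, punctuation stripped), and all names are invented for illustration:

```python
# Hypothetical sketch: reuse a previously generated answer for a
# semantically identical question instead of re-running inference.
import re

class AnswerCache:
    def __init__(self):
        self._store = {}

    @staticmethod
    def _normalize(question: str) -> str:
        # Crude stand-in for embedding similarity: lowercase and
        # strip everything except letters, digits, and spaces.
        return re.sub(r"[^a-z0-9 ]", "", question.lower()).strip()

    def lookup(self, question: str):
        # Returns a cached answer, or None if inference is actually needed.
        return self._store.get(self._normalize(question))

    def save(self, question: str, answer: str):
        self._store[self._normalize(question)] = answer

cache = AnswerCache()
cache.save("What's the D3 vs D5 difference?", "cached answer")
print(cache.lookup("whats the d3 vs d5 difference"))  # cached answer
print(cache.lookup("unrelated question"))             # None
```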
BuccoCapital Bloke@buccocapital·
Here's a very simple example: All-Clad has four lines of pans: D3, D5, Copper Core, and Graphite. Figuring out what the hell is going on here would take forever in another lifetime. This prompt basically answered all my questions with a couple of follow-ups:

"I'd like to better understand the difference between the All-Clad cookware lines. I don't really care about price. Help me think through and understand the pros and cons of Copper Core vs Graphite vs their other options. What tradeoffs should I consider? How much do these materials impact performance and usage? Can a home cook even tell, or is this ultimately just marketing? Please supplement your insights with real consumer feedback and be mindful of integrating paid advertising, which is worthless to me."

Plenty of other examples, but that's a concrete one where I was like "oh, this is just so much easier to understand than it was beforehand."
32 replies · 1 repost · 294 likes · 33.4K views
BuccoCapital Bloke@buccocapital·
One of the most positive changes from AI: I feel more informed as a consumer than I ever have.

Shopping is SO much better. The best products are easier to find, and marketing is easier to debunk.

I think AI will ultimately force more companies to compete on product quality.
50 replies · 20 reposts · 841 likes · 105.5K views
im not a cat@RadenJake·
@RG_Leachman Awesome, just downloaded it to try out. Thanks for putting it out there.
0 replies · 0 reposts · 0 likes · 29 views
Ryan Leachman@RG_Leachman·
I asked Claude to build my daughter an app that plugs into our piano, can read live keystrokes, can show her sheet notes and a key view, and ends with a Guitar Hero-style game, all while giving progressively harder songs. Today she's using it and crushing it.
604 replies · 1.6K reposts · 24.8K likes · 3.3M views
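For context on the "read live key strokes" part: a digital piano sends a MIDI note number per keystroke, and mapping those numbers to note names is the core of any sheet-note or key view. A stdlib-only sketch of that mapping (a hypothetical helper, not the app's actual code, which would also need a MIDI input library):

```python
# Hypothetical helper: convert a MIDI note number (what a digital piano
# sends per keystroke) into a note name + octave for display.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_to_name(note: int) -> str:
    octave = note // 12 - 1          # MIDI note 60 is middle C (C4)
    return f"{NOTE_NAMES[note % 12]}{octave}"

print(midi_to_name(60))  # C4
print(midi_to_name(69))  # A4 (concert A, 440 Hz)
```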
im not a cat@RadenJake·
@buccocapital @mnoman As long as those employees get to keep their options and these cos stop diluting and start rerating, everyone wins, right?
0 replies · 0 reposts · 0 likes · 119 views
BuccoCapital Bloke@buccocapital·
@mnoman I would be genuinely, genuinely shocked if this list does not do massive layoffs. I don’t see any other solution, unfortunately. Management messed up here and the employees will pay
3 replies · 0 reposts · 35 likes · 5.1K views
BuccoCapital Bloke@buccocapital·
🤮🤮🤮 You have to wonder when these management teams will wake up. If not now, probably never
[image attached]
40 replies · 6 reposts · 397 likes · 123.3K views
QF Research@ResearchQf·
3) There have been many more details. That includes other suppliers at OFC as well as updates from Asian suppliers. Their 18-24 month target is $2B quarterly revenue with 40% OM. Consensus was ~$1.5B and ~35%. This target does not include impact of the new UHP laser fab with $5B annual revenue capacity. This could end up being conservative given their recent performance, but I'm more interested in 2029 - 2030. Photonics wave might just be starting.
[image attached]
2 replies · 1 repost · 16 likes · 3.1K views
QF Research@ResearchQf·
1) $LITE is up $56 and $132 since yesterday morning. LITE presented during market hours at OFC yesterday! I may be almost 80% there on the CPO scale-up opportunity through at least Feynman. There has been a bunch of new info in a day. Here are 2 key LITE slides.

Phase 0: Again, scale-out is well understood near term. "Multi-hundred" million in 1H27 alone. Quantum-X and Spectrum-X CPO switch build data later.

Phase 1: That inter-rack NVL576 scale-up I've been referring to. 3x to 4x CPO links vs Phase 0.

Phase 2: 3x to 4x vs Phase 1. NVL1176 also includes longer-distance intra-rack links due to those physical copper bandwidth-length limits (see 2nd slide). Phase 2 alone is causing those huge demand signals from $NVDA (and now other customers) for LITE, $COHR, and a bunch of other suppliers.

100% optical scale-up is inevitable. 3.2T, 6.4T+. Will discuss the various resulting photonics opportunities on a top-down and bottom-up basis later. ASPs, units, high-power (e.g. 400 mW) CW lasers, etc.
[images attached]
QF Research@ResearchQf

I might be 70% there in understanding CPO scale-up opportunity through Feynman, but some technical clarifications plus coming supplier datapoints should take that to 80-90%? Inter-rack optical scale-up for NVL576, as mentioned earlier, appears confirmed for Oberon. But that's only a first step. Scale-out near term is well understood. Orders received to date by $LITE or $COHR and their high level TAM statements seem roughly consistent. Jensen is often imprecise during presentations, but that often leads to opportunities. This is one of many AI technologies where fortunes could be made or lost over the next years.

6 replies · 16 reposts · 142 likes · 52.7K views
Bill Ackman@BillAckman·
@pmarca Can you give us your list of your favorite practitioners?
113 replies · 48 reposts · 3.4K likes · 370.4K views
Marc Andreessen 🇺🇸
My information consumption is now 1/4 X, 1/4 podcast interviews of the smartest practitioners, 1/4 talking to the leading AI models, and 1/4 reading old books. The opportunity cost of anything else is far too high, and rising daily.
1.4K replies · 3.9K reposts · 35K likes · 34.6M views
im not a cat reposted
im not a cat@RadenJake·
@fejau_inc 90% of Bloomberg's paying customers don't actually need to know the price of an off-the-run bond CUSIP, though.
0 replies · 0 reposts · 0 likes · 306 views
fejau@fejau_inc·
The UI isn't what makes Bloomberg worth $25k/yr. It's the data it has available. Guarantee this copycat does not show you the price of some off-the-run bond CUSIP. IMO, in the age of vibe coding, strong datasets that can be called via API will become the ultimate moat. BBG probably more valuable than before here.
ₕₐₘₚₜₒₙ@hamptonism

Perplexity just became the first AI company to truly go head-to-head with the Bloomberg Terminal... Using Perplexity Computer (with no local setup or single-LLM limitation), it was able to build me a terminal with real-time data to analyze $NVDA using Perplexity Finance:

83 replies · 18 reposts · 543 likes · 185.1K views
Jukan@jukan05·
GF Overseas Electronics Communication, Zhen Ding First Coverage: Undervalued Giant Embraces AI and CoWoP Opportunities

Initiating coverage on Zhen Ding with a "Buy" rating and target price of NT$227: We believe Zhen Ding's core Apple business will benefit from higher market share in foldable devices and stable iPhone volume growth. More importantly, we are bullish on Zhen Ding's AI potential, with expected share gains across multiple AI platforms. Additionally, NVIDIA's CoWoP may launch earlier than expected, and with its mSAP+HDI capabilities, Zhen Ding is positioned to be one of the leading participants. We forecast Zhen Ding's 2026 and 2027 EPS at NT$15.13 and NT$22.54, respectively. Our NT$227 target price is based on 15x FY2026E P/E. Zhen Ding has underperformed the Taiwan Weighted Index by 16.7% over the past three months; we view its current risk/reward as attractive.

CoWoP may launch ahead of schedule: We believe CoWoP could enter trial production as early as the Rubin platform, while the market expects late 2027/2028. The architecture places an mSAP PCB on top of two independent 7-layer HDI boards (using M9Q glass-based CCL), with the three boards connected via copper paste lamination technology. We see Zhen Ding as one of the leaders in CoWoP; its CoWoP solution has already been sent to OSATs for chip-level testing. We estimate CoWoP costs ~US$600 per GPU versus ~US$200 for a GB200 compute tray. Our scenario analysis suggests that, assuming 30% CoWoP adoption on the Rubin Ultra platform and 35% market share for Zhen Ding, CoWoP could add ~NT$1.8 to EPS by 2027.

Steady AI server market share gains: We expect Zhen Ding to make solid progress in AI market share, driven by: 1) the NVIDIA compute tray scheduled for a 1Q26 launch, with switch tray verification underway; 2) AWS UBB starting in 4Q25; 3) potential Google cooperation supported by the Hon Hai Group; and 4) 1.6T optical module PCBs. Zhen Ding's aggressive NT$30 billion annual capex plan for 2025-2026 also strongly supports share gains. Including potential CoWoP growth, we forecast server and optical module PCB revenue of NT$21.7 billion in 2026 and NT$51.9 billion in 2027.

IC substrate upgrade cycle to continue: First, we believe CoWoP solutions will not cannibalize the ABF substrate business, as demand remains robust due to increasing chip size and layer count. Separately, we estimate ~75% of Zhen Ding's ABF revenue comes from AI, with its Kaohsiung capacity largely locked in by AMD. Driven by the memory supercycle, BT substrate utilization has recovered to above 95%. Due to Low-CTE shortages, we have observed price increases: BT substrates up 5% monthly since 3Q25, and ABF substrates up 10-15% in 4Q25. With favorable pricing and strong utilization, we expect the IC substrate business to return to profitability in 2026.
4 replies · 11 reposts · 71 likes · 99.8K views
im not a cat@RadenJake·
@abcampbell Are they a cartel? Will this prevent real challengers to the hyperscalers? Is it not anti-competitive because they’re all competing with each other? Crazy times
0 replies · 0 reposts · 0 likes · 9 views
Professor Campbell@abcampbell·
Nvidia "acquired" Groq because they were a competitor. But they didn't buy them, because of antitrust. Which likely hoses the startup's early investors. File under "law of unintended consequences."
Aakash Gupta@aakashgupta

Big Tech just paid ~$40 billion in two years to avoid buying anything.

The math is absurd. Google spent $2.4B on Windsurf to hire 40 people. That works out to $60M per head. Microsoft paid $650M to gut Inflection and take 70 employees. Amazon spent $400M+ on Covariant for three founders and 40 engineers. Google dropped $2.7B on Character AI to rehire Noam Shazeer, who they'd let walk in 2021. Now Nvidia announces $20B for Groq just three months after it raised at $6.9B.

Every single one of these companies explicitly stated "we are not acquiring this company." Jensen Huang literally told employees: "While we are adding talented employees to our ranks and licensing Groq's IP, we are not acquiring Groq as a company." Microsoft said the same about Inflection. Google said the same about Character AI and Windsurf. Amazon said the same about Adept and Covariant.

The semantic gymnastics exist for one reason: antitrust. Traditional acquisitions trigger Hart-Scott-Rodino filing requirements. Regulators review. Competitors object. Deals take 12-18 months to close. In an AI arms race where model capabilities improve every 6 months, that regulatory timeline is existential. By the time a deal clears, the tech is already outdated.

So Big Tech invented the "reverse acquihire." Pay billions to license IP, hire the founding team, leave a shell company behind with a new CEO and a skeleton crew. Google did it with Character AI (Noam Shazeer + 30 researchers, left behind a co-op structure). Microsoft with Inflection (Mustafa Suleyman + 70 staff, left Sean White as CEO of nothing). Amazon twice, with Adept (David Luan + research team) and Covariant (three co-founders + 25% of staff). Now Nvidia with Groq (Jonathan Ross + senior leadership; Simon Edwards inherits a cloud business).

A whistleblower complaint filed with the FTC, DOJ, and SEC in January 2025 alleged that the Amazon-Covariant deal was "deliberately and unlawfully structured" to dodge antitrust review. The complaint claimed Covariant's new CEO told employees that if Amazon had tried to buy them outright, regulators would have killed it. The deal terms reportedly restrict which licenses Covariant can sell without paying Amazon a fee.

The FTC opened investigations into Microsoft-Inflection and Amazon-Adept. Both appear to be at a standstill. Amazon's Adept deal closed without further action.

The exposed logic: buying a company twice (once for talent, once for the husk) now costs less than waiting for regulatory approval of a single acquisition. Windsurf got split three ways in 72 hours. Google paid $2.4B for leadership and a license. Cognition paid ~$250M for what remained. OpenAI walked away with nothing after Microsoft objected to IP terms.

Big Tech found a loophole wide enough to drive $40 billion through while regulators debate whether hiring someone's entire executive team and licensing all their IP counts as "control."

5 replies · 5 reposts · 97 likes · 49.3K views
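The per-head figures quoted in the thread check out arithmetically (deal sizes and headcounts as stated above):

```python
# Per-person cost implied by the deal figures quoted in the thread.
google_windsurf_per_head = 2_400_000_000 / 40      # $60M per person, as stated
microsoft_inflection_per_head = 650_000_000 / 70   # roughly $9.3M per person

print(google_windsurf_per_head)   # 60000000.0
```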
Connor Davis@connordavis_ai·
The paper makes one thing painfully clear: Workflows ≠ Agents. A workflow follows a pre-written script. An agent writes the script as it goes, adapting to feedback and changing plans when the world shifts. This single distinction is why 90% of “AI agent demos” online fall apart in real interfaces.
[image attached]
3 replies · 1 repost · 15 likes · 2.3K views
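The workflow-vs-agent distinction in the post can be shown in a few lines: a workflow executes a fixed step list, while an agent chooses its next step from the current state. A minimal sketch with invented placeholder functions (not code from the paper):

```python
# Minimal contrast between a scripted workflow and an agent loop.
# The step functions and "done" predicate are hypothetical placeholders.

def run_workflow(steps, state):
    # Workflow: the script is fixed up front; feedback cannot change it.
    for step in steps:
        state = step(state)
    return state

def run_agent(choose_step, state, done, max_iters=10):
    # Agent: the next step is chosen from the current state (feedback),
    # so the "script" is written as it goes.
    for _ in range(max_iters):
        if done(state):
            break
        state = choose_step(state)(state)
    return state

# Toy example: count up by one per step.
increment = lambda s: s + 1
print(run_workflow([increment] * 3, 0))                      # 3 (fixed length)
print(run_agent(lambda s: increment, 0, lambda s: s >= 5))   # 5 (stops itself)
```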
Connor Davis@connordavis_ai·
I didn't truly understand how to build strong AI agents… until one paper snapped everything into place. Not a tutorial. Not a YouTube demo. A single arXiv paper: "Fundamentals of Building Autonomous LLM Agents."

It finally made sense why most "agents" feel like chatbots with extra steps… and why real autonomous systems need an actual architecture. Here's the backbone the pros use, the part nobody explains clearly 👇

1. Perception: what the agent actually sees
It isn't just text. Real agents mix:
- screenshots
- DOM trees
- accessibility APIs
- Set-of-Mark style visual encodings
That's how an agent stops guessing at a UI and starts understanding it.

2. Reasoning: the engine behind autonomy
The paper breaks down why "single-pass reasoning" collapses almost immediately. Real agents rely on:
- decomposition (CoT, ToT, ReAct)
- parallel planning (DPPM)
- reflection loops that critique and revise plans
This is the part that turns a model from reactive to intentional.

3. Memory: the part everyone misbuilds
Short-term memory lives in the context window. Long-term memory lives in RAG, SQL, trajectory logs, and past failures. Yes, failures are stored intentionally, because they teach the agent what not to try again. Without structured memory, the agent resets every step and looks "dumb."

4. Action system: where the work actually happens
This is the hardest part and the most ignored:
- tool calls
- API execution
- Python environments
- GUI control at the coordinate level
Most demos cut right before this stage, because execution is where agents usually break.

Where agents collapse (and why): the paper maps out the real failure modes:
- grounding errors on GUIs
- infinite loops
- hallucinated tool actions
- bad memory retrieval
- fragile long-horizon planning
And then it gives the fixes: reflection, anticipatory reflection, guardrails, SoM grounding, specialized sub-agents, and tighter subsystem integration.

If you've ever wondered why your agent falls apart by step 3… or why it "forgets" what it just decided… or why it panics the moment the UI changes… this paper is the missing manual. It turns agent-building into engineering, not trial and error.
[image attached]
33 replies · 159 reposts · 990 likes · 56.2K views
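The four subsystems the thread describes (perception, reasoning, memory, action) compose into a single loop. A hypothetical skeleton, with stubs standing in for the real components the paper discusses (screenshot/DOM capture, an LLM planner, RAG memory, tool execution); this is an illustration, not the paper's code:

```python
# Hypothetical skeleton of the perceive -> reason -> act -> remember loop.
# Every method body is a stub standing in for a real subsystem.

class Agent:
    def __init__(self):
        self.memory = []              # long-term trajectory log, failures included

    def perceive(self, env):
        # Stand-in for screenshot / DOM tree / accessibility API capture.
        return {"observation": env}

    def reason(self, percept):
        # Stand-in for decomposition + reflection; here it just echoes a plan.
        return {"action": "respond", "input": percept["observation"]}

    def act(self, plan):
        # Stand-in for tool calls, API execution, or GUI control.
        return f"did {plan['action']} on {plan['input']}"

    def step(self, env):
        percept = self.perceive(env)
        plan = self.reason(percept)
        outcome = self.act(plan)
        self.memory.append((plan, outcome))  # store outcomes, even failures
        return outcome

agent = Agent()
print(agent.step("login page"))  # did respond on login page
print(len(agent.memory))         # 1
```

Without the memory append, the agent would "reset every step" exactly as the thread warns.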
im not a cat reposted
im not a cat@RadenJake·
There’s just so much bad tempura in the US
0 replies · 1 repost · 0 likes · 0 views
im not a cat@RadenJake·
All I do all day is enter codes sent to my phone and email
0 replies · 0 reposts · 0 likes · 25 views