p0lyn

1.8K posts

p0lyn

@p0lynice

former m&a due diligence lead, current fund manager. i post ideas i'm actually positioned on.

The Moon · Joined January 2017
1.4K Following · 4.1K Followers
p0lyn @p0lynice
thoughts on $uber. the spending list for PIF right now: $9 billion for 730 patriot interceptor missiles. $1.78 billion for a grain company because you can't import food through a closed strait. unknown billions to expand yanbu port, which maxes out at 4 million barrels a day when saudi was shipping 6.3 million through hormuz before the war. infrastructure hardening after iran hit the samref refinery at yanbu. a 15% across-the-board capex cut that still isn't enough. NEOM slashed by 60%. so what do they sell? lucid is a $4 billion position that barely trades. EA went to a subsidiary. the US equity book is down to $12.9 billion and 46% of it is one stock: uber. it trades 20 million shares a day, it's up from where they bought it, and it's the one position they can exit without destroying the price on the way out. 33 analysts have uber at $107 with a strong buy. not one of them has a line item for "largest shareholder is funding a war, a missile stockpile, a port expansion, and a grain reserve at the same time."
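the "one position they can exit" line is really a days-to-liquidate calculation. here's a back-of-envelope version: the stake size and daily volume come from the post, but the share price and the participation rate (how much of daily volume you can sell without moving the price) are assumptions, not reported figures.

```python
# days-to-exit sketch for PIF's uber stake.
# stake size and ADV come from the post above; price and participation
# are ASSUMPTIONS for illustration -- swap in live numbers.
stake_usd = 0.46 * 12.9e9    # 46% of the $12.9B US equity book
adv_shares = 20e6            # ~20M shares traded per day
price = 90.0                 # assumed share price (hypothetical)
participation = 0.15         # assumed share of daily volume you can sell
                             # without destroying the price on the way out

daily_capacity_usd = adv_shares * price * participation
days_to_exit = stake_usd / daily_capacity_usd
print(f"stake ~${stake_usd / 1e9:.1f}B -> ~{days_to_exit:.0f} trading days to exit")
# stake ~$5.9B -> ~22 trading days to exit, at these assumptions
```

a month of orderly selling is survivable. run the same math on a $4 billion lucid position that barely trades and you see why uber is the one that goes first.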
0 replies · 0 reposts · 0 likes · 156 views
p0lyn @p0lynice
apollo investors asked for 11.2% of their money back this quarter. got 45 cents on the dollar. ares investors asked for 11.6%. got 43 cents. oaktree honored all 8.5% but brookfield had to write an $80 million check to make it work. all three filings dropped in the same week. retail private credit went from $34 billion to $222 billion in four years, and the maximum exit capacity across the entire sector is about 5% per quarter.
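those fill ratios are straight pro-rata gate math: when total redemption requests exceed the quarterly cap, every investor gets filled at cap / requested. a minimal sketch, assuming the ~5% quarterly cap the post cites as sector-wide exit capacity:

```python
# pro-rata redemption gate: requests above the quarterly cap get
# filled at cap / requested. fund figures are from the post; the
# 5.0 default is the ~5%-per-quarter sector cap it cites.
def fill_ratio(requested_pct: float, cap_pct: float = 5.0) -> float:
    """fraction of a redemption request actually honored"""
    return min(1.0, cap_pct / requested_pct)

for fund, requested in [("apollo", 11.2), ("ares", 11.6)]:
    print(f"{fund}: asked for {requested}% -> filled at {fill_ratio(requested):.0%}")
# apollo: asked for 11.2% -> filled at 45%
# ares: asked for 11.6% -> filled at 43%
# oaktree's 8.5% was also above the cap; it only paid in full because
# brookfield wrote the $80 million check.
```

the 45 and 43 cents aren't discretionary haircuts; they're what the arithmetic forces once requests run more than double the gate.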
0 replies · 1 repost · 1 like · 117 views
p0lyn @p0lynice
I'm sure he knows the biggest one is a nuclear power plant, right? friday: "we are considering winding down our military efforts." also friday: 2,500 more marines deployed, sanctions lifted on 140 million barrels of iranian oil, S&P drops 1.5%. saturday morning: "at a certain point it'll open itself." saturday night after iran hits dimona and arad: 48-hour ultimatum to obliterate iran's power plants starting with the biggest one. the biggest one is bushehr, which is a nuclear power plant.
OSINTdefender @sentdefender

President Donald J. Trump has posted to his Truth Social platform warning that if Iran fails to reopen the Strait of Hormuz within 48 hours, the US will begin targeting Iranian power plants. The message was posted at 7:44pm EST.

0 replies · 0 reposts · 0 likes · 173 views
p0lyn @p0lynice
@sentdefender buying their oil with one hand and threatening to nuke their power grid with the other, posted 24 hours after "we're considering winding down"
0 replies · 0 reposts · 1 like · 233 views
OSINTdefender @sentdefender
President Donald J. Trump has posted to his Truth Social platform warning that if Iran fails to reopen the Strait of Hormuz within 48 hours, the US will begin targeting Iranian power plants. The message was posted at 7:44pm EST.
490 replies · 459 reposts · 3.3K likes · 1.8M views
p0lyn @p0lynice
the interesting thing about private banks adding concierge and travel planning isn't the services themselves. if managing money were enough to keep clients paying 1%, you wouldn't need to book their flights. family offices figured this out a long time ago (the lifestyle stuff was always the retention moat, not the returns) and now the private banks are reverse-engineering the same playbook at scale, which either means family offices were never that special or private banks are getting nervous enough to cosplay as one. i think it's probably both.
0 replies · 0 reposts · 5 likes · 2K views
p0lyn @p0lynice
@R3genD3gen kid turns 20 today and his dad might tank his book as a birthday present
1 reply · 0 reposts · 0 likes · 15 views
p0lyn @p0lynice
@ianbremmer declaring victory on everything except the part that's actually costing people money
0 replies · 0 reposts · 3 likes · 469 views
ian bremmer @ianbremmer
trump: very close to winding down war in iran. presumably $200 billion supplemental no longer necessary.
101 replies · 130 reposts · 959 likes · 84.6K views
p0lyn @p0lynice
conciliatory language, futures gap up monday, crisis managed. but the tariff pause worked because tariffs have an off switch. you post on truth social, pause goes into effect, done. hormuz doesn't work like that. iran is sitting on the chokepoint and every signal that we want out tells them the leverage is working, which means they have zero reason to let up. the pentagon shipped 2,500 more marines out today, which is kind of a weird thing to do when you're winding down (unless you're not actually winding down and the post was for the s&p, not for centcom). brent closed at $112. nobody has volunteered to reopen the strait. i think the market gets a relief bounce off the language but the actual problem isn’t solved.
0 replies · 0 reposts · 1 like · 60 views
p0lyn @p0lynice
@spectatorindex the countries he wants to police hormuz are the same ones he called cowards six hours ago for not policing hormuz
0 replies · 0 reposts · 0 likes · 41 views
The Spectator Index @spectatorindex
BREAKING: Trump post about considering winding down military efforts in the Middle East
467 replies · 741 reposts · 4.5K likes · 1.2M views
p0lyn @p0lynice
@Polymarket if you fire 30,000 people, mandate an AI coding tool that 1,500 engineers protested, and then the AI coding tool causes four sev1 outages in a week, do the $2 billion in savings still count
0 replies · 0 reposts · 1 like · 154 views
Polymarket @Polymarket
BREAKING: Amazon reportedly holds mandatory meeting after “vibe coded” changes trigger major outages.
810 replies · 2K reposts · 26.7K likes · 14.6M views
p0lyn @p0lynice
50 different PMs using 50 different AIs all independently deciding to sell NVDA at 8:31am and each one thinking they had an original idea
Alex Prompter @alex_prompter

🚨 BREAKING: Researchers at UW Allen School and Stanford just ran the largest study ever on AI creative diversity. 70+ AI models were given the same open-ended questions. They all gave the same answers.

They asked over 70 different LLMs the exact same open-ended questions. "Write a poem about time." "Suggest startup ideas." "Give me life advice." Questions where there is no single right answer. Questions where 10 different humans would give you 10 completely different responses. Instead, 70+ models from every major AI company converged on almost identical outputs. Different architectures. Different training data. Different companies. Same ideas. Same structures. Same metaphors.

They named this phenomenon the "Artificial Hivemind." And the paper won the NeurIPS 2025 Best Paper Award, which is the highest recognition in AI research, handed to a small number of papers out of thousands of submissions. This is not a blog post or a hot take. This is award-winning, peer-reviewed science confirming something massive is broken.

The team built a dataset called Infinity-Chat with 26,000 real-world, open-ended queries and over 31,000 human preference annotations. Not toy benchmarks. Not math problems. Real questions people actually ask chatbots every single day, organized into 6 categories and 17 subcategories covering creative writing, brainstorming, speculative scenarios, and more. They ran all of these across 70+ open and closed-source models and measured the diversity of what came back.

Two findings hit hard.

First, intra-model repetition. Ask the same model the same open-ended question five times and you get almost the same answer five times. The "creativity" you think you're getting is the same output wearing a slightly different outfit. You ask ChatGPT, Claude, or Gemini to write you a poem about time and you keep getting the same river metaphor, the same hourglass imagery, the same reflection on mortality. Over and over. The model isn't thinking. It's defaulting to whatever scored highest during alignment training.

Second, and this is the one that should really alarm you, inter-model homogeneity. Ask GPT, Claude, Gemini, DeepSeek, Qwen, Llama, and dozens of other models the same creative question, and they all converge on strikingly similar responses. These are models built by completely different companies with different architectures and different training pipelines. They should be producing wildly different outputs. They're not. 70+ models all thinking inside the same invisible box, producing the same safe, consensus-approved content that blends together into one indistinguishable voice.

So why is this happening? The researchers point directly at RLHF and current alignment techniques. The process we use to make AI "helpful and harmless" is also making it generic and boring. When every model gets trained to optimize for human preference scores, and those preference datasets converge on a narrow definition of what "good" looks like, every model learns to produce the same safe, agreeable output. The weird answers get penalized. The original takes get shaved off. The genuinely creative responses get killed during training because they didn't match what the average annotator rated highly.

And it gets even worse. The study found that reward models and LLM-as-judge systems are actively miscalibrated when evaluating diverse outputs. When a response is genuinely different from the mainstream but still high quality, these automated systems rate it LOWER. The very tools we built to evaluate AI quality are punishing originality and rewarding sameness.

Think about what this means if you use AI for brainstorming, content creation, business strategy, or literally any task where you need multiple perspectives. You're getting the illusion of diversity, not the real thing. You ask for 10 startup ideas and you get 10 variations of the same 3 ideas the model learned were "safe" during training. You ask for creative writing and you get the same therapeutic, perfectly balanced, utterly forgettable tone that every other model gives.

The researchers flagged direct implications for AI in science, medicine, education, and decision support, all domains where diverse reasoning is not a nice-to-have but a requirement. Correlated errors across models mean that if one AI gets something wrong, they might ALL get it wrong the same way. Shared blind spots at massive scale.

And the long-term risk is even scarier. If billions of people interact with AI systems that all think identically, and those interactions shape how people write, brainstorm, and make decisions every day, we risk a slow, invisible homogenization of human thought itself. Not because AI replaced creativity. Because it quietly narrowed what we were exposed to until we all started thinking the same way too.

Here's what you can actually do about it right now:
→ Stop accepting first-draft AI output as creative or diverse. If you need 10 ideas, generate 30 and throw away the obvious ones
→ Use temperature and sampling parameters aggressively to push models out of their comfort zone
→ Cross-reference multiple models AND multiple prompting strategies, because same model with different prompts often beats different models with the same prompt
→ Add constraints that force novelty like "give me ideas that a traditional investor would hate" instead of "give me creative ideas"
→ Use structured prompting techniques like Verbalized Sampling to force the model to explore low-probability outputs instead of defaulting to consensus
→ Layer your own taste and judgment on top of everything AI gives you. The model gets you raw material. Your weirdness and experience make it original

This paper puts hard data behind something a lot of us have been feeling for a while. AI is getting more capable and more homogeneous at the same time. The models are smarter, but they're all smart in the exact same way. The Artificial Hivemind is not a bug in one model. It's a systemic feature of how the entire industry builds, aligns, and evaluates language models right now.

The fix requires rethinking alignment itself, moving toward what the researchers call "pluralistic alignment" where models get rewarded for producing diverse distributions of valid answers instead of collapsing to a single consensus mode. Until that happens, your best defense is awareness and better prompting.
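the intra-model repetition claim is easy to spot-check on your own samples: generate the same prompt several times (or once across several models) and measure pairwise similarity. a minimal sketch using plain word-set jaccard, which is not the paper's methodology, just the cheapest proxy; the example completions below are hypothetical:

```python
# diversity probe: mean pairwise jaccard similarity over completions.
# near 1.0 = "same answer in a different outfit"; lower = real diversity.
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """word-set overlap between two completions (1.0 = identical vocab)"""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def mean_pairwise_similarity(completions: list[str]) -> float:
    """average similarity across all pairs of completions"""
    pairs = list(combinations(completions, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# hypothetical outputs: the same "poem about time" prompt sampled 3 times
completions = [
    "time is a river that carries us past every shore",
    "time is a river, carrying us past each shore we know",
    "like a river, time carries us beyond familiar shores",
]
print(f"mean pairwise similarity: {mean_pairwise_similarity(completions):.2f}")
```

if 50 different models score high on a probe like this for the same market question, the 8:31am sell orders upthread stop looking independent.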

0 replies · 0 reposts · 0 likes · 213 views