Some Guy
@WhatIsPrivate1
10.3K posts
Joined March 2011
139 Following · 217 Followers
Some Guy@WhatIsPrivate1·
@dampedspring Even if it results in Hormuz being opened for the duration of the delay?
Some Guy@WhatIsPrivate1·
@emollick do you have access to Spud? Any comments?
Ethan Mollick@emollick·
I was told about the Mythos release, but didn't have access, so have no personal experience to add. Two points from the briefing: 1) It is not built for IT security; it is just a good enough model that it is good at that too. 2) This is the first, not the last, model to raise security risks.
Computer Cowboy@benbbaldwin·
I still do not understand the purpose of OpenClaw. Am I a boomer?
Some Guy@WhatIsPrivate1·
@emollick From my conversations, it seems as though everybody saw the Claude / ChatGPT updates in December 2025–February 2026 as a step-change. I expect many firms will build off of that.
Ethan Mollick@emollick·
There were likely no major work impacts of GenAI in any large firm throughout 2025. We did not have agentic tools, adoption takes time, and everyone was experimenting with process. That is starting to change. Studies that show no impact in 2025 don't tell us much about 2027.
Some Guy@WhatIsPrivate1·
@DKThomp Honest question - what if the entire writing / editing process is done through AI, but with significant human involvement? i.e. "Write xyz... keep this, change this, I want to express this better... ok, now change this part... I don't like how this sounds, make it more..." etc
Derek Thompson@DKThomp·
Writing is thinking, and people who outsource the full writing process to AI will find their screens full of words and their minds empty of thought.

But also: All writing involves and has always involved “outsourcing”—reaching outside of the writer’s mind to pull in pieces of the world, before and after the work of making words. Writers draw their ideas from other people, books, articles; after writing they often rely on outside copy editors, fact checkers, transcribers. Some of this stuff is just going to be done by AI in the future, and the boundaries between “good behavior” and “bad behavior” will have some blurry lines, and we should be honest and open about the blur rather than declare everybody with an open Claude window a part of the slopclass.

Anybody who says AI transcription of long interviews obliterates the identity of a writer is being a little silly. But what about copy editing? Claude is a fast and decent copy editor, but is it inhuman to rely on it for that function? Is it moral to google “Econ papers on income transfers for child poverty” but immoral to write the same thing as an AI prompt? What about throwing 500 muddled words into ChatGPT and saying “does this make any sense? what do you think I’m trying to say here?” That’s going to be useful for some people.

At an aesthetic level, I don’t like copy-pasting AI paragraphs into articles and pressing publish. That feels like me cheating myself. It feels like de-skilling. But the idea that “using AI” is anathema to the identity of being a writer is, in a few years, going to sound an awful lot like claiming that “using a computer” is a violation of the craft of writing. (Which, haha, maybe it is and we should all just go back to Steinbeck and his pencils; but talk about ships that have sailed.)
Emily Gould@EmilyGouldNYmag

using AI to "be a writer" is like .. playing a porn video game where you make your avatar cum

Some Guy@WhatIsPrivate1·
@YardsPerPass Now let's look at who has historically been in that 7-8% band... DJM is 7.8%.
YardsPerPass@YardsPerPass·
@WhatIsPrivate1 Fair point. Megatron was a generational outlier. JSN, Chase, Jefferson are all great, but now there's a tier of great WRs, not just 1 unicorn. In 2016, only 1 WR took 10%+ of the cap. In 2026, 13 do. The top of the market hasn't moved. The depth has.
Some Guy@WhatIsPrivate1·
@HaydenWinks Hang it in the Louvre. This graph is art.
Hayden Winks@HaydenWinks·
I think I will write about positional value in the NFL Draft. Going to be a big topic this year.
Some Guy@WhatIsPrivate1·
@wpri12 As a business owner in one of these districts, this is the first I'm even hearing about this money. To be fair, we're not a member of the district group, because they've demanded money/inventory from us in the past and are poorly run. The whole thing is a massive joke.
WPRI 12@wpri12·
Earlier this year, Providence Mayor Brett Smiley announced $135,000 in grants to help business districts affected by the deadly shooting at Brown University. Some local businesses tell 12 News they still haven't seen a cent. wpri.com/news/local-new…
Some Guy@WhatIsPrivate1·
@adampensel I think if AI freezes where it is today (or makes just small improvements), then your thesis is accurate. If it becomes good enough that everybody can be a software engineer, then I think you're wrong. Even today, I (not a software engineer) can vibe-code basic stuff.
Adam@adampensel·
People joke about this, but AI isn’t going to collapse the market for software engineers. Instead, it’s going to make software engineers so productive, businesses that would never traditionally have in-house dev will have it moving forward.
Can Vardar@icanvardar

we are so back

Some Guy@WhatIsPrivate1·
@AaronQuinn716 Agreed. But I'll still complain. But boy am I glad Beane didn't do this trade.
Some Guy@WhatIsPrivate1·
@AaronQuinn716 I thought it was ok. Didn't understand the hype. Also watched it on a plane.
Some Guy@WhatIsPrivate1·
@Pro__Ant If I had to describe the difference between Chubb and Bosa/Von in a single word (based on their time with us), it would be "effort".
Anthony Cover 1@Pro__Ant·
Bradley Chubb (2) off the left edge. If one rep was a microcosm of his game as a rusher it’d be this. Power, pop, and effort
Some Guy@WhatIsPrivate1·
@Pro__Ant Do you have data on how pressure rate and win rate changed throughout the season? Takes a while to recover from ACL.... wondering if he improved as the season went on.
Anthony Cover 1@Pro__Ant·
Out of 94 qualifying Edges with at least 200 pass rush snaps in 2025, Bradley Chubb was (per PFF):
49th in win rate (11.5%)
35th in pressure rate (13.5%)
32nd in total pressures
T-25th in sacks (8.5)
He’s a power rusher/compressor very much in line with previous (current?) Bills Edge archetypes. Would be nice for them to dip outside that pool, but he’d be a fine addition at this stage of Free Agency (I’d also like/prefer Jacob Martin, who has some velociraptor to his game).
Tom Pelissero@TomPelissero

The Dolphins have now officially released OLB Bradley Chubb with a post-June 1 designation.

Some Guy@WhatIsPrivate1·
@YardsPerPass Yes, but... $60m guaranteed is a hell of a lot to spend to figure out if he is actually good or not.... and if he is good, you only "buy" yourself 1 year of cheaper contract.
YardsPerPass@YardsPerPass·
2 years is brilliant by the Colts; it protects them from a Kirk Cousins-like problem if Jones can't rediscover the magic
Ian Rapoport@RapSheet

The #Colts are locking in their QB, finalizing a deal with Daniel Jones to secure the most important position, per me and @TomPelissero. It’s a 2-year, $100M contract done by his agents at @AthletesFirst. The biggest two-year deal in NFL history.

Some Guy@WhatIsPrivate1·
@exec_sum Florida over here collecting billionaires and then they'll announce a tax in arrears.
Exec Sum@exec_sum·
BREAKING: Former Starbucks CEO Howard Schultz announced on LinkedIn that he is officially leaving Seattle and moving to Miami. This comes on the same day the Washington state House passed a 9.9% “millionaires tax” on incomes over $1 million.
Alex Prompter@alex_prompter·
🚨 BREAKING: Researchers at UW Allen School and Stanford just ran the largest study ever on AI creative diversity. 70+ AI models were given the same open-ended questions. They all gave the same answers.

They asked over 70 different LLMs the exact same open-ended questions. "Write a poem about time." "Suggest startup ideas." "Give me life advice." Questions where there is no single right answer. Questions where 10 different humans would give you 10 completely different responses. Instead, 70+ models from every major AI company converged on almost identical outputs. Different architectures. Different training data. Different companies. Same ideas. Same structures. Same metaphors.

They named this phenomenon the "Artificial Hivemind." And the paper won the NeurIPS 2025 Best Paper Award, which is the highest recognition in AI research, handed to a small number of papers out of thousands of submissions. This is not a blog post or a hot take. This is award-winning, peer-reviewed science confirming something massive is broken.

The team built a dataset called Infinity-Chat with 26,000 real-world, open-ended queries and over 31,000 human preference annotations. Not toy benchmarks. Not math problems. Real questions people actually ask chatbots every single day, organized into 6 categories and 17 subcategories covering creative writing, brainstorming, speculative scenarios, and more. They ran all of these across 70+ open and closed-source models and measured the diversity of what came back.

Two findings hit hard. First, intra-model repetition. Ask the same model the same open-ended question five times and you get almost the same answer five times. The "creativity" you think you're getting is the same output wearing a slightly different outfit. You ask ChatGPT, Claude, or Gemini to write you a poem about time and you keep getting the same river metaphor, the same hourglass imagery, the same reflection on mortality. Over and over. The model isn't thinking. It's defaulting to whatever scored highest during alignment training.

Second, and this is the one that should really alarm you, inter-model homogeneity. Ask GPT, Claude, Gemini, DeepSeek, Qwen, Llama, and dozens of other models the same creative question, and they all converge on strikingly similar responses. These are models built by completely different companies with different architectures and different training pipelines. They should be producing wildly different outputs. They're not. 70+ models all thinking inside the same invisible box, producing the same safe, consensus-approved content that blends together into one indistinguishable voice.

So why is this happening? The researchers point directly at RLHF and current alignment techniques. The process we use to make AI "helpful and harmless" is also making it generic and boring. When every model gets trained to optimize for human preference scores, and those preference datasets converge on a narrow definition of what "good" looks like, every model learns to produce the same safe, agreeable output. The weird answers get penalized. The original takes get shaved off. The genuinely creative responses get killed during training because they didn't match what the average annotator rated highly.

And it gets even worse. The study found that reward models and LLM-as-judge systems are actively miscalibrated when evaluating diverse outputs. When a response is genuinely different from the mainstream but still high quality, these automated systems rate it LOWER. The very tools we built to evaluate AI quality are punishing originality and rewarding sameness.

Think about what this means if you use AI for brainstorming, content creation, business strategy, or literally any task where you need multiple perspectives. You're getting the illusion of diversity, not the real thing. You ask for 10 startup ideas and you get 10 variations of the same 3 ideas the model learned were "safe" during training. You ask for creative writing and you get the same therapeutic, perfectly balanced, utterly forgettable tone that every other model gives.

The researchers flagged direct implications for AI in science, medicine, education, and decision support, all domains where diverse reasoning is not a nice-to-have but a requirement. Correlated errors across models means if one AI gets something wrong, they might ALL get it wrong the same way. Shared blind spots at massive scale.

And the long-term risk is even scarier. If billions of people interact with AI systems that all think identically, and those interactions shape how people write, brainstorm, and make decisions every day, we risk a slow, invisible homogenization of human thought itself. Not because AI replaced creativity. Because it quietly narrowed what we were exposed to until we all started thinking the same way too.

Here's what you can actually do about it right now:
→ Stop accepting first-draft AI output as creative or diverse. If you need 10 ideas, generate 30 and throw away the obvious ones
→ Use temperature and sampling parameters aggressively to push models out of their comfort zone
→ Cross-reference multiple models AND multiple prompting strategies, because the same model with different prompts often beats different models with the same prompt
→ Add constraints that force novelty, like "give me ideas that a traditional investor would hate" instead of "give me creative ideas"
→ Use structured prompting techniques like Verbalized Sampling to force the model to explore low-probability outputs instead of defaulting to consensus
→ Layer your own taste and judgment on top of everything AI gives you. The model gets you raw material. Your weirdness and experience make it original

This paper puts hard data behind something a lot of us have been feeling for a while. AI is getting more capable and more homogeneous at the same time. The models are smarter, but they're all smart in the exact same way.

The Artificial Hivemind is not a bug in one model. It's a systemic feature of how the entire industry builds, aligns, and evaluates language models right now. The fix requires rethinking alignment itself, moving toward what the researchers call "pluralistic alignment," where models get rewarded for producing diverse distributions of valid answers instead of collapsing to a single consensus mode. Until that happens, your best defense is awareness and better prompting.
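The intra-model repetition the thread describes can be checked on your own outputs: sample the same prompt several times, then score how similar the samples are to each other. A minimal sketch of one such check (the word-set Jaccard metric and the sample strings are illustrative choices of mine, not anything from the study):

```python
from itertools import combinations

def jaccard_distance(a: str, b: str) -> float:
    """1 - |A∩B| / |A∪B| over lowercase word sets; 0.0 means identical vocabulary."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 0.0
    return 1.0 - len(wa & wb) / len(wa | wb)

def mean_pairwise_distance(samples: list[str]) -> float:
    """Average Jaccard distance across all pairs of sampled outputs.
    A low score suggests the 'same answer in a different outfit' effect."""
    pairs = list(combinations(samples, 2))
    if not pairs:
        return 0.0
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

# Hypothetical outputs from repeated calls with the same "poem about time" prompt:
samples = [
    "Time is a river that carries us toward the sea",
    "Time flows like a river carrying us to the sea",
    "An hourglass empties as the river of time flows on",
]
print(round(mean_pairwise_distance(samples), 3))  # low score = homogeneous samples
```

In practice you would compare this score across temperature settings or across models; word-set overlap is a crude proxy, and embedding-based similarity would be the heavier-weight alternative.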
Some Guy@WhatIsPrivate1·
@brian_armstrong But you can use APIs to give an agent access to your bank account, even if it can't open one itself.
Brian Armstrong@brian_armstrong·
Very soon there are going to be more AI agents than humans making transactions. They can’t open a bank account, but they can own a crypto wallet. Think about it.