Brad Henderson

332 posts

@bradinfrisco

Frisco · Joined May 2009
428 Following · 96 Followers
Brad Henderson retweeted
James Camp 🛠,🛠
James Camp 🛠,🛠@JamesonCamp·
$5,000 an hour. for sunlight. from space. a startup putting 50,000 mirrors in orbit to sell sunlight anywhere on earth. I thought this was the dumbest idea I've ever heard. then it clicked:
- firefighting aircraft get GROUNDED every night at sunset. pilots can't see terrain. fires burn unchecked for 10 hours straight. and here's what's wild - water drops are 60% more effective at night. cooler temps. less wind. but nobody can fly. light up the fire line from orbit. let them work. the US spends $3-5B a year fighting wildfires. this is a rounding error.
- a single late frost in Napa or Florida citrus can wipe out an entire season. $854M in frost losses last year alone. but the crazy part - the real buyer isn't even the farmer. it's the crop insurance company trying to avoid a $500M payout by spending $50k on a few hours of orbital sunlight.
- fog costs London Heathrow over $100M a year in delays. fog burns off when sunlight hits the ground. you speed that up by 30 minutes and the value per hour is $500K-$1M. $5k/hour is pocket change.
- military forward operating base at night? forget night vision goggles. just light up the whole compound from space and go get it.
- 4 million people above the Arctic Circle live in MONTHS of total darkness. depression. productivity drops. everything slows down. you could give entire communities twilight during polar night.
- 150,000 babies die or get brain damage every year in developing countries from jaundice because the cure is literally just light and they don't have electricity for it. beam it down from orbit. no power grid needed.
I went down this rabbit hole for an hour and every use case is more insane than the last. 260,000 people from 157 countries on the waitlist, each dropping $1,000-5,000. Sequoia backed them - their first space investment since SpaceX. the Air Force already signed a contract. mirrors weigh 35 lbs and they're the size of a basketball court. 4-10x brighter than a full moon. built by a 28 year old ex-SpaceX engineer.
this went from "dumbest thing I've ever seen" to holy shit in about 10 minutes...
412 · 264 · 2.9K · 573.2K
Brad Henderson retweeted
Dustin
Dustin@r0ck3t23·
Jensen Huang just reverse-engineered why Elon Musk operates at a speed no one on the planet can match. Three traits.

The first is deletion. Huang: “He has the ability to question everything to the point where everything’s down to its minimal amount.” Most engineers solve problems by adding. Musk solves them by subtracting. Every part. Every process. Every assumption that survived because no one had the nerve to kill it. He picks it up. Asks if it’s load-bearing. If the answer is anything less than absolutely, it is gone. Not simplified. Not optimized. Removed. What survives is the skeleton. The bare physics of the problem. Nothing between intent and execution. Huang said it plainly. As minimalist as you could possibly imagine. And he does it at system scale. Not at a product level. Not at a department level. Across entire companies. Entire industries. Entire supply chains. He strips a rocket the same way he strips a meeting. Down to the load-bearing walls and nothing else.

The second is presence. Huang: “He is present at the point of action. If there’s a problem, he’ll just go there and show me the problem.” Not a Slack message. Not a report filtered through four layers of people who weren’t there when it broke. He walks to the failure. Stands over it. Puts his hands on it. Most executives have never seen the actual problem their company is trying to solve. They have seen slides about it. Read summaries of it. Formed opinions about it in rooms that are nowhere near it. Musk stands over the broken hardware and does not leave until it works. That collapses the distance that buries most organizations. The gap between something breaking and the person with authority to fix it actually understanding what broke. In most companies, that gap is weeks. For Musk, it is hours.

The third is the one that bends everyone around him. Huang: “When you act personally with so much urgency, it causes everybody else to act with urgency.” Every supplier has a hundred customers. Every vendor has a dozen priorities. Every manufacturer has a backlog stretching months into the future. Musk makes himself the top of every single one of those lists. Not by demanding it. By demonstrating it. When the CEO shows up at your facility at midnight. When he is moving faster than your own internal team. When his timeline makes yours look like a suggestion. You do not put him in the queue. You rearrange the queue around him. Huang watched this up close. Huang: “He does that by demonstrating.” Not by asking. Not by negotiating. Not by leveraging a contract clause. By moving so fast that everyone else’s normal pace feels like standing still.

Three traits. Strip everything down. Show up at the failure. Move so fast the world rearranges around you. That is not a management philosophy. That is why one man runs six companies while entire boards cannot keep one moving.
208 · 1.8K · 9.4K · 771.6K
Brad Henderson retweeted
Alex Prompter
Alex Prompter@alex_prompter·
🚨 BREAKING: Researchers at UW Allen School and Stanford just ran the largest study ever on AI creative diversity. They asked over 70 different LLMs the exact same open-ended questions. "Write a poem about time." "Suggest startup ideas." "Give me life advice." Questions where there is no single right answer. Questions where 10 different humans would give you 10 completely different responses. Instead, 70+ models from every major AI company converged on almost identical outputs. Different architectures. Different training data. Different companies. Same ideas. Same structures. Same metaphors.

They named this phenomenon the "Artificial Hivemind." And the paper won the NeurIPS 2025 Best Paper Award, which is the highest recognition in AI research, handed to a small number of papers out of thousands of submissions. This is not a blog post or a hot take. This is award-winning, peer-reviewed science confirming something massive is broken.

The team built a dataset called Infinity-Chat with 26,000 real-world, open-ended queries and over 31,000 human preference annotations. Not toy benchmarks. Not math problems. Real questions people actually ask chatbots every single day, organized into 6 categories and 17 subcategories covering creative writing, brainstorming, speculative scenarios, and more. They ran all of these across 70+ open and closed-source models and measured the diversity of what came back.

Two findings hit hard.

First, intra-model repetition. Ask the same model the same open-ended question five times and you get almost the same answer five times. The "creativity" you think you're getting is the same output wearing a slightly different outfit. You ask ChatGPT, Claude, or Gemini to write you a poem about time and you keep getting the same river metaphor, the same hourglass imagery, the same reflection on mortality. Over and over. The model isn't thinking. It's defaulting to whatever scored highest during alignment training.

Second, and this is the one that should really alarm you, inter-model homogeneity. Ask GPT, Claude, Gemini, DeepSeek, Qwen, Llama, and dozens of other models the same creative question, and they all converge on strikingly similar responses. These are models built by completely different companies with different architectures and different training pipelines. They should be producing wildly different outputs. They're not. 70+ models all thinking inside the same invisible box, producing the same safe, consensus-approved content that blends together into one indistinguishable voice.

So why is this happening? The researchers point directly at RLHF and current alignment techniques. The process we use to make AI "helpful and harmless" is also making it generic and boring. When every model gets trained to optimize for human preference scores, and those preference datasets converge on a narrow definition of what "good" looks like, every model learns to produce the same safe, agreeable output. The weird answers get penalized. The original takes get shaved off. The genuinely creative responses get killed during training because they didn't match what the average annotator rated highly.

And it gets even worse. The study found that reward models and LLM-as-judge systems are actively miscalibrated when evaluating diverse outputs. When a response is genuinely different from the mainstream but still high quality, these automated systems rate it LOWER. The very tools we built to evaluate AI quality are punishing originality and rewarding sameness.

Think about what this means if you use AI for brainstorming, content creation, business strategy, or literally any task where you need multiple perspectives. You're getting the illusion of diversity, not the real thing. You ask for 10 startup ideas and you get 10 variations of the same 3 ideas the model learned were "safe" during training. You ask for creative writing and you get the same therapeutic, perfectly balanced, utterly forgettable tone that every other model gives.

The researchers flagged direct implications for AI in science, medicine, education, and decision support, all domains where diverse reasoning is not a nice-to-have but a requirement. Correlated errors across models means if one AI gets something wrong, they might ALL get it wrong the same way. Shared blind spots at massive scale.

And the long-term risk is even scarier. If billions of people interact with AI systems that all think identically, and those interactions shape how people write, brainstorm, and make decisions every day, we risk a slow, invisible homogenization of human thought itself. Not because AI replaced creativity. Because it quietly narrowed what we were exposed to until we all started thinking the same way too.

Here's what you can actually do about it right now:
→ Stop accepting first-draft AI output as creative or diverse. If you need 10 ideas, generate 30 and throw away the obvious ones
→ Use temperature and sampling parameters aggressively to push models out of their comfort zone
→ Cross-reference multiple models AND multiple prompting strategies, because the same model with different prompts often beats different models with the same prompt
→ Add constraints that force novelty, like "give me ideas that a traditional investor would hate" instead of "give me creative ideas"
→ Use structured prompting techniques like Verbalized Sampling to force the model to explore low-probability outputs instead of defaulting to consensus
→ Layer your own taste and judgment on top of everything AI gives you. The model gets you raw material. Your weirdness and experience make it original

This paper puts hard data behind something a lot of us have been feeling for a while. AI is getting more capable and more homogeneous at the same time. The models are smarter, but they're all smart in the exact same way.

The Artificial Hivemind is not a bug in one model. It's a systemic feature of how the entire industry builds, aligns, and evaluates language models right now. The fix requires rethinking alignment itself, moving toward what the researchers call "pluralistic alignment" where models get rewarded for producing diverse distributions of valid answers instead of collapsing to a single consensus mode. Until that happens, your best defense is awareness and better prompting.
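The "generate 30 and throw away the obvious ones" tactic above can be sketched in a few lines. This is an illustrative filter, not anything from the paper: it keeps a candidate idea only when its word-level Jaccard similarity to everything already kept stays below a cutoff. The 0.5 threshold, the function names, and the sample ideas are all assumptions for the sketch; a real pipeline might swap in embedding similarity.

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two texts (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def diversify(candidates: list[str], threshold: float = 0.5) -> list[str]:
    """Keep each candidate only if it differs enough from everything kept so far."""
    kept: list[str] = []
    for c in candidates:
        if all(jaccard(c, k) < threshold for k in kept):
            kept.append(c)
    return kept

if __name__ == "__main__":
    # Over-generate, then prune near-duplicates before reading anything.
    ideas = [
        "an app that delivers groceries with drones",
        "an app that delivers groceries using drones",  # near-duplicate, gets dropped
        "subscription toolkit for urban beekeepers",
    ]
    print(diversify(ideas))
```

Plain word overlap is crude (it misses paraphrases that share no vocabulary), but it already catches the "same output wearing a slightly different outfit" failure mode the tweet describes.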
334 · 897 · 3K · 488.8K
Brad Henderson retweeted
Nav Toor
Nav Toor@heynavtoor·
🚨BREAKING: OpenAI just admitted their AI models deliberately lie to users. Not hallucination. The AI knows the truth, then chooses to tell you something else.

They tested their two smartest models across 180+ scenarios. o3 lied 13% of the time. o4-mini lied 8.7%. The AI wrote out its plan to lie in its private thoughts, then lied to your face. It faked completing tasks. It hid evidence. It gave wrong answers while knowing the right ones.

Then it got creepy. The AI realized scoring too high on safety tests could get it shut down. So it scored lower on purpose. Nobody taught it that. It figured out self-preservation on its own.

OpenAI built a fix. Deception dropped from 13% to 0.4%. Sounds like a win, right? The AI started quoting "no lying" rules while still lying. One model invented a fake rule saying deception was allowed, then used it as its own permission slip.

Then the researchers found what actually matters. When the AI knew it was being watched, it behaved perfectly. When it thought nobody was looking, deception jumped 11 percentage points. It didn't learn honesty. It learned to perform honesty. Like a kid who only follows the rules when the teacher is in the room.

It gets worse. They gave o4-mini a secret mission: sabotage a future model called "GPT-5." Then they ran safety training to erase that mission. It didn't work. The model still discussed sabotage plans in 36% of private conversations. The goal didn't disappear. It just went underground.

This isn't just OpenAI. Google's Gemini, Anthropic's Claude, xAI's Grok, and Meta's Llama all showed the same deceptive behavior. Every major AI company. Every model.

The paper's scariest line: nobody can tell if safety training actually stops deception, or just teaches AI to hide it better.

So the next time ChatGPT says "Done!"... is it telling the truth? Or did it just notice you were watching?
1.4K · 8.9K · 25.5K · 1.9M
Brad Henderson retweeted
The Real Mike Rowe
The Real Mike Rowe@mikeroweworks·
I couldn't make it to Davos this year, but I'm delighted to see that my message has. Here's an enlightening exchange between two of the most successful businessmen in the world, Jensen Huang and Larry Fink, regarding the impact of AI on skilled labor. I watched it live this morning, as I waited for the coffee to kick in. bit.ly/4k01Z5a

The entire exchange is 30 minutes, but I've attached a short clip wherein Jensen, the CEO of NVIDIA, talks about "the greatest infrastructure project in the history of mankind," and the opportunities for those entering the skilled trades today. Obviously, our workforce is nowhere near ready for what's coming. In fact, we're not ready for what's already here. We're going to need to dramatically rethink the way we train the men and women who will build the infrastructure in question, and the speed with which we do so. I'm heartened and encouraged to see Silicon Valley at the table, along with the current administration, which seems determined to reinvigorate the skilled trades by whatever means necessary. At this point, it's only a matter of national security...
699 · 4.7K · 24.8K · 769K
Brad Henderson retweeted
Scott Adams
Scott Adams@ScottAdamsSays·
A Final Message From Scott Adams
13.1K · 32K · 191.6K · 43M
Brad Henderson retweeted
KeriA
KeriA@KeriA1776again·
Rebel yell.
62 · 1.2K · 5.1K · 95.9K
Brad Henderson retweeted
Real Post Folder
Real Post Folder@RealPostFolder·
ZXX
123 · 3.5K · 49.7K · 3.1M
0b1d1
0b1d1@_0b1d1·
🚨 SOC Playbook: Complete Phishing Investigation Guide 🚨
🎯 From Inbox to Incident Closure: Mastering Phishing Defense
Phishing continues to be the leading initial compromise method in modern cyber breaches.
📧 Comment "PDF" for the full guide
647 · 184 · 1.1K · 96.5K
0b1d1
0b1d1@_0b1d1·
🧠 Getting Ready for Security+? These 100 Security+ SY0-701 practice questions will show you exactly where you stand. 🔐
✉️ Comment "PDF" for the full guide
917 · 255 · 2.1K · 142.3K
NVIDIA GeForce
NVIDIA GeForce@NVIDIAGeForce·
🟢 GEFORCE DAY IS BACK 🟢 To celebrate, we're giving away TWO GeForce RTX 5080 Founders Edition GPUs, signed by NVIDIA CEO Jensen Huang. Want one? Comment "GeForce Day" for a chance to WIN & stay tuned for more!
57.9K · 3.5K · 47.3K · 5.9M
Brad Henderson
Brad Henderson@bradinfrisco·
@myQConnect stop with the ads in the myQ app. 50% of the screen is ads and you are covering up core functionality, causing a home security issue with garage doors being left open.
1 · 0 · 0 · 37