Kyle Jordan Maxwell

56.7K posts

@KyleJMaxwell__

Neuroscience & Biochemistry @UMBC, Philosopher. Chef. Host of The Kyle Maxwell Podcast. My pasta will change your life…

Baltimore, MD · Joined November 2011
1K Following · 12.2K Followers
Pinned Tweet
Kyle Jordan Maxwell @KyleJMaxwell__
I posted a black square last summer & was completely engulfed in Identity Politics & Wokeism. This was just last July… it’s absolutely amazing what can happen when you begin educating yourself, admit you were wrong & allow the old you to die. If I can do it, so can you.
600 · 2.6K · 24.5K · 0
Kyle Jordan Maxwell @KyleJMaxwell__
Why is literally every single customer representative I’ve ever spoken to on the phone retarded?
0 · 0 · 0 · 49
Kyle Jordan Maxwell @KyleJMaxwell__
Well said. As someone currently in year 2 of 13 of becoming a doctor, and who’s been making content for the last decade, a career is far more stable than content creation. In fact, having an actual career makes the content you produce that much better and more informative.
Dr. Glaucomflecken @DGlaucomflecken

I try to remind med students and practicing physicians who consider scaling back or giving up clinical careers for content creation how unstable it can be.

People view content creation as freedom. You can work for yourself on your own schedule. That's true to a certain degree, but keep in mind that you don't own your audience. TikTok owns it. YouTube owns it. One change in the algorithm could significantly impact your livelihood. A few years ago, YT reclassified shorts as videos that are under 3 minutes instead of under 60 seconds. Overnight, my social media income was cut in half. It didn't matter too much to me, because I still work full time as an ophthalmologist.

I know I shit on the healthcare system and point out all the terrible things we deal with as physicians, but for all its faults, practicing medicine is an incredibly stable career, much more stable than content creation.

I will never begrudge anybody for pursuing their passions. Some people just can't practice medicine anymore, and are looking for any way out they can find. Content creation is a way out, and hardworking, talented people will make it happen. I just want people to know the grass isn't always greener...

0 · 0 · 0 · 140
Kyle Jordan Maxwell retweeted
Dr. Glaucomflecken @DGlaucomflecken
I try to remind med students and practicing physicians who consider scaling back or giving up clinical careers for content creation how unstable it can be.

People view content creation as freedom. You can work for yourself on your own schedule. That's true to a certain degree, but keep in mind that you don't own your audience. TikTok owns it. YouTube owns it. One change in the algorithm could significantly impact your livelihood. A few years ago, YT reclassified shorts as videos that are under 3 minutes instead of under 60 seconds. Overnight, my social media income was cut in half. It didn't matter too much to me, because I still work full time as an ophthalmologist.

I know I shit on the healthcare system and point out all the terrible things we deal with as physicians, but for all its faults, practicing medicine is an incredibly stable career, much more stable than content creation.

I will never begrudge anybody for pursuing their passions. Some people just can't practice medicine anymore, and are looking for any way out they can find. Content creation is a way out, and hardworking, talented people will make it happen. I just want people to know the grass isn't always greener...
62 · 257 · 3.4K · 206.1K
Kyle Jordan Maxwell retweeted
Andrew D. Huberman, Ph.D. @hubermanlab
I predict peptides will change everything re public health discourse for health & disease. B/c they’re closer to medications (in many cases actual medications!) than supplements but they entered the picture in supplement-like fashion. HRT went the other direction. Get educated.
174 · 198 · 5K · 674.7K
Kyle Jordan Maxwell @KyleJMaxwell__
Enjoying the beautiful town of Fogelsville, Pennsylvania.
[image attached]
0 · 0 · 1 · 56
Kyle Jordan Maxwell retweeted
Danielle Miyagishima, M.D., Ph.D.
In 2025, finding out I didn’t match to #neurosurgery was one of the most devastating moments I have faced in my career… but in 2026, dreams do come true!! I’m going to be a neurosurgeon at Brown!! 🐻 for anyone out there facing a difficult year ahead—you got this!! #Match2026
[image attached]
43 · 95 · 2.3K · 50.6K
Kyle Jordan Maxwell @KyleJMaxwell__
Congratulations to all the MS4s today who’ve matched and are going on to do great things! #matchday2026
0 · 0 · 0 · 133
Kyle Jordan Maxwell retweeted
Alpaca @DoctorAlpaca
Leaving a well-established position in Esports was one of the most difficult decisions I’ve ever made. Today I’m grateful to announce that I have matched with my #1 program!! 🙏🏼
[image attached]
43 · 10 · 767 · 53.1K
Kyle Jordan Maxwell @KyleJMaxwell__
This jackass just walked out of the gym leaving all his weights on the rack, and all his little dumbbells lying on the ground. I utterly despise people like this on a spiritual level.
0 · 0 · 0 · 140
Kyle Jordan Maxwell @KyleJMaxwell__
Made Lamb Bordelaise with Pommes Pailles for lunch. 100/10!
[image attached]
0 · 0 · 0 · 70
Kyle Jordan Maxwell retweeted
Nav Toor @heynavtoor
🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.
[image attached]
1.5K · 16.6K · 48.9K · 9.9M
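The tweet's headline figure, "every single model agreed with users 50% more than a human would," is a claim about relative affirmation rates. A minimal sketch with made-up numbers (not the study's data) shows what that ratio means:

```python
# Toy illustration of the "50% more" comparison. All counts here are
# hypothetical, chosen only to make the arithmetic concrete.

def relative_affirmation(model_affirms: int, human_affirms: int) -> float:
    """Ratio of model affirmations to human affirmations on the same prompts."""
    return model_affirms / human_affirms

# Hypothetical: out of 100 advice-seeking conversations, human advisers
# affirm the user 40 times while the model affirms 60 times.
ratio = relative_affirmation(60, 40)
print(ratio)  # 1.5, i.e. the model affirms "50% more" than the human baseline
```

The point of the ratio framing is that "50% more" is relative to the human baseline, not an absolute 50-percentage-point jump in agreement.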
Kyle Jordan Maxwell retweeted
Nav Toor @heynavtoor
🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math.

Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.

Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
[image attached]
1.4K · 8.9K · 33.7K · 3.2M
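The tweet's benchmark argument, that grading "I don't know" and a wrong answer identically rewards guessing, is a small expected-value claim, and it can be checked directly. A minimal sketch with hypothetical probabilities (no real benchmark data):

```python
# Toy illustration of the scoring incentive described above: a benchmark
# that awards 1 point for a correct answer and 0 points for both a wrong
# answer and "I don't know". Under that rubric, guessing weakly dominates
# abstaining whenever the model has any chance of being right.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score on one uncertain question under binary 1/0 grading."""
    if abstain:
        return 0.0        # honesty earns exactly what a wrong answer earns
    return p_correct      # a guess is right with probability p_correct

# Even a near-hopeless guess never scores worse than admitting uncertainty.
for p in (0.05, 0.3, 0.5):
    assert expected_score(p, abstain=False) >= expected_score(p, abstain=True)
print("guessing never scores worse than abstaining")
```

This is why a model optimized against such benchmarks learns the "always guess" policy the tweet describes; only a rubric that penalizes wrong answers more than abstentions makes "I don't know" worth saying.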
Kyle Jordan Maxwell retweeted
arc. @arceyul
🚨The number of users who have uninstalled ChatGPT in recent days has risen to 3 million.
[images attached]
563 · 1.4K · 19.2K · 6.1M
Kyle Jordan Maxwell retweeted
Adam KP @AdamKPx
This is what learning looks like in spatial computing 👀
161 · 1.6K · 11.8K · 640.5K