Michael Connor

383 posts

Michael Connor

@devatlanta

Atlanta based entrepreneur & Co-founder of https://t.co/nuFiJaOe4f, an AI based digital nutritionist focused on brain health

Atlanta GA · Joined June 2009
121 Following · 155 Followers
Michael Connor
Michael Connor@devatlanta·
i have a friend i’m working with, diagnosed with schizophrenia and psychosis, who i believe has an autoimmune disorder attacking his brain. i’m using AI to analyze his labs and found countless indications that he is suffering from autoimmune encephalitis induced psychosis. just starting treatment based on that assumption, fingers crossed. all anti-psychotics have had paradoxical effects for him.
English
1
0
1
147
Brain Inflammation Collaborative
Brain Inflammation Collaborative@BrainInflCollab·
In 2017, Alina Sternberg, a psychiatrist, was hit with crushing fatigue and brain fog. Neurologists told her the symptoms were caused by depression. "No, I can enjoy my life, and I know what depression is... I’m a psychiatrist!” It took 6 years to discover the culprit...🧵
Brain Inflammation Collaborative tweet media
English
19
136
621
129K
Michael Connor
Michael Connor@devatlanta·
@robertlufkinmd fat. in the 80s we declared war on fat and replaced it with carbs. fat makes you feel good and full, your body needs it, and fat (real natural fat, not seed oils) has nutrients. without fat, we eat but don’t feel satisfied.
English
1
0
4
551
Robert Lufkin MD
Robert Lufkin MD@robertlufkinmd·
If not sugar, then what is the cause?
Robert Lufkin MD tweet media
English
291
22
237
119.9K
Michael Connor
Michael Connor@devatlanta·
@DawnsMission how do you know you’re not unleashing millions of cancer cells into the patient’s body that can take hold in other places? not being cynical here, just curious.
English
4
0
16
2.1K
Dr. Dawn Michael
Dr. Dawn Michael@DawnsMission·
🚨 WOW —Tumors literally liquefied by sound waves. No scalpel. No chemo. No radiation. None of those horrible side effects. This is histotripsy: focused ultrasound blasts destroy cancer cells mechanically in minutes, sparing healthy tissue completely.
English
795
11.5K
32.7K
906.7K
Michael Connor
Michael Connor@devatlanta·
@NathieVR the UI looks cool but it's not obvious what string to play for each note. the frets are clear but gauging the height of the incoming note and guessing the string seems tough.
English
1
0
2
189
Nathie
Nathie@NathieVR·
This mixed reality app combines the fun of Rocksmith with real guitar lessons.
English
46
250
2.7K
218.4K
Michael Connor
Michael Connor@devatlanta·
i agree with the “every generation blames the next generation” assessment, but we are seeing statistics to back this up. 23% of men 18-24 are opting out of the economy, smoking weed and playing video games in their parents’ basement. i don’t blame them, it’s a complex issue, but as a society we need to quickly figure out what the hell is going on before we lose these kids.
English
0
0
2
201
The Wall Street Journal
For all sorts of reasons, Gen-Z is woefully unprepared for dealing with the workplace. Here’s why—and what companies need to do to fix it. on.wsj.com/3P3tAqy
The Wall Street Journal tweet media
English
21
23
80
45.6K
Simplifying AI
Simplifying AI@simplifyinAI·
"I don't have a GPU" is officially dead 🤯 You can now run a 70B model on a single 4GB GPU, and it even scales up to the colossal Llama 3.1 405B on just 8GB of VRAM. AirLLM uses "Layer-wise Inference": instead of loading the whole model, it loads, computes, and flushes one layer at a time.
→ No quantization needed by default
→ Supports Llama, Qwen, and Mistral
→ Works on Linux, Windows, and macOS
100% Open Source.
Simplifying AI tweet media
English
147
591
5.9K
323.6K
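The layer-wise trick described in the tweet above can be sketched in a few lines. This is a toy illustration, not AirLLM's actual API: `load_layer`, `NUM_LAYERS`, and the dict standing in for on-disk weights are all hypothetical, and real layers are transformer blocks rather than single matrices. The point it shows is the memory pattern: only one layer's weights are resident at any moment.

```python
import numpy as np

NUM_LAYERS = 4
HIDDEN = 8

rng = np.random.default_rng(0)
# Stand-in for weights stored on disk, one file per layer (hypothetical).
layer_store = {i: rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
               for i in range(NUM_LAYERS)}

def load_layer(i):
    # A real system would read this layer's weights from disk here.
    return layer_store[i]

def layerwise_forward(x):
    h = x
    for i in range(NUM_LAYERS):
        w = load_layer(i)      # load just this one layer
        h = np.tanh(h @ w)     # compute its activations
        del w                  # flush it before loading the next layer
    return h

out = layerwise_forward(np.ones(HIDDEN))
print(out.shape)  # (8,)
```

Peak memory is one layer plus the activations, which is why a 4GB card can host a model whose full weights would never fit, at the cost of re-reading weights from disk on every forward pass.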
Michael Connor
Michael Connor@devatlanta·
@sciencegirl cool! that’s the same year that nuclear fusion is supposed to come out.
English
0
0
0
205
Science girl
Science girl@sciencegirl·
Kawasaki has confirmed its Corleo robot horse will enter production, debut at Expo 2030 in Riyadh, and be sold to consumers by 2035.
English
439
817
4.8K
433.3K
Michael Connor
Michael Connor@devatlanta·
@IntuitMachine this is done today with tool use and RAG. cursor does this by grepping files. i don’t think this is as novel as it may seem.
English
0
0
0
72
Carlos E. Perez
Carlos E. Perez@IntuitMachine·
We’ve been obsessed with "bigger context windows" for years. 128k. 1M. 10M tokens. 🛑 Stop. New research from MIT suggests we might be doing it wrong. The secret to infinite context isn't a bigger brain. It's "Recursive Language Models." And they may have cracked the code on processing 10M+ tokens. 🧵 1/15

First, let's talk about the "Dirty Secret" of LLMs: Context Rot. Even if a model claims it can handle 1M tokens, it doesn't mean it handles them well. As context grows, performance usually nosedives. The model "forgets" the middle or gets confused by the noise. 2/15

Enter the Recursive Language Model (RLM). The researchers (Zhang, Kraska, Khattab) asked a radical question: "What if we don't feed the prompt into the model at all?" Instead, they treat the prompt as an external environment. 3/15

Here is the "Aha!" moment: In an RLM, the prompt (no matter how huge) is loaded as a variable inside a Python REPL (a coding environment). The LLM isn't forced to memorize the text. Instead, it is given the power to write code to interact with the text. 4/15

Think of it like this: Standard LLM = Trying to memorize a 10,000-page textbook in one second. RLM = A student sitting at a desk with the textbook. They can open it, read page 50, take a note, close it, then read page 200. It turns a memory problem into a search and processing problem. 5/15

The "Recursive" part is where it gets wild. 🤯 The model writes code to chunk the text. Then, it calls itself (a sub-agent) to process that chunk. It can say: "Hey Sub-Model, read lines 1000-2000 and tell me if you see a mention of 'apples'." It aggregates the answers and moves on. 6/15

The results? Absolutely bonkers. On the "OOLONG" benchmark (a task where you have to connect dots across the whole text), standard GPT-5 and Qwen models failed hard as length increased. The RLM? It maintained strong performance even at 10 Million tokens. 7/15

"But wait," I hear you ask. "Isn't that incredibly expensive?" Actually... no. Because the model uses code to filter what it reads, it doesn't always ingest the whole thing. The study found RLMs were often comparable in cost to (or even cheaper than!) standard long-context calls. 8/15

The most fascinating part is the Emergent Behavior. Nobody taught the model how to do this. It figured out on its own how to: use Regex to filter keywords, chunk text by newlines, and verify its own answers by running code. It's basically inventing its own "Out-of-Core" algorithms. 9/15

This solves a massive problem: Quadratic Complexity. Some tasks get harder the longer the text is (e.g., finding relationships between every pair of people in a book). Standard models choke on this. RLMs just break it down into loops and sub-calls. They eat complexity for breakfast. 10/15

Why does this matter to you? If you are building RAG (Retrieval Augmented Generation) or agents, this is a wake-up call. We are moving away from "Retrieval" (finding the right snippet) to "Reasoning" (programmatically navigating the whole dataset). 11/15

A practical tip for devs: Stop trying to stuff your entire codebase or legal library into the system prompt. Instead, give your agent a tool to read the file and a scratchpad to write code. Let it decide what it needs to read. 12/15

This research hints at a major shift in AI scaling laws. We used to think: Better Performance = More Training Compute. Now we are seeing: Better Performance = More Inference Compute. Give the model time to think, recurse, and execute code, and it becomes a genius. 13/15

The limitations? It's not perfect. Sometimes the model gets stuck in a loop or burns tokens verifying things it already knows. But as a proof of concept? It proves that "Context Windows" are a hardware constraint we can solve with software. 14/15

The paper is "Recursive Language Models" (arXiv:2512.24601). It’s a masterclass in rethinking how we interact with LLMs. If this thread sparked a lightbulb 💡 for you: repost the first tweet to share the knowledge. Subscribe for more breakdowns of bleeding-edge AI research. 15/15
English
8
7
52
8.1K
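The chunk-then-sub-call loop the thread describes (tweet 6/15: "read lines 1000-2000 and tell me if you see a mention of 'apples'") can be sketched as below. This is a toy illustration, not the paper's implementation: `sub_model` is a stub where a real RLM would issue a recursive LLM call on just that chunk, and `recursive_query` and `chunk_size` are made-up names. What it shows is the shape of the idea: the long text stays in a plain variable, and only small slices ever reach a model call.

```python
def sub_model(chunk, question):
    # Stub for a recursive sub-call: a real RLM would send this one
    # chunk plus the question to an LLM and get back an answer.
    return question.lower() in chunk.lower()

def recursive_query(text, question, chunk_size=1000):
    # Chunk the text, query each chunk with a sub-call, aggregate hits.
    # The full text is never passed to any single model call.
    hits = []
    for start in range(0, len(text), chunk_size):
        chunk = text[start:start + chunk_size]
        if sub_model(chunk, question):
            hits.append(start)
    return hits

doc = ("filler " * 500) + "apples grow here " + ("filler " * 500)
print(recursive_query(doc, "apples"))  # → [3000]
```

Because each sub-call only ever sees `chunk_size` characters, total cost scales with how much the controller chooses to read, not with the length of the document, which is the thread's point about RLMs sometimes being cheaper than one giant long-context call.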
Michael Connor
Michael Connor@devatlanta·
@iAnonPatriot remember that homes in the 50s were 950 sq/ft with linoleum floors and formica countertops, no AC. the average home is now 2300 sq/ft with all the amenities. expectations are sky high.
English
0
0
0
18
American AF 🇺🇸
American AF 🇺🇸@iAnonPatriot·
The Average Salary vs. Home Prices This chart is insanity.
English
1.1K
6.4K
23.8K
2.1M
Michael Connor
Michael Connor@devatlanta·
@maxmarchione the big mistake most people make is carbs within three hours of bed. high glucose at night is a sleep killer. grab a glucose monitor and see for yourself, it will change your health.
English
1
0
0
492
Max Marchione
Max Marchione@maxmarchione·
my current sleep stack:
Max Marchione tweet media
English
15
2
118
30.5K
Michael Connor
Michael Connor@devatlanta·
i’ve had the opposite experience but my sample size is about 10 bosses. the women leaders i’ve worked for were more often focused on making the right long term decision for the organization even if it meant short term pain, whereas my male bosses were more tuned into optics that led to accolades and advancement. The women were also more willing to stick their neck out to protect or support an employee.
English
0
0
1
154
Douglass Mackey
Douglass Mackey@douglassmackey·
Here’s a taboo no one is allowed to talk about
Douglass Mackey tweet media
English
606
1.3K
18.5K
397K
Camus
Camus@newstart_2024·
Ever get that shrug and "Fine" when you ask your kid "How was school today?" You're not alone—it's basically the universal parent struggle. But psychologist Amy Morin (author of "13 Things Mentally Strong Parents Don't Do") shares 7 way better questions that actually get kids talking. More importantly, they quietly build habits like gratitude, empathy, resilience, and curiosity—without turning it into a lecture. Here are a few standouts: • "What was the best part of your day?" – Trains their brain to spot positives and boosts optimism. • "What mistake did you learn from today?" – Normalizes failure and turns it into a growth lesson. • "Who were you proud of today?" – Shifts focus to seeing good in others, growing empathy. • "Who did you help today?" – Makes kindness feel natural and rewarding. • "What was the most interesting thing you learned?" – Fuels genuine curiosity beyond grades. • "What's one thing you could have made better today?" – Encourages self-reflection and problem-solving. • "What's something new you want to try?" – Sparks courage and creativity. The magic? These aren't interrogations—they're invitations into your child's world. Over time, kids open up more, think deeper, and feel truly seen. I've seen parents try these and say the dinner table chats get richer, the eye rolls fewer. Small shift, big connection. Which one's hitting home for you? Drop it in the replies—I'd love to hear how it goes when you try it. Full breakdown in the 2:25 video below (trust me, it's gold).
English
45
1.4K
6.5K
364.1K
Michael Connor
Michael Connor@devatlanta·
@drantbradley i still roughhouse with my 13 year old and it’s one of her favorite things. only downside is that she will relentlessly pick on me when she wants to roughhouse.
English
0
0
0
647
Anthony Bradley
Anthony Bradley@drantbradley·
Roughhousing is one of the best activities fathers can do with their sons. I have a stack of psych data on the benefits. I often get asked by dads, “At what age should I stop?” Ans: “When you die.” It’s esp. vital during the teen years. It protects boys against addiction, etc.
English
76
128
3.3K
1.1M
Frank Turek
Frank Turek@DrFrankTurek·
What do you think is the most common cause of poverty in the U.S.?
English
12.2K
118
1.3K
1.1M
Carlos E. Perez
Carlos E. Perez@IntuitMachine·
The Survival Guide to the Singularity
Carlos E. Perez tweet media
English
14
36
134
6K
Michael Connor
Michael Connor@devatlanta·
@its_The_Dr i was a lifeguard and remember being shocked that 25% of americans drown in water that is three feet deep or less. seems crazy but it’s a real statistic.
English
0
0
3
7.9K
Johnny Midnight ⚡️
Johnny Midnight ⚡️@its_The_Dr·
Hard to believe ! Obama’s Chef fell off his paddle board and became the first Man ever to drown in a 4ft deep pond. Now new evidence is emerging.
English
392
6.9K
29.2K
1.1M