Jonathan Staples

34 posts


@staples46198

AI & Business Transformation | Turning hype into workforce wins. Thrivers master AI. Survivors adapt.

Pflugerville, TX · Joined March 2026

45 Following · 7 Followers
Pinned Tweet
Jonathan Staples
Jonathan Staples@staples46198·
Just launched – your new home for straight-talk on AI and business transformation in 2026. Check my recent LinkedIn pieces on AI and what's driving business transformation. Data-first. Leader-sourced. Actionable.
0
0
0
10
KIEL
KIEL@0xkiel_·
Mark Cuban says Elon Musk texted him "f*ck you" and called him a racist after he criticized Tesla. "I had his number and I texted him congrats on your 97th kid and he texted me back 'Mars needs kids.' Then I said something about a Tesla and he just sends me a text saying f*ck you. He's called me racist, he's called me poop emoji multiple times... that just gives me license to f*ck with him even more."
540
1.3K
17.9K
3.5M
Leif-Erik Hvide
Leif-Erik Hvide@LeifErikH·
Just tell it "let's sleep on it. Have a good night," wait 10 seconds, and then say "good morning." I don't know why, but that stupid little act refreshes Claude Code and it's like a completely different state. My theory is that it has no sense of time the way humans do, so it measures time in terms of token consumption or conversation turns. Once it hits enough of them it starts "acting tired" and tells you it's time to get some sleep.
12
1
53
2.6K
Omar Shahine
Omar Shahine@OmarShahine·
What the heck is going on with Claude Code? It spends a lot of time telling me things are going to take hours, days, or weeks to do and wants to stop working a lot. "Or if you'd rather I sleep on it and pick up tomorrow with fresh eyes, that's fine too." Like why would I want you to sleep on it? You don't sleep.
161
14
440
53.3K
Jonathan Staples
Jonathan Staples@staples46198·
@OmarShahine Yeah - it keeps telling me to "pick it back up tomorrow," trying to close out the session, or telling me to go to bed. Like wtf. I can't sleep - the clowns'll eat me.
0
0
0
377
BridgeMind
BridgeMind@bridgemindai·
I have two. Two NVIDIA DGX Sparks stacked on a shelf in my office. One running Hermes Agent doing cold outreach that's generated $20K+ in partnerships. The other running GLM 5.1 for local inference. 256GB of combined compute. No API. No rate limits. No subscriptions. Running 24/7. Sovereign intelligence is not a future thing. It's happening right now in my living room.
Alex Finn@AlexFinn

Do you understand how cool this is? On my desk is a DGX Spark running a Hermes agent powered by Qwen 3.6. It runs 24/7/365 doing tasks for me. Doesn't matter if the internet goes out. I have superintelligence running for me at all times. Next step: I want to get a Tesla solar roof so I'm dependent on NOBODY to run my intelligence. Even if they cut off my power I'll keep going. This is the future. Sovereign intelligence.

33
22
253
30.2K
Hitchslap
Hitchslap@Hitchslap1·
Serious question. Do you believe time travel is possible?
1.5K
27
553
56.6K
simran sachdeva
simran sachdeva@simranrambles·
wonder how Elon will rename cursor, considering both xcode and codex are taken
707
98
5K
321.9K
Jonathan Staples
Jonathan Staples@staples46198·
@signulll They train on their internal code and keep it separate for their own use. Can’t let the cat outta the bag on how they do their magic.
0
0
2
836
signüll
signüll@signulll·
not a single person i have ever spoken to uses gemini for coding. this is still very very weird. why is gemini so bad at coding when google has scoured the web full of code for decades?
1.1K
163
9.6K
847.2K
송준 Jun Song
송준 Jun Song@jun_song·
What’s stopping you from getting into local LLMs? Drop a comment below! 👇
94
1
58
9.6K
송준 Jun Song
송준 Jun Song@jun_song·
The reason you're feeling down right now is surely that you don't have enough VRAM.
74
184
1.6K
46.8K
Matthew Berman
Matthew Berman@MatthewBerman·
Anthropic is crazy for this
27
12
168
33.8K
James 𝕏ond
James 𝕏ond@james_xond·
Trying to prove a point: If you didn’t need to work to survive, what would you do all day?
763
9
163
31.4K
Jonathan Staples
Jonathan Staples@staples46198·
@MilkRoadAI So basically the frontier models brute force the intelligence instead of training on clean data. Sounds like a business opportunity - clean training data.
2
0
2
861
Milk Road AI
Milk Road AI@MilkRoadAI·
Andrej Karpathy just made one of the most interesting arguments about AI model design that most people are completely missing. His take is that frontier AI models are not too big because the technology is complex; they are too big because the training data is garbage.

When you or I think of the internet, we picture Wall Street Journal articles, Wikipedia entries, serious writing. That is not what a pretraining dataset looks like. When researchers at frontier labs look at random documents from the actual training corpus, it is stock ticker symbols, broken HTML, spam, gibberish. One estimate puts Llama 3's information compression at just 0.07 bits per token, meaning the model has only a hazy recollection of most of what it trained on.

So we build trillion-parameter models not because we need a trillion-parameter brain but because we need a trillion-parameter compression engine to squeeze some intelligence out of a firehose of noise. Most of those parameters are doing memory work, not cognitive work.

Karpathy's prediction is to separate the two entirely: build a cognitive core, a model that contains only the algorithms for reasoning and problem-solving, stripped of encyclopedic memorization, and pair it with external memory it can query when it needs facts. He thinks a cognitive core trained on high-quality data could hit genuine intelligence at around one billion parameters. For reference, today's flagship models run between 200 billion and 1.8 trillion parameters, with most of that weight dedicated to remembering the internet's slop.

The trend is already moving in his direction. GPT-4o operates at roughly 200 billion parameters and outperforms the original 1.8 trillion-parameter GPT-4. Inference costs for GPT-3.5-level performance dropped 280-fold between 2022 and 2024, driven almost entirely by smaller, cleaner, better-architected models. The real bottleneck in AI right now is not compute but data quality.
47
135
907
198.6K
Matthew Berman
Matthew Berman@MatthewBerman·
I have to say something...please don't be mad. OpenClaw has been nearly unusable for the past week. Something changed and now everything is broken. Looking forward to testing Personal Computer.
Perplexity@perplexity_ai

Today we're releasing Personal Computer. Personal Computer integrates with the Perplexity Mac App for secure orchestration across your local files, native apps, and browser. We’re rolling this out to all Perplexity Max subscribers and everyone on the waitlist starting today.

236
15
553
155.6K
Trav 👄🫀
Trav 👄🫀@trav12037911·
What did AI replace in your life?
171
1
94
6.2K
Jonathan Staples
Jonathan Staples@staples46198·
@ollama Is the improvement across all gemma4 models or just the 31b?
0
0
2
398
ollama
ollama@ollama·
Ollama 0.20.6 is here with improved Gemma 4 tool calling! more improvements to come for Gemma 4!
64
138
1.9K
106.5K
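For anyone wanting to try the improved tool calling, here is a minimal sketch using the Ollama Python client's `chat(..., tools=...)` interface. The model tag `gemma4` follows the thread, and the `get_weather` tool is a hypothetical placeholder; substitute whatever `ollama list` shows locally.

```python
# Hedged sketch of tool calling against a local Ollama server.
# The tool schema is OpenAI-style JSON, which Ollama's chat API accepts.
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Return current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def call_with_tools(client, model="gemma4"):
    # `client` is an ollama.Client(); passed in as a parameter so this
    # sketch stays importable without a live server running.
    return client.chat(
        model=model,
        messages=[{"role": "user", "content": "Weather in Pflugerville?"}],
        tools=[tool],
    )
```

If the model decides to use the tool, the response's message carries `tool_calls` with the function name and arguments; you execute the function yourself and feed the result back as a `role: "tool"` message.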
Jonathan Staples
Jonathan Staples@staples46198·
@Seltaa_ I'm running dual 3060s with 12GB each. Runs the gemma4:31b, but a little slower than ideal. The sweet spot is the 26b MoE version. Runs really nice so far
0
0
1
427
Selta ₊˚
Selta ₊˚@Seltaa_·
I have an RTX 5080 but its 16GB VRAM is too low for running my fine-tuned 31B model locally. Each response takes 1–3 minutes. Thinking of selling it and switching to a 4090 with 24GB. Fuck.
49
4
150
22.5K
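The VRAM complaints in this exchange come down to simple arithmetic: the weights alone for a 31B-parameter model at common quantization levels, ignoring KV cache and runtime overhead. A rough sketch:

```python
# Rough weights-only VRAM estimate for a 31B-parameter model.
# Real usage is higher: KV cache, activations, and framework overhead add more.
PARAMS = 31e9

def vram_gib(bits_per_weight):
    """GiB needed just to hold the weights at the given quantization."""
    return PARAMS * bits_per_weight / 8 / 2**30

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{vram_gib(bits):.1f} GiB")
```

At 4-bit the weights alone are ~14.4 GiB, which leaves almost nothing of a 16 GB card for context, and helps explain why splitting the model across two 12 GB cards (24 GB total) runs more comfortably.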
Jonathan Staples
Jonathan Staples@staples46198·
6/6 What’s ONE branch you’re growing this month—prompting, workflows, agents, or leadership strategy? Drop it below—I read every reply and will share resources + prompts that actually work. Let’s climb the tree together.
0
0
0
2
Jonathan Staples
Jonathan Staples@staples46198·
5/6 My take: You don’t need to be a tech genius. Just stay curious and keep climbing. The next promotion might be one branch away.
1
0
0
4
Jonathan Staples
Jonathan Staples@staples46198·
1/6 The Skill Tree That Turns AI Curiosity into Real Career Growth 🌳 We went from survivors vs thrivers → AI-augmented managers. Now the real edge? Climbing the skill tree, one intentional branch at a time. LinkedIn Skills on the Rise 2026 just dropped the data. Thread 🧵
1
0
0
6