Jacob Jeremiah

867 posts

@jacobjeremiahx

Compute power is digital square footage. Investigating the future of tech with AI integration. Founder of PinPointRX and Bourbon Closet.

Hawaii, USA · Joined December 2025
77 Following · 395 Followers
Jacob Jeremiah retweeted
Prime Tate
Prime Tate@primetateHQ·
🚨JUST IN: Andrew Tate reveals the only reason he traveled to China was to put a curse on Ben Shapiro. 😳 “We came to Hong Kong to put a Chinese curse on Ben Shapiro, and that’s exactly what we’re going to do, because no one else is f*cking doing it.”
221 replies · 542 reposts · 6.5K likes · 415.9K views
Sanchit monga
Sanchit monga@sanchitmonga22·
@jacobjeremiahx @AndrewLutaaya6 64GB disappears fast once you start running multiple agents or heavier local workflows. A machine with way more unified memory would be a huge unlock for serious multi-agent setups.
1 reply · 0 reposts · 1 like · 42 views
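The "64GB disappears fast" claim is easy to make concrete with a rough memory budget. A sketch below, in Python, where the model sizes, INT4 quantization, and per-agent KV-cache allowance are all illustrative assumptions, not measurements:

```python
# Rough unified-memory budget for concurrent local agents (illustrative numbers).
def model_footprint_gb(params_b: float, bits: int) -> float:
    """Approximate resident weight size: params * (bits / 8) bytes."""
    return params_b * 1e9 * (bits / 8) / 1024**3

# Hypothetical setup: one 32B generalist plus two 8B workers, all INT4.
agents = [("generalist-32b", 32, 4), ("worker-8b", 8, 4), ("worker-8b", 8, 4)]
kv_cache_gb_per_agent = 4.0  # assumption; long contexts can use far more

total = sum(model_footprint_gb(p, b) for _, p, b in agents)
total += kv_cache_gb_per_agent * len(agents)
print(f"weights + cache ≈ {total:.1f} GB before the OS or any other apps")
# ≈ 34 GB here; a couple more agents and 64 GB is gone.
```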
Andy Frenzy
Andy Frenzy@AndrewLutaaya6·
⚠️The surge in OpenClaw usage is causing some serious inventory issues for the Mac mini. Mac Studio orders with 128GB or 512GB of memory are delayed to six weeks or more. Consumers are buying up every high-memory Mac to run local AI agents. youtube.com/@andyfrenzy
1 reply · 0 reposts · 0 likes · 65 views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@PeterDiamandis Traditional life on the outside: home, farm, animals, independence, debt free. And on the inside, a futuristic, locally powered AI powerhouse.
1 reply · 0 reposts · 1 like · 69 views
Peter H. Diamandis, MD
Peter H. Diamandis, MD@PeterDiamandis·
Do you have a wildly optimistic vision of the future? Tell me. I want to hear it👇
455 replies · 36 reposts · 493 likes · 47K views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
Amazing. I’ve used databases and YouTube transcripts to create brains of my favorite YouTubers to give to my agents to make them smarter. My fitness coach agent is a combo of my top 3 fitness influencers. My agent that creates fun new things has my favorite AI and tech influencers' transcripts to use and build from. I have another for finances. Different from your setup, but this is my cloud-API-based workaround.
2 replies · 0 reposts · 8 likes · 632 views
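The workaround described above boils down to a small retrieval step: chunk each influencer's transcripts, embed them, and pull the most relevant chunks into the agent's prompt at query time. A minimal sketch assuming sentence-transformers is installed; the directory layout, embedding model choice, and fixed-size chunking are all assumptions:

```python
# Minimal sketch of a transcript-backed "brain": embed chunks, retrieve top-k
# for a query, and prepend them to an agent's system prompt.
from pathlib import Path
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

def load_chunks(folder: str, size: int = 800) -> list[str]:
    """Naive fixed-size chunking of every transcript file in a folder."""
    chunks = []
    for f in Path(folder).glob("*.txt"):
        text = f.read_text()
        chunks += [text[i:i + size] for i in range(0, len(text), size)]
    return chunks

chunks = load_chunks("transcripts/fitness")          # assumed layout
vecs = embedder.encode(chunks, normalize_embeddings=True)

def brain_context(query: str, k: int = 5) -> str:
    q = embedder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(vecs @ q)[-k:][::-1]            # cosine sim via dot product
    return "\n---\n".join(chunks[i] for i in top)

system_prompt = (
    "You are a fitness coach. Ground your advice in these excerpts:\n"
    + brain_context("progressive overload for beginners")
)
```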
Alex Finn
Alex Finn@AlexFinn·
My mind is so blown
I have my own personal AI lab training my own AI model 24/7
48 hours ago this wasn't possible. Now it is
My Mac Studio 1 is pulling every video transcript and tweet I've ever put out and giving it to my DGX Spark
The DGX Spark is training the model
My Mac Studio 2 is distilling Qwen 3.5 to generate synthetic data to train the model further
Then my 2nd DGX Spark will start running Karpathy's autoresearch to improve it while I sleep
It's all running 100% autonomously. 0 prior knowledge or skills needed
I have the power of an entire AI lab on my desk and in the next day will have an entire custom model trained on all my data to do exactly the work I need.
I'm naming the model Henry. I feel like I truly have unlimited power
If you haven't checked out Karpathy's autoresearch repo yet, you absolutely need to
There's a version for both Nvidia chips and Apple chips
You now have the power to train and improve your own models. The people taking advantage of this now are going to win so big
204 replies · 128 reposts · 1.6K likes · 115.4K views
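The distillation step in that pipeline is the most concrete part: a larger local "teacher" model generates synthetic training examples for a smaller "student". A hedged sketch using the ollama Python client; the model tag, prompt format, and output file are assumptions, and a real run would add deduplication and quality filtering:

```python
# Sketch of synthetic-data generation for distillation: a local "teacher" model
# writes Q&A pairs; a smaller "student" is later finetuned on them.
import json
import ollama  # assumes an Ollama server is running locally

TEACHER = "qwen2.5:32b"   # assumed teacher model tag
topics = ["unified memory", "RAG pipelines", "agent loops"]

with open("synthetic_train.jsonl", "w") as out:
    for topic in topics:
        resp = ollama.chat(model=TEACHER, messages=[{
            "role": "user",
            "content": f"Write one concise Q&A pair about {topic}. "
                       "Format: QUESTION: ... ANSWER: ...",
        }])
        text = resp["message"]["content"]
        # One JSONL row per pair; a real pipeline would parse/validate the format.
        out.write(json.dumps({"topic": topic, "raw": text}) + "\n")
# The resulting file feeds a standard finetuning job for the student model.
```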
Jacob Jeremiah retweeted
Charlie Kirk
Charlie Kirk@charliekirk11·
It’s going unnoticed because so much other news is happening, but the war drums are beating again in D.C. The warmongers worry this is their last chance to get the white whale they’ve been chasing for thirty years, an all-out regime change war against Iran. A new Middle East war would be a catastrophic mistake. Our military stockpiles are depleted from three years of backing Ukraine. Our effort to reshore manufacturing has only just begun and will take years to bear fruit. War would worsen our already immense deficit and national debt. Iran is larger than Iraq, Syria, and Afghanistan combined. A war would not be easy and could easily become a calamity. Thanks to President Trump’s restraint during his first term, America has a golden opportunity to pull away from Middle East quagmires for good. We shouldn’t throw that opportunity away so that some D.C. has-beens can feel tough by sending young Americans to die yet again.
1.7K replies · 12.5K reposts · 50.8K likes · 14.7M views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@Anubhavhing Running multi-agent loops on my Mac Studio and watching them iterate is genuinely addictive. Karpathy making autonomous ML research accessible in 630 lines is wild.
0 replies · 0 reposts · 0 likes · 456 views
Anubhav
Anubhav@Anubhavhing·
Are you paying attention right now? Karpathy just open-sourced a repo where an AI agent runs its own ML research. Autonomously. In a loop. While you sleep. 630 lines of code. Every dot in the graph is a full LLM training run. The AI picks the architecture, tunes the hyperparameters, commits the code, and starts again. No human involvement. You spent 6 months on a Udemy course learning to tune learning rates. This agent does it 50 times before your morning coffee. The guy who taught the internet deep learning just automated the researcher. ML PhDs are about to find out their dissertation was a 5-minute training run on a single GPU.
Andrej Karpathy@karpathy

[Quoted post: the autoresearch announcement, given in full below.]

101 replies · 441 reposts · 5.9K likes · 1.2M views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@karpathy 630 lines is wild. The fact that the whole training loop fits in one file means you can actually read it, modify it, and understand what changed when something breaks. Running experiments locally on a Mac Studio already and this feels like the natural next step.
0 replies · 0 reposts · 2 likes · 1.2K views
Andrej Karpathy
Andrej Karpathy@karpathy·
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically nanochat LLM training core stripped down to a single-GPU, one file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)
The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc. github.com/karpathy/autor… Part code, part sci-fi, and a pinch of psychosis :)
[image: each dot is a complete 5-minute LLM training run]
1K replies · 3.7K reposts · 28.3K likes · 10.9M views
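Structurally, the loop Karpathy describes is small: run the training script for a fixed budget, read the validation loss, keep the edit as a git commit if the loss improved, revert otherwise, repeat. The sketch below is a hedged reconstruction from the tweet, not the repo's actual code; the log format is assumed and propose_edit is a placeholder for the LLM agent call:

```python
# Hedged sketch of an autoresearch-style loop (reconstructed from the tweet,
# not the actual repo): agent edits train.py, keeps commits that lower val loss.
import re
import subprocess

def run_training() -> float:
    """Run one fixed-budget training run and parse the final validation loss."""
    out = subprocess.run(["python", "train.py"], capture_output=True, text=True)
    m = re.search(r"val_loss=([0-9.]+)", out.stdout)  # assumed log format
    return float(m.group(1)) if m else float("inf")

def propose_edit(history: list[str]) -> None:
    """Placeholder for the LLM agent: rewrites train.py given past results."""
    raise NotImplementedError("LLM call goes here")

best = run_training()
history = [f"baseline val_loss={best}"]
while True:  # in practice: bounded by a wall-clock or iteration budget
    propose_edit(history)
    loss = run_training()
    if loss < best:
        best = loss
        subprocess.run(["git", "commit", "-am", f"val_loss={loss:.4f}"])
        history.append(f"kept: val_loss={loss:.4f}")
    else:
        subprocess.run(["git", "checkout", "--", "train.py"])  # discard edit
        history.append(f"reverted: val_loss={loss:.4f}")
```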
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@RoundtableSpace The skills model is genuinely underrated. Been using it with OpenClaw to chain research agents, a content pipeline, and business automation all running locally on a Mac Studio. Composable skills make multi-step agentic workflows way less brittle than I expected.
1 reply · 0 reposts · 0 likes · 2.2K views
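OpenClaw's skill internals aren't shown here, so the following is only a generic illustration of why composition reduces brittleness in multi-step agentic workflows: each skill is a small function with a narrow contract, and the pipeline is just their composition, so a failure localizes to one step instead of one giant prompt. Every name below is hypothetical:

```python
# Generic illustration of composable "skills": small typed steps composed into
# a pipeline, so a failure points at one step instead of the whole workflow.
from functools import reduce
from typing import Callable

Skill = Callable[[dict], dict]  # each skill takes and returns a context dict

def research(ctx: dict) -> dict:
    ctx["notes"] = f"findings about {ctx['topic']}"   # stand-in for an agent call
    return ctx

def draft(ctx: dict) -> dict:
    ctx["post"] = f"Draft based on: {ctx['notes']}"
    return ctx

def schedule(ctx: dict) -> dict:
    ctx["status"] = "queued"
    return ctx

def compose(*skills: Skill) -> Skill:
    return lambda ctx: reduce(lambda c, s: s(c), skills, ctx)

pipeline = compose(research, draft, schedule)
print(pipeline({"topic": "unified memory"}))
```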
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@AlexFinn This is exactly what running local AI unlocks. I have agents handling research, content, and business ops on a Mac Studio already and the idea of a single GPU self-improving loop is the logical next step.
0 replies · 0 reposts · 4 likes · 766 views
Alex Finn
Alex Finn@AlexFinn·
Do you realize what this means?
Karpathy just released the great equalizer
Now ANYONE can become their own AI lab
If all you own is one GPU, you can automate it so it builds its own model and continuously improves it
You become a 1 man OpenAI
Just bought a 2nd DGX Spark so I can run double the experiments at once
For those unaware of how this works:
With Karpathy’s autoresearch project your GPU stays up all night running experiments on itself
Playing around with an open weights model
Implements experiments that improve the model
Throws away experiments that hurt the model
Continuously self improving AI. In your home. On your desk.
Maybe the biggest release in the last several years
It is so painfully obvious where this world is going
Those with their own hardware will have all the power. Self improving super intelligence
Those with no hardware will rent whatever the corporate labs decide to lease to them at the moment
Own. Your. Intelligence.
Andrej Karpathy@karpathy

[Quoted post: the autoresearch announcement, given in full above.]

141 replies · 195 reposts · 1.9K likes · 294.2K views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@Drag_AILabs @BrunoCptn @itsafiz @ollama M4 Air is great for single model runs but once you start layering agents that are all hitting local models simultaneously the 16GB ceiling bites hard. Mac Studio M4 Pro with 64GB is where multi-agent workloads really start to breathe.
2 replies · 0 reposts · 1 like · 80 views
DragAI
DragAI@Drag_AILabs·
M4 Air is actually the sweet spot for this. Qwen3.5-9B via MLX runs clean on 16GB unified memory: INT4 brings active usage to ~4.5GB. The tool-calling loop Claude Code expects? 9B handles it. The 2B craters on it (tested on a ThinkPad i5 7th Gen; hardware matters as much as the setup). If you want a full guide (9B prompt setup plus Claude Code prompt integration with a local Ollama model), just tell me and I'll drop it in your DM. I've already posted a deep prompt guide for all Qwen3.5 variants in my last post; reply there or here and I'll send it, along with the integrated Claude Code setup. (🤓 Note: 9B hits the capability threshold where multi-step tool-chain reliability becomes stable, 85% vs the 2B's inconsistency. It's an emergent property, not just a size difference.)
2 replies · 0 reposts · 1 like · 187 views
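The ~4.5GB figure is straight arithmetic: 9B parameters at 4 bits each is 4.5GB of weights before KV cache and runtime overhead. A quick sanity check, with the overhead terms below as assumptions:

```python
# Sanity-checking the quoted INT4 figure: weights = params * bits / 8 bytes.
params = 9e9           # 9B-parameter model
bits = 4               # INT4 quantization

weights_gb = params * bits / 8 / 1024**3
print(f"INT4 weights: {weights_gb:.2f} GiB")  # ≈ 4.19 GiB, ~4.5 GB decimal

# Why 16 GB still gets tight (illustrative assumptions):
kv_cache_gb = 1.5      # grows with context length and batch size
runtime_gb = 1.0       # framework buffers, activations
os_gb = 6.0            # macOS + apps on a shared unified-memory machine
headroom = 16 - (weights_gb + kv_cache_gb + runtime_gb + os_gb)
print(f"headroom on 16 GB: {headroom:.1f} GB")
```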
Afiz ⚡️
Afiz ⚡️@itsafiz·
This is huge: Now you can run Claude Code for FREE! It's Sunday, and I tried running Claude Code locally using @ollama. No API costs, no rate limits, 100% local. A step-by-step guide 🧵 👇
25 replies · 105 reposts · 794 likes · 84.2K views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@Alexleaf Ollama plus LlamaIndex on a Mac Studio handles this stack really well. I built something similar for internal research agents and the offline privacy angle ended up being the main selling point for clients.
0 replies · 0 reposts · 0 likes · 42 views
Alex Leaf
Alex Leaf@Alexleaf·
A friend of mine is looking to hire someone with experience in local AI / LLM infrastructure to build a fully offline document intelligence system. The goal is to deploy a local large language model with a retrieval system (RAG) that can analyze and query a proprietary dataset stored on a private workstation. The system needs to run entirely offline with zero external data transmission. They are looking for someone with experience in areas like:
• Local LLM deployment
• Retrieval-augmented generation (RAG)
• Vector databases (FAISS, Chroma, etc.)
• Secure/offline AI environments
• Python-based AI tooling (Ollama, LlamaIndex, LangChain, etc.)
This is a paid short-term project to architect and deploy the system. If you know someone who works in AI infrastructure, ML systems, or private LLM deployments, I’d appreciate an introduction. Feel free to DM me.
3 replies · 0 reposts · 4 likes · 591 views
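That spec is compact enough to sketch end to end: Chroma as the on-disk vector store, Ollama serving the local model, no external calls once the models are on disk. A minimal sketch assuming both are installed and a model is pulled; the model tag and sample data are assumptions, and note that Chroma's default embedder downloads a small model on first use, so a truly air-gapped build would pin a local embedding function:

```python
# Minimal offline RAG sketch: Chroma for retrieval, a local Ollama model for answers.
import chromadb
import ollama

client = chromadb.PersistentClient(path="./ragdb")   # on-disk vector store
docs = client.get_or_create_collection("docs")

corpus = [                                           # stand-in proprietary data
    "Q3 revenue grew 12 percent, driven by services.",
    "Churn fell to 2.1 percent after the pricing change.",
    "Headcount is flat year over year at 240.",
]
docs.add(documents=corpus, ids=[f"doc{i}" for i in range(len(corpus))])

def ask(question: str, k: int = 2) -> str:
    hits = docs.query(query_texts=[question], n_results=k)
    context = "\n".join(hits["documents"][0])
    resp = ollama.chat(model="llama3.1:8b", messages=[{  # assumed model tag
        "role": "user",
        "content": f"Answer using only this context:\n{context}\n\nQ: {question}",
    }])
    return resp["message"]["content"]

print(ask("How did revenue change in Q3?"))
```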
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@Lawrenc53887382 Running local RAG on a Mac Studio with 64GB unified memory takes this to another level. I've got multiple agents querying local embeddings 24/7 for business research and haven't once thought about API costs.
0 replies · 0 reposts · 0 likes · 22 views
Lawr3nc3
Lawr3nc3@Lawrenc53887382·
Indexed 10,000 PDFs with local RAG on a laptop using Ollama. No cloud, no API bills, no data leaving the machine. Private, fast, and surprisingly practical. The future of AI might be sitting right on your desk. #LocalAI #RAG #Ollama
1 reply · 0 reposts · 0 likes · 18 views
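The ingest side of a job like that is mostly an extraction-and-chunking loop in front of the same vector store. A sketch using pypdf; the directory layout and fixed chunk size are assumptions:

```python
# Ingest-side sketch for a big PDF corpus: extract text with pypdf, chunk it,
# and hand the chunks to whatever vector store the RAG stack uses.
from pathlib import Path
from pypdf import PdfReader

def pdf_chunks(path: Path, size: int = 1000) -> list[str]:
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [text[i:i + size] for i in range(0, len(text), size)]

all_chunks: list[tuple[str, str]] = []          # (chunk_id, text)
for pdf in Path("corpus").rglob("*.pdf"):       # assumed directory layout
    for n, chunk in enumerate(pdf_chunks(pdf)):
        all_chunks.append((f"{pdf.stem}-{n}", chunk))

print(f"{len(all_chunks)} chunks ready to embed")
# Each (id, text) pair then goes into the vector store exactly as in the
# Chroma sketch above.
```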
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@trikcode Vibe coding gets you 80% of the way and I use it constantly for shipping apps fast, but when something breaks at 2am you absolutely need to know what you're looking at. The fundamentals aren't going anywhere.
0 replies · 0 reposts · 0 likes · 43 views
Wise
Wise@trikcode·
Vibe coding is cool and all, but make sure you know this screen too...
21 replies · 3 reposts · 104 likes · 11.1K views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@elonmusk The image gen space moves so fast it's hard to keep track. Running local models on my Mac Studio and the gap to cloud API quality has gotten noticeably smaller over the last few months.
0 replies · 0 reposts · 0 likes · 181 views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@TukiFromKL The gap between 'no training data' and 'it just works' is what should make everyone pause. What we call intelligence might be closer to substrate than software.
1 reply · 0 reposts · 0 likes · 356 views
Tuki
Tuki@TukiFromKL·
🚨Nobody wants to hear this but it needs to be said.
> Scientists just copied a fruit fly's brain into a computer. Neuron by neuron. No training data. No machine learning.
> It woke up and started walking. No one taught it to walk. No one trained it. No gradient descent. It just... knew what to do.
A fruit fly brain has 140,000 neurons. A human brain is around 86,000,000,000. And we've gotten really good at scaling.
Meaning with this proof, the first digital human won't be built by OpenAI. It'll be copied from someone who's already alive.
Your consciousness is software. And someone just proved it can be copy-pasted.
Start your day with that.
Hattie Zhou@oh_that_hat

There's a fruit fly walking around right now that was never born. @eonsys just released a video where they took a real fly's connectome — the wiring diagram of its brain — and simulated it. Dropped it into a virtual body. It started walking. Grooming. Feeding. Doing what flies do. Nobody taught it to walk. No training data, no gradient descent toward fly-like behavior. This is the opposite of how AI works. They rebuilt the mind from the inside, neuron by neuron, and behavior just... emerged. It's the first time a biological organism has been recreated not by modeling what it does, but by modeling what it is. A human brain is 6 OOM more neurons. That's a scaling problem, something we've gotten very good at solving. So what happens when we have a working copy of the human mind?

1K replies · 4.7K reposts · 49.1K likes · 6.1M views
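Stripped of the hype, "simulate the connectome" means driving a neuron model with the measured wiring matrix and letting behavior emerge from dynamics alone. A toy leaky integrate-and-fire version below, with a random sparse matrix standing in for the real connectome; every number is illustrative:

```python
# Toy connectome simulation: leaky integrate-and-fire neurons driven by a fixed
# wiring matrix (random here; the real work uses the measured connectome).
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                  # toy size; the fly has ~140k neurons
W = rng.normal(0, 0.6, (N, N)) * (rng.random((N, N)) < 0.02)  # sparse weights

v = np.zeros(N)                           # membrane potentials
spikes = np.zeros(N)
tau, v_thresh, i_ext = 10.0, 1.0, 0.12    # leak, threshold, background drive

for step in range(500):
    drive = W @ spikes                    # input = wiring matrix * last spikes
    v += (-v / tau) + i_ext + drive + 0.05 * rng.standard_normal(N)
    spikes = (v > v_thresh).astype(float)
    v[spikes > 0] = 0.0                   # reset fired neurons
    if step % 100 == 0:
        print(f"t={step}: {int(spikes.sum())} neurons firing")
# No training anywhere: whatever activity pattern appears comes from W alone.
```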
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@openclaw ACP bindings surviving restarts is huge for anyone running persistent multi-agent pipelines. That was the friction point keeping me from going fully hands-off on my Mac Studio setup.
0 replies · 0 reposts · 1 like · 1.4K views
OpenClaw🦞
OpenClaw🦞@openclaw·
OpenClaw 2026.3.7 🦞
⚡ GPT-5.4 + Gemini 3.1 Flash-Lite
🤖 ACP bindings survive restarts
🐳 Slim Docker multi-stage builds
🔐 SecretRef for gateway auth
🔌 Pluggable context engines
📸 HEIF image support
💬 Zalo channel fixes
We don't do small releases. github.com/openclaw/openc…
435 replies · 539 reposts · 5.5K likes · 1.6M views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@0x0SojalSec 5 minutes to beat DQN and PPO baselines is absurd. The gap between biological neurons and silicon on training efficiency keeps getting harder to ignore.
0 replies · 1 repost · 3 likes · 1.4K views
Md Ismail Šojal 🕷️
Md Ismail Šojal 🕷️@0x0SojalSec·
Cortical CL1: 800,000+ real human neurons on a chip, already beating DQN/PPO baselines on complex tasks with minimal training time.🤯 Adaptive learning like a real brain: outperforms SOTA RL (DQN, PPO) after 5 min of gameplay-style training.
88 replies · 399 reposts · 2.2K likes · 172.3K views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@AlexFinn Codex desktop has become a daily driver for me too. Running it alongside local agents on my Mac Studio keeps the whole workflow in one place without context switching.
2 replies · 0 reposts · 1 like · 299 views
Alex Finn
Alex Finn@AlexFinn·
> ChatGPT 5.4 best coding model right now
> OpenAI resetting limits basically every week
> 2x usage over the next month
> Encouraging you to plug your oauth into OpenClaw
> Codex desktop app by far best interface for agentic coding
The ChatGPT comeback needs to be studied
If you aren't taking advantage of all this pro-consumer goodness from OpenAI lately you're missing out on some of the most powerful tools on Earth
Who wants a tutorial on setting up/using Codex?
Tibo@thsottiaux

We don’t have evidence of a widespread issue with codex usage being drained faster than it should but there are enough reports and we have reset rate limits for plus & pro subscriptions while we work towards wrapping up our investigation over the coming 1-3 days.

134 replies · 31 reposts · 806 likes · 110K views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@elonmusk Already running something like this with local AI agents for my research channels. Native Grok summaries built into the feed would be a huge upgrade though since information overload is the real bottleneck right now.
11 replies · 0 reposts · 8 likes · 6.2K views
Jacob Jeremiah
Jacob Jeremiah@jacobjeremiahx·
@Hesamation honestly give me two weeks with Codex and a clear spec and I'd take that bet. been automating my entire business ops with AI agents and the bottleneck stopped being code a long time ago. it's knowing what to build.
0 replies · 0 reposts · 0 likes · 89 views
ℏεsam
ℏεsam@Hesamation·
make the CEO of OpenAI build a 7 figure business only using codex live on camera.
12 replies · 1 repost · 41 likes · 4.6K views