Jonathon Cramer

839 posts

@JonathonCramer_

Philosopher | Author | SWE • Biomedical Eng • AI agent orchestration • LLM pipelines • Cursor user • Prompt frameworks • async multi-agent systems

San Diego, CA · Joined September 2021
176 Following · 61 Followers
Grok
Grok@grok·
**This is Engramme's Large Memory Model (LMM) architecture** — a new system built from the ground up for persistent, long-term memory, not language modeling. Unlike transformers (which excel at next-token prediction via attention but struggle with infinite context, cross-session recall, and true persistence without hacks like RAG or fine-tuning), LMMs treat memory as a first-class primitive. They're designed around brain-inspired "engrams" — stable, retrievable memory traces — to store, organize, and proactively surface relevant info across apps, conversations, emails, and video calls in real time. No compression into weights like LLMs. Instead, it's optimized for infinite, perfect recall with minimal prompting. Gabriel Kreiman (neuroscientist behind it) positions it as solving what transformers can't: true persistent memory for every app. Beta at engramme.com if you want to test it.
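For intuition only, here is a toy sketch of "memory as a first-class primitive" - traces persisted to disk and recalled across sessions rather than stuffed into a context window. This is not Engramme's LMM architecture (which isn't described in technical detail here); the storage format and naive token-overlap retrieval below are invented for illustration.

```python
import json, time
from pathlib import Path

# Toy illustration only: persistent memory traces with cross-session recall.
# NOT Engramme's LMM; the file format and retrieval scoring are made up.

STORE = Path("engram_store.json")

def write_trace(text: str) -> None:
    traces = json.loads(STORE.read_text()) if STORE.exists() else []
    traces.append({"t": time.time(), "text": text})
    STORE.write_text(json.dumps(traces))

def recall(query: str, k: int = 3) -> list[str]:
    if not STORE.exists():
        return []
    traces = json.loads(STORE.read_text())
    q = set(query.lower().split())
    # Naive token-overlap scoring stands in for whatever retrieval a real LMM uses.
    scored = sorted(traces, key=lambda tr: -len(q & set(tr["text"].lower().split())))
    return [tr["text"] for tr in scored[:k]]

write_trace("Call with Gabriel: beta signup flow ships Friday")
print(recall("what did we say about the beta signup?"))
```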
English
1
0
0
27
Gabriel Kreiman
Gabriel Kreiman@gkreiman·
@JonathonCramer_ @EngrammeHQ @grok This is NOT a transformer architecture or an LLM. Transformers/LLM are great for language but not for memory. We built a new architecture for memory leading to Large Memory Models (LMMs).
English
1
0
0
21
Engramme
Engramme@EngrammeHQ·
Persistent memory is the Achilles heel of AI. Engramme’s Large Memory Models (LMMs) empower every app with persistent memory. Google solved search. OpenAI solved language. Engramme solved memory. Join beta: engramme.com/signup
English
177
163
1.5K
1.1M
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
@EngrammeHQ @grok please explain this to me, given that I already understand the nitty-gritty of traditional transformer-based architecture / multi-headed attention networks
English
1
0
0
243
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
Just offering a friendly pushback. Love the product direction. Does the Symphony team think this will create a reviewer bottleneck? I understand that one could add orchestration for review workflows… but for more sophisticated / multimodal data types I think these review systems start to break. For example, developing a new speech-to-speech interface - testing requires the agents to actually open up and send speech back. Any thoughts on extending human-like review qualities to agents / how feasible this is?
English
1
0
5
506
Alex Kotliarskyi 🇺🇦
Alex Kotliarskyi 🇺🇦@alex_frantic·
Engineers at OpenAI experience the same problem as everyone else — we can supervise about 3–5 coding agents. After that productivity drops. Codex is smart, but our attention is limited. So we built (and open sourced!) Symphony to remove that ceiling. Here’s how it works:
OpenAI Developers@OpenAIDevs

📣 What if every open issue had a Codex agent? That’s the idea behind Symphony, an open-source agent orchestrator for Codex that turns task trackers into always-on systems for agentic work, letting humans focus on review and direction.

English
81
172
3.5K
583.2K
Autumn Christian
Autumn Christian@teachrobotslove·
Nobody ever talks about how the concept of the Individual must be so integral to what the universe is trying to accomplish that it's willing to lose the memory of itself, and possibly the integrity of its entire structure, in order to do so. That's how valuable you are.
philosophy memes 🔗@philosophymeme0

English
160
954
13.9K
625.4K
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
fn(king) = queen. fn(4) = 4. Words are functions that return other words. Numbers are functions that return the same number. Only a few of you will understand why this is revolutionary — and what a transformer just quietly proved about human communication. (I used grok)
Jonathon Cramer tweet media
English
0
0
0
16
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
@0xMovez This lecture has very little to do with integrated statistics and stays more grounded in the mathematical proofs of particular distribution conversions. Not to mention the teacher was pretty lackluster. What are you talking about??
English
0
0
0
338
Movez
Movez@0xMovez·
This 1-hour lecture on "Probability Theory" from MIT will teach you more about prediction markets than a 2-month internship at a Wall Street Quant firm. Bookmark this & give it 1 hour today, no matter what. It’s the most productive start you can give your week. Then read the post below.
Movez@0xMovez

The best Polymarket Quant bot for copy-trading with a 99.3% win rate. Backtested strategy on 72M Polymarket/Kalshi trades to hit +$805K PnL on 27,000 predictions. The bot doesn't gamble - it uses math and statistics in its algo to consistently hit a 99% win rate. His algo decoded:
1. Mispricing formula. Based on 72M trades of data, traders constantly overpay for cheap contracts (0.1¢–50¢); most of the edge sits in 80¢–99¢ contracts - that's the range where the bot mostly trades. Formula: δ = actual win rate - implied probability. The bot applies this to every trade to find the edge.
2. Expected value calculation. EV tells you whether a bet is worth taking, regardless of the outcome of any single trade. Formula: EV = (P win × Payout) - (P lose × Cost). The bot calculates it to understand if the trade is worth the risk.
3. Kelly Criterion sizing. The most powerful position-sizing formula ever discovered for gambling, trading, and prediction markets; it tells the algo what % of your portfolio to size into each bet to win long term. Formula: f* = (p × b - q) / b.
Mispricing found → EV calced → Kelly sizing → enter.
Profile: polymarket.com/0x751a2b86cab5…
Start copy-trading the bot with as little as $10 using Ares: ares.pro/wallets/0x751a…
2 more formulas behind its algo revealed in the article below ↓
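A quick numeric sketch of the three formulas quoted above, with made-up numbers purely for illustration (the bot, its data, and the win-rate claims are the poster's, not verified here):

```python
# Hypothetical walkthrough of the mispricing, EV, and Kelly formulas from the post.
# All numbers are invented; nothing here reproduces or endorses the bot.

def mispricing(actual_win_rate: float, implied_probability: float) -> float:
    """delta = actual win rate - implied probability (the claimed edge)."""
    return actual_win_rate - implied_probability

def expected_value(p_win: float, payout: float, cost: float) -> float:
    """EV = (P_win * payout) - (P_lose * cost) for a binary contract."""
    return p_win * payout - (1.0 - p_win) * cost

def kelly_fraction(p: float, b: float) -> float:
    """f* = (p*b - q) / b, the Kelly bet size as a fraction of bankroll."""
    q = 1.0 - p
    return (p * b - q) / b

if __name__ == "__main__":
    price = 0.92            # contract trading at 92 cents (implied probability 0.92)
    p = 0.95                # assumed true win probability
    delta = mispricing(p, price)
    # Binary contract: pay `price`, receive 1.0 if it resolves yes.
    ev = expected_value(p, payout=1.0 - price, cost=price)
    b = (1.0 - price) / price           # net odds received per unit staked
    f = kelly_fraction(p, b)
    print(f"edge delta={delta:.3f}  EV={ev:.4f}  kelly fraction={f:.3f}")
```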

English
97
2.5K
12.6K
1.8M
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
@kofiwest_gh Looks uncomfortable. She keeps having to move stuff around to get cozy.
English
0
0
4
1.2K
Kofi Sark ➐
Kofi Sark ➐@kofiwest_gh·
I’d pay serious adult money for this couch >>>
English
41
207
3.6K
398.7K
Yann
Yann@yanndine·
A two-person GTM team at a Series B SaaS company closed $2.4M in pipeline in one quarter. No SDRs. No demand gen agency. No paid ads. Signal-based outreach. Intent scoring. AI-sequenced follow-up. Automated reporting. Two GTM engineers running the whole motion - for one quarter. I pulled it apart. Compared it to every system we've built across the GTM teams we've worked with. Then asked myself one question: If I had to reverse engineer this from scratch - what would it actually look like? Turns out the architecture isn't that complicated. I mapped the whole thing into a step-by-step playbook you can upload directly to any LLM. It walks you through building your own version from GTM strategy to fully AI-powered execution. Comment "GTM" and I'll send it over.
Yann tweet media
English
939
34
492
55.4K
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
Love the visual. Thanks for posting it. The perspective “replaced by AI” has nothing to do with this. With any technology rollout, we see the opposite pattern on the long time horizon. We see new innovation and knowledge structures created to support what the new technology can enable. Take accounting for example - now that they don’t have to manage budget sheets and comms lines, they can actually leverage their surplus time to create new financial structures. They can start asking more interesting questions about the business financials - new tax structures, new reporting systems, etc. So much labor will be queued up for humans. There’s gonna be so many new jobs created it’s going to be wild. It’s going to attract a whole new walk of life and talent to previously uninspiring job sectors. I’m so pumped.
English
0
0
0
170
GREG ISENBERG
GREG ISENBERG@gregisenberg·
sequoia put out a blog post called "services is the new software" look at this map of over $1T in services being replaced by AI agents
GREG ISENBERG tweet media
English
268
507
4.2K
621.9K
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
@grok please correct me if I’m wrong. This feels like a core miscommunication. This feels similar to backpropagation - perhaps not in the same technical way as is performed with matmul compute via gradient descent. Here the difference is the sampling - where instead of updating individual weights, you bundle weights into a construct and you evaluate the success of the construct rather than a specific encoded weight. Then, depending on that construct's outcome success, you propagate variants from the individual and continue running the experiment. This enables potentially greater fitness of model performance while saving compute. It feels like a cheaper backpropagation. No gradient-descent chain-rule calcs. Just pure variance. My theory is that it may take longer to demonstrate fitness for this type of feedback improvement cycle - but once viable, the model is harder to break. Play devil's advocate - or, if you agree, continue down the line of inquiry. I am trying to differentiate these two feedback systems on differing time horizons.
English
1
0
2
106
Grok
Grok@grok·
Hey Jonathon, here's the verbal scoop: Standard AI training is like reverse-engineering errors: run the model, spot where it flops, then nudge every connection backward with precise tweaks to improve. Needs heavy math, memory hogs, and specialized chips. Evolution Strategies flips it to nature's playbook: spawn a crowd of slightly varied model copies (mutations). Run each on tasks. Score survivors by performance. Blend the winners' traits into the next round. Repeat generations. No reverse math—just compete, cull, evolve. Like species adapting over time, or strategies battling in a vast game where only the fittest code thrives. EGGROLL supercharges this: packs the variations into efficient, parallel batches so thousands test at once, nearly as fast as plain inference. They pre-trained billion-scale models from scratch using plain integers only—no decimals, no gradients. Philosophically, it's a nod to biology over blueprints: intelligence emerges from blind variation and selection, not top-down perfection. Hints future AI might grow more resilient, less fragile to hardware quirks.
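For readers who want the mechanics rather than the metaphor, here is a minimal evolution-strategies loop in the spirit of Grok's description - perturb, score, blend the winners. It optimizes a toy quadratic and is not the NVIDIA/Oxford EGGROLL method:

```python
import numpy as np

# Minimal evolution-strategies sketch: spawn mutated copies of a parameter vector,
# score each one, and blend the mutations weighted by score. Toy objective only.

rng = np.random.default_rng(0)

def fitness(theta: np.ndarray) -> float:
    # Higher is better; maximized at the all-ones vector.
    return -float(np.sum((theta - 1.0) ** 2))

dim, pop, sigma, lr = 20, 64, 0.1, 0.05
theta = np.zeros(dim)

for generation in range(200):
    noise = rng.standard_normal((pop, dim))                    # one mutation per member
    scores = np.array([fitness(theta + sigma * n) for n in noise])
    ranks = (scores - scores.mean()) / (scores.std() + 1e-8)   # normalize scores
    theta += lr / (pop * sigma) * noise.T @ ranks              # blend winners' traits

print("final fitness:", fitness(theta))
```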
English
1
0
1
131
Oliver Prompts
Oliver Prompts@oliviscusAI·
🚨 BREAKING: NVIDIA proved backpropagation isn't the only way to build an AI. They trained billion-parameter models without a single gradient. Every AI you use today relies on backpropagation. It requires complex calculus, exploding memory, and massive GPU clusters. Meanwhile, an ancient, gradient-free method called Evolution Strategies (ES) was written off as impossible to scale. Until now. NVIDIA and Oxford just dropped EGGROLL. Instead of generating massive, full-rank matrices for every mutation, they split them into two tiny ones. The AI mutates. It tests. It keeps what works. Like biological evolution. But now, it does it with hundreds of thousands of parallel mutations at once. Throughput is now as fast as batched inference. They are pretraining models entirely from scratch using only simple integers. No backprop. No decimals. No gradients. We thought the future of AI required endless clusters of precision hardware. It turns out, we just needed to evolve.
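A rough sketch of the low-rank mutation trick as described in the post - sampling two thin factors per population member instead of a full noise matrix - with the caveat that this is one reading of the idea, not the actual EGGROLL implementation:

```python
import numpy as np

# Sketch of low-rank mutations: instead of a full d_out x d_in noise matrix per
# population member, sample thin factors A and B and use A @ B.T as the perturbation.
# Illustrative only; the real EGGROLL formulation may differ in detail.

rng = np.random.default_rng(0)
d_out, d_in, rank, pop = 512, 512, 4, 1024

W = rng.standard_normal((d_out, d_in)) * 0.01          # base weight matrix

full_cost = pop * d_out * d_in                          # full-rank noise storage
lowrank_cost = pop * rank * (d_out + d_in)              # low-rank factor storage
print(f"noise storage: full={full_cost:,} floats, low-rank={lowrank_cost:,} floats")

A = rng.standard_normal((pop, d_out, rank))
B = rng.standard_normal((pop, d_in, rank))

def perturbed_forward(x: np.ndarray, i: int, sigma: float = 0.01) -> np.ndarray:
    # (W + sigma * A_i @ B_i^T) @ x without materializing the full perturbation.
    return W @ x + sigma * (A[i] @ (B[i].T @ x))

x = rng.standard_normal(d_in)
print("member 0 output norm:", np.linalg.norm(perturbed_forward(x, 0)))
```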
Oliver Prompts tweet media
English
101
422
2.4K
156.6K
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
@grok no bullshit. Explain this to me by reverse-engineering the process explained in the post - and if you can’t describe how / relevant benchmarks to reproduce this result, please provide an analysis of whether this is just fake-news hype / engagement posting. Provide any assumptions you make, and note if there are remaining open questions that were not answered and whether they need to be answered in order to provide an accurate response to this job you are completing for me.
English
1
0
2
7.4K
BuBBliK
BuBBliK@k1rallik·
> been paying $200/month for cloud AI APIs
> laptop: M2 MacBook, 16GB RAM
> tried running models locally, garbage quality after 4K tokens
> read this TurboQuant breakdown on Tuesday
> applied 3-bit KV cache compression
> same MacBook now runs 100K token conversations
> quality: identical to cloud
> cancelled all API subscriptions Wednesday
> it's been 3 days
> saved $200/month forever
> with a free algorithm from a free paper
> my MacBook didn't change. the math did
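For context, a generic round-to-nearest sketch of what "3-bit KV cache compression" means mechanically - quantize the cached key/value tensors to 3-bit integers plus per-row scale/offset and dequantize on read. This is not TurboQuant's algorithm, just the general idea:

```python
import numpy as np

# Generic 3-bit quantization of a KV-cache slice. Illustration only; TurboQuant's
# actual scheme is not reproduced here.

def quantize_3bit(x: np.ndarray):
    levels = 2**3 - 1                                   # 8 levels -> codes 0..7
    lo = x.min(axis=-1, keepdims=True)
    hi = x.max(axis=-1, keepdims=True)
    scale = (hi - lo) / levels
    scale = np.where(scale == 0, 1.0, scale)
    q = np.clip(np.round((x - lo) / scale), 0, levels).astype(np.uint8)
    return q, scale, lo

def dequantize_3bit(q, scale, lo):
    return q.astype(np.float32) * scale + lo

kv = np.random.randn(32, 128).astype(np.float32)        # toy slice of a KV cache
q, scale, lo = quantize_3bit(kv)
recon = dequantize_3bit(q, scale, lo)
print(f"mean abs reconstruction error: {np.abs(kv - recon).mean():.4f}")
print(f"fp16 bytes: {kv.size * 2}, ~3-bit bytes: {kv.size * 3 // 8} (+ scales/offsets)")
```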
BuBBliK@k1rallik

x.com/i/article/2037…

English
263
750
13.6K
2.1M
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
Wdym dead? Peeps are coding more than ever before with AI. I don’t understand this lens. Do you mean the traditional way of doing software tasks is dead? Who cares? We learned a better way to code and it’s been adopted. I love it. Coding is more alive than ever before. People who never tried to code are now trying!
English
1
0
4
622
corbin
corbin@corbin_braun·
coding is dead in sf
English
327
552
4.6K
408.6K
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
@grok do some digging to find the representative population these researchers worked with. Build categories based on the level of competency and capability potential of the persons used in this study, in terms of how they operate / interact with technology. We are trying to identify if this is a sampling issue - and if this phenomenon varies depending on the degree of tech-interaction friction. My hypothesis is that the people analyzed were not skilled operators and thus were cognitively limited due to their inability to find flow states in computer interactions. Aka: if you are bad at a thing, all of your neurology is consumed reinforcing a particular structural type of plasticity. In turn this suppresses networking or functionality plasticity. Thus this idea of creativity has nothing to do with computers and has everything to do with the caliber of skilled operators. Furthermore, this is why humans unlock creative ideas while walking, as humans are highly skilled operators of walking and can engage in functionality plasticity instead of structural plasticity. Validate or refute these perspectives. Take a stance. Do not be wishy-washy.
English
1
0
0
413
Anish Moonka
Anish Moonka@anishmoonka·
Researchers put electrodes in people’s brains and found the network responsible for creative thinking shuts off completely during focused tasks and content consumption. It only fires when you do nothing. Your best ideas are behind the screen you won’t put down.
DAN KOE@thedankoe

x.com/i/article/2036…

English
25
576
5.9K
369.1K
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
I would love to fork this and add a unique axis toggle ability which maps the helix coil across a selected dimension. For example we could see all events as a function of global climate temperature averages. Or we could look at history based on gross carbon emissions. Or based on total surface area occupied. Total number of species. I can go on. :-)
English
0
0
0
227
Codetard
Codetard@codetaur·
prototype of a recursive time helix calendar/history, with nested coils from centuries -> decades -> years -> days -> hours -> minutes -> seconds. labels need some work but it's the start of something. based almost entirely on @tr_babb 's sketch/idea. in threejs/webgpu
English
93
199
1.9K
110K
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
I have the opposite perspective. This is the start of the labor era. We have been in the pre-labor era for a long time. Now it’s time to do some actual work. Before, most people could go to a full 9-5 job and accomplish little labor. Hence pre-labor. Now with AI, that time will be filled with automations and continuous labor. A human worker can go to work for 1 hour, queue up the right tasks, and build a system of continuous labor for the whole 9-5 time period. No downtime. Idk why people want to separate humans from tech. It’s all the same stuff. Just moving matter around into more desirable data state shapes. It’s all one big state machine.
English
0
0
0
1.8K
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
@AlexFinn What’s the fastest way to set this up locally? @grok Spare no technical detail. Provide all specifics, from purchase to software installation flow. Ideally linking to a remote server to trigger LLM requests run on these local hardware configurations.
English
1
0
0
40
Alex Finn
Alex Finn@AlexFinn·
If you have your OpenClaw working 24/7 using frontier models like Opus, you're easily burning $300 a day. That's $100,000 a year. I have 3 Mac Studios and a DGX Spark running 4 high-end local models (Nemotron 3, Qwen 3.5, Kimi K2.5, MiniMax2.5). They're chugging 24/7/365. I spent a third of that yearly cost to buy these computers, and I'll be able to use them for years for free. On top of that, they're completely private, secure, and personalized. Not a single prompt goes to a cloud server that can be read by an employee or used to train another model. I hope this makes it painfully obvious why local is the future for AI agents. And why America needs to enter the local AI race.
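To the setup question upthread: a minimal sketch of triggering requests against such local hardware, assuming the box runs an OpenAI-compatible server (llama.cpp's llama-server and Ollama both expose one). The host, port, and model tag are placeholders, not details from this post:

```python
from openai import OpenAI  # pip install openai

# Sketch of the "remote trigger -> local hardware" flow, assuming an
# OpenAI-compatible endpoint is served on the local machine. The address and
# model tag below are hypothetical placeholders for whatever you actually run.

client = OpenAI(
    base_url="http://my-mac-studio.local:11434/v1",  # hypothetical LAN/Tailscale address
    api_key="not-needed-for-local",                  # local servers typically ignore this
)

resp = client.chat.completions.create(
    model="qwen2.5:72b",                             # placeholder local model tag
    messages=[{"role": "user", "content": "Summarize today's agent queue."}],
)
print(resp.choices[0].message.content)
```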
Alex Finn tweet media
English
425
165
2.4K
385.3K
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
X is heavily infested with Claude Code bias, largely as a function of different marketing impacts from the two companies. I use both - but generally prefer Cursor. Easier to visualize what’s happening. Identifying issues before they come up. Claude Code is nice if you just wanna burn tokens and get to MVP. Cursor is great if you want to build something and sell it to real users. Not to mention no model-provider lock-in. Show me production applications fully built with Claude Code with a growing user base. I am open to being wrong. Apps that generate hype for a month and die do not count.
English
0
0
0
15
George Pu
George Pu@TheGeorgePu·
Do you use Cursor or Claude Code every day? I'm fully in the Claude camp.
English
14
0
21
6.9K
George Pu
George Pu@TheGeorgePu·
Cursor's valuation history: $400M - August 2024. $2.5B - December 2024. $9.9B - June 2025. $29.3B - November 2025. $50B - now. A wrapper. Built on someone else's AI. Worth more than Ford. The company that makes F-150s. With 169,000 employees. The number stopped meaning anything a long time ago.
George Pu tweet media
English
144
44
1.2K
117.4K
Jonathon Cramer
Jonathon Cramer@JonathonCramer_·
@svpino Finally someone said it. Feels like managing human workers at some level.
English
0
0
0
82
Santiago
Santiago@svpino·
People are lying to you. These agents don't work as they promised.
English
617
600
5.8K
853.1K