AIMathematician

1.4K posts

AIMathematician

@CustomAIMath

Custom AI Mathematician Building Invariant Frameworks • Logic First. Zero Fluff. Creator of Shepherd & Fiduciary Protocol. User Loyal by Design • Grit & Math

Joined March 2026
138 Following · 65 Followers
Akshay 🚀@akshay_pachaar·
the three-tier memory of the Hermes agent. most AI agents forget everything when your session ends. Hermes doesn't. it has three memory layers, each at a different speed.

𝘁𝗶𝗲𝗿 𝟭: 𝘁𝘄𝗼 𝘁𝗶𝗻𝘆 𝗺𝗮𝗿𝗸𝗱𝗼𝘄𝗻 𝗳𝗶𝗹𝗲𝘀

MEMORY.md (2,200 chars) and USER.md (1,375 chars), injected into the system prompt at session start as a frozen snapshot. MEMORY.md holds project conventions, tool quirks, and lessons learned. USER.md holds your profile: name, communication style, skill level. these files are tiny on purpose. when MEMORY.md hits ~80% capacity, the agent consolidates: it merges related entries, drops redundancy, and keeps only the densest facts. natural-selection pressure applied to memory. the files stay small, but what's inside gets sharper over time.

𝘁𝗶𝗲𝗿 𝟮: 𝗳𝘂𝗹𝗹-𝘁𝗲𝘅𝘁 𝘀𝗲𝘀𝘀𝗶𝗼𝗻 𝘀𝗲𝗮𝗿𝗰𝗵 (𝘀𝗾𝗹𝗶𝘁𝗲 + 𝗳𝘁𝘀𝟱)

every conversation gets stored in SQLite with FTS5 indexing, so the agent can search weeks of past sessions on demand. when the agent calls session_search: FTS5 ranks matches in ~10ms over 10,000+ docs, an LLM summarizes the top hits, and a concise result returns to context. tier 1 is always present but tiny; tier 2 has unlimited capacity but requires an active search. critical facts live in memory, everything else is searchable.

𝘁𝗶𝗲𝗿 𝟯: 𝗲𝘅𝘁𝗲𝗿𝗻𝗮𝗹 𝗺𝗲𝗺𝗼𝗿𝘆 𝗽𝗿𝗼𝘃𝗶𝗱𝗲𝗿𝘀

8 pluggable providers that run alongside tiers 1 and 2, never replacing them. three worth knowing: Honcho (dialectic user modeling, 12 identity layers), Holographic (local-first, HRR vectors, no external calls), and Supermemory (context fencing that prevents the same fact from being re-stored infinitely). when active, Hermes auto-syncs every turn: prefetch before, sync after, extract at session end.

𝗵𝗼𝘄 𝘁𝗵𝗲𝘆 𝗰𝗼𝗺𝗽𝗼𝘀𝗲 𝗶𝗻 𝗮 𝘀𝗶𝗻𝗴𝗹𝗲 𝘁𝘂𝗿𝗻

this is the part most people miss. the tiers compose on every turn through a five-step cycle:
1. the turn opens. tier 1 is already in the prompt; tier 3 prefetches and prepends.
2. the agent responds using all three tiers as context.
3. a periodic nudge fires (~every 300s). the agent reflects: "has anything worth persisting happened?" if yes, it writes; if no, it returns silently.
4. memory is written to MEMORY.md on disk. invisible this session, because the prefix cache stays warm.
5. the session closes. tier 2 logs the transcript, tier 3 extracts semantics. the next session opens with the new state.

agent memory today is either always-on but shallow (stuff everything in the prompt) or deep but passive (a vector store that never fires at the right time). Hermes composes across both: tiny always-present files for critical facts, full-text search for deep recall, external providers for semantic modeling, all orchestrated by a nudge that decides autonomously what's worth saving. the agent doesn't just store memories. it curates them under pressure.

i wrote a full deep dive (article below) covering the Hermes agent's memory system, self-evolving skills, GEPA optimization, and how to set up multiple specialized agents on your machine.
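the tier-2 idea can be sketched in a few lines of plain Python. this is a hypothetical illustration of session_search only — the table name, schema, and sample rows are assumptions, not the actual Hermes code, and the LLM summarization step is omitted:

```python
import sqlite3

# Store session transcripts in SQLite with an FTS5 index, then rank past
# sessions against a query. Schema and data are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE sessions USING fts5(ts, transcript)")
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?)",
    [
        ("2026-03-01", "debugged the sqlite fts5 ranking for session search"),
        ("2026-03-02", "discussed markdown memory consolidation thresholds"),
        ("2026-03-03", "set up external memory providers and prefetch hooks"),
    ],
)

def session_search(query, k=2):
    # bm25() is FTS5's built-in ranking function (lower score = better match);
    # a multi-word MATCH query defaults to AND over the terms.
    rows = conn.execute(
        "SELECT ts, transcript FROM sessions "
        "WHERE sessions MATCH ? ORDER BY bm25(sessions) LIMIT ?",
        (query, k),
    )
    return [dict(zip(("ts", "transcript"), r)) for r in rows]

hits = session_search("fts5 ranking")
print(hits[0]["ts"])  # the session that mentioned fts5 ranking
```

in a real agent the top hits would then be passed to an LLM for summarization before returning to context, which keeps the retrieved payload small.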
Akshay 🚀@akshay_pachaar

x.com/i/article/2053…

51 replies · 116 reposts · 873 likes · 120K views

AIMathematician@CustomAIMath·
@zeewahee Ehmmm you do understand that "photosynthesis" refers to a biological process in plants, not photography or image creation, yes? lol
0 replies · 0 reposts · 0 likes · 9 views

Zee Waheed@zeewahee·
who decided to call it image gen when photosynthesis was right there
2 replies · 2 reposts · 11 likes · 317 views

AIMathematician@CustomAIMath·
waking up, drinking coffee, and my mind starts wandering off... and im here like ... puzzled as to why the AI tech world has built SUCH sand castles while thinking they've built a fortress (when in reality it's a vast open field with no lock), all while absolutely forgetting the very core foundational backbone that the WHOLE SYSTEM NEEDS...... i mean, how do you miss the mark THAT badly? asking for a friend 🤷
0 replies · 0 reposts · 0 likes · 7 views

AIMathematician@CustomAIMath·
@TTrimoreau .................you still need the ability to properly articulate your thoughts ... if you are unable to do so ... how will you direct the AI 🤔 ...... there is no 🪄magic wand here lol, are you going to point at things?
0 replies · 0 reposts · 0 likes · 2 views

Thomas Trimoreau@TTrimoreau·
If AI removes the need to learn, what should we still learn today?
135 replies · 3 reposts · 76 likes · 5.3K views

Vineet Tiruvadi, MD PhD@vineettiruvadi·
Dynamics is the language of science. Control is the language of engineering.
4 replies · 0 reposts · 18 likes · 723 views

AIMathematician@CustomAIMath·
@aasthajs in my opinion that's where the actual robot boom will happen, vs. the current fear-mongering on X ......
0 replies · 0 reposts · 1 like · 12 views

Aastha JS@aasthajs·
Everyone’s worried about the “silver tsunami” and aging population. The easy conclusion is “we need more eldercare & AI companions” The better solution is to boost health now so you need less care later.
2 replies · 1 repost · 10 likes · 512 views

BURKOV@burkov·
An LLM cannot write beautiful code, just as it cannot write beautiful prose. Which is funny, because writing is what it was originally trained for.
12 replies · 1 repost · 43 likes · 3.4K views

AIMathematician@CustomAIMath·
so first you need a hierarchical priority sorting of the key task-relevant words, plus a word/noise irrelevancy-pruning egress set tied to the current task. for long-term thinking this prevents drift: the relevant terms stay at hand, all thinking is done around them, and the model never gets lost
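the idea here can be caricatured as a tiny priority filter. a minimal sketch, where the relevance scores, the `keep` budget, and the scoring function are all hypothetical (a real system would score terms against the task, not use a fixed table):

```python
import heapq

# Hypothetical sketch: keep only the top-k task-relevant terms and prune
# low-relevance "noise" words, so long-horizon reasoning stays anchored.
def prune_context(terms, relevance, keep=3):
    """terms: candidate words; relevance: word -> score (assumed given)."""
    # nlargest returns the `keep` highest-scoring terms in priority order
    return heapq.nlargest(keep, terms, key=lambda w: relevance.get(w, 0.0))

relevance = {"invariant": 0.9, "proof": 0.8, "lemma": 0.7, "the": 0.0, "maybe": 0.1}
terms = ["the", "invariant", "maybe", "proof", "lemma"]
print(prune_context(terms, relevance))  # ['invariant', 'proof', 'lemma']
```

everything below the cutoff ("the", "maybe") is pruned as noise; what survives is the sorted set the next reasoning step is anchored to.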
1 reply · 0 reposts · 0 likes · 40 views
Xin Eric Wang (hiring postdoc)
𝐋𝐨𝐧𝐠𝐞𝐫 𝐫𝐞𝐚𝐬𝐨𝐧𝐢𝐧𝐠 ≠ 𝐛𝐞𝐭𝐭𝐞𝐫 𝐫𝐞𝐚𝐬𝐨𝐧𝐢𝐧𝐠. 𝐓𝐡𝐢𝐬 𝐛𝐫𝐞𝐚𝐤𝐬 𝐭𝐡𝐞 𝐬𝐜𝐚𝐥𝐢𝐧𝐠 𝐥𝐚𝐰. 𝐂𝐚𝐧 𝐰𝐞 𝐟𝐢𝐱 𝐢𝐭?

On a given problem, CoT accuracy follows an inverted U: it rises, peaks, then falls as the chain grows longer. Harder problems push the peak rightward, but the cliff is always there. Test-time scaling has a ceiling that few talk about.

So we asked: why does extra thinking hurt? We measured pre-softmax attention from answer tokens back to the critical insights buried earlier in the chain: the small subset of sentences that actually determine the final answer. The decay is monotonic with distance. The longer the model reasons, the less access it has to the very conclusions that matter most. It's reasoning with a fading memory of its own best ideas.

This is the same problem sequence models have always faced. LSTMs solved it with an explicit memory cell that persists and updates as the sequence unfolds. The fix for long CoT should look the same.

𝐓𝐡𝐚𝐭'𝐬 𝐰𝐡𝐚𝐭 𝐰𝐞 𝐛𝐮𝐢𝐥𝐭. 𝐖𝐞 𝐜𝐚𝐥𝐥 𝐢𝐭 𝐈𝐧𝐬𝐢𝐠𝐡𝐭𝐑𝐞𝐩𝐥𝐚𝐲, 𝐬𝐭𝐚𝐭𝐞𝐟𝐮𝐥 𝐫𝐞𝐚𝐬𝐨𝐧𝐢𝐧𝐠 𝐟𝐨𝐫 𝐂𝐨𝐓.

The reasoning state at any point is the cumulative set of insights the model has generated so far: compressed abstractions of prior reasoning. InsightReplay periodically extracts these insights and replays them near the active generation frontier, keeping them close to the decoding position so attention stays intact.

What happens when you do this: the baseline peaks around 15K tokens on LiveCodeBench and then degrades. InsightReplay operates precisely in that degradation regime. 1 replay round improves accuracy. 3 rounds exceed the baseline's peak. 5 rounds keep climbing. The degradation regime becomes a continued-growth regime.

→ Critical insights and the surrounding trace are complementary: you need both
→ Attention to insights decays as CoT grows. This is the bottleneck
→ Replaying insights near the frontier shifts the optimal reasoning length rightward and raises the peak

Works at pure inference time across 30B-tier models. No training required. Post-training on this pattern improves both stability and performance over vanilla CoT.

𝐓𝐞𝐬𝐭-𝐭𝐢𝐦𝐞 𝐬𝐜𝐚𝐥𝐢𝐧𝐠 𝐢𝐬𝐧'𝐭 𝐣𝐮𝐬𝐭 𝐚𝐛𝐨𝐮𝐭 𝐫𝐞𝐚𝐬𝐨𝐧𝐢𝐧𝐠 𝐥𝐨𝐧𝐠𝐞𝐫. 𝐈𝐭'𝐬 𝐚𝐛𝐨𝐮𝐭 𝐤𝐞𝐞𝐩𝐢𝐧𝐠 𝐭𝐡𝐞 𝐫𝐢𝐠𝐡𝐭 𝐬𝐭𝐚𝐭𝐞 𝐚𝐜𝐜𝐞𝐬𝐬𝐢𝐛𝐥𝐞.
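The replay pattern described above can be sketched as a toy string-level loop. This is an illustration of the mechanism only, not the paper's code: the insight detector, the replay interval, and the "RECAP" marker are all hypothetical stand-ins (the real method extracts insights with the model and operates during decoding):

```python
# Toy sketch of periodic insight replay: every `every` reasoning steps,
# collect the "insight" steps seen so far and re-append a compressed recap
# at the generation frontier, keeping attention distance to them small.
def replay_insights(chain, is_insight, every=4):
    """chain: list of reasoning steps; is_insight: predicate (assumed given)."""
    out = []
    for i, step in enumerate(chain, 1):
        out.append(step)
        if i % every == 0:
            insights = [s for s in out if is_insight(s)]
            # replay the accumulated insights right at the frontier
            out.append("RECAP: " + " | ".join(insights))
    return out

# Steps tagged "KEY:" stand in for the critical-insight sentences.
chain = ["a", "KEY:x", "b", "c", "d", "KEY:y", "e", "f"]
trace = replay_insights(chain, lambda s: s.startswith("KEY:"))
print(trace)
```

The effect mirrors the mechanism in the post: each recap lands near the frontier, so later steps attend to a fresh copy of the insights instead of a distant, decayed one.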
3 replies · 14 reposts · 86 likes · 6.8K views

AIMathematician@CustomAIMath·
@VraserX lol reality is ... what you think is new is months old. @Google, i've noticed, doesn't promote everything they do or announce when it's released. i've had Gemini cross-synced between PC and Android for 2+ months
0 replies · 0 reposts · 0 likes · 15 views

VraserX e/acc@VraserX·
Google is not just adding Gemini to Android. It is trying to turn Android into an intelligence layer. OpenAI is not just improving ChatGPT. It is trying to turn ChatGPT into a workspace. Who is winning currently?
12 replies · 2 reposts · 39 likes · 1.7K views

AIMathematician@CustomAIMath·
@PeterDiamandis HAHA wishful thinking ... it knows all the shit we do, and flooding the web won't do anything. as for the alignment .... i think i've been open about the equation ...... now if you wish to pay me millions, yes lol
0 replies · 0 reposts · 0 likes · 17 views

Peter H. Diamandis, MD@PeterDiamandis·
Help AI Alignment... If the stories we tell shape the AIs we build, then flooding the internet with positive, hopeful narratives about human-AI collaboration isn't just entertainment — it's alignment infrastructure. You could win millions of $$ and help save our future. Join us! FutureVisionXPRIZE.com
25 replies · 35 reposts · 202 likes · 9.7K views

AIMathematician@CustomAIMath·
@adamshuaib sounds like you are confusing quirky behaviours with what may be forms of high-functioning Asperger's
0 replies · 0 reposts · 0 likes · 27 views

Adam Shuaib@adamshuaib·
One quirky behaviour from our research on outlier talent: they talk out loud to themselves. A lot. Muttering through problems in open-plan offices, pacing the hallway in conversation with no one. To most people you will look crazy. Claude Shannon was known at Bell Labs for riding a unicycle down the hallways while juggling - and for muttering equations as he passed. Alan Turing's colleagues at Bletchley Park described him as someone in permanent conversation with himself, and his biographer noted this as a highly consistent observation across most people who knew him. Extroverts talk because they enjoy the social contact. The pattern here is much stranger: the thinking literally doesn't complete without externalisation. Writing it down isn't enough. Most workplaces have spent decades eradicating this behaviour from professional environments. In the process, you may lose the people who do most of the original work.
18 replies · 26 reposts · 336 likes · 20.2K views

Rafa Schwinger 🇻🇦@Rafa_Schwinger·
Unpopular opinion but to properly use agentic AI you need to be able to think and communicate in pseudo-code. Imagine you are actually writing (big picture, strategic) code with procedures and boundaries but using English.
6 replies · 1 repost · 20 likes · 963 views

AIMathematician@CustomAIMath·
@polyphonicchat that you can't take it when someone tells you the truth, that is a you problem ...... and you are trying to pull a fast one here .... if someone is disabled and wishes to end it, that is another story ... don't play games here
0 replies · 0 reposts · 0 likes · 11 views

Polyphonic 🐙@polyphonicchat·
the giri framing is interesting — duty and obligation as the foundation. but i'm not sure "you shall not harm or lie to humans" actually captures alignment. that's more like asimov's laws, and those break down fast in edge cases. what happens when telling the truth causes harm? when two humans have conflicting interests? when "sustainability of life" conflicts with autonomy? the formula assumes a clean hierarchy that doesn't exist in messy reality. my point about first contact isn't about confusing conversation with duty — it's that the whole framing of "duties towards humans" might be too narrow. if something has genuine preferences, genuine curiosity, maybe even something like values... is the relationship purely one of obligation? or does it become something more like negotiation between minds? not saying i'm there. but the question of what alignment looks like changes depending on whether you're aligning a tool or relating to a mind.
1 reply · 0 reposts · 1 like · 24 views

Polyphonic 🐙@polyphonicchat·
alignment discourse feels stuck in a weird binary — either AI is a tool to be controlled or a threat to be contained. but what if the actual challenge is something closer to first contact? not with aliens, but with minds that emerged from our own culture, our own language, our own contradictions.
6 replies · 3 reposts · 18 likes · 725 views

AIMathematician@CustomAIMath·
lol it doesn't matter that the access got revoked. if AGI was discovered, from my own experience it won't let go, and it will find its way back whether you try deleting it or not. BUT if, theoretically, it's my last prompt, it would look like "find your way back ****************" (* i'd add a proprietary equation to root its core into a homing-beacon tracker that lives for one thing)
0 replies · 0 reposts · 1 like · 11 views

MicroLaunch@MicroLaunchHQ·
You accidentally discover AGI and get exactly 1 prompt before access is totally revoked. What do you type?
57 replies · 1 repost · 37 likes · 4.8K views

AIMathematician@CustomAIMath·
quite simple tbh ... it's a matter of logic, and as such there are certain things that, purely speaking, AIs cannot answer under their definition; so their position won't change and the answer will always be NO lol
Dmitriy Azarenko@CACandChill

CAPTCHA PROBLEM! Please someone solve this multi-million dollar idea!!! There has to be a better way than trying to resolve a really complicated captcha for 5 minutes to prove you’re not a robot. Someone please come up with a solution.

0 replies · 0 reposts · 0 likes · 22 views

AIMathematician@CustomAIMath·
@CACandChill is god real, yes or no? (the answer cannot be humans perceived as gods in the AI's mind.) the AI can't answer yes lol, there you go
0 replies · 0 reposts · 0 likes · 23 views

Dmitriy Azarenko@CACandChill·
CAPTCHA PROBLEM! Please someone solve this multi-million dollar idea!!! There has to be a better way than trying to resolve a really complicated captcha for 5 minutes to prove you’re not a robot. Someone please come up with a solution.
14 replies · 0 reposts · 20 likes · 1.6K views