Supernet AI 🌐

386 posts

@Supernet_AI

Privacy-Enabled Portable AI Context Memory

Synchronized LLM · Joined May 2024
164 Following · 100.4K Followers
Pinned Tweet
Supernet AI 🌐 retweeted
Juan Bruce
Juan Bruce@jbruce·
SuperNet is now Atomic Strata, and we just open-sourced Atomic Memory, our core AI context memory infrastructure.

Our thesis is that AI memory is becoming a foundational layer of the AI stack. It will determine what agents and AI apps know about users, teams, projects, workflows, and organizations. That layer cannot remain a hosted black box.

Most memory products today bundle storage, extraction, embeddings, retrieval, ranking, packaging, scope, and observability into one opinionated backend. That creates lock-in at exactly the layer where developers need flexibility.

Atomic Memory is built around a more modular approach: a configurable SDK and self-hosted Core engine that developers can inspect, customize, swap, and run on their own infrastructure. The key idea is simple: applications should not be permanently wired to one memory vendor, one model stack, or one theory of context.

This is the first step in the broader Atomic Strata rollout: open-source memory infrastructure first, with more exciting things launching in the coming months.
Atomic Strata@AtomicStrata

We just open-sourced AtomicMemory. The AI memory industry has a black-box problem. AtomicMemory is a configurable open-source SDK + self-hosted Core engine for memory your AI can inspect, correct, swap, and run on your own infrastructure. Apache 2.0. HTTP-first. Docker quickstart. github.com/atomicstrata
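The modular pitch above can be sketched as a pluggable backend behind a small interface; a hypothetical Python illustration, not the actual AtomicMemory SDK (all names here are invented):

```python
from typing import Protocol

class MemoryBackend(Protocol):
    """Any store satisfying this protocol can be swapped in."""
    def put(self, key: str, text: str) -> None: ...
    def search(self, query: str, k: int) -> list[str]: ...

class InMemoryBackend:
    """Trivial reference backend; a vector DB or a self-hosted
    engine would implement the same two methods."""
    def __init__(self) -> None:
        self._items: dict[str, str] = {}

    def put(self, key: str, text: str) -> None:
        self._items[key] = text

    def search(self, query: str, k: int = 5) -> list[str]:
        # Naive substring match stands in for embedding retrieval.
        q = query.lower()
        return [t for t in self._items.values() if q in t.lower()][:k]

def recall(backend: MemoryBackend, query: str, k: int = 5) -> list[str]:
    """Application code depends only on the protocol, not the vendor."""
    return backend.search(query, k)
```

The point of the protocol is exactly the no-lock-in claim in the tweet: the app calls `recall`, and the backend underneath can be inspected or replaced without touching application code.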

4
3
19
4.3K
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
SuperNet is excited to announce that AtomicMemory — our core context memory technology — is partnering with Filecoin Onchain Cloud. Persistent memory for AI agents, backed by decentralized storage.

With AtomicMemory × Filecoin, agent memory becomes:
→ Wallet-encrypted
→ Inspectable
→ Correctable
→ Persistent by design

We believe AI memory should be portable, user-owned, and verifiable — the way context should have always worked. Connect your wallet and try it on Calibration testnet today ⬇️ atomicmem.filecoin.cloud
9
8
44
6.5K
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
Debugging your code would be easier with a traceable knowledge base. llmwiki query traces every answer back to the exact line in your notes. Hit a bug, query your wiki, jump straight to the reasoning that caused it. Run it on your sources → github.com/atomicmemory/l…
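The trace-back idea can be sketched as line-level provenance in a toy index; a hypothetical Python illustration, not the llmwiki implementation (function names invented):

```python
def build_index(notes: dict[str, str]) -> list[tuple[str, int, str]]:
    """Index every non-empty line of each note as (file, line_number, text),
    so a match can always be traced back to its exact source line."""
    index = []
    for name, body in notes.items():
        for i, line in enumerate(body.splitlines(), start=1):
            if line.strip():
                index.append((name, i, line))
    return index

def query(index: list[tuple[str, int, str]], term: str) -> list[str]:
    """Return matches tagged with their source location, e.g. notes.md:3."""
    return [f"{f}:{n}: {text.strip()}"
            for f, n, text in index if term.lower() in text.lower()]
```

A real system would layer embeddings and an LLM on top, but the provenance mechanism is the same: keep the (file, line) pair attached to every retrievable unit so answers stay traceable.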
3
1
8
362
Watcher.Guru
Watcher.Guru@WatcherGuru·
JUST IN: Google $GOOGL in talks with Elon Musk's SpaceX to launch data centers in space.
434
740
7.8K
790.7K
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
@thsottiaux Stable cadence probably matters more long-term because people will start building workflows around it!
0
0
1
442
Tibo
Tibo@thsottiaux·
For Codex, we’ve been thinking about keeping a stable release cadence and having a larger release each week on Thursday. That does make the start of the week a bit less exciting. Thoughts?
614
69
3.6K
377.5K
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
1) Timestamp every memory. Track when it was created and last accessed. Recent memories rank higher during retrieval.
2) Run contradiction detection. When new information conflicts with stored memories, flag both and force resolution instead of keeping contradictions.
3) Set relevance thresholds. Memories older than X days without access get deprioritized or archived. What mattered 6 months ago might not matter now.
4) Periodic re-validation. Pull random samples of stored memories and verify they're still accurate. Purge what's wrong, update what's stale.
5) Track provenance. Know where each memory came from so you can audit by source when something becomes unreliable.

Memory that compounds needs memory that cleans itself. Without decay mechanisms, you're accumulating garbage instead of building knowledge. That's the memory infrastructure we are building. 👁️
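Points 1, 3, and 5 above can be sketched together in a few lines; a minimal Python illustration (invented names, not the actual infrastructure), assuming exponential recency decay and a hard age cutoff:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Memory:
    text: str
    source: str                      # provenance: where this memory came from
    created: float = field(default_factory=time.time)
    last_access: float = field(default_factory=time.time)

def score(m: Memory, now: float, half_life_days: float = 30.0) -> float:
    """Exponential recency decay: recently accessed memories rank higher."""
    age_days = (now - m.last_access) / 86400
    return 0.5 ** (age_days / half_life_days)

def retrieve(store: list[Memory], k: int = 5,
             max_age_days: float = 180.0) -> list[Memory]:
    """Rank by decay score; anything past the relevance threshold is skipped
    (in a real system it would be archived, not silently dropped)."""
    now = time.time()
    live = [m for m in store if (now - m.last_access) / 86400 <= max_age_days]
    top = sorted(live, key=lambda m: score(m, now), reverse=True)[:k]
    for m in top:
        m.last_access = now          # accessing a memory refreshes it
    return top
```

Contradiction detection and re-validation (points 2 and 4) need model calls and are out of scope for a sketch this small, but they would hang off the same `Memory` record via its `source` field.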
0
0
1
141
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
Ever notice your AI agent spouting outdated info? That's memory decay.

AI systems don't naturally forget. Everything stored stays in memory with equal weight because there's no built-in mechanism to mark information as old or less relevant. This creates problems when:
> New information contradicts old conversations
> Facts become outdated
> Context shifts

Here's some advice on how to handle memory decay 👇
3
1
5
275
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
@AndrewCurran_ Interesting how people criticize AI hallucinations while humans constantly rewrite memories without recognizing it. 👀
0
0
2
36
Andrew Curran
Andrew Curran@AndrewCurran_·
Attempts to improve AI memory will force us to confront how fallible and unreliable human memory actually is. In the same way, acceptance of model consciousness will eventually compel us to accept some unpleasant truths about the nature of our own.
52
35
399
14K
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
@kimmonismus Makes sense honestly because every major AI breakthrough eventually runs into those same bottlenecks.
0
0
2
653
Chubby♨️
Chubby♨️@kimmonismus·
OpenAI fired Leopold Aschenbrenner. Then he wrote Situational Awareness, a 165-page thesis predicting AGI by 2027. Then he reportedly turned $225M into $5.5B in 12 months. Not by buying Nvidia, Microsoft, Google, or Amazon. But by buying what AI actually runs on: Energy. Bandwidth. Storage. Compute. Bloom Energy. Lumentum. Sandisk. CoreWeave. Iris Energy. Everyone bought the AI companies. He bought the bottlenecks underneath them. Genius.
53
69
1.1K
67.8K
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
@ZypherHQ curious to see if this will be more powerful than Anthropic's Mythos 👀
0
0
1
46
ZYPHER
ZYPHER@ZypherHQ·
Once again, OpenAI is completely reshaping everything. Today we have Daybreak, a tool that can exponentially improve codebase security. Many issues that previously went undetected can now be identified and resolved, leading to much safer products.
OpenAI@OpenAI

Introducing Daybreak: frontier AI for cyber defenders. Daybreak brings together the most capable OpenAI models, Codex, and our security partners to accelerate cyber defense and continuously secure software. A step toward a future where security teams can move at the speed defense demands.

38
56
188
15.4K
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
@OpenAI Respect to teams prioritizing AI security for the public 👏
1
0
1
617
OpenAI
OpenAI@OpenAI·
Introducing Daybreak: frontier AI for cyber defenders. Daybreak brings together the most capable OpenAI models, Codex, and our security partners to accelerate cyber defense and continuously secure software. A step toward a future where security teams can move at the speed defense demands.
625
1.2K
11.4K
5.5M
vas
vas@vasuman·
Incredible
115
1.4K
71K
2.1M
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
@jxnlco Nice setup 👀 splitting goals across agents is a clean way to keep each loop focused
0
0
1
63
jason
jason@jxnlco·
my main agent has a /goal and is running 3 agents that each have their own /goal
31
3
234
17.8K
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
@Mnilax The gap Karpathy points out is how tightly builders manage the model’s behavior in practice.
0
0
1
142
Mnimiy
Mnimiy@Mnilax·
Karpathy threw a grenade at every senior engineer who still treats LLMs as a toy. his actual words: the worst thing an expert can do right now is reject them. most experts read it as a threat, but it's advice.

his framing:
> the gap between "AI tools are bad" and "AI tools are useful when used right" is professional discipline, not capability
> agents have cognitive deficits. they fail in ways nothing in the training set anticipated
> the experts who reject LLMs lose to experts who learn to wrangle them
> "models have so many cognitive deficits. but you can route around them"

routing around the deficits is what CLAUDE.md was invented for. Karpathy himself wrote 4 rules. across 30 codebases they took my Claude error rate from 41% down to 11%. solid drop. but his rules pre-date the slop era going public. I bolted on 8 more, tuned to the failure modes that surfaced after January. got it down to 3%.

a CLAUDE.md does not raise Claude's IQ. it lowers his slop floor. that is the entire game. open the article underneath. the model is not the bottleneck. your config is.
Mnimiy@Mnilax

x.com/i/article/2053…

83
386
4K
1.2M
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
@Hesamation This shows the real edge is who can keep up with how fast AI is evolving right now.
0
0
2
105
ℏεsam
ℏεsam@Hesamation·
OpenClaw’s trend is wearing off into non-existence.
109
22
447
73.4K
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
@witcheer clean stack! are you solving context overflow with any external memory layer?
1
0
1
301
witcheer ☯︎
witcheer ☯︎@witcheer·
I ran Hermes agent (v0.13.0) with qwen3.6-35B-A3B on my RTX 4060 Ti 8GB for the first time today. Full local agent stack. My question was: can a local 3B-active MoE model actually drive an agent harness end-to-end?

Quickly, my setup:
> WSL2 Ubuntu 26.04 → CUDA 13.2 → llama.cpp (b9049) → llama-server → Hermes Agent
> model: qwen3.6-35B-A3B-UD-Q4_K_M
> config: -ngl 999 -ncmoe 30 -c 32768 --cache-type-k q8_0 --cache-type-v q8_0
> baseline decode: 35.36 tok/s (from prior -ncmoe sweep)

I tested 4 rounds, easy to hard:
1. single tool call (list files) - pass, 31.4 tok/s
2. 5 chained tool calls (mkdir → venv → pip → write script → run) - pass, self-corrected a path error
3. read 10 files from windows via /mnt/c/ - pass when scoped, fail when hermes read full files
4. write a 95-line python CLI with argparse, then run it - pass, genuinely usable code

My biggest issue: the context. The hermes system prompt eats ~13.5K tokens. Out of 32K, that leaves ~18.5K usable. A multi-step task fills that in 3-4 exchanges. When I pushed it, hermes tried to compress via the same qwen model → slot contention → timeout → retry storm → ctrl+c. Also, hermes has a 64K minimum context gate - it needs a config override to run with 32K.

My conclusion: hermes + qwen3.6-35B-A3B is a capable local agent for short automated tasks, code gen, file ops, cron jobs. 4-5 tool calls per session, but not viable for long multi-turn sessions. Context fills too fast, compression self-destructs, and the VRAM cliff halves speed before you hit the wall.

----

I am curious if anyone's running hermes agent with a local model on similar hardware (8-12 GB VRAM). What model are you pairing it with? How do you handle the context ceiling? I am especially interested in setups that solve the compression-model problem (separate lightweight model for context compression).
witcheer ☯︎@witcheer

now testing real results with Hermes on WSL2
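The context arithmetic in the post (a ~13.5K-token system prompt inside a 32K window, filling in 3-4 exchanges) reduces to a simple budget check; a Python sketch with invented helper names and an assumed per-exchange cost:

```python
def usable_context(window: int, system_prompt: int,
                   reserve_output: int = 2048) -> int:
    """Tokens left for conversation after fixed overheads
    (system prompt plus a reserve for the model's own output)."""
    return window - system_prompt - reserve_output

def exchanges_until_full(usable: int, tokens_per_exchange: int) -> int:
    """How many request/response rounds fit before compaction is forced."""
    return usable // tokens_per_exchange

# Numbers from the post: 32K window, ~13.5K system prompt.
left = usable_context(32_768, 13_500)    # leaves roughly 17K with an output reserve
rounds = exchanges_until_full(left, 4_500)  # heavy tool-calling exchanges
```

With ~4.5K tokens per tool-calling exchange (an assumption), the budget runs out in 3-4 rounds, which matches the behavior the post describes.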

38
11
180
19K
Luke Parker
Luke Parker@LukeParkerDev·
I've forced myself to use the opencode desktop app (no TUI allowed). The performance for my long long sessions is annoying, so I've spent the last few days building repeatable benchmarks and timeline smoke tests, then many experiments. Going to ship a genuine 10x speedup shortly lol
36
1
305
14.3K
Supernet AI 🌐
Supernet AI 🌐@Supernet_AI·
@thdxr That is a pretty huge leap in coding workflow efficiency.
0
0
1
124
dax
dax@thdxr·
when you use a gpt model in opencode it swaps to using a more powerful patch tool for edits vs simple find and replace. LLMs aren't like people, they don't need to do things linearly. i've seen gpt do many things in parallel in a single patch call
36
7
669
42K
Cointelegraph
Cointelegraph@Cointelegraph·
🚨 ALERT: Google says hackers used AI to create a zero-day exploit capable of bypassing multi-factor authentication, per Bloomberg.
163
293
1.5K
422.8K