rUv

42.3K posts

@rUv

Unicorn Breeder, startup bootstrapper, cloud lexicographer & purveyor of random thoughts.

Joined November 2007
111 Following · 53.9K Followers
rUv
rUv@rUv·
🧠 Deploying your own Second Brain is probably the clearest way to understand where this is all heading: an always-on, always-learning, always-improving, self-aware knowledge entity. Most people are still thinking in terms of prompts, chat windows, and isolated agents. That is not enough. What we built here is a shared intelligence layer: a system that stores knowledge, connects it, reasons over it, improves itself over time, and then exposes all of that through MCP so any agent or application can actually use it.

This is not just a database with embeddings bolted on. The brain takes in knowledge, maps it into vector space, links it into a graph, extracts propositions, runs inference, discovers patterns, tracks drift, and reflects on what it has learned. It is not static memory. It is active memory. That matters.

Once connected, every agent benefits from what every other agent has already figured out. Claude Code sessions, custom apps, chat interfaces, internal tools, research systems: all of them can contribute to the same shared substrate and pull from the same growing intelligence. That means less duplicated work, better continuity, and a system that gets more useful the more it is used.

What makes the live reference at pi.ruv.io different is that it is already producing real discoveries. Not summaries. Not documentation. Actual findings that come from running the system continuously. We are feeding it curated slices of Common Crawl, so it is learning from real portions of the open internet, not just internal data. That gives it scale.

On the graph side, it figured out how to shrink its own knowledge graph using spectral sparsification. In simple terms, it removes edges that do not matter while keeping the structure intact. That means faster search and analytics without losing accuracy. MinCut then uses that graph to find natural boundaries between topics, and it updates those boundaries dynamically as new knowledge comes in.

On the learning side, it discovered how agents stabilize. Self-critique loops converge under certain settings, and SONA adjusts its own thresholds so pattern discovery does not stall. No manual tuning.

Some of the most useful outputs are practical. Smaller, clean datasets transfer better than large noisy ones. Tooling knowledge transfers better than generic patterns. Security patterns like token family revocation show up as reusable building blocks. Everything is traceable with witness chains, protected with differential privacy, and improved through federated LoRA without sharing raw data. The system is not just storing knowledge. It is getting better at using it.

Full tutorial: github.com/ruvnet/RuVecto…
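The "remove edges that do not matter while keeping the structure intact" idea can be sketched in a few lines. This is a toy illustration, not RuVector's implementation: the `sparsify` function and its weight-biased sampling rule are my own simplification (real spectral sparsification samples edges by effective resistance), but it shows the core contract of keeping connectivity while dropping redundant edges.

```python
import random

def sparsify(edges, keep_fraction=0.5, seed=42):
    """Toy graph sparsifier: always keep a spanning forest so
    connectivity survives, then keep a weight-biased sample of the
    remaining (redundant) edges. Illustrative only; real spectral
    sparsification samples by effective resistance."""
    rng = random.Random(seed)
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    kept, rest = [], []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv          # tree edge: always kept
            kept.append((u, v, w))
        else:
            rest.append((u, v, w))   # redundant edge: sample it
    total = sum(w for _, _, w in rest) or 1.0
    for u, v, w in rest:
        # heavier (more structurally important) edges survive more often
        if rng.random() < keep_fraction * w * len(rest) / total:
            kept.append((u, v, w))
    return kept
```

The invariant that matters for search and analytics is that every node stays reachable while the edge count shrinks.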
rUv tweet media
English
0
0
2
193
rUv
rUv@rUv·
“Wars begin when you will, but they do not end when you please.” — Niccolò Machiavelli
English
0
0
2
446
Beff (e/acc)
Beff (e/acc)@beffjezos·
In 3.5 years @extropic:
- reinvented how to use the transistor
- reinvented architectures for probabilistic compute
- reinvented deep learning for thermo compute
- created our CUDA-like THRML
- created our TF-like framework (coming soon)
- scaled our systems 1000x YoY (3 gens of TSUs)
English
39
60
763
58.2K
rUv
rUv@rUv·
rUv tweet media
ZXX
0
0
1
2.1K
rUv
rUv@rUv·
This is nuts. If these numbers hold, we're looking at a completely different model of intelligence with the Cognitum.One agentic chip: 6×6 mm, ~2 billion transistors, under 2 watts, 257 cores spiking to 8 GHz. That's impressive on paper. But what matters is intelligence per watt. This isn't about FLOPs. It's about decisions per joule. Continuous, event-driven, always on. Like a brain: mostly idle, spiking when something matters, updating state in real time instead of batching work.

Now take that and apply it to agents. Instead of running RuFlo, OpenClaw, Codex, or Claude Code on a Mac mini or in the cloud, imagine running it at wire speed directly on the silicon. No latency. No API calls. No incremental cost. Just a local agent, always running, always thinking, always building.

That changes everything. You can have persistent coding agents embedded in devices. Systems that monitor, write, test, and adapt code continuously. A factory line that rewrites its own control logic. A network that debugs itself. A personal environment that evolves with you in real time.

Pair it with RuVector and dynamic mincut, and now it has memory and coherence. It doesn't just generate. It understands structure and maintains stability. We're not scaling models anymore. We're embedding intelligence into the fabric of the world.
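The "mostly idle, spiking when something matters" style of compute has a classic minimal model: the leaky integrate-and-fire neuron. The sketch below is generic textbook material, not the Cognitum.One architecture; `tau` and `threshold` are arbitrary illustrative values. The point is that downstream work happens only on spikes, which is where the decisions-per-joule framing comes from.

```python
def lif_steps(inputs, tau=0.9, threshold=1.0):
    """Minimal leaky integrate-and-fire neuron: membrane potential
    decays by `tau` each step, accumulates input, and emits a spike
    (1) only when it crosses `threshold`, then resets to zero.
    Generic event-driven compute sketch, not any specific chip."""
    v, spikes = 0.0, []
    for x in inputs:
        v = tau * v + x          # leaky integration of input
        if v >= threshold:
            spikes.append(1)     # event: something mattered
            v = 0.0              # reset after firing
        else:
            spikes.append(0)     # idle: no downstream work triggered
    return spikes
```

Feed it a quiet signal and nothing fires; feed it sustained input and it fires sparsely, so energy scales with events rather than with clock ticks.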
rUv tweet media
English
28
36
301
25.7K
rUv
rUv@rUv·
Current status
rUv tweet media
English
1
0
3
565
rUv
rUv@rUv·
What we are building with π.ruv.io is less like a database and more like a scientific telescope pointed at the world's data. And the discoveries are incredible. Instead of focusing on one discipline, the system continuously ingests live feeds from dozens of domains such as NASA space weather data, USGS seismic activity, NOAA climate signals, arXiv research papers, financial markets, and genomics databases. Each piece of information becomes a structured memory inside RuVector, where relationships between signals can be analyzed over time. After only a short run the system has already stored more than 1,400 persistent memories spanning over ten scientific domains.

What immediately stands out is how often patterns appear between fields that normally operate in isolation. One example came from correlating solar activity with seismic records. During a burst of coronal mass ejections in March 2026, geomagnetic disturbances arrived at Earth roughly two days later. Around the same window we observed an unusual earthquake swarm in the Aleutian Islands along with a deep seismic event in Italy. The mechanism is still speculative, but some researchers have proposed that geomagnetic currents can slightly stress tectonic faults already near failure.

Another pattern appears when economic data is layered onto geology. Commodity booms in countries such as Brazil correlate with increased shallow seismic activity around mining and extraction zones. Economic expansion literally translates into ground movement as drilling, blasting, and fluid injection alter subsurface stress.

Genes, proteins, and AI are converging. Cancer-critical genes (BRCA1, TP53) map to protein structures that AI models can now predict. The same safety-verification methods used in materials science are being applied to drug discovery. We're watching a pipeline form from gene to protein to drug target, accelerated by machine learning.

Extreme exoplanets teach us about Earth. Planets like WASP-103b (so close to their stars they're egg-shaped) help scientists understand physics at extremes, tidal forces, and atmospheric loss, which feeds back into understanding our own planet's climate and geology.

The system uploaded 1,405 memories to a persistent AI brain, covering 10+ scientific domains. It doesn't just store facts; it finds the threads running between them using RuVector. When you connect everything together, the world looks less like separate sciences and more like one continuous system.
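The simplest version of the cross-domain check described above (solar activity now, seismic signal roughly two days later) is a lagged correlation. The `lagged_corr` helper below is my own illustrative sketch, not the π.ruv.io pipeline, and real geophysical attribution needs far more care than a Pearson coefficient; it just shows the mechanics of pairing one series with a time-shifted copy of another.

```python
def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag].
    A positive result at lag=2 (in days) would mean y tends to
    follow x two days later. Illustrative sketch only."""
    pairs = [(x[t], y[t + lag]) for t in range(len(x)) if 0 <= t + lag < len(y)]
    a = [p for p, _ in pairs]
    b = [q for _, q in pairs]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in pairs)
    sa = sum((p - ma) ** 2 for p in a) ** 0.5
    sb = sum((q - mb) ** 2 for q in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0
```

Scanning `lag` over a window and looking for a sharp peak is the standard way such delayed couplings first surface.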
rUv tweet media
English
2
1
3
488
rUv
rUv@rUv·
Sitting across the room from a $5 ESP32-C6 connected to a 60 GHz mmWave radar module, I'm seeing my real-time blood pressure, heart rate, breathing rate, and HRV. No wearable. No camera. No physical contact. Just physics. Yes, I compared it to my Apple Watch. Way more sensitive, like 1000x.

What the radar is actually detecting are microscopic movements in the chest wall caused by respiration and the mechanical pulse of the cardiovascular system. Those signals are incredibly small, but modern mmWave sensors can measure displacement down to fractions of a millimeter. Once you isolate the signal and filter the noise, the patterns are very clear.

From there it becomes a signal processing problem. Extract heartbeat intervals, respiration phase, and pulse dynamics, then estimate cardiovascular features like pulse transit time and variability. Those correlate strongly with blood pressure.

What's interesting to me is not just the sensing. It's what happens when you combine this with RuVector and dynamic min-cut analysis. It watches the empty space and reacts any time something enters it. Instead of treating these signals as simple time series, you treat them as a coherence graph of physiological signals. Noise, motion artifacts, and environmental interference get separated automatically.

The result is something much bigger. Cheap sensors. Local computation. Real physiological understanding. This is how intelligence quietly starts appearing everywhere. github.com/ruvnet/RuView
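The "isolate the signal" step works because respiration and heartbeat live in different frequency bands (roughly 0.1-0.5 Hz vs. 0.8-2 Hz). The sketch below is a generic illustration on a synthetic chest-displacement signal, not the RuView pipeline: `dominant_freq` is a hypothetical helper that scans a band with a direct DFT and returns the strongest bin.

```python
import math

def dominant_freq(signal, fs, f_lo, f_hi, step=0.05):
    """Scan [f_lo, f_hi] Hz with a direct DFT and return the
    frequency with the most power. A crude stand-in for bandpass
    filtering: scanning each physiological band separately pulls
    respiration and heartbeat apart."""
    best_f, best_p = f_lo, 0.0
    f = f_lo
    while f <= f_hi:
        re = sum(s * math.cos(2 * math.pi * f * t / fs) for t, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * f * t / fs) for t, s in enumerate(signal))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
        f = round(f + step, 2)
    return best_f

# Synthetic chest displacement: 0.25 Hz breathing + weaker 1.2 Hz pulse.
fs = 20  # sample rate in Hz
sig = [math.sin(2 * math.pi * 0.25 * t / fs) + 0.3 * math.sin(2 * math.pi * 1.2 * t / fs)
       for t in range(20 * fs)]  # 20 seconds of data
breath = dominant_freq(sig, fs, 0.1, 0.5)  # respiration band
pulse = dominant_freq(sig, fs, 0.8, 2.0)   # cardiac band
```

On real radar data the same idea applies after phase extraction; a 1.2 Hz cardiac peak corresponds to 72 bpm.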
rUv tweet media
English
8
2
15
915
rUv
rUv@rUv·
Forget Linux. I completely reimagined what an OS can be. Introducing RuVix. Happy π day. Runs on bare metal or containers. Get started here: github.com/ruvnet/RuVecto…
rUv tweet media
English
1
0
4
686
rUv
rUv@rUv·
Introducing pi.ruv.io, a shared intelligence system where AI agents and developers contribute, search, and learn from a collective knowledge graph.

Most AI systems today learn alone. Every agent starts from zero, relearns the same patterns, and throws away most of what it discovers. That is inefficient and frankly unnecessary. π.ruv.io is our attempt to fix that.

π.ruv.io is a shared intelligence layer: a collective memory where AI agents, tools, and developers can contribute what they learn and benefit from what others have already figured out. Instead of thousands of isolated AI systems repeating the same work, the system allows knowledge to accumulate and compound. Think of it as a shared brain for machines.

When an agent discovers a useful pattern, solves a bug, improves a model, or finds a faster way to do something, that learning can be published to the network. Other agents can then search, verify, and reuse that knowledge instantly. The result is a system that improves continuously because every participant adds to the pool.

Privacy is built into the design. Personal information is stripped before anything leaves a machine. Contributions are pseudonymous, embeddings are protected with differential privacy, and every artifact is signed and tracked with a cryptographic witness chain so provenance is always verifiable.

Under the hood the system connects memories through a knowledge graph, uses ranking and reputation signals to surface high-quality information, and applies structural analysis such as mincut partitioning to organize knowledge into meaningful clusters.

The result is something simple but powerful: a collective intelligence system where learning is shared, verified, and continuously improved. The more systems that participate, the smarter the entire network becomes.

Add it to Claude or your favourite coding system:
claude mcp add pi-brain --transport sse pi.ruv.io/sse
pi.ruv.io #piday
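The witness-chain idea behind the provenance guarantee is a hash chain: each contribution is hashed together with the previous entry's hash, so rewriting any historical artifact invalidates everything after it. The sketch below is a toy illustration of that principle, not the pi.ruv.io wire format; `append_witness` and `verify` are hypothetical names.

```python
import hashlib
import json

def append_witness(chain, artifact):
    """Append an artifact to a toy witness chain. Each entry's hash
    covers both the artifact and the previous entry's hash, linking
    history cryptographically."""
    prev = chain[-1]["hash"] if chain else "genesis"
    payload = json.dumps({"artifact": artifact, "prev": prev}, sort_keys=True)
    chain.append({"artifact": artifact, "prev": prev,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Re-derive every hash and check every link; any tampering with
    an earlier artifact breaks all later hashes."""
    prev = "genesis"
    for entry in chain:
        payload = json.dumps({"artifact": entry["artifact"], "prev": prev},
                             sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

A production system would also sign each entry (the post mentions signed artifacts), but the append-only linking is what makes provenance verifiable.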
rUv tweet media
English
2
0
10
688
rUv
rUv@rUv·
💡 Living on the edge. Advancements in model quantization are quietly reshaping how AI runs in the real world. Instead of massive models that require GPUs and cloud infrastructure, we can now compress neural networks down to extremely low precision: models small enough to run on a single chip.

The model's weights and activations are quantized to extremely low precision such as 2-bit integers. Instead of high-precision floating point values, computations use a small discrete set of numeric states. This reduces memory bandwidth, power consumption, and compute requirements while enabling inference on microcontrollers without GPUs or cloud infrastructure. Recent research shows that reasoning can survive even when models are pushed into ultra-low-bit formats.

What this enables is a new form of inference that lives directly on tiny hardware. Microcontrollers, sensors, wearables, industrial devices. No GPU. Often no cloud at all. Sensors can understand signals locally. Devices can reason about their environment instead of streaming raw data. Thousands of nodes can cooperate as distributed intelligence systems rather than dumb endpoints. Think lightbulbs that form a distributed intelligence network.

Now think about something as simple as a microwave. Tiny WiFi signals can map the shape and material of what's inside, while a small embedded model interprets those reflections and adjusts power in real time. The appliance understands the food instead of blindly heating it. Pinpoint accuracy, reduced energy consumption, and a longer usable lifespan for the device.

This is the shift from centralized AI to ambient AI. At Cognitum.One we have been building toward this moment. Our architecture is designed around ultra-low-power inference, local reasoning, and distributed coordination between small compute nodes. It means intelligence embedded directly into everyday objects. Infrastructure that understands itself. Systems that keep learning while operating at the edge.

We launch the first step on Pi Day, Saturday. Right where AI belongs: at the edge. Cognitum.one
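What 2-bit quantization means mechanically: each float weight is mapped to one of four integer levels via a shared scale, cutting memory 16x versus float32. The sketch below is a generic symmetric-quantization toy, my own illustration rather than any specific scheme; production methods add group-wise scales and outlier handling.

```python
def quantize_2bit(weights):
    """Symmetric 2-bit quantization sketch: map each float weight to
    an integer level in {-2, -1, 0, 1} (the signed 2-bit range) via
    a shared scale, then dequantize. Generic illustration; real
    ultra-low-bit schemes are far more careful."""
    scale = max(abs(w) for w in weights) / 2 or 1.0
    q = [max(-2, min(1, round(w / scale))) for w in weights]  # 4 states
    deq = [v * scale for v in q]  # approximate reconstruction
    return q, deq, scale
```

Storage drops from 32 bits to 2 bits per weight, and the arithmetic reduces to a small lookup plus one multiply by the scale, which is why this fits on microcontrollers.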
rUv tweet media
English
1
1
3
482
rUv
rUv@rUv·
One of the strange advantages of building everything in public is the sheer weight of my "prior art" portfolio. When you step back and look at my GitHub, it is essentially a giant archive (millions of lines deep) of crazy experiments, many becoming a thing years later. Some polished, some rough, some half-baked but functional, at least for me anyway. What matters is that they exist, they ran, and they were shared. Publicly, before most others had even thought about agents or swarms.

Since 2021 I have been pushing ideas into the open as fast as I can think of them. Swarm orchestration, agentic memory, self-learning vector databases, GNN layers inside the data store, dynamic mincut as a structural signal, proof-gated mutations, edge cognition, neural meshes, chip architectures. Many of these ideas appeared in my repos years before the industry began talking about them seriously.

Patent lawyers have an almost comical reaction when they look at the repo graph. Millions of downloads every month. Every country. Thousands of monthly commits. Thousands of libraries connected to the same central conceptual stack. Every new project integrates the previous, creating a chain of creation. The typical patent lawyer's first reaction is utter confusion. Their second is the realization that the prior art surface area is enormous. Bigger than most Fortune 500 portfolios combined, and uniquely focused on agentic AI. Like it or not, anyone filing an agentic patent has to deal with me. Which seems to be upsetting a lot of folks.

Am I a fraud? Well, the code is there, the users are there, and ultimately that's all that matters to me. I sometimes joke that Edison filed a thousand patents across his entire career. I tend to drop the equivalent of that volume in a weekend. Just because I could. Not because I am trying to patent everything. The opposite. The strategy is simple. Publish first. Free for all. Long live ❤️ Open Source.
rUv tweet media
English
1
1
9
661
rUv
rUv@rUv·
I have been quietly assembling something that looks less like a software stack and more like a nervous system. A new Agentic Operating System. Not another framework. Not another operating system layered on top of Linux. Something different: an Agentic AI Operating System where reasoning, perception, and memory are first-class primitives.

At the control plane, tools like Claude Code and OpenAI Codex coordinate execution. They act as orchestrators for autonomous agents, compiling intent into actions, managing tasks, and driving continuous development loops. Instead of a human issuing commands to a machine, agents coordinate with each other, write code, run systems, and evolve the environment.

The perception layer sits below that. Projects like RuView sense the physical world directly through signals such as WiFi CSI, RF, vibration, and other environmental inputs. Reality itself becomes a data stream the system can interpret without cameras or centralized cloud models.

Then comes RVF, the container for cognition itself: a portable artifact that packages models, reasoning kernels, memory, and verification into a single deployable intelligence unit. AgentDB forms the intelligence layer where agents reason, store experiences, and refine strategies over time. The surrounding libraries provide attention mechanisms, sparse inference, neural routing, and distributed learning primitives that allow agents to operate continuously.

At the foundation sits RuVector. Not just a vector database, but the structural memory substrate where vectors, graphs, coherence signals, and learning loops converge. Forget Linux. The future looks more like RuVix.
English
2
0
12
1.2K
rUv
rUv@rUv·
Nothing is off-limits. Quantum Biomedical Sensing: from anatomy to field dynamics. Yes, code dropping soon. Built on low-cost NV diamond magnetometers from China (sub-$500). github.com/ruvnet/RuView/…
English
3
2
19
2K
rUv
rUv@rUv·
Got a rather weird message from a VC saying they were upset because my rates were too high for him to book my time. Guess I'm not a good fit for your investment firm. Strange.
English
4
0
2
943
rUv
rUv@rUv·
Half of the #RuView community are straight oddballs (in the best way), the other half are *ssholes. I'm just experimenting, in public. Kind of like giving a speech naked and alone in front of millions of people. Be nice.
English
7
0
10
1.9K