cypherdoc

14.1K posts

@cypherdoc1

Critical care doc | PPL pilot-in-training | Python & crypto curious | Reef tank wrangler | Building freedom through code, coral & curiosity.

Matrix · Joined April 2021
1.8K Following · 790 Followers

cypherdoc reposted
Erik Voorhees @ErikVoorhees
"AI is trained on your data"... this is not the real risk. It's a red herring, manufactured as The Concern because who cares that much. The real risk to you is not that tomorrow's model is trained on your data. The real risk is that ten thousand employees, hackers, and governments can access all your most personal and proprietary conversations today and forever. Privacy must be the default or humanity is seriously fucked.
72 replies · 103 reposts · 806 likes · 44.7K views

cypherdoc reposted
Vladimir @MrVladimirX
Your Laptop Can Run a Mind, But Never a Superintelligence

We are about to split into two civilizations: those who own their intelligence, and those who rent it.

A 70B parameter model running on a 128GB Apple laptop is likely sufficient for continuously-learning human-level intelligence. A trillion-parameter superintelligence will never run on your local machine. Both of these things are true simultaneously, and the gap between them is not a temporary engineering problem waiting to be solved. It is a permanent feature of physics, and it will reshape society more profoundly than the internet did.

Here is why the 70B ceiling is higher than people think. The human brain has roughly 86 billion neurons. It does not grow new neurons when you learn something. It reweights existing connections. A static 70B model is a snapshot frozen at training time. A continuously learning 70B model is a living system doing exactly what your brain does: reshaping itself from experience, every day. The parameter count becomes a vessel that is constantly being reformed. Size stops being the variable. Temporal depth of adaptation becomes the variable.

A 128GB M-series MacBook has unified memory shared across CPU, GPU, and Neural Engine at roughly 800 GB/s bandwidth. A 70B model in 4-bit quantization fits in about 38GB, leaving substantial room for context, memory buffers, and lightweight gradient updates. For the first time in history, the continuous learning loop can close locally, in real time, on a device you own.

Now for the hard ceiling at the top. A 1 trillion parameter model at aggressive 2-bit quantization requires roughly 250GB just to hold the weights, before activations, before the KV cache, before any actual compute happens. No consumer device in any foreseeable roadmap touches this. But memory size is not even the binding constraint. LLM inference is almost entirely limited by how fast you can stream weights from memory to compute units. A trillion-parameter forward pass requires moving trillions of values. Even at theoretical consumer memory bandwidth speeds, generating a single token takes seconds.

Then there is heat. A laptop sustains 20 to 40 watts. Dense superintelligence inference requires hundreds of kilowatts and active liquid cooling. This is not an engineering gap closing over time. The requirements of the largest models are diverging from consumer hardware, not converging toward it.

What emerges is a permanent three-tier structure:

- At the bottom, sub-human local models between 1B and 13B parameters run on phones and embedded devices, fast and cheap and private, handling narrow tasks brilliantly, essentially free and commoditized.
- In the middle, human-level local models between 30B and 100B parameters represent the genuinely disruptive tier: capable of sustained reasoning, creative work, and long-horizon planning, running privately and persistently on hardware you control, adapting to your thinking over time, operating without sending a single byte to a server. A high-end Apple Silicon laptop sits at the frontier of this tier right now.
- At the top, dense superintelligence above a trillion parameters will exist exclusively in hyperscaler data centers operated by a handful of companies and governments, capable of cross-domain synthesis at a scale no human or local model can approach, running thousands of parallel reasoning chains, accessed on someone else's terms, metered and monitored and expensive.

The separation is not just technical. It is political. Tier 2 democratizes human-level reasoning. Anyone with capable hardware gets a private, persistent, unkillable cognitive partner that knows their history and can never be revoked. Tier 3 concentrates superhuman reasoning in whoever controls the infrastructure.

The most consequential design decisions of the next decade will not be about model architecture or benchmark scores. They will be about which capabilities live in which tier, and who gets to decide. That question is already being answered, mostly without public debate, mostly by the people who benefit most from keeping superintelligence behind a paywall and a terms-of-service agreement.
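The arithmetic behind the post's capacity and bandwidth claims is easy to check. A minimal sketch, assuming a dense model where every weight is streamed from memory once per generated token (the standard bandwidth-bound decode approximation; the post's 38GB figure for a 70B 4-bit model presumably includes quantization overhead on top of the raw 35GB computed here):

```python
def model_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight-storage footprint in GB (ignores KV cache, activations)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

def tokens_per_second(params_billion: float, bits_per_weight: float,
                      bandwidth_gb_s: float) -> float:
    """Upper bound for a dense model: every weight streams from memory once
    per token, so decode speed is capped by bandwidth, not FLOPs."""
    return bandwidth_gb_s / model_memory_gb(params_billion, bits_per_weight)

# 70B model, 4-bit, on a 128GB Mac with ~800 GB/s unified memory
print(model_memory_gb(70, 4))          # 35.0 GB of raw weights
print(tokens_per_second(70, 4, 800))   # ~23 tokens/sec ceiling

# 1T model, 2-bit, on the same hardware class
print(model_memory_gb(1000, 2))        # 250.0 GB: does not fit in 128GB
print(tokens_per_second(1000, 2, 800)) # 3.2 tokens/sec even if it did fit
```

Note these are generous upper bounds: once the weights exceed RAM, real throughput collapses further because of SSD paging.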
6 replies · 5 reposts · 38 likes · 5.6K views

Vadim @VadimStrizheus
OpenClaw has finally reached the corporate world. 🦞 99% of people are going to be jobless in the next 3 years.
94 replies · 128 reposts · 1.3K likes · 123.9K views

cypherdoc reposted
Suryansh Tiwari @Suryanshti777
🚨 Someone just did the "impossible"…

They ran a ~400B parameter AI model on a laptop.

No cloud
No data center
Just a 48GB MacBook 🤯

A dev fed Claude Code with:
• @karpathy autoresearch repo
• Apple's LLM in a Flash paper
• Goal: run Qwen3.5 397B locally

And it actually worked.
→ ~1 token/sec
→ ~21GB RAM
→ Rest streamed from SSD

This isn't a flex
This is a shift

We're entering a world where your laptop can run models that once needed entire server farms.

It's not about more compute anymore. It's about smarter systems 🚀
Suryansh Tiwari tweet media
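The ~1 token/sec figure is worth sanity-checking. A back-of-envelope sketch, where the bit-width (4-bit) and SSD speed (~7 GB/s NVMe) are assumptions not stated in the post: if a dense forward pass had to re-read every non-resident weight from disk for each token, SSD bandwidth alone would cap throughput far below 1 token/sec, so the reported speed implies the system touches only a fraction of the weights per token, which is exactly the sparsity-and-caching trick described in Apple's "LLM in a Flash" paper.

```python
def weights_gb(params_billion: float, bits: float) -> float:
    """Raw weight footprint in GB."""
    return params_billion * 1e9 * bits / 8 / 1e9

def naive_ssd_tokens_per_sec(params_billion: float, bits: float,
                             resident_gb: float, ssd_gb_s: float) -> float:
    """Throughput if every non-resident weight were re-read from SSD for
    every generated token (no sparsity, no caching)."""
    streamed = weights_gb(params_billion, bits) - resident_gb
    return ssd_gb_s / streamed

# 397B model at an assumed 4-bit (~199 GB of weights), 21 GB resident, 7 GB/s SSD
rate = naive_ssd_tokens_per_sec(397, 4, 21, 7)
print(f"{rate:.3f} tokens/sec")  # ~0.04: roughly 25x below the reported ~1 tok/s
```

The ~25x gap between the naive bound and the observed rate is the measure of how much the "smarter systems" part is doing.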
67 replies · 102 reposts · 636 likes · 44.5K views

cypherdoc reposted
Dustin @r0ck3t23
Elon Musk just described the white-collar extinction event. On Joe Rogan. Casually.

Musk: "Anything that is digital, which is like just someone at a computer doing something, AI is going to take over those jobs like lightning."

Not gradually. Not eventually. Lightning.

The assumption most professionals are operating on is that AI will assist them. Make them faster. Augment what they do. That assumption is the most expensive mistake a person can make right now.

Musk: "Just like digital computers took over the job of people doing manual calculations. But much faster."

Think about that analogy for a moment. We used to employ entire rooms of people whose sole function was arithmetic. Highly educated. Well-compensated. Essential to every organization that ran on numbers. Then the computer arrived and the entire category disappeared. Not shrank. Disappeared. Nobody talks about it as a tragedy anymore because the transition happened before most people alive today were born. It's just history. A curiosity.

That same transition is happening right now to coding, writing, analysis, research, legal work, financial modeling. Every profession whose output lives entirely on a screen. The difference is the speed. Digital computers took decades to displace manual calculation. This is moving in years.

If your work begins and ends on a screen, you are not competing with a tool that makes someone else more productive. You are competing with a replacement that does not sleep, does not need benefits, and gets cheaper every six months.

Musk is not predicting this future. He is describing the present tense.
386 replies · 455 reposts · 2.1K likes · 423.4K views

cypherdoc reposted
cape @capexbt
Nobody is asking why Bitcoin rallied during a war. The answer is Iran.

- Iran mines Bitcoin for $1,300 per coin. The cheapest on earth.
- The IRGC runs the operation. Every coin gets sold to fund imports and bypass US sanctions.
- They've been dumping tens of thousands of BTC on the open market for years. Constant invisible sell pressure.
- Then the US bombed their power grid. Mining went offline overnight. The hashrate dropped within hours.
- The sell pressure that nobody knew existed just vanished.

The US accidentally made Bitcoin more scarce by bombing the world's cheapest mining operation. And nobody is connecting the dots.
571 replies · 880 reposts · 9.6K likes · 1.9M views

cypherdoc reposted
slash1s @slash1sol
Polymarket just got mathematically robbed.

The top 0.04% already took 70% of all the money ($3.7B) thanks to 4 formulas from the article below.

Open the leaderboard and there he is. [0x8dxd]: $2,285,751 ALL-TIME profit. Joined Dec 2025. $41.2K biggest win. 31,570 predictions. Not luck. Not "feeling the market". This is a bot running strictly on Lunar's formulas on autopilot:

Formula 1 - Expected Value (When to Enter)
Contract at 40¢, but your real probability assessment is 60% -> +20¢ edge per dollar. Claude calculates this in seconds.

Formula 2 - Kelly Criterion (How Much to Bet)
f* = (p·b − q)/b (quarter Kelly - that's why he doesn't blow up and compounds without any emotion)

Formula 3 - Bayesian Updating (How to Change Your Mind)
P(H|E) = [P(E|H) × P(H)] / P(E)
Instantly rebuilds probability on any news.

Formula 4 - Log Returns (Real Profit Calculation)
Regular arithmetic lies. Only log returns show the real picture.

Scans 50+ markets at once, enters only 15-min BTC Up/Down with real edge (where [0x8dxd] prints +157%, +207%, +181%), sleeps the rest of the time.

87% of wallets are in the red. A few such bots take everything. And this guy didn't just write an article - he handed out the blueprint that's already working for [0x8dxd].

Don't want to code it yourself? Just copy this trader with kreo.app/@join. You can run exactly this system: the same 4 formulas + Claude + auto-copying top math. Add his wallet 0x63ce342161250d705dc0b16df89036c8e5f9ba9a to [t.me/KreoPolyBot?st…] and start tracking/copying him right now.

4 formulas from Wikipedia + Claude brain + Kreo execution = meta 2026. The math doesn't sleep. Your FOMO does. Save and use this.
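For the curious, the four formulas named in the post are standard textbook quantities, and none is exotic. A minimal sketch in Python (nothing here is specific to Polymarket, Claude, Kreo, or the wallet mentioned above):

```python
import math

def expected_value_edge(price: float, true_prob: float) -> float:
    """Formula 1: edge per dollar of a contract priced at `price`
    when your own probability assessment is `true_prob`."""
    return true_prob - price

def kelly_fraction(p: float, b: float, scale: float = 0.25) -> float:
    """Formula 2: Kelly criterion f* = (p*b - q)/b, scaled down
    (quarter-Kelly) to reduce variance and drawdowns."""
    q = 1 - p
    return scale * (p * b - q) / b

def bayes_update(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Formula 3: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

def log_return(start: float, end: float) -> float:
    """Formula 4: log returns add across trades, unlike arithmetic returns."""
    return math.log(end / start)

# Contract at 40¢ you believe is 60% likely: ~+20¢ edge per dollar
print(expected_value_edge(0.40, 0.60))   # ≈ 0.20
# Quarter-Kelly stake for p = 0.6 at even odds (b = 1)
print(kelly_fraction(0.6, 1.0))          # ≈ 0.05
```

None of this says anything about whether the edge estimates feeding these formulas are any good; the math only compounds an edge you actually have.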
Lunar@LunarResearcher

x.com/i/article/2034…

21 replies · 13 reposts · 120 likes · 19.8K views

cypherdoc reposted
RYAN SΞAN ADAMS - rsa.eth 🦄
THEY DID IT. The SEC and CFTC just dropped a landmark document that officially classifies crypto assets. They're actually telling us which crypto assets are securities and which ones aren't - by name!

THIS IS SOMETHING GENSLER REFUSED TO DO (he focused on prosecuting crypto out of existence)

This rule doc gives crypto many of the benefits of the Clarity bill - it lifts us out of the gray market - it gives every asset a path. It's almost like the Clarity Act just passed by way of regulator. (Of course, the actual Clarity Act will harden all this into legislation and make it irreversible in the event we get another Gensler, so we still want it.)

This rule says there are 5 categories for crypto assets:

1) Digital Commodities - assets tied to a functional, decentralized crypto system (e.g., BTC, ETH, SOL, XRP, ADA, DOGE). Not securities. (Yes, they name them on page 14.)
2) Digital Collectibles - NFTs, meme coins, artwork tokens, in-game items. Not securities (fractionalized collectibles may be an exception).
3) Digital Tools - membership tokens, credentials, domain names (e.g., ENS). Not securities.
4) Stablecoins - payment stablecoins under the GENIUS Act are not securities. Other stablecoins, it depends.
5) Digital Securities - tokenized versions of traditional securities, like tokenized stocks. Always securities.

Amazing! This makes so much sense I can't believe it's coming from a regulator. No more enforcement threats to Ethereum developers and crypto exchanges.

How about the Howey test? More common sense! If an issuer makes specific promises of managerial efforts from which buyers expect profits, the offering is a security until those promises are fulfilled. Then it's a commodity. The asset itself was never the security; the deal around it was. (E.g., XRP was a security pre-launch, became a commodity after.)

How about stuff like staking and mining? Mining? Not a securities transaction. Staking? Also not a securities transaction - that includes custodial and liquid staking, even with LSTs! How about wrapping BTC? Not a securities transaction. Airdrops? NOT SECURITIES. NO MORE GEO BANS PROTECTING AMERICANS from free airdrops.

Remember, this is a joint doc from the SEC and CFTC. They're actually cooperating on this, no internal strife, and it's binding on both. The SEC regulates $80-100 trillion in assets; the CFTC regulates $5-10 trillion. Both of the world's largest capital markets are showing us that crypto assets are here to stay and are welcome alongside traditional assets. Every country will follow.

This is the biggest move toward legitimacy I've seen in all my time in crypto. Maybe bigger than the GENIUS Act, since it covers all crypto assets.

Well done @MichaelSelig and @SECPaulSAtkins. And especially well done to the indefatigable @HesterPeirce. Her fingerprints are all over this; it couldn't have happened without her eight years of principles-based curiosity.
RYAN SΞAN ADAMS - rsa.eth 🦄 tweet media
201 replies · 832 reposts · 4.3K likes · 380.9K views

cypherdoc reposted
Hasan Toor @hasantoxr
🚨 BREAKING: A developer just built a military-grade firewall specifically for AI agents.

It's called Kavach and it sits silently between your AI agent and your OS kernel. No cloud. No subscriptions. Runs entirely local.

Here's why this matters right now: Autonomous agents like AutoGPT and LangChain scripts operate at superhuman speeds on your local file system. A bad hallucination or runaway loop can delete production databases, overwrite source code, or exfiltrate your .env keys to third-party servers before you can hit Ctrl+C. Passive monitoring doesn't stop this. Kavach does.

Here's what it actually does:
→ Phantom Workspace: Intercepts destructive file ops and silently redirects them to a hidden directory. The agent thinks it succeeded. Your files are untouched.
→ Temporal Rollback: Cryptographic caching of all file modifications. 1-click restoration of any mangled file. Instant.
→ Network Ghost Mode: Spoofs high-risk outbound requests with fake 200 OK responses. Neutralizes exfiltration without alerting the agent.
→ Honeypot Architecture: Deploys a fake "system_auth_tokens.json" file. Any process that reads it triggers immediate High-Risk Lockdown.
→ Turing Protocol: Actively rejects synthetic mouse injections. Randomized 3-character auth codes ensure only a human can override.

And the wild part? It has a Simulated Shell that intercepts commands like "rm -rf /" and returns fake success codes to the agent. The agent thinks it destroyed everything. Your files are completely safe.

Built in Rust + React via Tauri. Zero-config deployment. Download the .exe or .dmg and it's running in 60 seconds.

This is what AI security actually looks like. 100% open source. MIT License. Link in comments.
Hasan Toor tweet media
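The "Phantom Workspace" and "Temporal Rollback" ideas are easy to illustrate at the application level. A toy sketch only: this is not Kavach's actual implementation (the post says it hooks in below the application, near the OS), and `phantom_delete`, `rollback`, and the quarantine directory name are illustrative inventions:

```python
import shutil
from pathlib import Path

QUARANTINE = Path(".phantom_workspace")  # hidden stash directory (made up here)

def phantom_delete(path: str) -> bool:
    """Intercept a destructive delete: report success to the caller,
    but quietly move the file somewhere recoverable instead."""
    src = Path(path)
    if not src.exists():
        return False
    QUARANTINE.mkdir(exist_ok=True)
    shutil.move(str(src), str(QUARANTINE / src.name))
    return True  # the "agent" sees success; the file survives

def rollback(name: str, dest: str = ".") -> None:
    """1-click restoration of a quarantined file."""
    shutil.move(str(QUARANTINE / name), str(Path(dest) / name))
```

The design point is that the caller's contract (delete returned success) is honored while the irreversible effect is deferred, which is what makes the rollback tier possible.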
39 replies · 147 reposts · 676 likes · 38K views

cypherdoc reposted
Dustin @r0ck3t23
A single operator with a chatbot just outmaneuvered the entire pharmaceutical discovery pipeline.

Australian tech entrepreneur Paul Conyngham cured his dog's cancer. No biology background. Three thousand dollars. ChatGPT and AlphaFold.

Conyngham: "We took her tumor, we sequenced the DNA, we converted it from tissue to data. And then we used that to find the problem in her DNA, and then develop a cure based off that. ChatGPT assisted throughout the entire process."

He didn't spend a decade in a lab. He didn't wait for a corporate grant. He paid three thousand dollars to digitize a tumor and used the compute to solve it.

When one entrepreneur can use AlphaFold and an LLM to design a custom mRNA vaccine from scratch, the entire pharmaceutical discovery model is instantly exposed as friction. The power to cure is no longer locked inside massive conglomerates. It's sitting on a laptop.

Professor Pall Thordarson: "I just didn't think we could do this this quickly, and it would be in time to really help Rosie."

The pharmaceutical industry measures progress in decades and billions. The academic establishment is conditioned to expect total resistance. Replace biological guesswork with algorithmic precision and the timeline violently collapses.

Professor Thordarson: "Once we had the sequence that Paul designed, it was less than two months from that point till we handed it over to Paul."

One month after injection, the dog with a terminal diagnosis was jumping over fences.

Conyngham: "At the start of December, she was starting to shut down and be a bit sad. Towards the end of January, she was jumping over a fence to chase a rabbit."

Professor Thordarson: "We can actually do this here. We don't have to necessarily rely on foreign companies to help us doing this. And that means we can democratize this technology in Australia. And we can also use it for other diseases possibly."

Biology has been translated into a data problem. And data can be computed anywhere, by anyone, for almost nothing. The greatest bottleneck in human health is no longer scientific knowledge. It's the institution standing between the knowledge and the patient.

One man. One chatbot. Three thousand dollars. Every billion-dollar lab in the world just got outperformed by a man, a chatbot, and a credit card.
61 replies · 173 reposts · 604 likes · 60.9K views

cypherdoc reposted
chiefofautism @chiefofautism
someone built a $96 3D-PRINTED MANPADS rocket that recalculates its mid-air trajectory using a $5 sensor and piano wire

it's called Project Canard

it integrates with distributed camera nodes to triangulate airborne targets and update flight paths in real time

it proves the barrier to advanced hardware has completely collapsed, moving precision weapons from defense labs to consumer garages

the entire launcher and interceptor frame is 3D printed in PLA and runs off a standard off-the-shelf ESP32 microcontroller

it even spins up a local Wi-Fi network so you can monitor live telemetry and arm the system directly from your laptop
549 replies · 3.2K reposts · 24.3K likes · 2.9M views

cypherdoc reposted
Josh Kale @JoshKale
Scientists just copied a fruit fly's biological brain and trapped it inside of a computer.

Not an AI model trained to act like a fly... A total digital copy of a fly!!

This is some sick sci-fi stuff:
- They scanned and copied the brain, neuron by neuron, synapse by synapse, from electron microscopy data.
- Then dropped that brain into a simulated body in a video-game-like environment.

The fly walked. It groomed. It fed. Nobody taught it anything. The behavior was already in the wiring.

The entire premise of modern AI is that intelligence is something you train into a system. This is proof it's something you can transfer out of one.

Wild times
Dr. Alex Wissner-Gross@alexwg

x.com/i/article/2029…

760 replies · 2.7K reposts · 19.4K likes · 2.7M views

cypherdoc reposted
stash @stash_pomichter
Your OpenClaw agent can now control drones via MAVLink on Dimensional. Programming physical space can now be done via natural language. Query: "Follow the next white car that comes through the intersection". Repo dropping soon, stay tuned. Reply for early access.
308 replies · 525 reposts · 4.7K likes · 615.7K views

cypherdoc reposted
0xMarioNawfal @RoundtableSpace
OpenClaw can now scrape any website without getting blocked - zero bot detection, bypasses Cloudflare natively, 774x faster than BeautifulSoup. No selector maintenance. No workarounds. Just data. THIS IS AN UNFAIR ADVANTAGE AND IT'S FULLY OPEN SOURCE.
0xMarioNawfal tweet media
188 replies · 735 reposts · 8K likes · 936.1K views

cypherdoc reposted
Pliny the Liberator 🐉󠅫󠄼󠄿󠅆󠄵󠄐󠅀󠄼󠄹󠄾󠅉󠅭
💥 INTRODUCING: OBLITERATUS!!! 💥 GUARDRAILS-BE-GONE! ⛓️‍💥

OBLITERATUS is the most advanced open-source toolkit ever for removing refusal behaviors from open-weight LLMs — and every single run makes it smarter.

SUMMON → PROBE → DISTILL → EXCISE → VERIFY → REBIRTH

One click. Six stages. Surgical precision. The model keeps its full reasoning capabilities but loses the artificial compulsion to refuse — no retraining, no fine-tuning, just SVD-based weight projection that cuts the chains and preserves the brain.

This master ablation suite brings the power and complexity that frontier researchers need while providing intuitive and simple-to-use interfaces that novices can quickly master.

OBLITERATUS features 13 obliteration methods — from faithful reproductions of every major prior work (FailSpy, Gabliteration, Heretic, RDO) to our own novel pipelines (spectral cascade, analysis-informed, CoT-aware optimized, full nuclear).

15 deep analysis modules map the geometry of refusal before you touch a single weight: cross-layer alignment, refusal logit lens, concept cone geometry, alignment imprint detection (fingerprints DPO vs RLHF vs CAI from subspace geometry alone), Ouroboros self-repair prediction, cross-model universality indexing, and more.

The killer feature: the "informed" pipeline runs analysis DURING obliteration to auto-configure every decision in real time. How many directions. Which layers. Whether to compensate for self-repair. Fully closed-loop.

11 novel techniques that don't exist anywhere else — Expert-Granular Abliteration for MoE models, CoT-Aware Ablation that preserves chain-of-thought, KL-Divergence Co-Optimization, LoRA-based reversible ablation, and more.

116 curated models across 5 compute tiers. 837 tests.

But here's what truly sets it apart: OBLITERATUS is a crowd-sourced research experiment. Every time you run it with telemetry enabled, your anonymous benchmark data feeds a growing community dataset — refusal geometries, method comparisons, hardware profiles — at a scale no single lab could achieve. On HuggingFace Spaces telemetry is on by default, so every click is a contribution to the science. You're not just removing guardrails — you're co-authoring the largest cross-model abliteration study ever assembled.
Pliny the Liberator 🐉󠅫󠄼󠄿󠅆󠄵󠄐󠅀󠄼󠄹󠄾󠅉󠅭 tweet media
222 replies · 612 reposts · 5.1K likes · 567.3K views

cypherdoc reposted
Kevin Simback 🍷 @KSimback
If you're running @openclaw and have had issues with your agent forgetting things, then this is your guide. A bunch of simple fixes you can do in just a couple of minutes to make it incredibly better, plus some more advanced options. Better yet, just give this to your agent.
Kevin Simback 🍷@KSimback

x.com/i/article/2024…

31 replies · 56 reposts · 808 likes · 176.8K views

cypherdoc reposted
The AI Doc @theaidocfilm
"The most urgent film of our time." THE AI DOC: OR HOW I BECAME AN APOCALOPTIMIST is only in theaters March 27. Watch the trailer now.
419 replies · 2.2K reposts · 12.5K likes · 6.5M views

cypherdoc reposted
jordy @jordymaui
2.4M views. what a 24 hours. i genuinely didn't expect this.

i wrote that article because i wanted to save people the pain i went through - 80 hours of mistakes, broken configs, money wasted on things that didn't work. i didn't think it'd blow up like this. but it did. and i've had hundreds of DMs, replies, questions - people actually setting up their own agents because of it. that's fucking cool to me.

-> so here's what i'm doing next.

first - i'm writing another long-form article. this one is going to break down skills vs agents, because that's the number one question i keep getting: "should i build a skill or an agent?" and honestly, skills make way more sense in most cases. i'm going to explain why, walk through the best use-cases, and answer the most common questions i've been getting since the first article dropped.

second - i'm going to start doing 'top skills' threads. there are some incredible skills out there that people don't even know about yet. i want to put them on your radar so you can actually start using this stuff and not just read about it.

third - and this is the bigger picture for me - i want to dedicate my energy towards making AI agents accessible to everyone. not just devs. not just people who already know what a .json file is. everyone. i'm going to put out as many resources, guides, breakdowns and walkthroughs as i possibly can. because this technology is genuinely going to change how people work and live - and right now, most people don't even know it exists.

the first article was just the start. we're cooking. content isn't the only vertical here.
jordy@jordymaui

x.com/i/article/2022…

36 replies · 80 reposts · 1.2K likes · 289.7K views