Manitcor

17.1K posts

@Manitcor

SAFE Agents that finish what they start. https://t.co/iQY2Rpze6q Reposts ≠ endorsements

Joined April 2010
3.5K Following · 2.2K Followers
Manitcor @Manitcor
@VictorTaelin Testers are reporting to me that they are having the most success with Hermes and qwen3.5:9b on systems with as little as 8 GB of VRAM.
Taelin @VictorTaelin
So what's currently the best OSS coding model that can run on the base Mac Mini, to be used with Pi? Is there anything decent? Is it Qwen3.5-9B?
Manitcor reposted
ᚱoko Network @RokoNetwork
ROKO NETWORK: WHY THE PHYSICS OF TIME IS THE LAST MOAT LEFT IN CRYPTO

We just deleted 6,000 lines of code. Not because we failed. Because we got smarter.

The "Court" reconciliation system we built was technically elegant. It handled edge cases that appear roughly once every several months of network operation. We were engineering for ghosts. So we cut it — replaced the entire apparatus with a five-second inclusion deadline, and the network didn't just survive. It got faster, cleaner, and harder to attack.

This is what real protocol maturity looks like. Not adding complexity. Knowing what to remove.

But the deletion isn't the story. The story is what's underneath it — and why what Roko is building cannot be replicated by any existing L1 or L2 without tearing their architecture down to the studs.

The Problem No One Wants to Name

Ethereum loses over $1 billion per year to MEV. MEV — Maximal Extractable Value — is the systematic extraction of value from ordinary users by validators, searchers, and block builders who can see your transaction before it settles and reorder it to their benefit. Front-running. Sandwich attacks. Liquidation sniping.

This isn't a bug they're fixing. It's a structural property of how blockchains handle time. Every EVM chain treats time as a block property. Your transaction doesn't have a timestamp — your block does. Every transaction inside that block is, by protocol definition, simultaneous. This is a fiction. A convenient lie that makes consensus easier and makes MEV possible.

Roko treats that lie as the problem worth solving.

The Roko Moat Is Physics

Here's what we built: nanosecond-precision timestamps, assigned at the hardware level, consensus-verified across the validator network, and now — accessible directly inside smart contracts through a new pre-compile. For the first time, a Solidity contract can ask: when, exactly, did this transaction arrive? Not the block time. The transaction time. Down to the nanosecond.

This sounds like a small thing. It is not a small thing. It means time-locked auctions that can't be gamed by block reordering. It means sequence integrity that is enforced not by software rules but by the physics of when photons arrived at a network node. It means a structural, hardware-grounded defense against front-running that a searcher bot cannot outmaneuver by paying a higher gas fee.

Why can't Ethereum just copy this? Because they'd need to rebuild the validator coordination layer, replace the block time model, instrument hardware across a decentralized node set, and ship consensus changes through a governance process that takes years. You can't bolt nanosecond temporal ordering onto a chain that was designed without it. The assumption that time is a block property is load-bearing. Removing it requires a new foundation. We didn't add a feature. We built a different substrate.

Agentic OS: The Next Layer

AI agents need infrastructure built for agents, not retrofitted from infrastructure built for humans. Right now, most "AI agent" deployments run on top of general-purpose cloud compute, with key management bolted on, secret handling as an afterthought, and coordination between agents happening through API calls that were designed for SaaS integrations, not autonomous multi-agent orchestration.

Roko is building the OS layer these agents actually need. Model runtimes that are substrate-aware. Secure enclaves for secret management — graduating to hardware security keys, eliminating the soft underbelly of environment variables and shared credentials. A coordination layer that lets agents negotiate, delegate, and synchronize without a human in the loop.

The temporal ordering layer isn't just for DeFi. It's for agents. When you have ten autonomous agents operating across chains, across data sources, across time zones, making financial decisions in milliseconds — the question of who acted first becomes legally and financially material. You need a ground truth for sequencing that isn't dependent on which cloud region your agent is running in. That's what Roko provides. Provable, hardware-grounded sequence of events for AI systems operating at machine speed.

Time as a Service

The MEV protection story is the right story for crypto-native audiences. It's visceral. It's a billion-dollar problem with a name. But the larger market is simpler and bigger: enterprises need trusted timestamps. Compliance systems. Audit trails. Cross-chain settlement. High-frequency data feeds. Every system that needs to answer the question "what happened, and when?" with a result that can survive legal scrutiny.

Centralized timestamp authorities exist — but they're single points of failure, single points of trust, and single points of compromise. A decentralized, hardware-anchored, cryptographically verifiable timestamp oracle is a primitive that no serious infrastructure market has yet. We call it Time as a Service. It sounds boring. It is worth building.

What We're Not Doing

We're not chasing Ethereum's TVL. Uniswap liquidity doesn't copy-paste to a new chain because you fork the contracts. Liquidity follows utility, and utility has to be grown, not inherited. Roko grows through unique capability. Temporal ordering that EVM chains structurally cannot provide. Agent infrastructure that general-purpose cloud cannot safely support. Timestamping primitives that no decentralized network currently offers with hardware-grade precision.

We're also not building for the lowest common denominator of accessibility. The current race to make everything feel like a chatbot interface is producing systems with the security posture of a browser extension. We're building hard metal security — segmented agent architectures, hardware key management, substrate-level isolation — because the agents that will run on Roko will be making real decisions with real value at stake. The veneer approach gets people hurt.

Where We Are

The Court removal is done. The codebase is cleaner. The five-second deadline protocol is live. The Solidity pre-compile for nanosecond transaction timestamps is shipping. Internal agent deployments begin this week — stress-testing resource control, key management, and the slashing mechanism under worst-case conditions so we know exactly where the edges are before anyone else finds them. The investor deck is being refined around one thesis: the $1B+ MEV problem is solvable only at the physics layer, and Roko is the only network built at that layer.

If you're building in the agent infrastructure space, in compliant DeFi, in cross-chain settlement, or in any domain where sequence integrity is not optional — we're worth a conversation.

Time isn't just a feature. It's the foundation.

— Roko Network
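The ordering idea in the post above can be shown in miniature. A hypothetical Python sketch (not Roko's implementation; every name here is invented) of sequencing transactions by a hardware-assigned nanosecond arrival time instead of by gas price:

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Tx:
    # Hypothetical transaction record: the ONLY ordering key is the
    # nanosecond arrival timestamp; sender and gas_price are excluded
    # from comparison on purpose.
    arrival_ns: int
    sender: str = field(compare=False)
    gas_price: int = field(compare=False)

def sequence(mempool):
    """Deterministic ordering by arrival time: a searcher raising
    gas_price cannot move ahead of an earlier-arriving transaction."""
    return sorted(mempool)

txs = [
    Tx(arrival_ns=1_000_000_050, sender="searcher", gas_price=900),
    Tx(arrival_ns=1_000_000_010, sender="user", gas_price=10),
]
ordered = sequence(txs)  # the user's earlier transaction settles first
```

In the usual block-time model the effective sort key is the gas price, which is exactly the lever a front-running bot pays to pull; making arrival time the key removes that lever, assuming the timestamps themselves are trustworthy.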
Manitcor @Manitcor
I think it's a UX issue. Some of us may find this an optimal interface to maximize their cognitive load; others need other systems. The amazing part of where we are is that it's entirely possible to drive toward entirely customized and dynamic UX systems to help enable this. Gamification and other concepts may become far more important in everyday workflows.
Manitcor reposted
Peter Gostev @petergostev
There's worry that people will stop using their brains with LLMs, but managing several AI agent threads in parallel has been some of the most cognitively intensive work I've done in years
Alex the Engineer @AlexEngineerAI
What AI coding tools are you actually shipping with beyond the obvious stack? What's your most underrated daily driver?
Brendan Falk @BrendanFalk
(And if you want to build AI coding agents and think you can crush this interview, please DM me!)
Brendan Falk @BrendanFalk
I believe we've found the best AI-native coding interview. We call it the "Composer 1 interview." Candidates get 1 hour to build a real, medium-sized project live. The only constraint: they have to use Cursor's Composer 1 model.
Manitcor reposted
solst/ICE of Astarte
Remember "zero trust" a decade ago? The idea was: idc if it's coming from the office, we need to authenticate all devices. We're seeing a continuation of that today: are you typing on your keyboard, or via Claude remote, or is it an agent on your machine? Doesn't matter: every action should be monitored at runtime based on what it's doing, the access requested, etc. Evolution: trust inside the perimeter → trust based on device/identity → trust based on each action.
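The per-action model described above can be sketched as a toy policy check. A hypothetical Python sketch (all names and rules invented): the decision function deliberately never looks at where a request came from, only at the identity, the action, and the access it requests.

```python
def authorize(identity, action, resource, requested_access):
    """Per-action policy check. Note what is NOT a parameter: the
    network location or input channel of the caller. Toy rules only."""
    policy = {
        ("agent", "read"): {"docs", "code"},
        ("agent", "write"): {"code"},
        ("human", "read"): {"docs", "code", "secrets"},
    }
    allowed = policy.get((identity, action), set())
    # Grant only if the resource is permitted for this (identity, action)
    # pair and the request asks for no more access than the action itself.
    return resource in allowed and requested_access <= {action}

# The same check runs whether the keystrokes came from a human, a remote
# session, or a local agent: only the action itself is evaluated.
ok = authorize("agent", "write", "code", {"write"})
denied = authorize("agent", "write", "secrets", {"write"})
```

This is the "trust based on each action" end of the evolution: the perimeter and the device identity drop out of the decision entirely.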
Manitcor reposted
Nainsi Dwivedi @NainsiDwiv50980
Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU.

It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at human reading speed.

Here's how it works: every other LLM stores weights in 32-bit or 16-bit floats. BitNet uses 1.58 bits. Weights are ternary: just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for.

The result:
- 100B model runs on a single CPU at 5-7 tokens/second
- 2.37x to 6.17x faster than llama.cpp on x86
- 82% lower energy consumption on x86 CPUs
- 1.37x to 5.07x speedup on ARM (your MacBook)
- Memory drops by 16-32x vs full-precision models

The wildest part: accuracy barely moves. BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality. It's just removing the bloat.

What this actually means:
- Run AI completely offline. Your data never leaves your machine
- Deploy LLMs on phones, IoT devices, edge hardware
- No more cloud API bills for inference
- AI in regions with no reliable internet

The framework supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine. 27.4K GitHub stars. 2.2K forks. Built by Microsoft Research. 100% open source. MIT license.
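The ternary-weight idea can be sketched as absmean quantization, the scheme the BitNet b1.58 papers describe: scale weights by their mean absolute value, then round to {-1, 0, +1}. A rough stdlib-only Python sketch, not Microsoft's implementation:

```python
def absmean_quantize(weights, eps=1e-8):
    """Quantize a list of float weights to ternary {-1, 0, +1} plus one
    float scale: scale = mean(|w|); q = round(w / scale), clipped."""
    scale = sum(abs(w) for w in weights) / len(weights) + eps
    q = [max(-1, min(1, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Reconstruction is integer weights times one shared float scale,
    # which is why inference reduces to adds/subtracts plus one multiply.
    return [v * scale for v in q]

w = [0.9, -0.05, -1.1, 0.4]
q, s = absmean_quantize(w)  # q == [1, 0, -1, 1], s ≈ 0.6125
```

With every weight in {-1, 0, +1}, a dot product needs no multiplications at all: add the activations where the weight is +1, subtract where it is -1, skip where it is 0.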
Manitcor reposted
Sam Altman @sama
I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took. Thank you for getting us to this point.
Manitcor reposted
Jamon @jamonholmgren
To reiterate a few things so they don't get lost:

1. I never want to review another agent-produced plan again. Waste of my time, overwhelming, not worth it. It's valuable *to the agent*, but not to me.
2. I will burn all the tokens, run all the tests, do all the validations to make sure that when the work product lands on my desk, it's as good as the agents can make it. My time and energy is the most important thing here.
3. The feedback loop is critical: I'll work on the process, docs, and specs as much as I need to, in order to reap the benefits in future sessions. No more manual guidance via interactive sessions (with the exception of exploratory hacking).
Jamon @jamonholmgren

My current agentic workflow is about 5x faster, better quality, I understand the system better, and I’m having fun again. My previous workflows have left me exhausted, overwhelmed, and feeling out of touch with the systems I was building. They also degraded quality too much. This is way better. I’m not ready to describe in detail. It’s still evolving a bit. But I’ll give you a high level here. I call this the Night Shift workflow.

Manitcor reposted
dex @dexhorthy
We get stuck every month or two on complex problems, usually around concurrency across multiple services. After solving one manually, I always stash the git sha (before/after) in a running list. We now have some very useful eval sets for when new models come out. Most of them are still unsolvable without hindsight steering from many, many context windows' worth of investigation and reproduction.
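The workflow above (stash before/after shas of solved bugs, replay them as evals against new models) can be sketched minimally. A hypothetical Python sketch with an invented JSONL schema; this is not dex's actual tooling:

```python
import json

def record_eval(repo, name, before_sha, after_sha, path="evals.jsonl"):
    """Append one solved bug: 'before' is the sha that reproduces the
    failure, 'after' is the known-good human fix (hypothetical schema)."""
    entry = {"repo": repo, "name": name,
             "before": before_sha, "after": after_sha}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def load_evals(path="evals.jsonl"):
    """Read the eval set back. A harness could check out each 'before'
    sha, let a new model attempt the fix, and diff against 'after'."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```

The point of keeping both shas is that the "after" commit gives you a ground-truth fix to grade against, without leaking hindsight into the model's context.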
Manitcor reposted
vittorio @IterIntellectus
this is actually insane

> be tech guy in australia
> adopt cancer riddled rescue dog, months to live
> not_going_to_give_you_up.mp4
> pay $3,000 to sequence her tumor DNA
> feed it to ChatGPT and AlphaFold
> zero background in biology
> identify mutated proteins, match them to drug targets
> design a custom mRNA cancer vaccine from scratch
> genomics professor is "gobsmacked" that some puppy lover did this on his own
> need ethics approval to administer it
> red tape takes longer than designing the vaccine
> 3 months, finally approved
> drive 10 hours to get rosie her first injection
> tumor halves
> coat gets glossy again
> dog is alive and happy
> professor: "if we can do this for a dog, why aren't we rolling this out to humans?"

one man with a chatbot and $3,000 just outperformed the entire pharmaceutical discovery pipeline. we are going to cure so many diseases. I don't think people realize how good things are going to get
Séb Krier @sebkrier

This is wild. theaustralian.com.au/business/techn…

Manitcor reposted
ThePrimeagen @ThePrimeagen
it's crazy how many people think that 30 minutes a day isn't enough to learn anything. Who told you this? Unknown knowns are wild. People just ingrain hurdles in their head about the universe because some thought leader spoke into their life in the distant past.
Manitcor @Manitcor
Syntax linting is working as intended; if this failed, none of it would work. Emergent "features" like this are common at the base of a language. Initial hooks are generally meant to be simple and light; adding complexity to remove this edge case is not worth it for anyone. I will also note that there is someone, somewhere, that relies on this as a critical part of their process. Don't ask why; it's usually horrifying.
Mari @Tech_girlll
How does this even make any sense? How did they even think of all these things?
Ankit Jxa @kingofknowwhere
ChatGPT is so bad; how does anyone use it anymore?
1. Too many "follow-up suggestions"
2. Too much hiding of details from the user (I asked it to create a doc for insurance purposes and it kept going around in circles until it figured out it couldn't read a PDF I had uploaded)
3. The internal tooling has become worse??
4. Too much NPS optimisation and "continue talking task" optimisation
Manitcor @Manitcor
@sickdotdev you didn't delete it, you just used a command called delete
Sick @sickdotdev
Interviewer: Why does deleting a 2GB file take 1 second but copying it takes 2 minutes?
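The answer hiding in the interview question above: copying must read and write every byte, while deleting only unlinks a directory entry and updates filesystem metadata. A small self-contained Python demonstration, using a 50 MB file as a stand-in for the 2 GB one:

```python
import os
import shutil
import tempfile
import time

def timed(fn, *args):
    """Return how long fn(*args) takes, in seconds."""
    t0 = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t0

d = tempfile.mkdtemp()
src = os.path.join(d, "big.bin")
with open(src, "wb") as f:
    f.write(os.urandom(50 * 1024 * 1024))  # 50 MB stand-in for the 2 GB file

# Copy reads and writes every byte: O(file size).
copy_secs = timed(shutil.copy, src, os.path.join(d, "copy.bin"))
# Delete is essentially one unlink() on the directory entry: O(1) metadata.
delete_secs = timed(os.remove, src)
```

On copy-on-write filesystems (APFS, Btrfs) a clone can make the copy near-instant as well, because it too only touches metadata until a block is modified.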
BOOTOSHI 👑 @KingBootoshi
HOLY FUK I JUST LEARNED ABOUT TLA+ AND IT'S SO GOOD FOR AGENTIC CODING ur telling ME that i can mathematically fact check every possible scenario of my design STATE to prevent bugs and crashes AND IF IT FINDS SOMETHING THE AGENTS GET INSTANT FEEDBACK AND LOOP FIXING IT TILL ALL POSSIBLE BUGS IN THE DESIGN ARE PATCHED LOL THIS IS OP
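TLA+ itself is a specification language checked by the TLC model checker; as a toy illustration of what exhaustive state-space checking buys you, here is a from-scratch Python sketch (not TLA+, all names invented) that enumerates every reachable state of a two-process lock and asserts mutual exclusion in each one:

```python
# Toy model: two processes, each "idle", "waiting", or "critical",
# plus a shared lock guarding the critical section. We enumerate every
# reachable state and check the safety invariant in all of them -- the
# same idea, in miniature, that TLC applies to a real TLA+ spec.

def steps(state):
    """Yield every successor state of (pcs, lock)."""
    pcs, lock = state
    for i in (0, 1):
        pc = pcs[i]
        if pc == "idle":
            yield (pcs[:i] + ("waiting",) + pcs[i+1:], lock)
        elif pc == "waiting" and not lock:
            yield (pcs[:i] + ("critical",) + pcs[i+1:], True)
        elif pc == "critical":
            yield (pcs[:i] + ("idle",) + pcs[i+1:], False)

def check():
    """Exhaustively explore the state graph; assert mutual exclusion."""
    init = (("idle", "idle"), False)
    seen, frontier = {init}, [init]
    while frontier:
        state = frontier.pop()
        pcs, _ = state
        # Safety invariant: never both processes in the critical section.
        assert pcs != ("critical", "critical")
        for nxt in steps(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

n_states = check()  # every reachable state was visited and checked
```

This is why model checking gives agents such crisp feedback: a violated invariant comes with the exact state (and, in real TLC, the trace) that breaks it, which a fix loop can consume directly.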