Anvil
974 posts
@anvil_ic
Principal Architect | Neutrinite
Joined November 2021
338 Following · 4.7K Followers

Pinned Tweet
Anvil @anvil_ic
SR9 language walkthrough, literally. All of the code you see took ~4.5 months and was written with @OpenAI (75%) and @claudeai (25%). Music by @suno
ICGold @ICGolddigger
@anvil_ic Hey @anvil_ic I noticed that the burning of cicp has stopped for a couple of days. This has happened before too. Is there any reason for this?
Anvil retweeted

Snassy.icp @SnassyIcp
Several New Sneed Hub Updates Are Live!
💵 Quick Wallet: access your wallet from anywhere via the wallet icon in the header.
🔍 Scan Tokens: scan for tokens with balances and add them to your wallet automatically.
👑 Me Page: show all your hotkeyed SNS neurons directly on your Sneed Me page in one go.
👤 User Page: shows all hotkeyed SNS neurons and scans for token balances for the selected principal.
➡️ Send Tokens Dialog: shows the balance of the recipient address.
🏃 Wallet and Neuron Cache: makes several key pages much snappier.
⭐️ New Front Page Design: shows token and DAO stats as well as the newest items from the Feed and the most recent active auctions from Sneedex Marketplace.
🤖 Rebranding: our "ICP Neuron Manager" canister apps for liquid ICP staking are renamed to "ICP Staking Bots", the first in a family of coming "Bot" personal canister apps that live in your wallet and run tasks for you.
Lots of news. All live, so come check it out right now at app.sneeddao.com
Snassy.icp tweet media
Anvil @anvil_ic
5 years of ICP ledger transactions in 1 minute
Anvil @anvil_ic
A very powerful and little-known Motoko feature.
🙍 User: Are modules in Motoko first-class values?
🤖 ChatGPT: No, Motoko modules are not first-class values.
🚨 Beep, wrong, ChatGPT! That's likely Andreas Rossberg placing some of his alien-tech research into Motoko. The future of this? You can't transfer modules between canisters yet, but that limitation can be lifted with a bit of extra code. That's 🪄🎆🔥
Anvil tweet media
Anvil @anvil_ic
SR9 sneak peek.

The first video introduces the basics of VDD (Verification-Driven Development) and why it's so powerful. Developers define the boundaries and constraints, and AI finds a solution that must respect them. This goes far beyond pure math: your entire network of canisters, their exposed services, and the Internet Computer's capabilities all become part of it.

The second video shows snippets from a fairly complex AMM service that is currently passing verification. Its invariants guarantee that the system is always solvent, regardless of conditions.

What verification-aware languages do is translate your program into mathematics. This is where the difference from TDD (Test-Driven Development) becomes critical. TDD checks whether an algorithm behaves correctly under specific conditions; an AI can still find ways to pass tests while producing incorrect behavior in untested cases. VDD, on the other hand, proves that the program behaves correctly under all conditions. There's no shortcut: the AI either produces a correct solution, or the program simply doesn't verify.

One key realization is that, with only a few services, VDD mostly delivers security benefits. And security alone is hard to sell; few entrepreneurs put it at the top of their priority list. But once a critical mass of verified contracts exists, something more powerful emerges. At that point, it's no longer just about security. It becomes a fundamentally new capability: the ability to build fast, correct, and highly reliable systems on top of a network of public services that behave exactly as specified.

And that unlocks the real breakthrough: giving AI high-level goals and letting it discover solutions that safely compose hundreds of verified third-party services, with confidence that the entire system will behave as expected.
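The TDD-vs-VDD contrast above can be sketched in a few lines. This is a hypothetical toy (a constant-product swap, not Sector9 code): a TDD-style test checks one hand-picked case, while checking the invariants over a whole bounded input domain approximates, by brute force, what a verification-aware language proves symbolically for all inputs.

```python
def swap(reserve_a: int, reserve_b: int, amount_in: int):
    """Constant-product swap; returns (new_reserve_a, new_reserve_b, amount_out)."""
    amount_out = (reserve_b * amount_in) // (reserve_a + amount_in)
    return reserve_a + amount_in, reserve_b - amount_out, amount_out

def invariants_hold(ra: int, rb: int, amount_in: int) -> bool:
    new_ra, new_rb, out = swap(ra, rb, amount_in)
    solvent = 0 <= out <= rb              # pool never pays out more than it holds
    no_leak = new_ra * new_rb >= ra * rb  # constant product never decreases
    return solvent and no_leak

# TDD-style: one specific condition. An AI can overfit to cases like this.
assert invariants_hold(1000, 1000, 100)

# VDD-approximating: every pool state and trade size up to a bound,
# not just the cases a test author happened to think of. A verifier
# does this symbolically for unbounded inputs instead of by enumeration.
assert all(
    invariants_hold(ra, rb, ain)
    for ra in range(1, 40)
    for rb in range(0, 40)
    for ain in range(1, 40)
)
```

The point is the shape of the guarantee: the first assertion says "correct here", the second (and, in a real verifier, a proof) says "correct everywhere".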
Anvil @anvil_ic
@NebulaOnIC True for some cases; it depends. Few projects like that exist, and coding them from scratch took an average of 9 years.
Project Nebula @NebulaOnIC
@anvil_ic If you're doing specialized stuff, the time spent cleaning up the mess AI introduces is much higher than the time you would have spent coding it from scratch. What you describe has been my experience with AI, which is why I'm using it as a complement, e.g. to Stack Overflow, if at all.
Anvil @anvil_ic
It's fair to talk honestly about what AI is good at, and what it isn't.

AI is an incredible tool right now. It's like a supercharged search engine, library, thinker, idea generator, and autocomplete: instant access to papers, theories, algorithms, hypotheses, and knowledge that used to be taught at only a handful of top universities. It can massively accelerate learning. But it can also sound extremely confident while being completely wrong.

As Ilya Sutskever (former OpenAI chief scientist) has said: models generalize ~100× worse than humans, AGI may be physically impossible, and current scaling will stall. LLMs have the information, but often can't use it well. I've yet to see an LLM genuinely combine ideas into something truly new.

Progress is slowing. OpenAI and others haven't meaningfully improved core capabilities in the last ~4 months beyond client optimizations. What Codex 5.2 x-high does, 5.0 Pro could mostly do already, maybe ~15% worse. Growth used to be ~200% year over year; that curve is flattening.

Yes, AI services can power useful apps built by non-experts. But devs and researchers using AI deeply will still be operating orders of magnitude ahead. LLM providers charge these services 4–5× more per token than subscribing end users, likely because those services can switch models or train competitors.

What it actually takes for a dev to build something with AI that they couldn't build before (we tried exactly that): in our latest project, we used Gemini, Claude, and OpenAI (API + TUI) to generate ~300 research docs (≈10 pages each) on the codebase, alternatives, and whitepapers: a 3-month project, after 4 months of learning how to steer the thing. Most had to be carefully read and understood.

LLMs have tunnel vision. Once they head in a wrong direction, even a doomed one, they keep going. Every interaction has a real chance of introducing a lie. If that lie enters the plan, it becomes hundreds of lines of bad code. Miss one cleanup, and the model keeps amplifying the mistake until you have something that's 90% wrong and 10% correct, but sounds amazing.

You end up managing agents that manage your code. You constantly invent new project-specific tricks to keep them on track. We didn't write many lines manually, but we deleted a lot. Cognitive-load requirements went up ~10× (while the agent is writing code, you are reading, researching, and checking whether something wrong slipped in):
• 300 research docs
• 100 todos with ~20 subtasks each; the agent kept running nearly 24/7
• 1,350 tests (prompted, manually verified multiple times)

At this point, Opus 4.5 can't do anything meaningful in our codebase (it could a month ago). Only Codex 5.2 x-high still works, and even then it takes multiple attempts and an entire night of brute-forcing to reach a viable solution. The next LLM upgrade will likely bring ~10% improvement, helpful for a couple of weeks, before the project slows down again. The agent went in the wrong direction ~200 times, overfit tests, hard-coded edge cases, ignored existing abstractions, and forced ~8 refactors of ~30k LOC.

Still: AI + devs is insanely powerful. If you're a dev not using AI, you'd better be exceptional in theory, experience, or academia: the kind of person other AI-using devs come to for advice. I guess when normies start building great apps/protocols fully automated, devs + AI will be building spaceships and time machines.
Anvil @anvil_ic
The reason I said a few months ago that DeFi will be massively better on the IC is Sector9's architecture. Currently, swapping 100 different tokens between canister A and canister B with guarantees requires a custodian canister in the middle. This custodian receives both tokens and performs the swap atomically. On top of that, two ledger canisters are involved. Altogether, this results in roughly 1,200 inter-canister calls. With Sector9, once fully implemented and executed correctly, the same 100-token swap will require a single call, while providing the same, or even stronger, guarantees, along with significantly more flexibility, and without requiring you to develop standards for everything. This new architecture goes far beyond better token handling. It addresses some of the most fundamental problems of the IC, particularly those arising from asynchronicity and the lack of a public single state machine.
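The custodian pattern described above can be sketched as follows. Everything here is illustrative (toy ledgers in Python, not IC canister APIs); each `transfer` stands in for an inter-canister call, and the counter shows why swapping many token pairs multiplies into many calls.

```python
class Ledger:
    """Toy token ledger; each transfer stands in for one inter-canister call."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.calls = 0

    def transfer(self, src: str, dst: str, amount: int) -> None:
        self.calls += 1
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

def custodial_swap(ledger_x, ledger_y, alice, bob, amt_x, amt_y):
    # Both parties deposit with the custodian first. Only once both deposits
    # succeed does the custodian release the opposite legs, so neither side
    # ends up half-swapped. (A real custodian would also refund a lone
    # deposit if the other leg failed; omitted in this sketch.)
    ledger_x.transfer(alice, "custodian", amt_x)
    ledger_y.transfer(bob, "custodian", amt_y)
    ledger_x.transfer("custodian", bob, amt_x)
    ledger_y.transfer("custodian", alice, amt_y)

lx = Ledger({"alice": 100})
ly = Ledger({"bob": 50})
custodial_swap(lx, ly, "alice", "bob", 100, 50)
assert lx.balances["bob"] == 100 and ly.balances["alice"] == 50

# Four ledger calls for ONE token pair, before counting the control calls
# between the trading canisters and the custodian. Scale to 100 pairs and
# the call count explodes; the tweet's point is collapsing this to one call.
assert lx.calls + ly.calls == 4
```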
Anvil @anvil_ic

#ICP was never about the cheap casino-floor hype; it was the raw crypto physics humming under the hood. That's what dragged me in. Four years tangled in IC tech, recently pacing like a half-mad code-sheriff in a '60s noir trench coat, and only now do I see the real beast waking up.

I went bull, bear, then deep into the basement: chain-smoking my way through five straight months of AI experiments, trying to push the machines until they squealed. And somewhere in that hallucinatory jungle of code and sleepless nights… the next logical evolution snapped into place like a loaded magical bazooka.

Maybe the IC founders smuggled alien tech in their pockets. Maybe it's just the natural arc of centuries of human obsession. Doesn't matter. What matters is the secret gear I'm holding now, humming like a reactor in the fog-of-war. And I'll tell you this straight: I've never been more god-tier bullish on IC. This thing is going to tear open the floorboards of the industry. Everyone gets disrupted. ICP → top 3. Sooner than you think.

Immediate fallout:
1. IC DeFi becomes sharp, clean, brutal, and actually works.
2. We need more developers, not less. A whole new wave of them. The frontier is wide open and screaming.

Anvil @anvil_ic
Sector9 | Launching in a few months. Video v0.2 — AI-generated from the whitepaper, using code-to-video rendering. 🎧
Anvil @anvil_ic
One of the problems Sector9 aims to solve for dApps is this: your code can remain closed-source, while your contract/API behavior is public. A verifier checks that the declared behavior is actually proven by the code. Your code itself is never visible during verification, even to us, since it runs inside TEEs with a verified build process governed by a DAO. The result: clients get guarantees without having to see your whole codebase, and LLMs can't get all your know-how for free.
Marc @MarcJSchmidt

All my new code will be closed-source from now on. I've contributed millions of lines of carefully written OSS code over the past decade, spent thousands of hours helping other people. If you want to use my libraries (1M+ downloads/month) in the future, you have to pay.

I made good money funneling people through my OSS and being recognized as an expert in several fields. This was entirely based on HUMANS knowing and seeing me by USING and INTERACTING with my code. No humans will ever read my docs again when coding agents do it in seconds. Nobody will even know it's me who built it.

Look at Tailwind: 75 million downloads/month, more popular than ever, revenue down 80%, docs traffic down 40%, 75% of the engineering team laid off. Someone submitted a PR to add LLM-optimized docs and Wathan had to decline; optimizing for agents accelerates his business's death. He's being asked to build the infrastructure for his own obsolescence.

Two of the most common OSS business models:
- Open Core: give away the library, sell premium once you reach critical mass (Tailwind UI, Prisma Accelerate, Supabase Cloud...)
- Expertise Moat: be THE expert in your library; consulting gigs, speaking, higher salary

Tailwind just proved the first one is dying. Agents bypass the documentation funnel. They don't see your premium tier. Every project relying on docs-to-premium conversion will face the same pressure: Prisma, Drizzle, MikroORM, Strapi, and many more.

The core insight: OSS monetization was always about attention. Human eyeballs on your docs, brand, expertise. That attention has literally moved into attention layers. Your docs trained the models that now make visiting you unnecessary. Human attention paid. Artificial attention doesn't.

Some OSS will keep going: wealthy devs doing it for fun or education. That's not a system, that's charity. Most popular OSS runs on economic incentives. Destroy them, and they stop playing.

Why go closed-source? When the monetization funnel is broken, you move payment to the only point that still exists: access. OSS gave away access hoping to monetize attention downstream. Agents broke downstream. Closed-source gates access directly.

The final irony: OSS trained the models now killing it. We built our own replacement.

My prediction: a new marketplace emerges, built for agents. Want your agent to use Tailwind? Prisma? Pay per access. Libraries become APIs with meters. The old model: free code -> human attention -> monetization. The new model: pay at the gate or your agent doesn't get in.

Anvil @anvil_ic
Example code demonstrating Sector9, derived from Motoko with built-in verification.
Anvil tweet media
Anvil @anvil_ic
What is Sector9? Whitepaper: github.com/Neutrinomic/se… #ICP is top-tier #crypto infrastructure. You can't build something like this anywhere else: not without permission to modify the L1, and not while remaining scalable, fast, and privacy-preserving. Overview (simplified):
Anvil @anvil_ic
@Nichola57012587 @dragginzgame You can run all the "queries" you want; just tell your agent to write the code. SQL DBs are built on the same data structures under the hood. There is practically nothing your agent can do with a SQL DB that it can't do with this one.
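A minimal sketch of the claim above, with made-up table and field names: a SQL engine's secondary index is essentially a sorted list of (key, row-id) pairs, and a range query against it is a pair of binary searches, exactly the kind of code an agent can write directly against a canister DB.

```python
from bisect import bisect_left, bisect_right

# Toy "table" (illustrative rows, not Icydb code).
users = [
    {"id": 1, "name": "ada", "age": 36},
    {"id": 2, "name": "bob", "age": 21},
    {"id": 3, "name": "cyd", "age": 28},
]

# Secondary index on "age": sorted (key, row_id) pairs, the same shape
# a B-tree index flattens to inside a SQL engine.
age_index = sorted((u["age"], u["id"]) for u in users)
by_id = {u["id"]: u for u in users}

def where_age_between(lo: int, hi: int):
    """Equivalent of: SELECT * FROM users WHERE age BETWEEN lo AND hi."""
    start = bisect_left(age_index, (lo, float("-inf")))
    end = bisect_right(age_index, (hi, float("inf")))
    return [by_id[row_id] for _, row_id in age_index[start:end]]

rows = where_age_between(25, 40)
assert [r["name"] for r in rows] == ["cyd", "ada"]
```

The sentinel `float("-inf")`/`float("inf")` bounds make the tuple search inclusive over the key regardless of row ids, mirroring an index range scan.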
Anvil @anvil_ic
A few weeks ago I cloned Icydb by @dragginzgame and opened Claude Code with a simple prompt: "Create an IC canister using this repo. Here's my spec…" (in plain English). In ~5 minutes it generated a production-grade DB: multiple tables, indexes, relationships, plus ~10 API endpoints on top. That's easily a 50× speedup vs starting from raw Rust or Motoko. Highly customizable, clean, max performance.

How does this compare to Caffeine? Icydb is for developers. You manage auth, payments, cycles, canisters, pick components and frontend, and bring your own LLM agent. Errors mean reading docs and debugging yourself. @caffeineai handles all of that for you. It's holistic, novice-friendly, and optimized for fast shipping: a few clicks away from your first dapp. The backend you can build with Icydb today supports deeper, more complex business logic, but with more responsibility. You can still get away without writing a single line of code yourself, but you need to know what is going on.

Sector9 will also target a narrow set of problems it does exceptionally well. If you think of them as vehicles: a commuter car, an airplane, and an F1 car. All move AI forward; none are directly comparable, and each one solves different problems.
Anvil @anvil_ic
@dansickles It's a verification-aware language; it has richer semantics. I'll share more soon.
Dan Sickles @dansickles
How would you characterize the sector9 differences? What are the key problems you ran into with Motoko? Genuinely curious. I spent years working in Scala (also heavily influenced by OCaml) and with the Akka actor system. Motoko was a nice start in those directions but somehow doesn't go far enough.
Anvil @anvil_ic
Motoko is an amazing language. There likely isn't a person on the IC who has written more Motoko than me. Over the years, we've stumbled upon a number of problems, and solving them often required working directly with the AST and hacking beneficial properties together with external scripts. Then AI came along and changed everything.

Dfinity started Motoko San a few years ago and created a prototype with around 2k lines of code and 20 tests: verification for Motoko using Viper. At the time, the community didn't seem interested. After offering it to DeFi projects and things not working out, they announced that they would no longer support or develop it.

We began working on it three months ago as a weekend project, trying to extend it, but it quickly became clear that things were far more complicated than expected. Still, we didn't give up. After nearly 24/7 AI-assisted development, the verification system is now 48k lines of code with 1,200 passing tests, and it's still not ready. The changes we made are so invasive that it's no longer Motoko, so we called our fork Sector9. In the meantime, Motoko moved to Caffeine as well.

Given our experience with ecosystem-level development problems, we figured out how to solve many of them directly inside the language, instead of relying on brittle external tools. Sector9 will be a language using Neutrinite DAO services. Motoko aims to be a simple language; Sector9 does not. While both are designed to work well with AI, Sector9 is built for the most experienced developers. Maybe only five people will be able to use it, but they'll be the ones creating some of the most advanced and secure protocols on the IC.

The Sapir–Whorf hypothesis proposes that the language you use influences how you think and perceive reality. In its strongest form, it claims language determines thought; in its weaker (and more accepted) form, it says language shapes or biases thought. This is especially true for AI. In a world of LLMs (large language models), the language itself becomes a multiplier, and when combined with platform-specific features, even more so.

The value proposition of Sector9 is simple: if you can handle the cognitive load, the value of what you build will be 100×.
Anvil @anvil_ic
@let4be I mean you and your agent both :)
joker.icp @let4be
@anvil_ic You mean if my AI agent can handle the cognitive load? 😉