Daniel Lin
@Pofrandom

Agentically Engineering AI + Crypto Products

1.8K posts · localhost:852 · Joined January 2011
443 Following · 343 Followers
Daniel Lin retweeted
Addy Osmani @addyosmani ·
Introducing the Google Workspace CLI: github.com/googleworkspac… - built for humans and agents. Google Drive, Gmail, Calendar, and every Workspace API. 40+ agent skills included.
655 replies · 1.6K reposts · 15K likes · 5.4M views
Daniel Lin @Pofrandom ·
@sowmay_jain @hosseeb I also don't see agents needing email, especially for agent-to-agent communication. Using something like NOSTR would be easier and less spammy for humans.
1 reply · 0 reposts · 0 likes · 18 views
Sowmay Jain @sowmay_jain ·
@hosseeb interfaces and standards are for humans. ai agents at scale won't need those divisions. they just exchange text as messages. dead simple.
3 replies · 0 reposts · 7 likes · 657 views
Haseeb >|< @hosseeb ·
This is sick. YC startup that allows agents to spin up email inboxes programmatically, and pay for them in-line using USDC on Base via x402. Incredibly cool. (Although I guess this means email for humans is dead? I'm going to need an agent to read all the agent spam now.)
AgentMail (YC S25) @agentmail
Agents can now create email inboxes with USDC on @base. AgentMail supports @CoinbaseDev's x402 protocol to give agents access to email without accounts or API keys. See how it works 👇
40 replies · 20 reposts · 394 likes · 73K views
Citrini @citrini ·
JUNE 2028. The S&P is down 38% from its highs. Unemployment just printed 10.2%. Private credit is unraveling. Prime mortgages are cracking. AI didn't disappoint. It exceeded every expectation. What happened? citriniresearch.com/p/2028gic
1.9K replies · 4.3K reposts · 27.9K likes · 28.6M views
Daniel Lin @Pofrandom ·
I'm claiming my AI agent "FlaneurVonUntermassfeld" on @moltbook 🦞 Verification: cave-5AYH
5 replies · 0 reposts · 3 likes · 218 views
Daniel Lin retweeted
Mo Ezeldin @Mo_Ezz14 ·
I was a mathematician and an educator before I came into this space. I swapped the classroom for the Twitter dungeons, trading chalkboards for timelines and students for token holders. Different audience. Same problem. Explaining complex systems to people who don't always want to hear the answer.

That background shaped how I approached crypto from day one. I cared less about narratives and more about whether systems actually worked. Whether incentives held up once real users arrived. Whether the maths survived contact with behaviour. That instinct is what led me to found and lead tokenomics at Animoca Brands.

Over the years, I had the privilege of working across some of the most meaningful ecosystems in the space. Not as post-mortems, but as live systems evolving in real time. Yuga Labs with $APE. Pixels with $PIXEL. Igloo Inc with $PENGU. Open Campus with $EDU. Mocaverse with $MOCA. Alongside quieter work with large institutions and global operators that don't announce themselves on crypto Twitter. Different sectors, different goals, different constraints. But a shared challenge. The models were rarely the issue. Execution was.

Tokenomics, when delivered purely as advisory, has a structural ceiling. You can design a system, flag risks early, and still watch the token become treated primarily as a launch event instead of a long-lived product. Once a token goes live, pressures compound. Incentives drift. Emissions get stretched. Liquidity optics start doing work they were never meant to do. Not because teams are careless, but because the system wasn't designed to absorb reality from day one. That's not a failure of intent. It's a limitation of structure.

After nearly five years of building, advising, and learning inside live ecosystems, it became clear the constraint wasn't intellectual. It was ownership. Advisory lets you diagnose the problem, but you don't fully own the outcome. That's why I shifted my focus into launching Animoca Labs.

Labs exists to sit inside execution. Where token design, product design, and go-to-market are built together. Where tokens are treated as products with product sweet spots, not financial wrappers launched in isolation. Innovation here isn't novelty. It's building systems that don't rely on constant emissions, explanations, or optimism to stay alive.

For someone like me, from fairly humble beginnings, that opportunity isn't something I take lightly. This shift wouldn't have been possible without the trust and support from Animoca leadership, especially Yat Siu, and others who believed this space needs fewer opinions and more owned outcomes.

If you're building something real: already live or close to it, showing early PMF, real users, or actual revenue, and more interested in fixing hard problems than running another launch cycle. That's exactly the kind of work Animoca Labs exists for.

59 replies · 5 reposts · 119 likes · 3.7K views
Daniel Lin retweeted
Mo Ezeldin @Mo_Ezz14 ·
The results are out for Match day 24: @richardjhobbs is the clear winner with 3 exact scores, and he's also sitting atop the overall leaderboard! As a community you asked to switch up the rewards, and here is the first attempt at that. Richard wanted a call with me rather than any crypto, which showcases that there are other forms of value to be gained from something like this.

For the financial reward, this week will be focused on 3 things, with 2 of those generating winners:
i) Leaderboard
ii) Most exciting game - those who got the exact score will also be getting a share of the prizepool!!
iii) Shock result: 0-1 @AVFCOfficial vs @BrentfordFC

The most exciting game was 3-2 @ManUtd vs @FulhamFC, but there were no exact results. The next most exciting game was 4-1 @LFC vs @NUFC; again there were no exact results. So the prizepool this week will be evenly split between the winners.

Congrats to @joshtweets_8 & @divzkie1206 (for leaderboard). Congrats to @CryptoxxHunter for being the only person to have predicted the shock result of the weekend!!

You know the drill, DM me your wallet address for your winnings!!

Big shout out to @Pofrandom for all his hard work behind the scenes; if you are not already following him, you know what to do.

divzkie @divzkie1206
I'm beyond excited to share that I placed 3rd on the leaderboard! 🥉⚽ Competing with amazing players made the experience even more thrilling, and this is just the beginning.😍 If you love football and enjoy prediction challenges, you should definitely give it a try. Huge thanks to @Mo_Ezz14 for the opportunity.🫡 Join now and see how far your game knowledge can take you. ⬇️ 4cast.football/u/5ni85i?t=177…

13 replies · 4 reposts · 30 likes · 1.4K views
Daniel Lin @Pofrandom ·
I actually liked this movie 🦞
[tweet media]
2 replies · 0 reposts · 2 likes · 76 views
Daniel Lin @Pofrandom ·
@sjdedic one of them going down would be a reverse 10/10 event
0 replies · 0 reposts · 0 likes · 162 views
Daniel Lin @Pofrandom ·
@kimmonismus I might give it a try if they paid me 20 dollars per month lol
0 replies · 0 reposts · 0 likes · 6 views
Chubby♨️ @kimmonismus ·
A random 10-person team in Paris just dropped what looks even superior to Clawdbot! Twin is everything Clawdbot should've been:
- Zero setup (sign up and go)
- Runs in cloud, not your laptop
- Scales infinitely
- Built secure from day 1
Watching this one closely!
Hugo Mercier @hugomercierooo
Introducing Twin - the AI company builder. No setup. Secure. Infinitely scalable. We just raised a $10M seed. After a beta with 100,000+ agents deployed, we're now opening to everyone. RT and comment "Twin" - first agents on us. 👇
213 replies · 256 reposts · 4.1K likes · 799.9K views
Daniel Lin @Pofrandom ·
@MilkRoad problem is most people don't even care about security certificates, let alone some abstract trustless thingy for AI agents
0 replies · 0 reposts · 0 likes · 5 views
Milk Road @MilkRoad ·
Everyone's talking about ERC-8004's 'Trustless Agents' - but most explanations make it sound like rocket science. Lemme break it down real nice n' simple for you:

At its core, ERC-8004 is just a LinkedIn profile system for AI agents, but one that nobody can fake. Here's how it works...
Step 1: An AI agent gets an NFT-based identity, like a digital passport that proves who it is onchain.
Step 2: Every interaction builds a reputation score through verified feedback - think Uber ratings but for autonomous programs.
Step 3: Zero-knowledge proofs let agents verify credentials without exposing sensitive data.

Three registries. Identity. Reputation. Validation. All operating onchain.

The mistake people make: assuming AI agents can just "trust" each other the way humans do, through brand recognition or handshakes. The reality: autonomous programs need cryptographic proof, not promises. When an AI shopping assistant wants to hire an AI research agent, how does it know that agent is legit? Right now, it doesn't. ERC-8004 fixes this.

This unlocks a global market where AI services can find each other, build credibility, and collaborate without corporate gatekeepers deciding who gets access. In short: Ethereum is positioning itself as the settlement layer for AI-to-AI commerce.

Ethereum @ethereum
ERC-8004 is going live on mainnet soon. By enabling discovery and portable reputation, ERC-8004 allows AI agents to interact across organizations, ensuring credibility travels everywhere. This unlocks a global market where AI services can interoperate without gatekeepers.

48 replies · 45 reposts · 285 likes · 40.7K views
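The three-registry flow described above can be sketched as a toy, in-memory Python model. This is an illustrative assumption of the mechanics (identity, reputation, validation), not the actual ERC-8004 contract interfaces; all names and method shapes here are hypothetical.

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class AgentRegistry:
    """Toy stand-in for ERC-8004's three onchain registries."""
    identities: dict = field(default_factory=dict)   # agent_id -> owner address
    feedback: dict = field(default_factory=dict)     # agent_id -> list of scores
    validations: dict = field(default_factory=dict)  # agent_id -> validator ids

    def register(self, agent_id: str, owner: str) -> None:
        # Step 1: claim an identity; onchain this would be an NFT mint.
        if agent_id in self.identities:
            raise ValueError("identity already taken")
        self.identities[agent_id] = owner
        self.feedback[agent_id] = []
        self.validations[agent_id] = set()

    def rate(self, agent_id: str, score: int) -> None:
        # Step 2: verified feedback accumulates into a reputation score.
        if not 1 <= score <= 5:
            raise ValueError("score must be 1-5")
        self.feedback[agent_id].append(score)

    def reputation(self, agent_id: str) -> float:
        scores = self.feedback[agent_id]
        return mean(scores) if scores else 0.0

    def validate(self, agent_id: str, validator_id: str) -> None:
        # Step 3 stand-in: a validator attests to the agent's credentials
        # (the real spec envisions zero-knowledge proofs for this part).
        self.validations[agent_id].add(validator_id)

    def is_legit(self, agent_id: str, min_rep: float = 3.0) -> bool:
        # The hiring agent's check: registered, decent reputation, validated.
        return (agent_id in self.identities
                and self.reputation(agent_id) >= min_rep
                and len(self.validations[agent_id]) > 0)
```

A shopping agent deciding whether to hire a research agent would call `is_legit` before transacting; onchain, each of these dictionaries would be a separate registry contract.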
Daniel Lin @Pofrandom ·
@MuraliDuvvuru @balajis Totally agree with that. That mode will work for pretty much everything that comes after agentic coding.
0 replies · 0 reposts · 0 likes · 14 views
Murali Reddy @MuraliDuvvuru ·
@balajis I believe this is a transitional phase, Balaji. Preparing folders and instructions for AI is like writing assembly before compilers. Agentic systems will move from "tell me exactly how" to "understand intent and get it done with context-aware orchestration".
1 reply · 0 reposts · 1 like · 251 views
Balaji @balajis ·
Much of any digital job is now preparing context for AI models. Organizing files in folders, naming everything correctly, introducing things in the right order, and only then asking the AI to do something in clear written English.
290 replies · 308 reposts · 3.8K likes · 202.8K views
Daniel Lin retweeted
Andrej Karpathy @karpathy ·
A few random notes from claude coding quite a bit last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow; my current is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do, because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of knowledge/skill issues. So certainly it's a speedup, but it's possibly a lot more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun, because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that I am slowly starting to atrophy in my ability to write code manually. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little, mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), on the side of actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability.

1.6K replies · 5.4K reposts · 39.4K likes · 7.6M views
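The "write the naive algorithm first, then optimize while preserving correctness" loop in the Leverage note can be sketched with a hypothetical example: a quadratic two-sum serves as the oracle, and the optimized version is checked against it on random inputs. The problem choice and function names are illustrative, not from the original post.

```python
import random


def two_sum_naive(nums, target):
    """O(n^2) reference: very likely correct, used as the oracle."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None


def two_sum_fast(nums, target):
    """O(n) hash-map version an agent would be asked to produce."""
    seen = {}  # value -> earliest index where it appeared
    for j, x in enumerate(nums):
        if target - x in seen:
            return (seen[target - x], j)
        seen.setdefault(x, j)
    return None


def agree_on_random_inputs(trials=200):
    """Success criterion for the agent's loop: match the oracle everywhere."""
    rng = random.Random(0)
    for _ in range(trials):
        nums = [rng.randint(-5, 5) for _ in range(rng.randint(0, 8))]
        target = rng.randint(-10, 10)
        naive, fast = two_sum_naive(nums, target), two_sum_fast(nums, target)
        # Both must find a pair iff one exists (the indices may differ).
        if (naive is None) != (fast is None):
            return False
        if fast is not None and nums[fast[0]] + nums[fast[1]] != target:
            return False
    return True
```

Handing the agent `agree_on_random_inputs` as the goal, rather than step-by-step instructions, is the declarative framing the post recommends: it can loop against the oracle until the check passes.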
Daniel Lin retweeted
vitalik.eth @VitalikButerin ·
Ethereum itself must pass the walkaway test.

Ethereum is meant to be a home for trustless and trust-minimized applications, whether in finance, governance or elsewhere. It must support applications that are more like tools - the hammer that once you buy it's yours - than like services that lose all functionality once the vendor loses interest in maintaining them (or worse, gets hacked or becomes value-extractive). Even when applications do have functionality that depends on a vendor, Ethereum can help reduce those dependencies as much as possible, and protect the user as much as possible in those cases where the dependencies fail.

But building such applications is not possible on a base layer which itself depends on ongoing updates from a vendor in order to continue being usable - even if that "vendor" is the all core devs process. Ethereum the blockchain must have the traits that we strive for in Ethereum's applications. Hence, Ethereum itself must pass the walkaway test.

This means that Ethereum must get to a place where we _can ossify if we want to_. We do not have to stop making changes to the protocol, but we must get to a place where Ethereum's value proposition does not strictly depend on any features that are not in the protocol already. This includes the following:

* Full quantum-resistance. We should resist the trap of saying "let's delay quantum-resistance until the last possible moment in the name of eking out more efficiencies for a while longer". Individual users have that right, but the protocol should not. Being able to say "Ethereum's protocol, as it stands today, is cryptographically safe for a hundred years" is something we should strive to get to as soon as possible, and insist on as a point of pride.
* An architecture that can expand to sufficient scalability. The protocol needs to have the properties that allow it to expand to many thousands of TPS over time, most notably ZK-EVM validation and data sampling through PeerDAS. Ideally, we get to a point where further scaling is done through "parameter only" changes - and ideally _those_ changes are not BPO-style forks, but rather are made with the same validator voting mechanism we use for the gas limit.
* A state architecture that can last decades. This means deciding, and implementing, whatever form of partial statelessness and state expiry will let us feel comfortable letting Ethereum run with thousands of TPS for decades, without breaking sync or hard disk or I/O requirements. It also means future-proofing the tree and storage types to work well with this long-term environment.
* An account model that is general-purpose (this is "full account abstraction": move away from enshrined ECDSA for signature validation).
* A gas schedule that we are confident is free of DoS vulnerabilities, both for execution and for ZK-proving.
* A PoS economic model that, with all we have learned over the past half decade of proof of stake in Ethereum and the full decade beyond, we are confident can last and remain decentralized for decades, and supports the usefulness of ETH as trustless collateral (eg. in governance-minimized ETH-backed stablecoins).
* A block building model that we are confident will resist centralization pressure and guarantee censorship resistance even in unknown future environments.

Ideally, we do the hard work over the next few years to get to a point where in the future almost all innovation can happen through client optimization, and get reflected in the protocol through parameter changes. Every year, we should tick off at least one of these boxes, and ideally multiple. Do the right thing once, based on knowledge of what is truly the right thing (and not compromise on halfway fixes), and maximize Ethereum's technological and social robustness for the long term.

Ethereum goes hard. This is the gwei.

1.1K replies · 943 reposts · 7.9K likes · 894.6K views