0xfabs

2.5K posts

@0xfabs

doing devrel things @layerzero_core | AI-pilled

0x · Joined May 2022
2.5K Following · 2.3K Followers
Pinned Tweet
0xfabs
0xfabs@0xfabs·
2025: The Year of Agentic DeFi

If you're running a protocol or any kind of DeFi service, here's what you need to start thinking about now to make it 'agent-ready'. I strongly believe the majority of on-chain activity will soon come from AI agents.

> Documentation & API Architecture

Are your docs optimized for agents? Your documentation needs to serve both human and AI audiences.
- Consider using structured data formats like JSON or XML for easier parsing by AI systems.
- Implement OpenAPI/Swagger specifications for all endpoints.
- Structure response data in consistent JSON formats with clear schemas.
- Provide comprehensive error codes and handling documentation.
- Include code examples in multiple languages (Python, TS, Rust, LOLCODE heh).
- Implement the Diátaxis framework to cover every possible step in the developer's journey.

Are your APIs ready for the growing number of 'intelligent' bots? Make sure they are robust, well documented, and designed for high-frequency interactions:
- Consider stateless endpoints that support high-frequency trading.
- Or WebSocket connections for real-time updates.
- Create batch processing endpoints to handle multiple operations.
- Design clear rate-limiting policies that scale with user reputation.

If open source, do you have a well-documented codebase? Ensure your code comments and README files are comprehensive. Give these systems as much context as possible.

> Frontend and Contract Interaction

Does your frontend give enough context for the contracts it uses? Think about how an agent would interpret the reasoning behind decisions or evaluations. Is your frontend accessible programmatically? This might mean providing new APIs or webhooks for agent interaction.

> Smart Contract Design

Are your smart contracts designed with AI interaction in mind? They should be easily interpretable, ideally with clear function names and comments explaining the logic.

> Governance and Decision-Making

Is there a mechanism for AI agents to engage in governance or decision-making processes? Think about AI voting on proposals or influencing policy through data analysis.

> Security Concerns

Have you considered the security implications? AI agents can open new attack vectors. Implement safeguards like the already mentioned rate limiting, or anomaly detection.

Adapting to this new paradigm isn't just about being technologically ready; it's also about rethinking user experience, security protocols, and even the ethics of AI in finance:

> UX for AI

How will your platform's interface support both human and AI decision-making? Perhaps you need interfaces that allow humans to oversee AI actions or intervene when necessary.

> Agent Integration Features

These agents will need their own journey. Consider the following to be ready:
- Provide real-time transaction monitoring endpoints.
- Implement webhooks for important protocol events.
- Create detailed transaction logs with machine-parseable metadata.
- Build analytics endpoints for historical performance data.

The future of DeFi will be shaped by how well these services can integrate with and be managed by AI agents. Don't get left behind once the next iteration of agents is unleashed.

Questions for you:
> How is your project/protocol preparing for this shift?
> What challenges do you foresee in adapting to AI agent dominance in DeFi?

So, what exactly do you need to do? IMO, you should at least:
> Review your documentation for machine readability.
> Consider a workshop or session on AI agents.
> Engage with the community as much as you can.
> Learn the concepts of the different AI agents, as it's very likely they'll be the users of your project/protocol very, very soon.

And last, but not least:
> Create sandbox environments for agent testing. 🤖 Accelerate.
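One of the bullets above, rate-limiting policies that scale with user reputation, can be sketched as a token bucket whose refill rate and burst size grow with a caller's reputation score. This is a minimal illustration, not any particular protocol's implementation; the class name, the 0.0-1.0 reputation scale, and the 5x scaling factor are all invented for the example.

```python
import time


class ReputationRateLimiter:
    """Token bucket whose quota scales with a caller's reputation.

    reputation is assumed to be a float in [0.0, 1.0]; a fully
    reputable caller gets 5x the base rate and burst (illustrative
    numbers only).
    """

    def __init__(self, base_rate=5.0, base_burst=10):
        self.base_rate = base_rate    # tokens/second at reputation 0
        self.base_burst = base_burst  # bucket capacity at reputation 0
        self.buckets = {}             # caller -> (tokens, last_seen_ts)

    def _limits(self, reputation):
        # Linear scaling from 1x (rep 0.0) up to 5x (rep 1.0).
        factor = 1.0 + 4.0 * max(0.0, min(1.0, reputation))
        return self.base_rate * factor, self.base_burst * factor

    def allow(self, caller, reputation, now=None):
        """Return True and consume one token if the call is allowed."""
        now = time.monotonic() if now is None else now
        rate, burst = self._limits(reputation)
        tokens, last = self.buckets.get(caller, (burst, now))
        # Refill proportionally to elapsed time, capped at burst size.
        tokens = min(burst, tokens + (now - last) * rate)
        if tokens >= 1.0:
            self.buckets[caller] = (tokens - 1.0, now)
            return True
        self.buckets[caller] = (tokens, now)
        return False
```

An agent with no track record exhausts its small burst quickly, while a high-reputation agent can sustain far more frequent calls from the same endpoint.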
[GIF]
Replies 12 · Reposts 9 · Likes 82 · Views 15.7K
0xfabs
0xfabs@0xfabs·
gm don't forget to:
[image]
Replies 4 · Reposts 0 · Likes 7 · Views 80
0xfabs
0xfabs@0xfabs·
@sentry I got kicked out while entering the info, now it’s sold out. Meh
Replies 1 · Reposts 0 · Likes 1 · Views 696
Sentry
Sentry@sentry·
WHOOPS! We made too many Moka Pots ☕ If we don’t move them by Friday, we’re legally obligated to open a cafe. Enter JAVA at checkout to claim your FREE gift! We even might throw some free coffee in there, too. If it arrives in holiday wrapping, no it didn’t! checkout.sentry.shop/products/sentr…
[image]
Replies 50 · Reposts 4 · Likes 123 · Views 36K
0xfabs
0xfabs@0xfabs·
Maxing out AI usage limits is the new Pomodoro timer.
Replies 0 · Reposts 0 · Likes 5 · Views 121
持山 | Chishan
持山 | Chishan@chishanAI·
@KaiLentit this but unironically. the people who say please and thank you to their AI will probably be the last ones the robots spare
Replies 1 · Reposts 0 · Likes 1 · Views 152
Kai Lentit (e/xcel)
Kai Lentit (e/xcel)@KaiLentit·
A man’s character is revealed by how he treats AI agents who can’t benefit him.
Replies 28 · Reposts 13 · Likes 229 · Views 13.7K
0xfabs
0xfabs@0xfabs·
@dabit3 for low budget and good quality (best ROI for beginners) I’d recommend the Insta360 Link 2 for $200. Very decent 4K video, good audio (but no comparison to a dedicated mic) insta360.com/product/insta3…
Replies 1 · Reposts 0 · Likes 0 · Views 92
nader dabit
nader dabit@dabit3·
I get asked a lot about my video production stack, so I'm sharing everything here:

Camera
• Camera: Sony Alpha 7 IV full-frame mirrorless interchangeable-lens camera (there is a brand-new version of this, the Sony a7 V, which is probably even better than what I have and I'd recommend it)
• Main lens (for meetings, streaming, videos of myself, etc.): Sony FE 35mm F1.8 large-aperture prime lens (SEL35F18F)
• Wider-angle lens (for outdoors or where a single person is not the focus): Sony SEL28F20 FE 28mm f/2-22 standard prime lens
• Elgato Prompter: a teleprompter with a built-in screen. This has been very helpful when I have a long script and saves me so much time.

Audio
• Microphone (for podcasts, in-the-studio type of things): Shure SM7B vocal dynamic microphone (XLR)
• Audio console (for the above Shure microphone): RODE RODECaster Pro II podcast production console. Other audio consoles will also work, but the above is the one I have and it worked great without a lot of work on my end.

The above microphone is the best, but you need the intake console, and the audio quality is very good; even a lot of famous artists/singers use that microphone. If you want to go one grade below that for a podcast microphone, you can try the Shure MV7 USB, which is a USB-type microphone and is still pretty good quality.

• Microphone setup for travel: I use the RODE Wireless PRO compact wireless microphone system with timecode. This has everything you need to do interviews and things like that with other people and is easy to travel with. The other similar option is the DJI Mic 2, which I hear is also good, but I have the RODE Wireless PRO.

For travel, also consider investing in a very high-quality tripod. You can get a cheap one on Amazon for like $50-$80, but they suck, barely hold up the camera, and always give me problems, so I ended up buying a ~$280 tripod...

Editing software
• Screen Studio for screen recordings
• Camtasia for basic editing and voice/facial recording (learning DaVinci Resolve)
• After Effects for more complex animations/editing

Also, I ordered and tried out ~6 cameras in total over the course of 4-5 months. It was a huge pain in the ass, but the camera + lens combo I ended up with was 100% worth it and I highly recommend it. Really, the camera itself does 90% of the work, almost any lens looks great, and the lighting matters way less when the camera + lens combo is nice.
nader dabit@dabit3

The @figma MCP is insanely good. First time I used it felt like AGI. Feed designs to the MCP, done. It reads frames, components, variables, layout, and generates close to pixel perfect code. I one-shotted my design in this 150 second video with @DevinAI.

Replies 7 · Reposts 6 · Likes 113 · Views 13.4K
Andrej Karpathy
Andrej Karpathy@karpathy·
Bought a new Mac mini to properly tinker with claws over the weekend. The Apple Store person told me they are selling like hotcakes and everyone is confused :)

I'm definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to a 400K-line vibe-coded monster that is being actively attacked at scale is not very appealing at all. Already seeing reports of exposed instances, RCE vulnerabilities, supply chain poisoning, malicious or compromised skills in the registry; it feels like a complete wild west and a security nightmare.

But I do love the concept, and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls, and a kind of persistence to the next level.

Looking around, and given that the high-level idea is clear, there are a lot of smaller Claws starting to pop up. For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4,000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default. I also love their approach to configurability - it's not done via config files, it's done via skills! For example, /add-telegram instructs your AI agent how to modify the actual code to integrate Telegram. I haven't come across this yet and it slightly blew my mind earlier today as a new, AI-enabled approach to preventing config mess and if-then-else monsters. Basically, the implied new meta is to write the most maximally forkable repo and then have skills that fork it into any desired, more exotic configuration. Very cool.

Anyway, there are many others - e.g. nanobot, zeroclaw, ironclaw, picoclaw (lol @ prefixes). There are also cloud-hosted alternatives, but tbh I don't love these because they feel much harder to tinker with. In particular, a local setup allows easy connection to home automation gadgets on the local network. And I don't know, there is something aesthetically pleasing about there being a physical device 'possessed' by a little ghost of a personal digital house elf. Not 100% sure what my setup ends up looking like just yet, but Claws are an awesome, exciting new layer of the AI stack.
Replies 1K · Reposts 1.3K · Likes 17.5K · Views 3.4M
0xfabs
0xfabs@0xfabs·
What Karpathy is describing is what I've been calling a personal operating system for months now. People are underestimating the impact these systems will have on the (digital) economy and self-sovereignty.

It won't take long until even non-technical people can just describe what they want and need to get a working tool helping them in their daily lives. You won't need to spend hundreds of dollars a year across several apps and SaaS tools - tools you don't own, built on terms you didn't set. All you need to break the shackles is a subscription to an AI tool. The rest you build yourself, on your terms.

And for those who want to go further: run it locally. Own the model. Own the data. No subscriptions, no terms of service, no company between you and your tools. That's true self-sovereignty.

The pace is faster than I ever could have imagined, and I am so looking forward to seeing what you'll be able to do in 3 months.
Andrej Karpathy@karpathy

Very interested in what the coming era of highly bespoke software might look like. Example from this morning - I've become a bit loosey-goosey with my cardio recently, so I decided to do a more srs, regimented experiment to try to lower my resting heart rate from 50 -> 45 over an experiment duration of 8 weeks. The primary way to do this is to aspire to a certain sum-total minute goal in Zone 2 cardio and 1 HIIT/week.

1 hour later I vibe coded this super custom dashboard for this very specific experiment that shows me how I'm tracking. Claude had to reverse engineer the Woodway treadmill cloud API to pull raw data, process, filter, debug it, and create a web UI frontend to track the experiment. It wasn't a fully smooth experience and I had to notice and ask to fix bugs, e.g. it screwed up metric vs. imperial units, and it screwed up on the calendar, matching up days to dates, etc. But I still feel like the overall direction is clear:

1) There will never be (and shouldn't be) a specific app on the app store for this kind of thing. I shouldn't have to look for, download, and use some kind of a "Cardio experiment tracker" when this thing is ~300 lines of code that an LLM agent will give you in seconds. The idea of an "app store" of a long tail of discrete apps you choose from feels somehow wrong and outdated when LLM agents can improvise the app on the spot, just for you.

2) Second, the industry has to reconfigure into a set of services of sensors and actuators with agent-native ergonomics. My Woodway treadmill is a sensor - it turns physical state into digital knowledge. It shouldn't maintain some human-readable frontend, and my LLM agent shouldn't have to reverse engineer it; it should be an API/CLI easily usable by my agent. I'm a little bit disappointed (and my timelines are correspondingly slower) with how slowly this progression is happening in the industry overall. 99% of products/services still don't have an AI-native CLI yet. 99% of products/services maintain .html/.css docs like I won't immediately look for how to copy-paste the whole thing to my agent to get something done. They give you a list of instructions on a webpage to open this or that URL and click here or there to do a thing. In 2026. What am I, a computer? You do it. Or have my agent do it.

So anyway, today I am impressed that this random thing took 1 hour (it would have been ~10 hours 2 years ago). But what excites me more is thinking through how this really should have been 1 minute tops. What has to be in place so that it would be 1 minute? So that I could simply say "Hi, can you help me track my cardio over the next 8 weeks", and after a very brief Q&A the app would be up. The AI would already have a lot of personal context, it would gather the extra needed data, it would reference and search related skill libraries, and maintain all my little apps/automations.

TLDR: the "app store" of a set of discrete apps that you choose from is an increasingly outdated concept all by itself. The future is services of AI-native sensors & actuators, orchestrated via LLM glue into highly custom, ephemeral apps. It's just not here yet.
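The "sensor with an agent-native CLI" idea above can be sketched in a few lines: a command that emits one machine-readable JSON document on stdout, with no human-oriented prose to scrape. Everything here is hypothetical, the session data is faked in place of a real device SDK, and the flag names are invented for illustration.

```python
import argparse
import json
import sys

# Stand-in for a device SDK; a real treadmill service would
# supply these readings instead of a hardcoded list.
FAKE_SESSIONS = [
    {"date": "2026-01-05", "zone2_minutes": 42, "avg_hr": 128},
    {"date": "2026-01-06", "zone2_minutes": 35, "avg_hr": 131},
]


def query(since):
    """Return all sessions on or after `since` (ISO date string).

    ISO dates compare correctly as strings, so no date parsing needed.
    """
    return [s for s in FAKE_SESSIONS if s["date"] >= since]


def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Agent-native cardio sensor CLI (illustrative sketch)"
    )
    parser.add_argument("--since", default="1970-01-01",
                        help="only return sessions on/after this ISO date")
    args = parser.parse_args(argv)
    # Machine-readable output only: one JSON document on stdout.
    json.dump({"sessions": query(args.since)}, sys.stdout)


if __name__ == "__main__":
    main()
```

An agent can then run something like `cardio --since 2026-01-05` and parse the JSON directly, instead of reverse engineering a web frontend.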

Replies 2 · Reposts 1 · Likes 6 · Views 1.5K
Middle East News
Middle East News@MiddleEast_Eng·
🚨🇬🇧 British police raided one of Britain's most dangerous men to arrest him. But due to poor planning and their inability to break down the door quickly, he managed to escape.
Replies 192 · Reposts 59 · Likes 1.6K · Views 1.1M
0xfabs
0xfabs@0xfabs·
@bitfalls @VirtuixOmni squeeze in some squats, otherwise it’ll look funny if I think about my death count in ER, lmao
Replies 0 · Reposts 0 · Likes 1 · Views 23
0xfabs
0xfabs@0xfabs·
@Lewsiphur gib, can't wait to snowball $67 to 7figs
Replies 0 · Reposts 0 · Likes 0 · Views 45
Lew
Lew@Lewsiphur·
I need 20-30 people to test out a new product I’ve been working on. Your account will be funded and all the money you make you can keep. I want YOU to hit a 7 figure PNL. Reply if interested, preferably people with trading experience or people that like to test new products.
Replies 696 · Reposts 11 · Likes 746 · Views 60.9K
0xfabs
0xfabs@0xfabs·
@tenitoka_eth your imagination is the limit here, OSS will give us the unified and efficient layer to orchestrate it, until then vibe along
Replies 1 · Reposts 0 · Likes 1 · Views 21
0xfabs
0xfabs@0xfabs·
My OpenClaw agent works on these projects daily. The list is also growing, and it's all interconnected. The apps hook into each other and extend the whole system. I'm now pretty sure we'll see more apps pushed to the stores while people install fewer. I for sure won't install new apps, but instead ask my system to create them for me. I couldn't have imagined something like this six months ago. It's been a rough start, but it's getting better each day. Thanks for building it @steipete
[image]
Replies 2 · Reposts 0 · Likes 3 · Views 293
Lexi🎨
Lexi🎨@lexiweb31·
@0xfabs I’m the only one in ct that has not tried open claw out Ngmi
Replies 1 · Reposts 0 · Likes 0 · Views 40
0xfabs
0xfabs@0xfabs·
@fede_intern 'I have to update me to roboarm 2000, replacing my roboarm 1000'
Replies 0 · Reposts 0 · Likes 0 · Views 60
0xfabs
0xfabs@0xfabs·
One of the biggest unlocks of using OpenClaw is asking it to give you insights about yourself and your workflows. This validates multiple aspects: firstly, it shows whether the agent fully grasped what you expected; secondly, it can give you insights about things you previously haven't been aware of.

Just tell it: 'Give me a brutally honest summary of what you learned about me today: (1) what I expect from you, (2) how I actually work, (3) my blind spots/bottlenecks, and (4) what you'll change next to serve me better.'
[image]
Replies 4 · Reposts 0 · Likes 6 · Views 493