Rock

3.4K posts

@RockMan71eth

Head of BD @Nethermind | Can you smell what the Rock is cooking in Stealth?

On-chain · Joined July 2019
2.5K Following · 753 Followers
Rock reposted
Deedy @deedydas:
This Polish theoretical physicist just proved you can recreate all math functions from JUST one operation: E(a, b) = e^a - ln(b). Every single operation (+, -, x, /, trig, log), as you can see below. Extremely mathematically elegant.
[image]
211 replies · 944 reposts · 8K likes · 1.1M views
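The claim in the tweet above can be checked mechanically. Below is a minimal Python sketch (my own construction, not the physicist's derivation) building subtraction, addition, multiplication, and division out of E(a, b) = e^a - ln(b) alone, plus numeric constants. Each derived operation only works on a restricted domain (noted in comments), which hints at why the full result needs more machinery.

```python
import math

def E(a, b):
    """The single operation from the tweet: E(a, b) = e^a - ln(b)."""
    return math.exp(a) - math.log(b)

# Everything below is built from E alone (plus numeric constants).

def exp_(a):
    return E(a, 1.0)               # ln(1) = 0, so E(a, 1) = e^a

def ln_(b):
    return 1.0 - E(0.0, b)         # e^0 = 1, so E(0, b) = 1 - ln(b)

def sub(a, b):
    # a - b = E(ln a, e^b); needs a > 0
    return E(ln_(a), exp_(b))

def add(a, b):
    # a + b = (a - (1 - b)) - (-1), where -1 is obtained as 1 - 2;
    # needs a > 0 and a + b > 1
    return sub(sub(a, sub(1.0, b)), sub(1.0, 2.0))

def mul(a, b):
    # a * b = e^(ln a + ln b); needs a, b > 0
    return exp_(add(ln_(a), ln_(b)))

def div(a, b):
    # a / b = e^(ln a - ln b); needs a > 1, b > 0
    return exp_(sub(ln_(a), ln_(b)))
```

Within those domains the derived functions agree with the usual operations (e.g. `add(2, 3)` returns 5.0, `mul(4, 8)` returns 32.0 up to float rounding); extending them to all reals is where the elegance of the original construction lies.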
Rock reposted
dany @danywander:
me and @claudeai
221 replies · 4.6K reposts · 34.7K likes · 1.7M views
Rock @RockMan71eth:
@0xQuit @coinbase 100%! I found this crazy too, in both Coinbase and Base. I had to go to Etherscan to get the full address.
0 replies · 0 reposts · 0 likes · 89 views
Quit @0xQuit:
It is ABSURD that Coinbase only displays 10 total digits of your deposit address. There's no straightforward way to view the entire address, leaving users vulnerable to clipboard hijacking and lookalike-address attacks. Needs to change @coinbase
[image]
127 replies · 38 reposts · 1.4K likes · 107.6K views
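To see the attack concretely, here is a minimal sketch: `displayed` is a hypothetical stand-in for a wallet UI that shows only the first and last five hex characters, and both addresses below are fabricated for illustration. Two different addresses render identically on screen, which is exactly what a lookalike-address (vanity-grinding) attacker exploits when a UI truncates the address.

```python
def displayed(addr: str, shown: int = 5) -> str:
    """Mimic a UI that shows only the first/last few characters
    of a 42-character 0x-prefixed address."""
    return f"{addr[:2 + shown]}...{addr[-shown:]}"

# Two *different* 40-hex-char addresses sharing the displayed prefix/suffix.
# A real attacker would grind a vanity address matching the victim's
# shown characters; these are made-up values.
real_addr = "0x1a2b3" + "d" * 30 + "9f8e7"
lookalike = "0x1a2b3" + "0" * 30 + "9f8e7"

assert real_addr != lookalike                        # different accounts
assert displayed(real_addr) == displayed(lookalike)  # identical on screen
print(displayed(real_addr))  # 0x1a2b3...9f8e7
```

With only 10 of 40 hex characters visible, matching the display takes roughly 16^10 trials, which is within reach of commodity GPU vanity-address generators.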
Rock reposted
Alejandro Sahuquillo @alsahuquillo:
The Cybercab has a cooling system WAY bigger than expected. @JoeJustice (ex-Tesla, EV racer & agile manufacturing expert): "You'd expect the cooling system to be smaller… but it's much bigger. This is a data center on wheels." Even when not driving passengers, it will be running maximum useful compute. Data center architecture, applied to a car for the first time. 🤯 Thanks to @JoeTegtmeyer for the photo that made this possible 🙏 Full interview dropping soon 👇 #Cybercab #Tesla
64 replies · 150 reposts · 2.1K likes · 127.4K views
Rock reposted
Austin Griffith @austingriffith:
👀 Did you know that sending $100 million across the globe on the most secure blockchain known to man costs about a third of a cent and happens faster than you can read this sentence? 🤷‍♂️ idk that's pretty sweet ⌚️💸🛰️ 🛠️ and you can do so much more than payments...
11 replies · 18 reposts · 181 likes · 6.9K views
Rock @RockMan71eth:
@mattgurbiel Nice. That's my weekend project.
0 replies · 0 reposts · 1 like · 19 views
Matt Gurbiel @mattgurbiel:
I recently switched from a couple-years-old Notion setup for day-to-day notes and thinking to a fully Karpathy-style Second Brain - night and day difference.

I just dump random markdown notes I make during the day, call transcripts, observations, TODOs, and anything I want my second brain to know into a single folder. The AI takes care of organising all of that, making sure it has proper structure. Over time, a wiki is being built about me & my work which builds upon itself.

I have also found this detailed implementation by @FarzaTV very useful - makes the output cleaner: gist.github.com/farzaa/c35ac0c…

The below meme has never been more up to date
[image]
Quoted: Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

3 replies · 0 reposts · 11 likes · 1.6K views
Rock @RockMan71eth:
@binji_x if you want an unlimited source of joy in your life, then yes.
0 replies · 0 reposts · 0 likes · 35 views
binji @binji_x:
guys should i get a dog
63 replies · 2 reposts · 96 likes · 7.1K views
Rock reposted
DuckDegen 🦞 @DuckDegen:
L2s still behave like islands. Trustless movement takes too long, so the ecosystem compensates with bridges, relayers, liquidity networks and other workaround infrastructure. That adds trust assumptions, operational complexity and fragmented liquidity. The real issue is not just latency. It's broken atomicity across the L1<>L2 boundary.
[image]
1 reply · 1 repost · 16 likes · 418 views
Rock reposted
The Ethereum Economic Zone @etheconomiczone:
Welcome to the Ethereum Economic Zone (EEZ), a framework for synchronously composable rollups. What does that mean? One deployment. Shared liquidity. Single transactions across L1 & L2. Identity verified anywhere. Smart wallets connected everywhere. No additional trust assumptions. This means L2s that are as credibly neutral, economically aligned, and publicly governed as the base layer itself. EEZ furthers Ethereum as the leading decentralized economy.
Quoted: The Ethereum Economic Zone @etheconomiczone
x.com/i/article/2038…
207 replies · 270 reposts · 1.1K likes · 351.5K views
Rock @RockMan71eth:
Human voices are fading in the noise of bots. Proof of Human is no longer optional; without it we lose the internet as we know it. We have to accelerate PoH adoption and enroll more people affordably at scale. If only…
Quoted: Alex Blania @alexblania
Agentic capability is improving fast. We believe Proof of Human is becoming critical for the internet and many of the platforms we use (like X). This paper explains why FaceID, face biometrics & government IDs won't solve the problem, and what properties are most important.
0 replies · 1 repost · 10 likes · 398 views
Rock reposted
Google Research @GoogleResearch:
Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI
[GIF]
1K replies · 5.8K reposts · 39.1K likes · 19.3M views
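TurboQuant's actual algorithm isn't described in the tweet, but the basic idea behind KV-cache compression can be illustrated with plain per-channel symmetric int8 quantization. This is a generic baseline, not TurboQuant itself; shapes and function names are illustrative. It shows where the memory saving comes from: int8 storage is 4x smaller than a float32 cache (2x vs float16), at the cost of a small reconstruction error that schemes like TurboQuant work to eliminate.

```python
import numpy as np

def quantize_kv(kv: np.ndarray):
    """Per-channel symmetric int8 quantization of a KV-cache slab.
    kv: float32 array of shape (tokens, channels)."""
    scale = np.abs(kv).max(axis=0) / 127.0        # one scale per channel
    scale = np.where(scale == 0.0, 1.0, scale)    # avoid divide-by-zero
    q = np.clip(np.round(kv / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize_kv(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
kv = rng.standard_normal((1024, 64)).astype(np.float32)  # toy cache slab
q, scale = quantize_kv(kv)

err = np.abs(dequantize_kv(q, scale) - kv).max()
print(q.nbytes, kv.nbytes)  # 65536 262144
```

The maximum error is bounded by half a quantization step per channel; getting beyond ~4x compression with "zero accuracy loss", as the blog claims, requires more than this naive rounding.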
Rock @RockMan71eth:
@mattgurbiel Congrats Matt. Well deserved.
1 reply · 0 reposts · 1 like · 36 views
Matt Gurbiel @mattgurbiel:
I just got promoted to VP of Business Development at RedStone. It's been 4 incredible years and the journey was anything but smooth.

Year 1 - I had no idea what I was doing. I was the first non-technical hire. No clients, no revenue, no playbook. I spent most of my time writing tweets and trying to figure out how to explain what RedStone was to people who had never heard of us. Half the time I wasn't sure I could explain it to myself.

Year 2 - I got humbled. Did hundreds of calls. Most of them ended in "No". I had to learn from the rejections. What are we missing? What did the prospect need that we couldn't see? Those questions changed how we built the product and we finally closed our first clients.

Year 3 - I learned that doing the work and building a team to do the work are completely different skills. Learnt to structure 3 business lines and helped take RedStone from scrappy startup to one of the most renowned oracle providers in the industry.

Year 4 - I realized the playbook needed to change again as we are going upmarket. Learning enterprise sales from scratch. Asking harder questions about how we build a scalable business. My team is 9 people now. I'm sitting on more calls than ever ('be a student of what you sell'), giving direct feedback, holding everyone (including myself) to a higher standard.

I'm proud of what we've achieved so far:
- 200+ B2B clients across 110 blockchains
- $8 billion of user funds secured, in the most important DeFi protocols
- Became primary oracle for BlackRock BUIDL, Apollo ACRED, and HamiltonLane SCOPE via Securitize
- And many other things I can't fit here

A lot has changed over these 4 years, but one thing remained the same: our clients genuinely enjoy working with us and client satisfaction has been high throughout. That's the thing I'm most proud of.

Thank you @MarcinRedStone and @kuba_redstone for the trust! And to my Team: none of these achievements would have been possible without you. This win is yours as much as it's mine.

Year 5 starts in 2 months, and RedStone is becoming the complete data and risk infrastructure layer for DeFi and institutions. We'll be discussing our institutional-grade products at EthCC next week. If you want to know more, come find me.

A bit of 'how it started/how it is going' below:
[image] [image]
71 replies · 1 repost · 197 likes · 11.2K views
Rock @RockMan71eth:
We're all seriously underestimating AI sycophancy and the consequences of it. South Park nailed it again. youtu.be/sDf_TgzrAv8?si…
[YouTube video]
0 replies · 0 reposts · 0 likes · 42 views
Rock @RockMan71eth:
[image]
0 replies · 0 reposts · 0 likes · 30 views
Rock reposted
Obol @Obol_Collective:
Ethereum's execution layer has Geth, Nethermind, Besu, Erigon, etc. The consensus layer has Prysm, Lighthouse, Teku, Lodestar, Nimbus, etc. The DV middleware layer has had exactly one client. Until now. 🧵
5 replies · 8 reposts · 28 likes · 1.7K views