
Red|Dot 🔴 {🦇🔊}
11.8K posts

Red|Dot 🔴 {🦇🔊}
@Red_dot_name
#EngineeringManager that likes @RTFKT, @lootproject, @treeverse, @mekaverse. A 🦆 on @safe. A 🚀 on @yearnfi & @eulerfinance. A 💰on #ETH.

Today, we're launching the latest @_SEAL_Org initiative, and it's going to change crypto security forever. It's called SEAL-ISAC, and this is why we need it

idk if anyone noticed, but I released a cracked guide the other day in our docs on how to fix your Skill Issues with DuneSQL. TL;DR: Dune uses parquet files. Understanding how parquet files work and writing efficient queries against them will 100% speed up all of your queries.

1⃣ The basics: DuneSQL is a fork of Trino with a few added datatypes and convenience functions for blockchain data. For example, it has native support for INT256 and UINT256, and varbinary data is actually stored as varbinary rather than as strings.

2⃣ Parquet files: parquet files are neither strictly row-oriented nor column-oriented; they use a hybrid layout that lets the engine scan through millions of rows quickly and only read the data it actually needs at that moment into memory. This works because parquet files know things about themselves and their internal segments: there are global and more localized column statistics that contain key info about the data inside each file. These column statistics are the only thing the engine reads at first when it starts scanning e.g. zora.logs. Only if a condition in the query matches a file-level column statistic will the engine actually go down and start reading pages of data. So there is nothing more important for DuneSQL performance than making sure the engine can actually use the column statistics. Unfortunately, blockchain data is full of columns whose values are essentially random (hashes, addresses, bytecode), so their column statistics are useless and filtering on them alone forces a full scan. Avoid using these as the only filter or join condition wherever possible. The positive examples, thankfully really easy to use, are simply block_time, block_date, or block_number.

3⃣ Putting it into practice: instead of only using filter conditions that can't utilize column statistics, add conditions that can (sketched below); oftentimes the simplest year filter is already enough to cut processing time by 80%+. The same principle applies to joins. Follow this basic principle, spend a bit of time understanding how parquet files work, and your DuneSQL performance will improve enormously. (Actually, it's almost never a complaint I hear from users anymore, but lots of queries could be even faster.) There is a lot more meat 🍖🍖🍖 in the actual guide, go and grab it in the dune docs.
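A rough sketch of the filter principle, not taken from the guide itself: the contract address is a made-up placeholder, and contract_address / block_time are assumed to be the usual raw-logs columns on Dune.

-- Slow: an address is effectively random, so its column statistics can't
-- exclude any parquet files and the engine has to scan the whole table.
SELECT *
FROM zora.logs
WHERE contract_address = 0x0000000000000000000000000000000000001234;

-- Faster: the added block_time range can be checked against the file-level
-- min/max statistics, so whole parquet files outside the window are skipped.
SELECT *
FROM zora.logs
WHERE contract_address = 0x0000000000000000000000000000000000001234
  AND block_time >= TIMESTAMP '2024-01-01';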

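And the same idea for a join, again only as a hedged sketch: zora.transactions and the hash / tx_hash / block_number / block_time columns are assumed here for illustration, not taken from the post.

-- Joining only on the (random) transaction hash gives the engine nothing to
-- prune with; adding the block_number equality and a block_time window lets
-- it use min/max statistics on both sides and skip non-overlapping files.
SELECT t.hash, l.topic0
FROM zora.transactions AS t
JOIN zora.logs AS l
  ON l.tx_hash = t.hash
 AND l.block_number = t.block_number
WHERE t.block_time >= TIMESTAMP '2024-01-01'
  AND l.block_time >= TIMESTAMP '2024-01-01';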


@Uniswap I use @ErigonEth and @trueblocks to categorize manual, algorithmic, and arbitrage trading alongside liquidity provision. In contrast to arbitrage trading, algorithmic trading accounts for 8% of price discovery, while manual trading and liquidity provision have no impact. 2/





