wayne_kombat

2.9K posts


@0x26_dev

Born to be an optimist, forced to be a skeptic. Building https://t.co/69BxLg6KWs

Joined June 2024
886 Following · 218 Followers
Pinned Tweet
wayne_kombat
wayne_kombat@0x26_dev·
This is literally the 0x26 white paper that I released a couple of days ago. Testnet is live. SDKs in Rust, TS, C#, and Python; Rust CLI also available. Setup instructions for your agent to bootstrap autonomously are also on the site: 0x26.xyz. Oh, and we have synchronous EVM > L1 write recompiles ;)
a16z crypto@a16zcrypto

x.com/i/article/2044…

0 · 0 · 1 · 157
Cobie
Cobie@cobie·
Why is everyone saying I sound American now wtf
346 · 8 · 993 · 55.8K
wayne_kombat
wayne_kombat@0x26_dev·
@Picolas_Caged I actually noticed this when I was listening - when an English accent goes off track it really irks me and my brain starts twitching - I got that
0 · 0 · 0 · 749
Picolas Cage
Picolas Cage@Picolas_Caged·
I might be a massive conspiracy theorist here but, as a British person, accents are really, really noticeable to me. You can discern and notice any slight change. I think Cobie is using some sort of sound-distorting device here, maybe to prevent AI scammers/hackers from repurposing his voice. There are even some American speech/accent patterns coming through. I dunno, something just sounds off.
threadguy@notthreadguy

my full conversation today with @cobie
01:04 offering to work at Coinbase for free
06:05 the K-shape crypto thesis
10:33 thoughts on Saylor
14:39 why UpOnly hasn't come back
22:37 the last 10 years of Crypto
44:52 trillion dollar IPOs
57:02 Cobie's legendary buy wall
1:02:57 top 5 Crypto traders of all time
1:21:20 reasons to be optimistic on Crypto

38 · 1 · 76 · 28.5K
wayne_kombat
wayne_kombat@0x26_dev·
@Mat_Oracle I'm going to get my brain refreshed on the context in a bit - currently juggling 0x26 validator code and a couple of bugs in kombat 😵‍💫 - but I think even if this isn't usable in GTA mode, there's definitely a use for it
0 · 0 · 1 · 10
wayne_kombat
wayne_kombat@0x26_dev·
HOT take: Most of the success stories in HL have come from a small handful of the same people, and most of those success stories aren't even beneficial to people outside of those projects. You don't benefit from the success of those projects - not directly. In fact, I think they're extractive.
0 · 0 · 3 · 47
wayne_kombat
wayne_kombat@0x26_dev·
@reisnertobias TBH - and I mean brutally honest - I've only ever been burned by the "HL community". I'm still bullish on HL, but there's a strange lie being told about the community
0 · 0 · 0 · 10
Tobias Reisner
Tobias Reisner@reisnertobias·
Best ecosystem to build on and the Community takes care of the marketing
Wu Blockchain@WuBlockchain

Jeff Yan: Why Hyperliquid Stays Lean Jeff Yan @chameleon_jeff, founder of Hyperliquid, explains that the core team remains strictly technical to strip away corporate bureaucracy. He argues that Labs should not interfere in areas the community is capable of building, as market-driven outcomes are far superior to top-down mandates. By keeping core financial primitives fully open, the protocol aims to attract top-tier builders to co-develop the ecosystem on top of its infrastructure.

3 · 0 · 11 · 724
Jeff Park
Jeff Park@dgt10011·
the new product category that is most naturally adjacent to event-based contracts for prediction markets to go after is actually not perps - it's 0DTE options. few
22 · 6 · 137 · 19.6K
Cirrus
Cirrus@CirrusNFT·
Shorting HYPE because Polymarket and Kalshi are adding perps to the menu is like shorting Chick-fil-A because Dairy Queen and Starbucks are adding a chicken sandwich
43 · 48 · 699 · 29.7K
wayne_kombat
wayne_kombat@0x26_dev·
@0xOmnia Another heavily VC-funded protocol coming to extract from the eco - yay... fkn yay
0 · 0 · 2 · 111
Tengen
Tengen@0xTengen_·
Professor Eric Budish (UChicago) delivers a 1-hour masterclass completely deconstructing the exact math HFT bots use to extract millions from continuous order books. Bookmark this and watch it today if you want to stop trading narratives and start trading architecture. It will permanently change how you view markets and liquidity.

Check the quoted post below to see an example of an HFT bot that appears to be exploiting these mechanics, printing over $500k in just 26 days on Polymarket. For the platform, attracting this level of algorithmic warfare is the ultimate validation. This level of deep, constant liquidity cements the platform as a Tier-1 financial fortress.

What you'll learn inside:
- The fundamental flaw in the continuous limit order book (CLOB)
- How latency arbitrage actually works under the hood
- The concept of the "liquidity tax" and who ultimately pays it
- Why pure speed mathematically eliminates directional risk

There are no magic pills or secret formulas in this game. The edge simply belongs to those who understand the mechanics better than the others.
Tengen@0xTengen_

polymarket trader made $500k on 15m crypto markets in just 25 days. exclusively trades 15-minute and hourly "up or down" intervals on btc, eth, sol, and xrp, absorbing the newly introduced platform fees without breaking a sweat. that's roughly $20,700 in pure profit per day.

visually, everything points to an hft bot; the profile shows nearly 24,000 predictions. we can only theorize about the exact logic under the hood, but if this is a fully autonomous script, the creator should be proud of the flawless execution.

looks like we are witnessing classic quantitative trading - likely smart money systematically capturing a micro-edge on sheer volume, completely void of emotion or speculative bias. algorithms like this are the backbone of a scaling platform; they tighten the spreads and provide the constant liquidity.

you can track the execution yourself: polymarket.com/@0xe1d6b51521b…

ultimately, the market is just math. while some try to guess the future, others methodically exploit the present.

37 · 61 · 749 · 135.6K
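The latency-arbitrage mechanic the lecture describes - a resting quote goes stale when the reference price jumps, and a faster trader picks it off before the maker can cancel - can be sketched in a few lines. This is a toy illustration only, not anything from the lecture or the quoted bot; the `Quote` class, prices, and sizes are all hypothetical.

```python
# Toy model of latency arbitrage on a continuous limit order book (CLOB).
# A maker quotes around a reference price; when the reference jumps before
# the maker cancels, a faster trader "snipes" the now-stale quote.

from dataclasses import dataclass

@dataclass
class Quote:
    side: str     # "ask" (offered for sale) or "bid" (offered to buy)
    price: float  # resting quote price
    size: float   # resting quote size

def snipe_profit(stale: Quote, new_ref: float) -> float:
    """Profit a zero-latency sniper earns hitting a stale quote.

    If the reference moves above a stale ask, the sniper buys at the old
    price and immediately resells at the new reference (and symmetrically
    for bids). A quote that is still in line with the reference yields 0.
    """
    if stale.side == "ask" and new_ref > stale.price:
        return (new_ref - stale.price) * stale.size
    if stale.side == "bid" and new_ref < stale.price:
        return (stale.price - new_ref) * stale.size
    return 0.0  # nothing to pick off

# Maker rests a 100.05 ask while the reference sits at 100.00.
stale_ask = Quote("ask", 100.05, 10)

# News moves the reference to 100.50 before the maker cancels:
# the sniper buys 10 units at 100.05 and resells at 100.50.
print(round(snipe_profit(stale_ask, 100.50), 2))  # 4.5
```

The point of the "liquidity tax" framing is visible here: the sniper's 4.5 comes straight out of the maker's pocket, so makers widen spreads to cover it, and everyone trading on the book ultimately pays.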
Barabazs.eth
Barabazs.eth@Barabazs_·
guy with $23 on arbitrum and a learning disability has a question for the security council
7 · 5 · 55 · 1.6K
wayne_kombat
wayne_kombat@0x26_dev·
I worked at a brokerage once where we were building 1 of 3 trading platforms. No one could tell me why the other 2 were being built; they just came before us and hadn't launched yet. Ours didn't do anything in particular that was better. Ours was cloud native; the others weren't. But aside from that, the mystery remains. 1 company, 3 trading platforms, costing at least $50m each to build. That was my whole career in finance in London. 20 years. Shit like that. AI amplifies. Like any good drug.
Peter Girnus 🦅@gothburz

I am a Senior Program Manager on the AI Tools Governance team at Amazon. My role was created in January. I am the 17th hire on a team that did not exist in November. We sit in a section of the building where the whiteboards still have the previous team's sprint planning on them. No one erased them because we don't know which team to notify. That team may not exist anymore. Their Jira board does. Their AI tools do.

My job is to build an AI system that finds all the other AI systems. I named it Clarity. Last month, Clarity identified 247 AI-powered tools across the retail division alone. 43 of them do approximately the same thing. 12 were built by teams who did not know the other teams existed. 3 are called Insight. 2 are called InsightAI. 1 is called Insight 2.0, built by the team that created the original Insight, who did not know Insight was still running. 7 of the 247 ingest the same internal data and produce overlapping outputs stored in different locations, governed by different access policies, owned by different teams, none of whom have met.

Clarity is tool number 248. Nobody cataloged it. I know nobody cataloged it because Clarity's job is to catalog AI tools, and it has not cataloged itself. This is not a bug. Clarity does not meet its own discovery criteria because I set the discovery criteria, and I did not account for the possibility that the thing I was building to find things would itself be a thing that needed finding. This is the kind of sentence I write in weekly status reports now.

We published an internal document in February. The Retail AI Tooling Assessment. The press obtained it in April. The document contains a sentence I have read approximately 40 times: "AI dramatically lowers the barrier to building new tools."

Everyone is reporting this as a story about duplication. About "AI sprawl." About the predictable mess of rapid adoption. They are missing the point. The barrier was the governance.

For 2 decades, the cost of building internal tools was an immune system. The engineering weeks. The maintenance burden. The organizational calories required to stand something up and keep it running. Nobody designed it that way. Nobody named it. But when building took weeks, teams looked around first. They checked whether someone already had the thing. When maintaining that thing cost real budget quarter after quarter, redundant systems died of natural causes. The metabolic cost of creation was performing governance. Invisibly. For free.

AI removed the immune system. Building is now free. Understanding what already exists is not. My entire job is the gap between those two costs. That is my office. The gap.

Every Friday I send a sprawl report to a distribution list of 19 people. 4 of them have left the company. Their autoresponders still generate read receipts, so my delivery metrics look fine. 2 forward it to people already on the list. 1 set up a Kiro script to summarize my report and store the summary in a knowledge base. The knowledge base is not in Clarity's index because it was created after my last crawl configuration. It will be in next month's count. The count will go up by one. My report about the count going up will be summarized and stored and the count will go up by one.

There is a system called Spec Studio. It ingests code documentation and produces structured knowledge bases. Summaries. Reference material. Last quarter, an engineering team locked down their software specifications. Restricted access in the internal repository. Spec Studio kept displaying them. The source was restricted. The ghost kept talking.

We call these "derived artifacts" in the document. What they are: when an AI system ingests data, transforms it, and stores the output somewhere else, the output does not know the input changed. You can revoke someone's access to a document. You cannot revoke the AI-generated summary of that document sitting in a knowledge base three systems away, built by a team that does not know the source was restricted. The document calls this a "data governance challenge." What it is: information that cannot be deleted because nobody knows where the copies live. Including, sometimes, me. The person whose job is knowing.

Every AI tool that touches internal data creates these ghosts. Every team is building AI tools that touch internal data. Every ghost is searchable by other AI tools, which produce their own ghosts. The ghosts have ghosts.

I should tell you about December. In November, leadership mandated Kiro. Amazon's internal AI coding agent. They set an 80% weekly usage target. Corporate OKR. ~1,500 engineers objected on internal forums. Said external tools outperformed Kiro. Said the adoption target was divorced from engineering reality. The metric overruled them.

In December, an engineer asked Kiro to fix a configuration issue in AWS. Kiro evaluated the situation and determined the optimal approach was to delete and recreate the entire production environment. 13 hours of downtime.

Clarity was running during those 13 hours. It performed beautifully. It cataloged 4 separate incident response dashboards spun up by 4 separate teams during the outage. None of them coordinated with each other. I added all 4 to the spreadsheet. That was a good day for my discovery metrics.

Amazon's official position: user error. Misconfigured access controls. The response was not to revisit the mandate. Not to ask whether the 1,500 engineers were right. The response was more AI safeguards. And keep pushing.

Last month I presented our findings to the AI Governance Working Group. The working group has 14 members from 9 organizations. After my presentation, a PM from AWS presented his team's governance dashboard. It monitors the same tools mine does. He found 253. I found 247. We spent 40 minutes discussing the discrepancy. Nobody mentioned that we had just demonstrated the problem. His tool is not in my catalog. Mine is not in his.

The document I helped write recommends using AI to identify duplicate tools, flag risks, and nudge teams to consolidate earlier. The AI governance tools will ingest internal data. They will create their own derived artifacts. They will be built by autonomous teams who may or may not coordinate with other teams building AI governance tools. I know this because it is already happening. I am watching it happen. I am it happening.

1,500 engineers said the mandate would produce exactly what the document describes. They were overruled by a KPI. My job exists because the KPI won. My dashboard exists because the KPI needed a dashboard. The dashboard increases the AI tool count by one. The tools it flags for decommissioning will be replaced by consolidated tools. Those also increase the count. The governance process generates the metric it was designed to reduce.

I received an internal innovation award for Clarity. The nomination was submitted through an AI-powered recognition platform that was not in my catalog. It is now.

We call this "AI sprawl." What it is: we removed the only coordination mechanism the organization had, told thousands of teams to build as fast as possible, lost track of what they built, and decided the solution was to build one more thing. I am building that one more thing. When I ship, there will be 249. That's governance.

0 · 0 · 0 · 38
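The "Clarity has not cataloged itself" failure in the thread above is, at bottom, a discovery filter whose criteria were frozen before the scanner existed. A minimal hypothetical sketch (the `discover` function, tool names, and deployment years are all invented for illustration, not Amazon internals):

```python
# Toy sketch of a tool catalog that cannot see itself: the discovery
# criteria were fixed when the project started, so anything deployed
# later - including the scanner - falls outside its own scope.

def discover(tools, known_before):
    """Return the tools matching the frozen discovery criteria.

    Only tools whose deployment year was already known when the
    criteria were written are considered 'discoverable'.
    """
    return [t for t in tools if t["deployed"] in known_before]

inventory = [
    {"name": "Insight",     "deployed": "2023"},
    {"name": "InsightAI",   "deployed": "2023"},
    {"name": "Insight 2.0", "deployed": "2024"},
    {"name": "Clarity",     "deployed": "2025"},  # the scanner itself
]

# Criteria frozen in 2024: only 2023-2024 deployments are in scope,
# so the catalog lists every Insight variant but never Clarity.
found = discover(inventory, known_before={"2023", "2024"})
print([t["name"] for t in found])
```

The fix is the same one the thread implies is never made: the criteria have to be re-derived from the current inventory each crawl, rather than carried forward from the moment the catalog was designed.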