Yuriy Dybskiy

10.3K posts


@html5cat

building https://t.co/GonoWc90V9 (@Puma_ai + Puma Browser + PumaClaw) prev. dev rel at Meta (Parse), @Meteorjs (YC S11), Cloudant (S08) 🇺🇦↠🇯🇵↠🇨🇦↠🇺🇸 🌁🎾📷

San Francisco · Joined July 2010
4.9K Following · 5.8K Followers
Pinned Tweet
Yuriy Dybskiy @html5cat
Was looking for a designer to make a new website for @puma_ai and decided to try the latest vibecoding options, so I played around with @antigravity and Codex from @OpenAI. Guess which one is which:
[two images attached]
18 · 2 · 30 · 13.6K
Yuriy Dybskiy @html5cat
@joshu my algo is: 1. Start looking for a spot 2-3 blocks out and park if found. 2. Get to the restaurant and start spiraling out.
0 · 0 · 0 · 3
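Yuriy's two-phase heuristic above can be sketched as a quick Monte Carlo simulation. This is only a sketch: the uniform per-block probability `p_open`, the function name, and the walking-distance cost are my assumptions, not anything stated in the thread.

```python
import random

def simulate_parking(start_blocks=3, p_open=0.3, trials=10_000, seed=42):
    """Simulate the 'start looking 2-3 blocks out' heuristic:
    drive toward the restaurant, taking the first open spot seen from
    `start_blocks` away; if nothing opens by block 0, spiral back
    outward one block at a time until a spot appears. Returns the
    mean walking distance (in blocks) to the restaurant."""
    rng = random.Random(seed)
    total_walk = 0
    for _ in range(trials):
        parked_at = None
        # Phase 1: approach, checking each block from start_blocks down to 0.
        for d in range(start_blocks, -1, -1):
            if rng.random() < p_open:
                parked_at = d
                break
        # Phase 2: no luck near the door, so spiral outward until a spot opens.
        if parked_at is None:
            d = 1
            while parked_at is None:
                if rng.random() < p_open:
                    parked_at = d
                else:
                    d += 1
        total_walk += parked_at
    return total_walk / trials
```

Lowering `start_blocks` trades a shorter expected walk against a higher chance of falling into the phase-2 spiral, which is the tension joshu's question is really about.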
joshua schachter @joshu
what's the optimal strategy for parking at a restaurant when the parking spot availability is unknown?
4 · 0 · 0 · 144
Thariq @trq212
To manage growing demand for Claude we're adjusting our 5 hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged. During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll move through your 5-hour session limits faster than before.
993 · 243 · 3.7K · 1.3M
Andrew Jeffery @credealjunkie
Think you know all 117 official San Francisco neighborhoods? I sure didn’t. Here are some new ones for me: Mint Hill, Bret Harte, Cayuga, Fairmount, Little Hollywood, Cathedral Hill
[image attached]
19 · 6 · 141 · 14K
Miguel Carranza @elwatto
@html5cat currently novablast 5. thinking of getting megablasts and superblast 3 in a couple of weeks in japan
1 · 0 · 1 · 556
Miguel Carranza @elwatto
one year of consistent running, starting as a fully sedentary dad. First time ever hitting ‘high’ VO2 max. It works.
[two images attached]
20 · 0 · 158 · 13.8K
Hubert Thieblot @hthieblot
Only incredible founders can reply to this tweet
421 · 1 · 477 · 39.7K
Kyle Samani @KyleSamani
Supposedly 100M x402 transactions for $30M in payments volume. Where can I try this myself?
Nina Bambysheva @ninabambysheva

Crypto’s perfect customer has finally arrived. I spoke with @matthuang, @hosseeb, @jessepollak, @programmer, @_rishinsharma, @joechalom, @OnchainLu and a few other teams and payments experts to unpack how crypto is repositioning for the agentic age, what it will take to win agentic commerce and why this matters beyond payments. forbes.com/sites/ninabamb…

14 · 2 · 49 · 25.9K
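A quick sanity check on the figures Kyle quotes, assuming the "100M transactions" and "$30M volume" numbers cover the same period (the tweet doesn't say):

```python
# Back-of-envelope check on the quoted x402 figures.
# Assumption: both totals span the same time window.
transactions = 100_000_000
volume_usd = 30_000_000

avg_payment = volume_usd / transactions  # implied average payment size, USD
print(avg_payment)
```

An average of about $0.30 per transaction, i.e. the volume is consistent with micropayment-scale, agent-driven purchases rather than ordinary card-sized payments.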
Wes Bos @wesbos
Only cool people can reply to this
685 · 3 · 690 · 107.6K
Mike Knoop @mikeknoop
ARC-AGI-3 and ARC Prize 2026 are now live with $2,000,000 in prizes! As of today, Version 3 is the world's only unsaturated agentic intelligence benchmark. Humans score 100% and frontier AI scores ~0%. Play here: arcprize.org/arc-agi/3

While no single version of ARC is definitionally AGI, our aim with the ARC-AGI Series is to continually produce useful scientific benchmarks that identify large remaining gaps between humans and frontier AI. At some point, we'll be unable to, and then we'll have AGI.

Our new benchmark consists of over 100 novel game environments encompassing nearly 1,000 levels. Notably, test takers are given no explicit goals (other than to win) and must explore the environments to acquire goals, understand rules, develop strategy, and ultimately execute a plan to win.

ARC-AGI-3 is a test of agentic intelligence. Beating this benchmark requires on-the-fly world modeling and continual learning to adapt to evolving environments. To score 100%, AI must beat all of the games as efficiently as the human baseline (e.g., the number of actions taken to win). An ARC first, this gives us a formal comparison of AI reasoning efficiency vs. humans.

Version 3 carries classic ARC design principles: core knowledge priors only, private test sets to measure generalization, and it's fun! Every benchmark we release is an experiment, and I believe this new version will provide strong signal towards increasingly autonomous AI agents.

Prior versions of ARC held strong predictive power for important AI moments. Version 1 only saw progress with the release of AI reasoning models in late 2024, and Version 2 only began seeing progress with the advent of agentic coding models in late 2025. Version 3 is expected to signal when AI agents can become economically useful in more open-ended domains (beyond highly measurable domains like coding and math).

There are a few other important design changes for ARC-AGI-3. The public set is now a "demonstration" set, not a training set. And unlike prior versions, the private set is now explicitly designed to be out-of-distribution (non-IID) from the public demo set. This is to mitigate targeting, and because LLMs can now generalize over IID splits using AI reasoning.

Frontier models have made great progress over the past year. So much that several industry leaders have suggested we may already have AGI. Part of the ARC Prize Foundation mission is to provide accurate public sensemaking, and we strive to reduce false-positive claims. To this end, we've updated our testing policy. Going forward, we will only verify scores outside of the official Kaggle competitions from AI systems that have high commercial usage or are 100% open source. We're also adopting a stateless client scoring philosophy to ensure humans and AI are tested under identical conditions. The goal of these changes is to reduce the amount of developer-aware targeting (whether incidental or intentional) and provide clear signal if actual AGI progress has occurred.

The Foundation also has a goal to inspire AI innovation, which is most likely to come from the community. We've seen dozens of startups using ARC as a tool for showcasing their ideas; a few have fundraised serious capital based on their ARC results. To support this, we're launching a new Community leaderboard. While scores for this leaderboard can't be Verified, and you should explicitly not trust these scores as an accurate measure of AGI progress, we will curate the best ideas and promote them. This year I expect we will see rapid progress on the ARC-AGI-3 Community leaderboard, and the best ideas will eventually migrate into frontier models and onto the Verified leaderboard.

Finally, we've partnered again with Kaggle to run two competition tracks, for ARC-AGI-2 and ARC-AGI-3. This will be the last year for Version 2. When we launched the first ARC Prize back in 2024, I committed to running the Grand Prize until it was beaten. So for the ARC-AGI-2 track, we will be paying out the Grand Prize to the best team, no matter what, in order to honor this commitment. In accordance with the Foundation mission, to win any prize money you must open-source a reproducible solution. We raised the standard for open source to include training. I'm excited to produce a truly open solution as a final send-off for the ARC-AGI-1 and 2 format.

Focus is now on ARC-AGI-3 (we've even started work on Versions 4 and 5). As always, I'm honored to have the opportunity to steward attention towards AGI progress. I'm also super grateful to the incredible ARC Prize team, including our core engineers, game designers, and human testers, led by @GregKamradt, without whom we would not have this incredibly useful benchmark. See you on the leaderboard!
10 · 21 · 143 · 13.9K
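The scoring rule the announcement describes (to score 100%, beat every game at least as efficiently as the human baseline in actions taken) could look something like the sketch below. The function name, data shapes, and the min-capped ratio are my assumptions, not ARC Prize's published formula.

```python
def efficiency_score(results, human_baseline):
    """Hypothetical efficiency-normalized benchmark score.

    `results` maps game id -> actions the agent took to win
    (None if the agent never won the game).
    `human_baseline` maps game id -> actions in the human baseline.
    A solved game earns min(1, human_actions / agent_actions);
    an unsolved game earns 0; the score is the mean over all games.
    """
    scores = []
    for game, human_actions in human_baseline.items():
        agent_actions = results.get(game)
        if agent_actions is None:
            scores.append(0.0)  # never won: no credit
        else:
            # Capped at 1 so beating the human baseline can't exceed 100%.
            scores.append(min(1.0, human_actions / agent_actions))
    return sum(scores) / len(scores)
```

Under this sketch, 100% requires both solving every game and matching (or beating) human action-efficiency on each one, which is why current frontier AI sits near 0% while humans score 100%.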
Yuriy Dybskiy @html5cat
@harris # of Series As you helped raise. Don't think Bridgewater would allow a graph with a non-zero starting point for the Y axis tho
1 · 0 · 0 · 191
Aaron Harris @harris
Here's a fun game we used to play at Bridgewater. Guess the chart. Winner gets...satisfaction.
[image attached]
9 · 0 · 3 · 2.7K