Volodymyr T

271 posts

@v_truba

AI developer

Earth · Joined February 2020
427 Following · 160 Followers
Volodymyr T retweeted
Brian McCrindle
Brian McCrindle@mccrinbc·
Shoutout to the community for making a product on Train at Home -- A whole tool to monitor the entire TAH fleet, alpha flows, and training states! trainathome.ai/en
Brian McCrindle tweet media
1 reply · 7 reposts · 23 likes · 3.2K views
Volodymyr T retweeted
David Duvenaud
David Duvenaud@DavidDuvenaud·
Announcing Talkie: a new, open-weight historical LLM! We trained and finetuned a 13B model on a newly-curated dataset of only pre-1930 data. Try it below! with @AlecRad and @status_effects 🧵
200 replies · 455 reposts · 3.6K likes · 1.4M views
Volodymyr T retweeted
JJ
JJ@JosephJacks_·
I'm excited to share my first book! 📕✨⚛️ ... I hope you learn a lot and enjoy every bit of it. 🥰

The Ghosts in the Crystal: A Brief History of Quasiparticles From Landau to the Modern Zoo, 1933–2026

Part I — The Founding (1929–1950). Why "many bodies" broke physics, Landau's entry, the war years, the refugee diaspora that carried the idea west.

Part II — The Explosion (1945–1970). The basic zoo (phonons, excitons, magnons, plasmons), the road to BCS, superfluid helium and the polariton, the polaron coming of age.

Part III — The Modern Zoo (1961–2026). Topology entering physics (Skyrme → Wilczek → Kitaev → real skyrmions in MnSi). Strange new things (Majorana's 75-year wait, spinons, quasicrystals/phasons, Weyl in TaAs). The zoo that keeps growing (magnetic monopoles in spin ice, fractons, Pines' demon found in 2023, axion quasiparticle in 2025).

Part IV — The Pattern. The three names per discovery. The decades-long gaps between proposal and confirmation. The small tradition — a few hundred people across Moscow, Copenhagen, Bristol, Princeton, Bell Labs, Chernogolovka — who built the whole edifice. Where it goes next: non-abelian anyons, hidden quasiparticles in new spectroscopies, biological matter.

Appendix A — chronological timeline, 60+ events, 1900–2025.
Appendix B — "Twenty at a Glance," compact reference for all 20 quasiparticles from your sheet.

docs.google.com/document/d/1Sl…
JJ tweet media
10 replies · 14 reposts · 94 likes · 13.6K views
Volodymyr T retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
I like blockchain tech quite a bit because it extends open source to open source+state, a genuine/exciting innovation in computing paradigms. I'm just sad and struggle to get over it coming packaged with so much braindead bs (get rich quick pumps/dumps/scams/spams/memes etc.). Ew
269 replies · 637 reposts · 5.6K likes · 0 views
Volodymyr T retweeted
crux
crux@macrocrux·
Feeling charged up after spending all week with the brightest minds in Bittensor. Thank you for bringing me out @DCGco, and to @micaelabazo, @manakoai and @LindsMikeStone for sharing the stage with me.
Micaela@micaelabazo

thank you to @DCGco for hosting an incredible summit and for inviting me to participate in “The Tipping Point: From Experiments to Category Leaders” alongside @tm0klc (@manakoai) and @macrocrux (@MacrocosmosAI), moderated by @LindsMikeStone

once again reminded of the amazing people building this ecosystem and shaping the future of decentralized technology

grateful for the opportunity to share what we’re building @metanova_labs with the power of #bittensor

leaving inspired, with many great new connections

* future is bright *

3 replies · 6 reposts · 75 likes · 1.7K views
Volodymyr T retweeted
ClaudeDevs
ClaudeDevs@ClaudeDevs·
New in Claude Code: /ultrareview (research preview) runs a fleet of bug-hunting agents in the cloud. Findings land in the CLI or Desktop automatically. Run it before merging critical changes—auth, data migrations, etc. Pro and Max users get 3 free reviews through 5/5.
543 replies · 1.2K reposts · 16.7K likes · 2.6M views
Volodymyr T retweeted
Bojan Tunguz
Bojan Tunguz@tunguz·
OpenAI cooked with this one.
Arena.ai@arena

Exciting news - GPT-Image-2 by @OpenAI has claimed the #1 spot across all Image Arena leaderboards! A clean sweep with a record-breaking +242 point lead in Text-to-Image - the largest gap we’ve seen to date. - #1 Text-to-Image (1512), +242 over #2 (Nano-banana-2 with web-search aka gemini-3.1-flash-image) - #1 Single-Image Edit (1513), +125 over #2 (Nano-banana-pro aka gemini-3-pro-image) - #1 Multi-Image Edit (1464), +90 over #2 (Nano-banana-2) No model has dominated Image Arena with margins this wide. Huge congratulations to @OpenAI on this major breakthrough in image generation! More performance breakdowns by category in the thread below.

4 replies · 3 reposts · 69 likes · 4.5K views
Volodymyr T retweeted
Claude
Claude@claudeai·
Introducing Claude Opus 4.7, our most capable Opus model yet. It handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back. You can hand off your hardest work with less supervision.
Claude tweet media
4.8K replies · 10.3K reposts · 81.2K likes · 13.8M views
Volodymyr T retweeted
First Corps Azov of the National Guard of Ukraine
The First Corps Azov of the National Guard of Ukraine maintains control over enemy logistics near Donetsk. Strike UAV pilots are targeting Russian logistics deep in the operational rear. Drone units maintain constant surveillance and fire control over all supply routes around Donetsk. Zuhres, Andriivka, Starobesheve, Horlivka, Lysychansk, and the Donetsk Ring Road — sustained UAV activity along these routes highlights the ineffectiveness of Russian airspace control. Until recently, Russian forces operated there with a sense of impunity. That is no longer the case. Any military target moving along roads around Donetsk will be destroyed. There is no safe rear area for the occupiers. There is nowhere to hide and no way to protect themselves.
36 replies · 358 reposts · 2.3K likes · 285.1K views
Volodymyr T retweeted
Macrocosmos
Macrocosmos@MacrocosmosAI·
Today, we present ResBM (arxiv.org/pdf/2604.11947), a 128x activation compression technique for achieving SOTA training results in low-bandwidth, distributed communication settings for pipeline parallel training across the internet. This technology underpins @IOTA_SN9 - our distributed training platform.
Macrocosmos tweet media
10 replies · 36 reposts · 166 likes · 31.4K views
Volodymyr T retweeted
minotaur
minotaur@minotaursubnet·
Official Minotaur Roadmap Released minotaursubnet.com/roadmap We are proceeding with mainnet testing this week. The roadmap breaks down each phase with expected timelines and covers what we have built so far. It also reflects the recent shift in scope as we expand toward agentic DeFi and more general intent solving. We are still focused on the same core problem, just thinking about it at a larger scale. What started as the concept for the subnet is now taking form as our first app powered by Minotaur, a cross-chain DEX aggregator. Looking forward to seeing this beast come to life.
4 replies · 7 reposts · 28 likes · 3.9K views
Volodymyr T retweeted
송준 Jun Song
송준 Jun Song@jun_song·
We succeeded in a flawless fine-tune of gemma4-26b. - 0/100 refusals, fully uncensored - fixed the model's inherent tool-call/tokenizer issues - ~10% benchmark performance gain over the base model - 10% faster output token speed - ~90% faster prompt processing I believe the model is now complete. Both gguf / mlx versions ⬇️
송준 Jun Song tweet media
113 replies · 253 reposts · 3.8K likes · 269.5K views
Volodymyr T retweeted
Alan Aboudib
Alan Aboudib@alan_aboudib·
We are dropping a 🧨new SOTA in activation compression 🧨 for decentralized pipeline parallelism within a couple of days ✨Stay tuned
Macrocosmos@MacrocosmosAI

Training frontier models over the internet requires new techniques. Today, we present ResBM, a residual encoder-decoder bottleneck architecture that enables 128x activation compression for low-bandwidth distributed pipeline parallel training. Developed for @IOTA_SN9, we show SOTA compression without significant loss in convergence rates, increases in memory, or compute overhead. Expect the full paper release in the next 72 hours.

0 replies · 5 reposts · 28 likes · 1.4K views
Volodymyr T retweeted
Macrocosmos
Macrocosmos@MacrocosmosAI·
Training frontier models over the internet requires new techniques. Today, we present ResBM, a residual encoder-decoder bottleneck architecture that enables 128x activation compression for low-bandwidth distributed pipeline parallel training. Developed for @IOTA_SN9, we show SOTA compression without significant loss in convergence rates, increases in memory, or compute overhead. Expect the full paper release in the next 72 hours.
Macrocosmos tweet media
14 replies · 45 reposts · 212 likes · 48.4K views
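The general idea behind the tweet above — squeezing inter-stage activations through a learned bottleneck so they fit a slow internet link — can be sketched in a toy form. This is a hypothetical, minimal linear encoder/decoder, not the actual ResBM design (which adds residual connections and is trained jointly with the model; see the paper): the sending pipeline stage compresses a 128-dim activation vector down to a single float (128x), and the receiving stage reconstructs an approximation.

```python
import random


def make_matrix(rows, cols, rng):
    """Random weight matrix as a list of rows (toy stand-in for learned weights)."""
    return [[rng.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]


def matvec(m, v):
    """Plain matrix-vector product."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]


class BottleneckCodec:
    """Toy linear encoder/decoder bottleneck.

    encode() runs on the sending pipeline stage and shrinks a dim-sized
    activation to dim // ratio floats; decode() runs on the receiving
    stage and maps it back to dim floats. Only the compressed vector
    crosses the (slow) network link.
    """

    def __init__(self, dim, ratio, seed=0):
        rng = random.Random(seed)
        self.enc = make_matrix(dim // ratio, dim, rng)  # (dim/ratio) x dim
        self.dec = make_matrix(dim, dim // ratio, rng)  # dim x (dim/ratio)

    def encode(self, activation):
        return matvec(self.enc, activation)

    def decode(self, compressed):
        return matvec(self.dec, compressed)


# 128x compression: a 128-dim activation becomes a single float on the wire.
codec = BottleneckCodec(dim=128, ratio=128)
activation = [1.0] * 128
wire_payload = codec.encode(activation)      # what actually gets transmitted
reconstruction = codec.decode(wire_payload)  # rebuilt on the next stage
print(len(activation), len(wire_payload), len(reconstruction))
```

In a real system the encoder/decoder pair would be trained so the reconstruction preserves what downstream layers need; the random weights here only illustrate the data flow and the 128x reduction in bytes sent per activation.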
Volodymyr T retweeted
Felix Quinque
Felix Quinque@Felix_Quinque·
Big new updates for IOTA coming this week, finally making it 'true' p2p and optimizing throughput. The new loss curve (pink) is blowing everything else out of the water. Particularly good news for distributed training on Bittensor after last week.
Felix Quinque tweet media
1 reply · 1 repost · 11 likes · 214 views