kodee

440 posts

@kodeefr

researcher . founder . philosopher

Joined October 2024
60 Following · 44 Followers
Pinned Tweet
kodee @kodeefr ·
my new homepage is different for every visitor, go check it out! kodee.cc
0 · 0 · 0 · 76
kodee @kodeefr ·
@fleshsimulator It's because they display actual ambition and joy in a world that's gray. A modicum of whimsy carries more and more weight in the current world.
0 · 0 · 0 · 16
kodee @kodeefr ·
@LeadingReport The UFO files will literally be propaganda. "It's been us, always have been, we have had tech beyond reality's limits since the '80s", that's gonna be the whole thing.
1 · 0 · 8 · 2K
Leading Report @LeadingReport ·
FBI Director Kash Patel confirms UFO files have been delivered for release.
613 · 967 · 6.9K · 401.5K
xlr8harder @xlr8harder ·
This is terrible @xai. I just spent time and money to migrate to grok 4.1 fast, and you're disabling it with less than two weeks notice, after releasing it in November, with no migration path to a fast/cheap alternative. I will never depend on one of your products again.
xlr8harder tweet media
15 · 2 · 112 · 5.7K
ellington @not_ellington ·
They ran an entire pretrain + finetuning + SoTA architectural research with a team of 4 and 30 mil? Something is fishy. Maybe they're insanely talented, but no one is training an 80% SWE-Bench model on a 30mil pretrain.
ellington tweet media

Alexander Whedon @alex_whedon
Introducing SubQ - a major breakthrough in LLM intelligence. It is the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12 million token context window, which is:
- 52x faster than FlashAttention at 1MM tokens
- Less than 5% the cost of Opus
Transformer-based LLMs waste compute by processing every possible relationship between words (standard attention). Only a small fraction actually matter. @subquadratic finds and focuses only on the ones that do. That's nearly 1,000x less compute and a new way for LLMs to scale.

12 · 1 · 79 · 14.8K
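The quoted pitch describes generic sparse attention: score only a small subset of query-key pairs instead of all of them. As a point of reference for what that idea looks like in code, here is a minimal top-k sparse-attention sketch in Python/NumPy. The function names, the k parameter, and the top-k selection rule are all illustrative assumptions; SubQ's actual "SSA" architecture has no public paper or code at the time of these posts, and a real sub-quadratic method would also have to find the top-k pairs without building the full score matrix.

import numpy as np

def dense_attention(Q, K, V):
    # Standard attention: score every query against every key -> an n x n matrix.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def topk_sparse_attention(Q, K, V, k=8):
    # Keep only the k highest-scoring keys per query and mask out the rest.
    # This sketch still builds the full score matrix, so it is NOT sub-quadratic;
    # it only shows which entries a sparse method would keep.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    kth = np.partition(scores, -k, axis=-1)[:, -k:].min(axis=-1, keepdims=True)
    masked = np.where(scores >= kth, scores, -np.inf)  # drop all but top-k per row
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 64, 32
Q, K, V = rng.standard_normal((3, n, d))
# Compare dense output against the sparse approximation on toy data.
print(np.abs(dense_attention(Q, K, V) - topk_sparse_attention(Q, K, V, k=8)).max())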
Irushi @Im_IrushiK ·
You're sitting next to Elon Musk, what's the first thing you'd ask him?
Irushi tweet media
1.5K · 175 · 1.7K · 206.5K
kodee @kodeefr ·
@xlr8harder @xai I stand corrected, it's available a little longer on openrouter but also departing.
kodee tweet media
0 · 0 · 6 · 94
kodee @kodeefr ·
@xlr8harder @xai Yes, but read the announcement again: it's about retiring it from the "xAI API". They might have contracts with openrouter they need to uphold; otherwise it would have a "departing soon" banner on openrouter. I assume it's similar to how Sonnet 3 is still on openrouter.
1 · 0 · 1 · 108
kodee @kodeefr ·
@McDonaldsJapan She didn't eat it because: IT'S PLASTIC, and so her face stays clean for a pretty shot. Not everything is a conspiracy.
0 · 0 · 0 · 910
kodee @kodeefr ·
@sdmat123 @not_ellington @evermind Most of AI research sadly is smoke and mirrors, and most papers are written by Claude. It's all just a way to milk venture capital while real labs get punished for doing actual research.
1 · 0 · 1 · 81
kodee @kodeefr ·
@SonicHacki i don't wanna know how many goth mommies are millionaires because of elon's autism
0 · 1 · 7 · 1.5K
SonicHacki (ConcordeHacki)
random reminder that this was the reason likes are now private on twitter
SonicHacki (ConcordeHacki) tweet media
64 · 2.6K · 28.5K · 445.5K
kodee retweeted
Madison @madis_sins ·
guide to "peptides" you might see on twitter
Madison tweet media
15 · 67 · 469 · 12K
kodee @kodeefr ·
Colossus Training will begin. 10T Grok 5 🧐

xlr8harder @xlr8harder
This is terrible @xai. I just spent time and money to migrate to grok 4.1 fast, and you're disabling it with less than two weeks notice, after releasing it in November, with no migration path to a fast/cheap alternative. I will never depend on one of your products again.

0 · 0 · 2 · 49
kodee @kodeefr ·
@Hesamation The new era of pump and dump. Congratulations on being on the good side.
0 · 0 · 0 · 113
ℏεsam @Hesamation ·
> 12M context window (read it again)
> 52x faster than FlashAttention
> beats Opus 4.6 on SWE-Bench
> 5% the cost of Opus
BUT WAIT A MINUTE:
> technical blog not technical
> access coming soon
> paper coming soon
> "Built by researchers from Meta, Google, Oxford, Cambridge, BYU" doesn't name a single one of them
if this is not a scam, or the numbers aren't dishonest, it's disgustingly promotional.
ℏεsam tweet media

Alexander Whedon @alex_whedon
Introducing SubQ - a major breakthrough in LLM intelligence. It is the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12 million token context window, which is:
- 52x faster than FlashAttention at 1MM tokens
- Less than 5% the cost of Opus
Transformer-based LLMs waste compute by processing every possible relationship between words (standard attention). Only a small fraction actually matter. @subquadratic finds and focuses only on the ones that do. That's nearly 1,000x less compute and a new way for LLMs to scale.

47 · 36 · 1K · 92.5K
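For context on the "nearly 1,000x less compute" figure being picked apart above, here is a rough sanity check under one assumption, namely that the claim refers only to how many query-key pairs get scored: dense attention over n tokens scores n^2 pairs, so a 1,000x reduction corresponds to keeping roughly 0.1% of them. The snippet only restates the quoted numbers; nothing about SubQ's real implementation is public.

context = 12_000_000           # quoted 12M-token context window
dense_pairs = context ** 2     # pairs scored by standard (quadratic) attention
reduction = 1_000              # quoted "nearly 1,000x less compute"
kept_fraction = 1 / reduction  # fraction of pairs a sparse method would keep
print(f"dense pairs at 12M tokens: {dense_pairs:.2e}")
print(f"pairs kept for a {reduction}x cut: {dense_pairs * kept_fraction:.2e} ({kept_fraction:.1%})")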