Tied Low Ⓜ

19.7K posts

@ttiedloww

🕴🏾

Joined November 2014
1.5K Following · 734 Followers

Pinned Tweet
Tied Low Ⓜ @ttiedloww
ʜᴇ's ᴏᴜᴛ ᴏғ ʜɪs ᴇʟᴇᴍᴇɴᴛ
[image]
0 replies · 0 retweets · 4 likes · 1.2K views

Tied Low Ⓜ retweeted
emir @emirsopranoo
“dad are you in the mafia?”
[image]
11 replies · 350 retweets · 5.1K likes · 62.2K views

Tied Low Ⓜ retweeted
Flappr @flapprdotnet
When you see your homie walk out of his JPMorgan performance review.
167 replies · 2.4K retweets · 30.6K likes · 3.1M views

Tied Low Ⓜ retweeted
trash @trashh_dev
he’s not stupid. anthropic is just down again.
[image]
28 replies · 214 retweets · 3.3K likes · 59.8K views

Tied Low Ⓜ retweeted
CallUnc @Callunc
When you hear "Ion give af about none of that" at a kickback 👀
[image]
144 replies · 6.8K retweets · 47.2K likes · 668K views

Tied Low Ⓜ retweeted
Johann @LookAtMyMeat1
Average day at JP Morgan
[image]
69 replies · 1.3K retweets · 22.7K likes · 446.7K views

Tied Low Ⓜ retweeted
Alps @alpaysh
Day 1 at JP Morgan and I’m excited for this new chapter, grateful for everything and everyone who helped me get here
[image]
2.5K replies · 4.1K retweets · 91.7K likes · 13.5M views

Tied Low Ⓜ retweeted
le.hl @0xleegenz
Boss: "Who can train the new girl?" The dude with a happy wife and 2 kids:
823 replies · 15.2K retweets · 142K likes · 4.3M views

Om Patel @om_patel5

RESEARCHERS JUST BUILT AN AI MODEL TRAINED ONLY ON TEXT FROM BEFORE 1931

it's called talkie. 13 billion parameters, trained exclusively on text published before december 31, 1930. its worldview is completely frozen in time.

the reason this matters: every major AI model today (GPT, claude, gemini, llama) was trained on the modern web. that makes it almost impossible to tell if these models actually reason or if they just memorized the answers from their training data. talkie breaks that completely because it has never seen any modern information.

the crazy part: talkie can learn to write python code from just a few examples you show it in the prompt, despite having ZERO modern code in its training data. it's figuring out programming from 19th century mathematics texts. that's ACTUAL reasoning.

claude sonnet 4.6 was used as the judge in talkie's reinforcement learning pipeline. claude opus 4.6 generated the synthetic conversations used in fine tuning. a modern AI was used to train a model that's supposed to be frozen in 1930. the team already flagged this as a contamination risk they want to eliminate in future versions.

what they're using it to study:
> long range forecasting: how well can a model "predict" the future from a frozen vantage point
> invention: can it develop ideas that didn't exist until after its knowledge cutoff
> LLM identity: what makes a model itself vs what's just patterns absorbed from the web

alec radford built this. the same guy behind GPT, CLIP, and whisper. both models are open source on hugging face. they're already planning a GPT-3 scale vintage model later this year.

an AI that has never seen the modern world can still reason its way to writing code. THAT alone tells you more about intelligence than any benchmark ever will.

[image]
171 replies · 430 retweets · 3K likes · 245.2K views
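
The "learns python from a few examples in the prompt" claim above is describing few-shot (in-context) prompting: worked examples are concatenated into the prompt and the model continues the pattern. A minimal sketch of how such a prompt is assembled; the example tasks and formatting here are hypothetical, not talkie's actual interface.

```python
# Sketch of few-shot prompt construction: a handful of task -> code pairs
# are concatenated, followed by the new task, and the model is asked to
# continue the pattern. No model call here; this only builds the prompt.

def build_few_shot_prompt(examples, query):
    """Concatenate worked examples and a new query into one prompt string."""
    parts = []
    for task, code in examples:
        parts.append(f"Task: {task}\nPython:\n{code}\n")
    # The prompt ends right where the model's completion should begin.
    parts.append(f"Task: {query}\nPython:\n")
    return "\n".join(parts)

# Two worked examples shown in-context (hypothetical tasks).
examples = [
    ("add two numbers", "def add(a, b):\n    return a + b"),
    ("square a number", "def square(x):\n    return x * x"),
]

prompt = build_few_shot_prompt(examples, "double a number")
print(prompt)
```

The point of the experiment is that this pattern-continuation works even when no such code existed in the training corpus, which is why the tweet treats it as evidence of reasoning rather than recall.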