Eric Lee

75 posts

@leeeric0

Joined July 2016
473 Following · 27 Followers
Eric Lee
Eric Lee@leeeric0·
@JWerner247 Illinois is proof that momentum can stay exactly the same
0 replies · 0 reposts · 0 likes · 100 views
Jeremy Werner
Jeremy Werner@JWerner247·
Purdue, which had lost four of six to end the regular season, is proof the momentum can turn in the postseason. Started with toughness and intensity on defense for the Boilermakers.
21 replies · 3 reposts · 253 likes · 13.4K views
Eric Lee
Eric Lee@leeeric0·
@JustJake Curious how you would respond to their actual grievances, rather than make some vague patronizing statement about how they're ignorant?
0 replies · 0 reposts · 0 likes · 13 views
Jeff Goodman
Jeff Goodman@GoodmanHoops·
Donovan Dent coast to coast with the game-winner. UCLA beats Illinois in overtime.
4 replies · 5 reposts · 89 likes · 40.3K views
Jeremy Werner
Jeremy Werner@JWerner247·
UCLA has made 17 of its last 19 shots. #illini finally cold from three, missing five straight from beyond the arc.
[image]
8 replies · 3 reposts · 40 likes · 6.6K views
Eric Lee
Eric Lee@leeeric0·
@JWerner247 I don't think they've contained any team off the bounce this season
0 replies · 0 reposts · 0 likes · 30 views
Jeremy Werner
Jeremy Werner@JWerner247·
It's a two-possession game for the first time since #illini led 13-10 at the 15:26 mark of the first half. 54-48 with 16:56 left. UCLA has made 14 of its last 16 shots. Illini can't contain them off the bounce. Switching not working.
7 replies · 0 reposts · 12 likes · 3.8K views
Eric Lee
Eric Lee@leeeric0·
@YouJiacheng I use these as interview questions for new grad engineers where incorrect answer = instant rejection
0 replies · 0 reposts · 0 likes · 166 views
You Jiacheng
You Jiacheng@YouJiacheng·
AA-Omniscience's SWE section looks crazy. what are you Measuring???
[image]
7 replies · 0 reposts · 30 likes · 4.3K views
Eric Lee
Eric Lee@leeeric0·
@Teknium I think the paper should have been "Question First Prompting Matches Prompt Repetition With Half the Tokens"
0 replies · 0 reposts · 1 like · 15 views
Teknium (e/λ)
Teknium (e/λ)@Teknium·
Kinda crazy lol
BURKOV@burkov

LLMs process text from left to right — each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the context tokens were generated without any awareness of what question was coming. This asymmetry is a basic structural property of how these models work. The paper asks what happens if you just send the prompt twice in a row, so that every part of the input gets a second pass where it can attend to every other part. The answer is that accuracy goes up across seven different benchmarks and seven different models (from the Gemini, ChatGPT, Claude, and DeepSeek series of LLMs), with no increase in the length of the model's output and no meaningful increase in response time — because processing the input is done in parallel by the hardware anyway. There are no new losses to compute, no finetuning, no clever prompt engineering beyond the repetition itself. The gap between this technique and doing nothing is sometimes small, sometimes large (one model went from 21% to 97% on a task involving finding a name in a list). If you are thinking about how to get better results from these models without paying for longer outputs or slower responses, that's a fairly concrete and low-effort finding. Read with AI tutor: chapterpal.com/s/1b15378b/pro… Get the PDF: arxiv.org/pdf/2512.14982

40 replies · 54 reposts · 3.2K likes · 884.9K views
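The two prompting tricks discussed in this thread — repeating the whole prompt (the paper's technique) versus simply putting the question before the context — can be sketched as plain string builders. This is a minimal illustration, not the paper's code; the function names and the `Question:` label are my own.

```python
def build_repeated_prompt(context: str, question: str, repeats: int = 2) -> str:
    """Send the full prompt multiple times in a row, so every token in a
    later copy can attend to every token of the earlier copies (the
    paper's prompt-repetition trick)."""
    block = f"{context}\n\nQuestion: {question}"
    return "\n\n".join([block] * repeats)


def build_question_first(context: str, question: str) -> str:
    """Place the question before the context, so the context tokens are
    processed already 'aware' of what will be asked -- the alternative
    raised in the replies, at roughly half the input tokens of repetition."""
    return f"Question: {question}\n\n{context}"
```

Either string would then be sent as the user message of a single model call; repetition doubles the input tokens, while question-first keeps the input the same length.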
Eric Lee
Eric Lee@leeeric0·
@paul_cal Also with the wording of a lot of the examples in OpenBook, ARC, MMLUPro, it doesn't even make sense to place the options first
0 replies · 0 reposts · 0 likes · 181 views
Paul Calcraft
Paul Calcraft@paul_cal·
Prompt repetition is way overplayed. If you put the question first, the gains from repetition vanish or reduce dramatically. The biggest gains are on tasks where they didn't report question-first variants... I wonder why.
[images]
BURKOV@burkov
8 replies · 2 reposts · 79 likes · 21.3K views
Gary Parrish
Gary Parrish@GaryParrishCBS·
That was great from Jeremy Fears Jr. Terrific in the game (26 points/15 assists). Then he handled the postgame interview well, all things considered.
13 replies · 5 reposts · 99 likes · 15.2K views
tender
tender@tenderizzation·
not sure if your computer vision expert is an unc? show them this image
[image]
7 replies · 0 reposts · 59 likes · 4.5K views
Eric Lee
Eric Lee@leeeric0·
@OfficialLoganK Funny, I spent the last week building something worth $200M... I'm sure yours is good too though
0 replies · 0 reposts · 0 likes · 7 views
Logan Kilpatrick
Logan Kilpatrick@OfficialLoganK·
I spent the last week building what could easily be a $100M venture backed business… it’s truly wild how much leverage AI gives you.
257 replies · 71 reposts · 2.7K likes · 291.2K views
Eric Lee
Eric Lee@leeeric0·
@arkitus Wow, where can I find this UI?
1 reply · 0 reposts · 1 like · 1.7K views
Ali Eslami
Ali Eslami@arkitus·
Only 29 days after Pro: Gemini 3 Flash is just as smart (1477 LMSYS Elo), 4x cheaper, and sooo much faster! ⚡
38 replies · 102 reposts · 1.8K likes · 196.7K views
Eric Lee
Eric Lee@leeeric0·
@koraykv That's a nice UI, is it available anywhere?
1 reply · 0 reposts · 0 likes · 159 views
koray kavukcuoglu
koray kavukcuoglu@koraykv·
Gemini 3 Flash is here. ⚡⚡⚡ Pro-grade reasoning with Flash-level speed and efficiency. It’s rolling out today globally as the default model on Gemini App and Search AI Mode. Learn more: bit.ly/4pTo5YU
31 replies · 59 reposts · 546 likes · 75.1K views
Find me on bsky @colin-fraser.net
No. If this came out in the pre-AI era, almost no one would notice it. People largely don’t care about visual art for its own sake. The reason this is getting so much attention has almost nothing to do with the work itself. x.com/chatgpt21/stat…
Chris@chatgpt21

Apparently this video has all of X in a frenzy. If it had come out before the AI era, people would be fawning over it as great art, but now they are so clicker trained that any mention of AI sends them into a verbiage frenzy and they anoint anything AI related as slop.

5 replies · 7 reposts · 290 likes · 50.7K views
Find me on bsky @colin-fraser.net
A formative work to my perspective on the AI art debate was a paper a few years ago where they tried to get people to guess whether poems were written by AI and they were worse than random, but when you dig into the data a bit it’s because average people hated the human poems
21 replies · 52 reposts · 1.3K likes · 165.3K views
Adam Zagoria
Adam Zagoria@AdamZagoria·
Kylan Boswell has 17 of Illinois' 40. They'd be getting hammered if not for him.
1 reply · 0 reposts · 0 likes · 3K views
Eric Lee
Eric Lee@leeeric0·
People are missing a lot of great drama by ignoring the talk page of wikipedia articles
0 replies · 0 reposts · 0 likes · 21 views
Tolga Bilge
Tolga Bilge@TolgaBilge_·
You either die a sigmoid, or you live long enough to see yourself become the exponential.
[image]
42 replies · 72 reposts · 980 likes · 73.9K views
ᄂIMIПΛᄂbardo
ᄂIMIПΛᄂbardo@liminal_bardo·
While you're all still arguing about agi, I'm watching the joy on my kids' faces as they burn through my midjourney credits animating their own paintings.
12 replies · 23 reposts · 253 likes · 13K views
Eric Lee
Eric Lee@leeeric0·
the difference between the gpt-oss discourse on this website and localllama is hilarious
0 replies · 0 reposts · 0 likes · 37 views