okay (they/them) 🐀🇵🇸

11.5K posts

@OkayHughes

diet clever, sugar-cane sweet. they/them. 🏳️‍⚧️ climate+math. Ph.D student in climate. abolish the prison industrial complex. spicier acct: @kayohughes

Joined June 2012
695 Following · 492 Followers
okay (they/them) 🐀🇵🇸 retweeted
Hereward the Woke@BamaExpat·
Midwestern Nice derives from centuries of Northern European peasants developing helpful relationships with everyone in the village so they don’t all freeze to death. Southern politesse is about not accidentally starting a blood feud or a duel.
Nathan White@NPWhite717

Midwesterners are nice. Southerners are polite and may or may not be nice. Many of the most polite southerners I know are decidedly not nice.

okay (they/them) 🐀🇵🇸
@lovealicents sorry but a two salary household with parents in specialty medical fields in pittsburgh? they live in a literal mansion in squirrel hill, no question about it
2026 year of santos@lovealicents·
i always forget that javadi is like. RICH rich
okay (they/them) 🐀🇵🇸
Among the tech-literate skeptics, you’ll notice that people are converging around “these models are good enough that they’re going to make life worse for everyone who isn’t already rich”. If they do cure cancer, you won’t be able to afford the cure, because you won’t have a job
Séb Krier@sebkrier

I occasionally have my doubts about the Bay Area flavoured monoculture of AI hyper-bullishness, but occasionally I look at what the smarmy skeptics are offering and remind myself the alternative is even bleaker. All the confidence, none of the imagination.

okay (they/them) 🐀🇵🇸
@littmath LLMs have a lossy encyclopedic knowledge of existing mathematical methods and the ability to verify potential strategies very quickly. But if you actually test their pseudo-understanding the way you would a human’s, you find that there is nothing approximating human understanding there
Daniel Litt@littmath·
Given what current-gen LLMs (say, in math, but whatever) can do, I think their apparent limitations are kind of mysterious. What is the blocker preventing, at present, high quality fully autonomous work?
okay (they/them) 🐀🇵🇸 retweeted
Ed Newton-Rex@ednewtonrex·
“Tech companies believe in intellectual property, but not yours.” Great piece in The Atlantic pointing out that AI companies spend huge amounts of money simultaneously (i) defending their own IP and (ii) arguing that they can use other people’s for free. theatlantic.com/technology/202…
okay (they/them) 🐀🇵🇸
@a1exwd @kareem_carr Maybe it’ll increase my output by 20%. But my usage should be costing me $40/hr, and I’m not sure if that’s worth it for what I’m getting. It’s shockingly good if you keep it on a really tight leash and ask it to find sign errors in my code. What is that worth?
okay (they/them) 🐀🇵🇸
@a1exwd @kareem_carr I have a much harder time debugging other people’s code than I do debugging code I wrote myself. I also find it very hard to deal with the sudden-onset dementia that happens when it has to consolidate context.
Dr Kareem Carr@kareem_carr·
As a statistician, I keep asking myself how all these AI people are dealing with the massive potential for catastrophic errors in critical analyses, and the answer keeps being they either didn't think about it at all, or they don't care.
okay (they/them) 🐀🇵🇸
Is the output you’re getting worth 20-40 dollars per hour? I’m still not sure. And I think a lot of large companies are restructuring their workforce based on a cost-benefit analysis that doesn’t reflect the reality of what it costs to rent multiple B200 GPUs for 8 hrs a day.
okay (they/them) 🐀🇵🇸
Everything before o1 was completely useless, and anyone with a brain could see that. We still aren’t having a conversation about how expensive these models are to use, because companies are subsidizing the compute (same principle as how uber used to be so much cheaper).
okay (they/them) 🐀🇵🇸
@ferniealism To me the best outcome would be her basically telling frank “You don’t have to explain to the entire department what you did, but you gotta do that amends again because hoo boy did you make it all about you”
frank langdon's lacanian psychoanalyst
it’s funny how we got al-hashimi listening in on the langdon/santos fight, because we get santos talking about how frank should’ve gone to prison, and all we see beforehand is her being kind to frank about his addiction and her somewhat abolitionist stance
okay (they/them) 🐀🇵🇸
@kingdonfmyheart I think it was supposed to. Every moment of “concern” from supervisors to supervisees this season has come from a place of getting them back to work asap, which is why the seasons gonna end with every single named character except abbott and al hashimi defenestrating themselves
bella@kingdonfmyheart·
people are gonna kill me for this but idc dana’s response to mel kinda gave me the ick… sorry
okay (they/them) 🐀🇵🇸
I really have no choice but to still use it. If openAI starts degrading it, or they accelerate their pivot into war profiteering, then I’ll have to figure out what to do
okay (they/them) 🐀🇵🇸 retweeted
Unemployed Capital Allocator
I think one sneaky aspect of LLM coding that is under discussed is just how bad the code has to be before appearing as broken to the casual observer.
okay (they/them) 🐀🇵🇸
It completely fails to articulate the barriers to implementing those technical solutions in climate models.
okay (they/them) 🐀🇵🇸
I’ve been adversarially testing claude on computational climate science problems, and it’s good at pointing out similarities between technical problems in our field and work that’s been done elsewhere that we don’t know about. However,
Georgia Channing@cgeorgiaw

I’ve been at a small conference this week, one where the AI people have been presenting early in the week and the domain science people will be presenting later in the week. At the end of the talks last night, the conversation turned very doomer, with all the AI people talking about how well Claude Code or Codex can do hill-climbing AI research and how we (the AI people) are maybe all about to lose our jobs!

The domain science people expressed their shock at this attitude because, though Claude Code can be let loose to complete lots of banal hill-climbing AI research projects, basically no experimental science is hill-climbing or even metric-driven. Most scientific fields are about much more taste-driven exploration that is incredibly difficult to make metrics for or to parameterize, and this misunderstanding from the AI community is one of the most damaging things to the realization of great science with AI. Seems like we’re actually pretty far from having AI models do that…

Over the summer, @evijit and I wrote about this (and some other things hindering AI for science) at a bit more length, and today that work is out in Patterns! So, if you care about these problems and the real challenges in bringing AI to science in the real world, I recommend giving it a read!
