Jon Willits
@jonwillits
1K posts

Cognitive Scientist and Assistant Professor @Illinois_Alma. Studies language and semantic development and computational models.

Champaign, IL · Joined February 2011
813 Following · 308 Followers
Jon Willits @jonwillits
@morallawwithin @AStrasser116 Is that the implied alternative, preventing natural death? I thought the claim was stronger: that we are likely to do ourselves in as a species without AI's help, or at least that we are more likely to do so than AI is to do us in as a species.
florence ⏹️ @morallawwithin
@AStrasser116 I’m not one to downplay how much of a tragedy death is, even dying from old age, but I think every human being killed at a particular moment and having the species go extinct is a lot worse than the status quo
Alex Strasser @AStrasser116
"There is a nonzero probability that developing AI kills everybody. There is a much higher probability that not developing AI kills everybody." - Matthew Ginsberg at Google DeepMind
Jon Willits reposted
Elizabeth Mieczkowski @beth_miecz
🚨New preprint! LLM teams are being deployed at scale, yet we lack the tools to predict when they'll succeed or fail, or to decide how to design them. Distributed computing faced exactly the same questions and figured out how to answer them. We show those insights apply directly to LLMs 🧵👇
[image]
Jon Willits @jonwillits
St Augustine is the Good Place
Jon Willits @jonwillits
@emollick This is assuming truly unending exponential growth rather than the series of sigmoids we have always seen. If this new iteration is another sigmoid (albeit one with a much, much higher ceiling), then being behind won't matter in the long run. What's your p that this one is truly unending exponential growth?
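For readers who want to see the distinction this exchange turns on, here is a minimal numerical sketch (all parameters hypothetical, not a model of AI progress): a logistic curve is nearly indistinguishable from an exponential early on, but it saturates at a ceiling instead of compounding forever, which is why "being behind" matters far less if growth is sigmoidal.

```python
import math

def exponential(t, r=1.0):
    """Unbounded exponential growth: e^(r*t)."""
    return math.exp(r * t)

def logistic(t, r=1.0, ceiling=1000.0):
    """Sigmoid growth: tracks the exponential early on,
    but saturates at `ceiling` instead of growing forever."""
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-r * t))

# Early on the two curves are nearly identical:
print(exponential(1), logistic(1))    # ~2.718 vs ~2.714
# Later the sigmoid flattens while the exponential keeps compounding:
print(exponential(20), logistic(20))  # ~4.9e8 vs ~1000
```

Under a sigmoid, a lab that is behind during the steep phase still reaches essentially the same ceiling; under a true exponential, the gap compounds without bound.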
Ethan Mollick @emollick
It is possible that xAI catches up, certainly Musk has stated that he thinks they can in a year. And Meta still has excellent people and lots of compute. But if you view the critical period, for better or worse, as this year, it is a bad time for a lab to sit this out right now.
Ethan Mollick @emollick
The failures of both Meta and xAI to maintain parity with the frontier labs, along with the fact that the Chinese open-weights models continue to lag by months, means that recursive AI self-improvement, if it happens, will likely be by a model from Google, OpenAI, and/or Anthropic.
Jon Willits @jonwillits
@Allegiant At Sanford today, 3/4 of your arrivals are delayed. No other airline here is having a problem. Just you guys.
Jon Willits @jonwillits
@Allegiant Is the business model that it doesn't matter if you massively overbook your gates and have 60% customer attrition, because there are always more people who haven't gone through it yet?
Jon Willits @jonwillits
@Allegiant Landed, but no gate for an hour! Others on the plane say this happens to them at Allegiant all the time. This was our first time and will probably be our last.
Jon Willits @jonwillits
@morallawwithin @HunterWieman I think being vegan vs. non-vegan (and the self-selection involved) gives the two groups very different views of how much self-control is required. This makes me more optimistic about the marginal gains that will accrue.
florence ⏹️ @morallawwithin
A problem of mine is that I sometimes ask for too much credit from my reader (sincerely). Of course self-control is not binary; it would be very difficult for me to mistakenly think otherwise. The precise version of what I said is “The amount of self-control needed in this scenario is not sufficiently smaller than the amount that would have you be vegan.”
florence ⏹️ @morallawwithin
My (loose & not super confident) predictions regarding lab-grown meat:
- Definitely more people will become vegan. I wouldn't be surprised if the rates tripled, but even that feels optimistic.
- Your average enlightened carnist will, at most, buy a synthetic equivalent at the store when one exists, but still buy dead bodies when there's no equivalent.
- The enlightened carnist will not refuse to eat dead bodies at restaurants that don't offer synthetic, so restaurants will have little incentive to offer it widely. They won't refuse to eat dead bodies at friends' houses and such.
- The enlightened carnist will retcon themselves as believing they only ever intended to eat lab-grown meat when all else is equal, and they'll make excuses like "It's not my fault restaurants don't have it" or "It would be intolerable to only eat ground beef and steaks; I'll wait until the tech gets further."
- Enough of the non-enlightened won't be into synthetic meat for vibes-based reasons, so dead bodies will continue to be the default.
- Factory farming will decrease noticeably, but nowhere even close to ending this moral catastrophe. Synthetic meat won't become universal without people exercising principle and self-control, and if you were willing to do that, you'd already be doing it.
Macrophysiological System🐀 @InVitroFuture

@SFSEgo You would say that, of course, and I don't doubt you believe it. But we already have meat substitutes that beat or tie with slaughter-based meats in blind taste tests, and the overwhelming majority of consumers have unfortunately shown zero interest vox.com/future-perfect…

Jon Willits @jonwillits
@morallawwithin Maybe the failure to ground even the most primitive mathematics in logic means that math is magic too. But it is magic so basic (and damning for analytic philosophy) that philosophers ignore it and take mathematics for granted.
florence ⏹️ @morallawwithin
okay but consider this: consciousness is magic, and matrix multiplication is not magic
Jon Willits @jonwillits
@Tyler_A_Harper And I say this and worry about this as someone who actually thinks the potential upside of LLMs at current capacity is really high. But students and teachers will really need to adapt what education is for and how it is done
Jon Willits @jonwillits
@Tyler_A_Harper It's already good enough to do enough assignments. If students choose to use it poorly to get grades without learning, and if universities don't adapt, then enough graduates will be worthless enough that the value will collapse. That would take a decade or two.
Jon Willits reposted
Alex Imas @alexolegimas
This is a skills issue. Part of using AI effectively is knowing what you want to just look up (e.g. you need an answer to a query) vs. what you want to *learn*. Learning is not just a function of seeing the information; it's a function of spending time with it. Getting stuck, failing, looking back at something you missed is how information turns into skill.

This isn't new. Remember SparkNotes? You weren't supposed to use them as a substitute for reading the original text. After you read it closely, SparkNotes were great for looking up random details. AI summaries are similar.

But that's not to say AI can't be used for learning effectively. I have now seen several scaffolds that prompt the user to stop and think, to read the original text before asking questions, etc. Claude's teaching mode is one example. There's a lot of promise in using AI to improve learning, but just getting summaries instead of reading the text ain't it.
David Perell Clips @PerellClips

Ezra Klein: "Having AI summarize a book or paper for me is a disaster. It has no idea what I really wanted to know and wouldn't have made the connections I would've made. I'm interested in the thing I will see that other people wouldn't have seen, and I think AI typically sees what everybody else would see. I'm not saying that AI can't be useful, but I'm pretty against shortcuts. And obviously, you have to limit the amount of work you're doing. You can't read literally everything. But in some ways, I think it's more dangerous to think you've read something that you haven't than to not read it at all. I think the time you spend with things is pretty important." @ezraklein

Jon Willits reposted
Brad Caldwell @Caldwbr
🧵 1/10 How does the cerebellum learn precise timing? David DiGregorio showed how molecular diversity at synapses creates a "temporal basis set." A single sensory event (an air puff) gets smeared into a cascade of delayed signals, like a neural sequencer: a static input turned into a sequence.
[four images]
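The "temporal basis set" idea in this thread can be sketched in a few lines (a toy illustration only; the function names, the step-function-plus-decay kernel, and all constants are hypothetical stand-ins for the actual synaptic biophysics): a single punctate event is expanded into a bank of traces with different onset delays, and a downstream learner can then weight those traces to respond at any target latency.

```python
import math

def delayed_trace(t, delay, tau=5.0):
    """Toy synaptic trace: switches on `delay` steps after the
    event, then decays with time constant `tau`."""
    if t < delay:
        return 0.0
    return math.exp(-(t - delay) / tau)

def temporal_basis(event_time, delays, duration=50):
    """Expand one event into a bank of delayed, decaying traces:
    a 'temporal basis set' spanning different latencies."""
    return [
        [delayed_trace(t - event_time, d) for t in range(duration)]
        for d in delays
    ]

# One event at t=0 becomes four traces peaking at 0, 5, 10, and 20 steps.
basis = temporal_basis(event_time=0, delays=[0, 5, 10, 20])
```

Because each trace peaks at a different lag, a weighted sum over the bank can approximate a response at an arbitrary delay, which is how a static input supports learned timing.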
Philip Goff @Philip_Goff
Some philosophical claims that at least 60% of philosophers agree about: atheism (67%), gender is social construction (63%), moral realism (62%), conceivability of zombies (61%). Which ones did the majority get right?
Jon Willits @jonwillits
@ebarenholtz There’s a lot of research in cognitive psych and cognitive neuroscience that is very hard to square with this assertion
Elan Barenholtz @ebarenholtz
Searle was right that a purely syntactic linguistic system would have no access to any "meaning" beyond the symbols themselves. He was wrong that our language isn't exactly such a system.
Jon Willits @jonwillits
@GaryMarcus You have objections about what non-symbolic AI can do, in principle, do you not?
Gary Marcus @GaryMarcus
My time here has been a failure. I tried to get the Twitterverse to wake up before things got bad. Now we are here. Things are bad, and about to get worse. Most people still don't realize how bad. It's not that AI is inherently impossible or immoral; it's that most of the people pushing it don't give a damn.