Gabe Schoenbach

5.7K posts

@gabecubed

a freak of nurture

Chicago, IL · Joined August 2016
1K Following · 588 Followers
Pinned Tweet
Gabe Schoenbach@gabecubed·
There are so many things to love about life. From now on I’m going to find (at least) one thing a day to notice, and add it to this thread. Gratefulness is important.
3 · 2 · 97 · 0
Gabe Schoenbach retweeted
rose 🪽@_phasespace·
70 degrees in chicago nothing bad has ever happened. winter wasn't real
12 · 689 · 5.2K · 182.4K
Gabe Schoenbach retweeted
mary@j0anofpark·
if you are in chicago it is imperative that you go to the nearest major thoroughfare and walk its length immediately
14 · 34 · 1.6K · 55.5K
Gabe Schoenbach retweeted
Andy Masley@AndyMasley·
I would like more people on the left to feel social permission to think about the risks from very capable AI rather than just feel obligated to presume the big problem with AI is that it's a scam, so I'm pretty excited that Bernie's moved in this direction
Matthew Zeitlin@MattZeitlin

what's fascinating about bernie's AI stance is that he takes the AI safety view that AI is incredibly powerful quite seriously, whereas lots of people who share bernie's worldview think it's largely a scam

7 · 18 · 269 · 8K
Gabe Schoenbach@gabecubed·
@ddowster totally. and i bet if everyone who saw this post told that fact to a random size-4 subset of their friends we'd be fine, tree-wise.
0 · 0 · 0 · 13
Gabe Schoenbach retweeted
Douglas Dow@ddowster·
@gabecubed The "tells someone that a neighborhood of a random d-regular graph looks like a tree" graph is locally a tree around you as long as no one seeing your post tells those 4 people about this fact
1 · 1 · 1 · 69
Gabe Schoenbach@gabecubed·
With high probability, any local neighborhood of a random d-regular graph looks like a tree. Sure, that's fine. But so far this week, four (4) people have (independently!) brought up this fact to me. What is going on???
1 · 0 · 0 · 87
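The graph fact in this thread can be checked empirically. Below is a minimal stdlib-only sketch (the configuration-model sampler and the choices n = 10,000, d = 4, radius 2 are illustrative, not from the tweets): it samples a random d-regular graph and tests whether the radius-2 ball around a vertex induces a tree.

```python
import random
from collections import deque

def random_regular_graph(n, d, rng):
    """Configuration model: pair up d 'stubs' per vertex,
    resampling until the result is a simple graph."""
    while True:
        stubs = [v for v in range(n) for _ in range(d)]
        rng.shuffle(stubs)
        edges = set()
        simple = True
        for i in range(0, len(stubs), 2):
            u, v = stubs[i], stubs[i + 1]
            if u == v or (u, v) in edges or (v, u) in edges:
                simple = False  # self-loop or repeated edge: resample
                break
            edges.add((u, v))
        if simple:
            return edges

def ball_is_tree(edges, n, root=0, radius=2):
    """True iff the induced subgraph on the radius-r ball around
    `root` is a tree (connected by construction, so: |E| = |V| - 1)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # BFS out to `radius` to collect the ball's vertices.
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if dist[u] < radius:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
    ball = set(dist)
    internal = sum(1 for u, v in edges if u in ball and v in ball)
    return internal == len(ball) - 1

rng = random.Random(7)
n, d = 10_000, 4
edges = random_regular_graph(n, d, rng)
print(ball_is_tree(edges, n))
```

With these parameters the printed value is True with probability close to 1, since a short cycle through a fixed vertex appears with probability O(1/n); any particular seed could in principle be unlucky.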
Gabe Schoenbach retweeted
Derek Thompson@DKThomp·
I really want people to see the story above the story here, which is that whether you're reading Citrini, or listening to Jamie Dimon at a cocktail party, the conversation about AI is a marketplace of competing science fiction narratives. That's not to say I think the technology is a parlor trick. But rather that the level of uncertainty is so high, and the quality and supply of real-world, real-time information about AI's macroeconomic effects so paltry, that very serious conversations about AI are often more literary than genuinely analytical.

And I think that observation sets up another important point: I feel lucky to be able to have conversations about the frontier of AI with executives and builders at frontier labs; economists at AI conferences; investors in AI; and other AI folks at off-the-record dinners where important truths can theoretically be shared without risk. I can't emphasize enough that "nobody knows anything" is about as close to the reality here as three words are going to get you. Nobody knows what's going to happen this year, or next year, or the year after that. There is no secret cigar-filled room of people who have unique access to some authentic postcard from the future.

When you drill down underneath the bluster, the boosterism, the fear, the anxiety, what's there at the bottom is genuine uncertainty, a vacuum into which storytelling is flooding. The frontier labs don't really know what they're building exactly, and economists don't really know how to model the thing they claim they're building (genuine recursively self-improving AI agency isn't really analogous to something we know about). I wish more people talked about and thought about this subject through that sort of lens: we're trying to model the economy-wide effects of a technology whose properties the frontier labs can't even really describe yet. Whatever you think about AI today, be prepared to change your mind soon.
Brian Sozzi@BrianSozzi

JP Morgan CEO Jamie Dimon at an investor cocktail event last night on AI (part 2): "What if, I think there are 2 million commercial truckers in the United States, and there are lots of other examples you can give. There's a thought exercise, and you could push a button, eliminate all of them, and they make $120,000 on average. Save fuel, save lives, save time, a more efficient system, less disrupted highways, all that beautiful stuff. Would you do it if you put 2 million people on the street where even if there are jobs available, that next job is $25,000 a year, stocking shelves. I was saying, "That's kind of really bad, kind of civilly, should we as society agree to that?" I don't think so. I was talking about the business and government, and they should start thinking today, not when it happens, what would we do to deal with the [AI] issue? It's got to be business and government."

93 · 233 · 2.2K · 673.2K
Gabe Schoenbach@gabecubed·
went to another beautiful math talk today. i need to be doing more math!!
0 · 0 · 1 · 47
Gabe Schoenbach retweeted
Cassie Pritchard@hecubian_devil·
For the record, I’m less of the mind that what AI does is so alien from humans; we’re actually very good at making fuzzy estimations and mapping to the “most probable” outcomes. We make simple mistakes like LLMs do all the time: someone pointed out to me that ChatGPT still fails consistently when asked something like “list all Disney characters with 5-letter names.” It lists Minnie and Mickey, which have 6 letters. This makes it seem very “stupid” and inhuman to most of us—but if you ever watch Jeopardy, and you guess the answers to questions like “In old philosophy, this 12-letter word referred to a fifth substance, superior to earth, air, fire or water” (the answer is “quintessence”), you’ll know that you often don’t have time to *actually* count out the letters before answering—you make a guess of something that matches the definition and fuzzily seems “about the right length.”

When calculating tax or tip, maybe you do the math, but sometimes you quickly round/estimate to a close-enough approximation (hopefully erring on the side of generosity). When someone asks you how many plays Shakespeare wrote, maybe you know and can recall the exact number, or maybe you say “idk, like 30 or 40?” which is a pretty close guess. What is *hard* for humans is precision and consistency. You can guess where a thrown football will ~basically go in an instant; doing the math to actually predict with high precision its trajectory is very hard for us and takes a lot of time. You can look at a plant and instantly classify it as “a tree” or not based on, essentially, vibes—it looks like a tree or it doesn’t. But trying to come up with a rigorous axiomatic definition based in biology is hard, and bedeviled botanists for years.

People tend to find computers “smart” when they can do what is hard for us; AI invokes a lot of skepticism because the people building it are striving to have it do what comes *easily* to us—talking, writing, seeing, hearing, guessing, estimating.

Where LLMs fall flat is not being able to shift from guessing to precision when users want them to switch. But the guessing itself is actually *much* more like normal human reasoning than traditional computing, which is axiomatic and more alien. Humans are so capable because we can do both, although we’re faster and better at guessing and slower and worse at precision. I am not an AI researcher and I am not informed enough to know if they’ll eventually be able to marry the precision of traditional computing to the statistical guesswork of present-day LLMs, but if researchers figure this out, a lot of people will rapidly become convinced that AI can “think”—although I’d still be skeptical that something without senses and a body can *truly* “think” in a way that resembles human cognition. But if we couch our criticisms in “AI can’t think,” and researchers make it able to switch to precise answers when users want it to do that, we’ll lose because *most* people will go “oh wow it can think.” Even now a lot of people are already going down that road, and it still can’t make a list of 5-letter names!

All of this “thinking” question, though, is mostly orthogonal to whether AI can be used to centralize and control information transmission and surveillance at a scale previously impossible in human history, which could make organizing effectively impossible. That’s what we should care about!
5 · 2 · 80 · 3.2K
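The 5-letter-name failure described above is exactly the kind of check that becomes trivial once you switch from fuzzy recall to exact counting. A minimal illustration (the name list here is my own, not from the thread):

```python
# Exact letter-count filtering: the "precision mode" the thread
# says LLMs struggle to switch into.
names = ["Mickey", "Minnie", "Goofy", "Pluto", "Ariel", "Elsa"]
five_letter = [name for name in names if len(name) == 5]
print(five_letter)  # ['Goofy', 'Pluto', 'Ariel']
```

Note that Minnie and Mickey are correctly excluded: both are six letters, matching the mistake the tweet attributes to ChatGPT.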
Gabe Schoenbach@gabecubed·
Writing three papers with a total of 12 collaborators, and every one of us is at a distinct institution(!)
0 · 0 · 0 · 41
Gabe Schoenbach retweeted
Isi Breen@isaiah_bb·
I think it's a problem that we see these kinds of warnings and resignations as being about some sci-fi fantasy robot apocalypse, rather than the very real and already happening use of these tools by the state to surveil, control, and punish us at every turn.
Saoud Rizwan@sdrzn

head of anthropic’s safeguards research just quit and said “the world is in peril” and that he’s moving to the UK to write poetry and “become invisible”. other safety researchers and senior staff left over the last 2 weeks as well... probably nothing.

15 · 533 · 3.3K · 107.7K
Gabe Schoenbach retweeted
Thomas-URMI@TheUrMillennial·
For about 10 months now I've been trying to learn jazz guitar. Here are a few discoveries I've made about it:
All jazz is in the key of F or Eb
Every chord can be substituted with every other chord
Walking bass is just 4ths and chromatic passing tones
Mfers barely trying
35 · 36 · 1.1K · 39.7K
Gabe Schoenbach retweeted
Maaz@mmaaz_98·
J Cole sampled Andrew Wiles’ interview talking about his work on Fermat’s Last Theorem lol
J. Cole@JColeNC

MIDNIGHT

10 · 77 · 1.2K · 86.1K