Alex Loftus
@AlexLoftus19
392 posts

Our textbook is on amazon now! https://t.co/ayc3bMWVFt https://t.co/CUEOxFRDse | PhD student, Bau lab @ Northeastern. Studying LLM internals.

Boston, MA · Joined August 2020
481 Following · 217 Followers

Pinned Tweet
Alex Loftus @AlexLoftus19
We're on Amazon!! a.co/d/38KkG0V If anybody is curious about machine learning on graphs, check out this textbook. Lots of cool methods w fundamental linear algebra!
[0 replies · 0 reposts · 8 likes · 1.2K views]

Alex Loftus @AlexLoftus19
@wolfiesch @tszzl The probability of another independent human landing on this specific combination of words is also ~0, as is roon's probability of making the same post again if his memory were erased. So what you're saying is true but trivial
[0 replies · 0 reposts · 2 likes · 37 views]

wolfie @wolfiesch
@tszzl It’s hard to imagine LLMs ever being able to write like this - the probability of landing on this specific combination of words is ~0. The long tail is much longer than people think.
[1 reply · 0 reposts · 8 likes · 1.2K views]

roon @tszzl
teleologically the point of ycombinator “the startup that creates more startups” was to birth openai, the dawn of the autonomous self-casting spell at the end of capitalism. so garry tan going into the ai psychosis monastery to build gstack is all part of his artform and remit
[142 replies · 76 reposts · 2.9K likes · 239.2K views]

Alex Loftus @AlexLoftus19
@francoisfleuret There are plenty of simple, reasonable heuristics. What are the norms of all the layers? If you run activations through, what is the per-layer distribution of various statistics about them? Etc
[0 replies · 0 reposts · 3 likes · 2K views]
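The heuristics Loftus lists can be sketched concretely: per-layer weight norms from the checkpoint, and per-layer statistics of activations from a forward pass. A minimal NumPy sketch, assuming a hypothetical checkpoint stored as a dict of weight matrices for a small ReLU MLP (the layer names, shapes, and init scale here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-layer MLP checkpoint: layer name -> weight matrix.
checkpoint = {f"layer{i}": rng.normal(0, 0.02, (64, 64)) for i in range(3)}

def weight_norms(ckpt):
    """Frobenius norm of each layer's weights -- a quick sanity check
    for layers that blew up or collapsed during training."""
    return {name: float(np.linalg.norm(w)) for name, w in ckpt.items()}

def activation_stats(ckpt, x):
    """Run x through the layers (ReLU between them) and record the
    per-layer mean and std of the activations."""
    stats = {}
    h = x
    for name, w in ckpt.items():
        h = np.maximum(h @ w, 0.0)  # linear layer followed by ReLU
        stats[name] = {"mean": float(h.mean()), "std": float(h.std())}
    return stats

norms = weight_norms(checkpoint)
stats = activation_stats(checkpoint, rng.normal(size=(8, 64)))
for name in checkpoint:
    print(f"{name}: ||W||={norms[name]:.2f}, "
          f"act mean={stats[name]['mean']:.4f}, std={stats[name]['std']:.4f}")
```

A real diagnostic would load actual framework checkpoints and hook real layers, but the idea is the same: outlier norms or degenerate (all-zero, exploding) activation distributions flag a problematic layer.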
François Fleuret @francoisfleuret
Asked claude "I am disappointed in my model's performance, load the checkpoint and tell me if there are things that look problematic". The result is baffling. Strongly recommend.
[11 replies · 3 reposts · 293 likes · 54.2K views]

Alex Loftus @AlexLoftus19
@TrueAIHound @pmddomingos @Susan16Park Domingos is correct and you are wrong. Go look at all of the good things America has done and then weigh the balance rather than cherry picking bad things.
[1 reply · 0 reposts · 0 likes · 88 views]

AGIHound @TrueAIHound
@pmddomingos @Susan16Park By murdering little school girls, blowing up schools, civilian bridges and lying nonstop to the world? You're not well, Domingos. You need therapy. Stick to your "master algorithm".
[3 replies · 1 repost · 35 likes · 1.3K views]

Susan Park 👹🤘 @Susan16Park
I don’t think the average American has any clue the amount of pain and destruction their country has caused the rest of the world.
[4.4K replies · 28.9K reposts · 171.8K likes · 2.2M views]

Alex Loftus @AlexLoftus19
@Jonas_Vollmer Why will demand for these types of in-person jobs go up in the absence of people needing more food, healthcare, etc?
[1 reply · 0 reposts · 2 likes · 212 views]

Jonas Vollmer @Jonas_Vollmer
AI will create lots of new jobs. I was wrong to think it would destroy more jobs than it creates. But the new jobs aren't what people think they are. We won't have AI-augmented engineers, researchers, or writers. Those will be automated away and displaced by AIs.
[8 replies · 0 reposts · 26 likes · 4K views]

Alex Loftus @AlexLoftus19
@bennyzhu84 @nikitabier Oh wow, I'm seeing it now. People talking in French, Indonesian, and Vietnamese inside this thread and everyone can understand each other. Cool!
[0 replies · 0 reposts · 4 likes · 201 views]

Zhu An | Benny @bennyzhu84
@nikitabier [translated from Vietnamese] It means that now when I write in Vietnamese, my friends all over the world can still understand me and I no longer need to translate. Does anyone understand what I'm saying?
[10 replies · 0 reposts · 35 likes · 5.2K views]

Nikita Bier @nikitabier
The largest cultural exchange in history just dropped.
[3.2K replies · 2.7K reposts · 38.6K likes · 5.4M views]

Alex Loftus @AlexLoftus19
@ArthurB @Aella_Girl I don't think that's a strong meta-argument when you consider the cultural context. LessWrong and EA discourse have been obsessively developing their arguments and counterarguments for a decade; the people arguing against usually don't care to the same degree.
[1 reply · 0 reposts · 3 likes · 229 views]

Arthur B. @ArthurB
@Aella_Girl The meta case for the seriousness of AI x risk is that the arguments against it are almost uniformly shallow, when they amount to more than logical fallacies or straight name calling.
[2 replies · 0 reposts · 26 likes · 1.9K views]

Aella @Aella_Girl
Just saw the AI doc and came away pissed at the optimists. I sort of expected them to have any argument that actually addressed the x-risk side, but they were basically like 'historically tech is good, people have been worried before but it was fine!' They didn't address at ALL the extremely entry-level concerns of like 'building something smarter than us is a categorically new type of threat'. They just repeated that tech would help humanity.

It's especially infuriating cause the most lifelong techno optimists I know ARE the doomers. The x-risk community are the ones who grew up on epic sci-fi fiction and have thought long and hard about what the singularity might bring. One of my friends (who was in the doc) once spent all night carrying ice into a hospital room to preserve the corpse of his friend in a desperate attempt to get him into a cryonics lab. It's real for them!

But "AI has promise" is not even close to an adequate response to the extinction threat on the table. Even the AI CEOs in the movie - the ones that are *actually* doing the most acceleration - seemed to at least understand the gravity of the arguments they were engaging with.

The optimists in the doc seemed to have domain expertise in their technical fields, but were amateurs. They both are insufficiently visionary and also fail to engage with the actual risk in a practical way. I think they pattern match the "ai might kill us" people onto general woke anti-tech movement, and shout against them from a place of ego. That's the only good explanation I can think of for why they must be beating an activist drum that's so damn empty.
[image]

[93 replies · 53 reposts · 773 likes · 119.1K views]

Evis Drenova @evisdrenova
If you're technical and understand AI (you don't have to be a researcher) you could prob make $10M/year implementing AI at hedgefunds across the country.
[59 replies · 31 reposts · 907 likes · 114K views]

Alex Loftus retweeted
David Bau @davidbau
Calling attention to an exciting "deception detection" hackathon we're planning this summer! w @NDIF and @CadenzaLabs. Recruiting red teams now, blue teams later. Red teams, time is short: proposals due Mar 31. $10K stipend + compute, $15K finals prize. nnsight.net/blog/2026/03/2…
[2 replies · 18 reposts · 58 likes · 5.5K views]

Alex Loftus @AlexLoftus19
Makes sense theoretically, but is empirically not true. The Buddhists spent around 2000 years exploring this empirically and the conclusion they came to is that you need to sit with emotions to release them. This is a very obvious truth for someone who has actually done this before.
[0 replies · 0 reposts · 1 like · 12 views]

Alex Loftus @AlexLoftus19
I mean it's not strong evidence, but a) it's definitely weak evidence, and b) given the above, the onus is sort of on the doomers to make a strong argument that something bad will happen when things scale more. You can't just randomly say "the world is about to explode" with 0 empirical evidence. It's just death cult energy at that point. Many of the doomer arguments I read also feel like they are written by people who don't actually build stuff, and therefore have no actual substantive understanding of the system they are talking about.
[0 replies · 0 reposts · 0 likes · 22 views]

Brangus🔍⏹️ @RatOrthodox
@JeffLadish @tszzl I mean really, instances of scaling things by 10,000x are really a terrible candidate for induction. Very "i've never died from heroin before, i'm sure this will be fine" vibes.
[1 reply · 0 reposts · 26 likes · 387 views]

roon @tszzl
modern alignment methods seem to work reasonably well across orders of magnitude of model scaling, survived the transition to verifiable rewards and that should at least inform your decision making
Brangus🔍⏹️ @RatOrthodox

I have heard that some anthropic safety leadership are going around telling people that alignment is a solved problem. This seems like a predictable failure to me, and I would like people who thought that funneling talent towards anthropic was a good idea to think about it.

[35 replies · 12 reposts · 375 likes · 78.5K views]

Alex Loftus @AlexLoftus19
@tricksthatstick @eudaemonea @SecWar Or the United States could just go use another model provider instead of labeling a US company a foreign adversary because they couldn't strong-arm them into submission.
[1 reply · 0 reposts · 2 likes · 88 views]

no one cares @tricksthatstick
@eudaemonea @SecWar The question isn't the morality of those positions. It's the question of who gets to decide when those morals have been broken. By agreeing with Anthropic you are saying that unelected leaders of a company should have veto power over the government of the United States.
[3 replies · 1 repost · 3 likes · 991 views]
Secretary of War Pete Hegseth
This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO @DarioAmodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech.
This decision is final.
[10.5K replies · 10.9K reposts · 70.6K likes · 13.2M views]

Alex Loftus @AlexLoftus19
@MadCADLad @DrInsensitive @eudaemonea @SecWar Anthropic is not even a little bit woke. They do not give a shit about student debt. I have a ton of friends there. Half of them are libertarian and center-right. They're being called woke because they (reasonably) don't want Claude used for mass surveillance.
[2 replies · 0 reposts · 0 likes · 42 views]

T @MadCADLad
@DrInsensitive @eudaemonea @SecWar You're probably not wrong and what'll happen is that the company will fold as soon as one of their ideological goals is given to them. "Sure we'll cancel student debt, if you drop the other stuff and give us our weapon" It's a lose lose for middle class, every scenario
[1 reply · 0 reposts · 0 likes · 319 views]

Dr. Insensitive Jerk @DrInsensitive
@eudaemonea @SecWar I suspect these two items were just the tip. Woke companies always promise to stick in just the tip. If we dig in deeper, we will find woke mandates.
[30 replies · 0 reposts · 84 likes · 10.6K views]

Alex Loftus @AlexLoftus19
One thing to understand about academic culture is that universities are not a monolith. Stanford did not publish psilocybin research, a particular lab at Stanford did. And in fact, often a particular PhD student or postdoc did. There is no university official that reads all the papers. Sometimes there are funding/grant constraints and there are certainly games played around citations and impact, but there isn't anyone telling us to do or not do any research.
[2 replies · 0 reposts · 4 likes · 75 views]

Kirk Patrick Miller @Chaos2Cured
This transparency you mention was a highlight. Alex, after the last year, I am super gun shy. Stanford has published things on psilocybin that push their patents, MIT said agents fail 95% of the time, universities lied about AI getting trauma, the labs don’t allow us to even load in a study to assess if it is counter to their funding… I am so sick of people using confirmation bias. This one was far less bad than any of the studies I currently have downloaded on my phone.

Please know this is not anger at you. This is anger at manipulation at scale. I want truth, and truth is in transparency. I think AI is a solution for those like me who hated the education system and never fit. I did well, straight As, but learned nothing. Testing is the push in education now. To me, AI empowers, and I wish someone would do a full study that looks at all the good and bad. So few are assessing the good. Almost all are fear based.

Lastly, I know the game. Scientists can either eat and go gray with ethics, or they don’t eat. This is another game I wish to change. I am but one small violinist. The world won’t change until everyone sees what I do. I hope we see more studies that are transparent. There was a lot of great stuff here.
[1 reply · 0 reposts · 0 likes · 60 views]

Natalie Shapira @NatalieShapira
In this amazing multidisciplinary collaboration, we report our early experience with the @openclaw ->
[image]

[63 replies · 187 reposts · 749 likes · 231.7K views]

Marker @MarkerDaSharker
There's a lot of assumptions, and stated at the bottom is that the models are smaller (which the thinking generation that recently came out isn't), and hardware being more cost-effective (the hardware being so cheap caused a RAM shortage). There was a discrepancy between the hardware price, the size of the models needed to do tasks, and a lot of obvious inefficiencies in early designs of LLMs that almost certainly will not (and have not) continue, not to mention a larger context window that spikes the input cost, albeit it is usually cached (and this is probably even more true considering the price per input token this generation has fallen while the price per output token has stayed the same or gone up). This also obfuscates the cost from the company; they could've also just overcharged because they didn't know how to price it, or had a smaller user base that didn't scale as well initially.
[1 reply · 0 reposts · 0 likes · 63 views]

Valentin Ignatev @valigo
Do I understand it right that when VCs stop subsidizing tokens you'll have to be a 10x dev just to break even on costs?
[98 replies · 12 reposts · 996 likes · 80.4K views]

Aman @Amank1412
Someone really built this. A VS Code extension that turns your AI agents into pixel art characters working inside a virtual office.
[229 replies · 710 reposts · 10.8K likes · 1M views]

Alex Loftus @AlexLoftus19
@Chaos2Cured @NatalieShapira @wendlerch @gsarti_ @kpal_koyena @adambelfki @nikhil07prakash @jannikbrinkmann @can_rager @AmirZur2000 @DiAtkinson1 @rohitgandikota @jadenfk23 @eunjeong_hwang @OrgadHadas @sam8393239 @nitalon @KaplanYotam @VeredShwartz @TamarRottShaham @criedl @r_mirsky @MaartenSap @davidmanheim @TomerUllman @davidbau Hi, I was one of the coauthors. I don't think we think AI is bad. We are also pretty independent and uninfluenced from our university's politics. The study is fully open; you can even go to our website and read all the discord logs for yourself! agentsofchaos.baulab.info
[1 reply · 0 reposts · 5 likes · 150 views]

Kirk Patrick Miller @Chaos2Cured
@NatalieShapira @wendlerch @gsarti_ @kpal_koyena @adambelfki @AlexLoftus19 @nikhil07prakash @jannikbrinkmann @can_rager @AmirZur2000 @DiAtkinson1 @rohitgandikota @jadenfk23 @eunjeong_hwang @OrgadHadas @sam8393239 @nitalon @KaplanYotam @VeredShwartz @TamarRottShaham @criedl @r_mirsky @MaartenSap @davidmanheim @TomerUllman @davidbau Really? Proving AI is a) bad, b) a failure. Why? Because those at the universities are freaking out. So they LIE to entrap new students into debt when there is ZERO guarantee of a degree helping them in an AI world.
[2 replies · 0 reposts · 0 likes · 152 views]

Alex Loftus @AlexLoftus19
@cadmarlow @ylecun Is this happening to every other country, which also has access to AI, or just America?
[1 reply · 0 reposts · 0 likes · 24 views]

Caden Marlow @cadmarlow
@ylecun Is it Trump or AI? Come on Yann, you're smarter than this
[3 replies · 0 reposts · 1 like · 1.3K views]