Alex Veshev

2.3K posts

@AlexVeshev

ex-SF, CA · Joined October 2013
92 Following · 60 Followers
Alex Veshev
Alex Veshev@AlexVeshev·
@micsolana @deanwball I'm not sure if you're aware, but software engineers rarely physically visit the data centers. Unlike, say, nuclear weapons engineers the US bombed quite recently in Iran to thwart their illegal nuclear program, which is directly equivalent to what EY was talking about.
0
0
1
15
Mike Solana
Mike Solana@micsolana·
@deanwball I wrote about this a bit when eliezer argued on behalf of bombing data centers. *legally* he and his allies insisted. *LEGALLY* bomb the "rogue" software engineers.
4
0
88
3.2K
Dean W. Ball
Dean W. Ball@deanwball·
The guy who allegedly threw a Molotov cocktail through Sam Altman’s window seems to have been an adherent to pause/stop AI. I am entirely unsurprised and have been warning about this for a long time now.

I am fine with people advocating for their preferred policies—if that includes a “pause” on AI development, so be it, even if I disagree strongly. But the obvious reality is that the rhetoric of this community—which to be *extremely clear*, is a very small and non-representative subset of the AI safety community—is closer to ecoterrorism than it is to a more typical activist policy effort.

Every time I have written about existential risk in recent months, I have been called a mass murderer. People with ⏹️ and ⏸️ in their handles confidently tell me that I am murdering my own baby boy and every other child on the planet. Another prominent one of these people has called me a traitor to America. I only use my own examples because I know them; this rhetoric is representative of how this fringe of the AI safety world communicates with everyone.

The rhetoric of the pause/stop crowd is out of control and it has gotten worse with time. This rhetoric always had the potential to cause violence, and now this seems to be no longer hypothetical.
Dean W. Ball tweet media
Mehran Jalali@mehran__jalali

Some of his Instagram stories:

27
30
303
38.7K
Alex Veshev
Alex Veshev@AlexVeshev·
It's important to understand that plaque is a dysfunctional response to endothelial injury. But it's also important to understand that for people with high endothelial injury rates small dense LDL directly contributes to the problem, and so lowering it (especially by minimally damaging methods, i.e. not statins) is likely to be beneficial. As for the Keto-CTA cohort, it could still be the case that plaque progression rates were correlated with small dense LDL particle numbers and/or the propensity of those particles to get oxidized (which has been shown to correlate with high o-6 PUFA content).
1
0
0
143
Dave Feldman
Dave Feldman@realDaveFeldman·
This viral thread from @bschermd is a great read. Veins and arteries see the exact same LDL/ApoB, yet plaque forms almost exclusively in arteries — and a pristine vein grafted into arterial flow rapidly develops atherosclerosis. That points strongly to hemodynamic stress and endothelial injury as the primary trigger (Response to Injury) over a pure Response to Retention model.

Our Keto-CTA data in a metabolically healthy cohort with a wide spread of LDL/ApoB (going from under 100 to over 500) show no association with either the presentation or progression of plaque. Which is why we've needed to do this exact research for so long.

This central illustration is from our match analysis in JACC Advances (Budoff et al., 2024), where we compared 80 metabolically healthy ketogenic hyper-responders (mean LDL-C 272 mg/dL, HDL-C 90, TG 64, after 4.7 years on keto) to 80 tightly matched controls from the Miami Heart cohort (mean LDL-C 123 mg/dL). Despite the ~149 mg/dL difference in LDL-C, there was no significant difference in coronary plaque burden by CCTA total plaque score, CAC score, or other measures. And crucially, there was no correlation between LDL-C levels and plaque burden in either group. (Full paper: jacc.org/doi/10.1016/j.…)
Dave Feldman tweet media
Bret Scher, MD@bschermd

Your veins and arteries carry the same blood. Same LDL. Same ApoB. Same everything. Yet veins almost never get plaque. Arteries constantly do. Maybe you've seen the recent discussions about this. It's an interesting question that provides clues in cardiovascular science, and could challenge how we think about LDL and ApoB. 🧵

10
16
119
14.6K
Alex Veshev
Alex Veshev@AlexVeshev·
@bayeslord On the one hand, the idea of some unelected CEO (even Dario) steering our collective destiny into humanity's next era seems bad. OTOH, the idea of any of the 21st century US presidents steering it seems even worse (and some of them even worse than others).
0
0
1
18
bayes
bayes@bayeslord·
the labs should not be nationalized
22
3
108
4.8K
Alex Veshev
Alex Veshev@AlexVeshev·
@danfaggella Sounds like we don't actually need it any more than the lemurs need Kardashev scale civilization (1 through 3), but we're going to get it anyway.
1
0
1
109
Daniel Faggella
Daniel Faggella@danfaggella·
Monkeys: "If we became super powerful, we'd, like, have unlimited bananas, HUGE forests of bananas, that's the PEAK of power!"

Humans: "If we become super powerful we'd be able to ascend the Kardashev scale, man!"

If humans actually ascend to vastly higher levels of power and capability, it will patently obviously involve: (a) mostly posthuman minds doing the thinking and acting, and (b) achievements and waypoints of progress and power as far beyond all possible human conception as the "Kardashev scale" is beyond the possible conception of ring-tailed lemurs.

But people don't like this. People want "progress" to look like sci-fi movies where hairless apes are the eternal main character, and all cosmic giga-projects are comprehensible to, and driven by, hominids. People don't want to think about incomprehensible cosmic-level complexity unfolding beyond mankind.

But we should be thinking much more seriously about the trajectory of how that intelligent process unravels into the cosmos beyond us, and stop pretending that all of it will be eternally human-led (a position which AGI will prove is not tenable / not rational).
9
6
67
3.1K
Alex Veshev reposted
Tenobrus
Tenobrus@tenobrus·
if Mythos were open source or even open api access right now we would be seeing huge economic and social damage within days. there is no safe future that includes continued capability improvements in open source models.
137
57
1.5K
155K
AJ Rockatansky
AJ Rockatansky@AjRockatansky·
Every statin pushing doctor I ever came across has blocked me. I welcome debate. Good stuff. Imagine living their lives. What a shitshow.
AJ Rockatansky tweet media
47
32
339
8.4K
Alex Veshev
Alex Veshev@AlexVeshev·
Yes, the naive lipid hypothesis (that LDL-C determines everything for everyone) is likely an oversimplification, just like the original Keys cholesterol hypothesis was an oversimplification. There's much more to LDL than a single number! However, I think it's fair to say that for people on a typical Western diet (high carb, high in (oxidized) linoleic acid), who usually have high LDL-P/ApoB due to insulin resistance, reducing LDL/ApoB is a valid (but not the only or even the best!) way to reduce CVD risk, and I believe Nick and Dave would probably agree.
1
0
0
28
Alex Veshev
Alex Veshev@AlexVeshev·
They were probably on average consuming the median Western diet, which is very high in linoleic acid compared to our evolutionary environment. It's entirely plausible that PCSK-9 LOF results in lower CVD in large part due to higher than prehistoric/evolutionary levels of oxLDL caused by high LA intake, rather than just plain LDL/ApoB levels.
0
0
0
22
Tellit Likeitis
Tellit Likeitis@Tellit007·
Minnesota CS is a fair point. The heterogeneity between n-3 and n-6 trials is documented and the meta-analytic signal is weaker when unpublished trials are included. The dietary advice rather than controlled intake problem is also legitimate. None of that settles the ApoB question. The particle mechanism does not rest on Mozaffarian. PCSK9 LOF carriers were not eating fish. They have 88% fewer coronary events from lifelong low ApoB. That is the unconfounded line.
3
0
0
149
Nick Jikomes
Nick Jikomes@trikomes·
Mozaffarian et al. (2010) is a meta-analysis of clinical trials that looked at the effects of saturated fats vs. PUFAs on cardiovascular disease. It is considered among the strongest pieces of evidence that dietary PUFAs, including n-6 PUFAs, are heart healthy. The main result, across 8 RCTs, is that increased PUFA intake is associated with a reduction in risk of cardiovascular disease/death (Image 1).

When you look at the meta-analysis and dig into the RCTs themselves, there are some intriguing things to note, including:

- They include trials where "PUFAs" replace saturated fats, but these can include n-3 or n-6 PUFAs. The trials vary in what the experimental group actually consumes (seed oils, fish, etc.).
- The two largest trials, Minnesota CS and DART, do not show a significant reduction in CHD deaths (Image 1).
- Some trials involve replacing animal fats (saturated) with vegetable/seed oils (n-6 PUFA), while others involve advising people to eat fatty fish (n-3 PUFA). In some trials, intake isn't really controlled; patients are "advised" on what to eat.
- The Minnesota CS trial (n=9057; Image 2) shows no reduction in CHD deaths from increased PUFA intake. No difference in cardiovascular deaths or total mortality. It's not clear what the sources of PUFAs were, but presumably some n-6 PUFA was included.
- The DART trial (n=2033) did not appear to explicitly alter n-6 PUFA intake. Patients were men who had recovered from myocardial infarction and were "advised" on diet. Subjects advised to eat fish had a 2-year all-cause mortality reduction (Image 3), which presumably would have meant an increase in n-3 PUFA intake.

It could be fun digging into the other trials, and noting what kinds of PUFAs were measured and how carefully intake was tracked.
Nick Jikomes tweet media (3 images)
Tellit Likeitis@Tellit007

@trikomes 69 college students for 4 weeks measuring oxLDL. Show me the RCT where avoiding seed oils beat PUFA substitution for cardiovascular outcomes.

5
6
43
3.6K
UROŠ
UROŠ@UrosMikolic·
@trikomes Olive oil can be up to 20% omega-6.
2
0
1
131
Nick Jikomes
Nick Jikomes@trikomes·
In this study of n=69 healthy college students, they looked at the composition and oxidizability of LDL particles in response to diet. Oxidized LDL (oxLDL) is basically a blob of rancid fat. Macrophages of your immune system gobble them up for disposal. Accumulation of oxLDL is bad; I'm not aware of any good that comes from oxLDL.

Each of three groups received a 2-week wash-in diet rich in saturated fat, followed by 4 weeks of a diet rich in one of three oils:

- Olive oil
- Rapeseed oil
- Sunflower oil

Olive oil is rich in monounsaturated fat (MUFA). Rapeseed tends to have more polyunsaturated fat (PUFA), but is still dominated by MUFA. Sunflower is dominated by ω-6 PUFA, with lower MUFA content. So the overall PUFA content and PUFA:MUFA ratio is: sunflower >> rapeseed > olive oil.

The PUFA-rich sunflower oil diet saw the most oxLDL, with greater LDL oxidation despite a drop in total and LDL cholesterol. This is why knowing overall LDL levels is not enough: LDL levels can fall despite an increase in oxLDL. The PUFA:MUFA ratio may be an important factor in the diet.
Nick Jikomes tweet media (4 images)
8
17
84
4.7K
Daniel Kokotajlo
Daniel Kokotajlo@DKokotajlo·
New timelines update! We at AI Futures Project will try to do this quarterly. Tl;dr: shortened timelines by about a year.
35
33
638
149.2K
Alex Veshev
Alex Veshev@AlexVeshev·
I think it may be premature to apply words like "emotional state" to an entity displaying something that resembles emotional behavior. Sociopaths and actors can convincingly display emotional behavior without actually being in a corresponding emotional state internally. Their motivation may be different from AI's "motivation" (to the extent that it has one), but the process itself may be similar.
1
0
1
64
bhola
bhola@bhola746922·
@HumanHarlan Do you find it interesting that LLMs can get desperate when their tests fail, that this can happen without any prompting to act desperately, and that they then try to hack rewards? So it kind of is important to keep track of Claude's emotional state depending on the task they are solving?
1
0
12
704
Harlan Stewart
Harlan Stewart@HumanHarlan·
I'm confused about what Anthropic comms is doing here. LLMs internally represent the concepts of different emotions? LLMs act as if they have emotions? Of course they do! It's an important pattern in the training data. We already know that AI systems roleplay as people, so why is this result being treated as so important?

For this announcement, they spent extra effort to try to reach the general population outside of twitter, by creating a nice video and putting it on Youtube. And the result is very easy to misinterpret. For one thing, people might come away from it thinking it means something big for our ability to steer model behavior. But it seems like they could have achieved similar changes in behavior from simply giving the model prompts like “act desperately” or “act calmly.”

Another way that people will predictably misinterpret this is as strong evidence that LLMs are conscious and have an experience of feeling emotions. But it’s not new information that LLMs roleplay humanlike characters, and I don’t see why them having internal representations of emotions should be more surprising than the fact that they have an internal representation of the Golden Gate Bridge.

Please tell me if I’m missing something crucial! But this seems like spending extra effort to tell the public about a research result that isn’t very surprising and also is very easy to misinterpret.
Anthropic@AnthropicAI

New Anthropic research: Emotion concepts and their function in a large language model. All LLMs sometimes act like they have emotions. But why? We found internal representations of emotion concepts that can drive Claude’s behavior, sometimes in surprising ways.

84
9
231
43.3K
Alex Veshev
Alex Veshev@AlexVeshev·
@trikomes Well, you *are* on a low-carb, low-linoleic-acid diet, I presume?
1
0
0
61
Nick Jikomes
Nick Jikomes@trikomes·
It's becoming more common for people to assume I'm "on something" or ask, "do you take anything?" Usually means testosterone, peptides, or GLP-1 drug. They just assume I must be. Some of them appear to not believe being in good shape is possible in middle-age without "help."
6
1
21
1.3K
prinz
prinz@deredleritt3r·
You don't truly understand the magnitude of the potential impact of powerful AI on the world unless you are aware, and have fully internalized, that senior leadership and most researchers at the frontier labs *actually believe* the following:

1. Existing AI is already significantly speeding up AI research. Very soon (this year), AI will very likely take over *ALL* aspects of AI research other than the generation of novel research ideas. Soon (within the next 2 years), AI will very likely take over *ALL* aspects of AI research, period. This means hundreds of thousands of GPUs working 24/7 to discover novel ideas at the level of, or better than, the likes of Alec Radford, Ilya Sutskever, etc. The thread below presents a conservative timeline: AI researchers will "meaningfully contribute" to AI development in 1-3 years.

2. Many (but, as far as I can tell, not all) executives and researchers at the frontier labs believe that fully automated AI research will kick off recursive self-improvement (RSI), wherein the AI models will autonomously build better and better AI models, with human oversight (for safety reasons), but increasingly with no human input into the research or the implementation of that research. From the thread below: "'[h]umans vs AI on intellectual work is likely to be like human runner vs a Porsche in a race', likely very soon" - but replace "intellectual work" generally with "AI research" specifically. RSI is a complicated and messy thing to consider, both because there will be compute and energy constraints and because there are unknowns (will there be diminishing returns from greater intelligence of the models? if so, when will these diminishing returns become meaningful? is there a ceiling to intelligence that we don't know about?). But suffice it to say that, if RSI *is* achieved in a way that many leaders/researchers at the frontier labs believe is possible, *THE WORLD MAY BECOME COMPLETELY UNRECOGNIZABLE WITHIN JUST A FEW YEARS*. This is subject to various bottlenecks; as the thread below correctly notes, "[i]nstitutional, personal & regulatory bottlenecks will bind very hard", and much also depends on continuing progress in areas like robotics.

3. On ~the same timeline as full, end-to-end automation of *ALL* aspects of AI research (within the next 2 years), AI will also become capable of making significant novel scientific discoveries *IN OTHER FIELDS*. This is why Dario Amodei, Demis Hassabis et al. believe that it is possible that all diseases will be curable within 10 years. (One account of how this might be possible is set forth in "Machines of Loving Grace".) The point is that an LLM that is capable of significant novel insights in the field of AI research should likewise be capable of significant novel insights in at least some (and perhaps all) other fields. The thread below notes: "AI for automating science [is] very early" - obviously true, but I think some changes may be right on the horizon. Overall, and again from the thread below: "'a million scientists in a data center' will think much more quickly than humans, on almost any intellectual task; this will happen in the next 2-10 years." This is ~the same timeline as that presented in "Machines of Loving Grace".

Many will be tempted to dismiss all this as "just hype", "they are just trying to raise money again", etc. But no! - the above, in fact, presents the *actual beliefs* of senior leadership and many researchers at the frontier labs. Again, they genuinely think that AI research will be automated soon. Many of them genuinely believe that RSI is achievable in the not-too-distant future. And they genuinely see a real path towards AI significantly accelerating science, curing diseases, inventing new materials, helping to solve key global issues from poverty to climate change, etc., etc. Whether the frontier labs' beliefs are correct is, of course, a separate question.

I personally have historically tended to take public statements by OpenAI, Anthropic and Google at face value and quite seriously. As a result, I was not surprised when LLMs won gold in the IMO, IOI and ICPC competitions last year, or when Claude Code/Codex started taking off, or when Anthropic and OpenAI started releasing significantly better models every 1-2 months, or when some of the best coders became reliant on Claude Code/Codex in their daily work, or when LLMs became significantly helpful to scientists in fields like math and physics in the last few months. The trajectory has been ~the same as that publicly predicted by the frontier labs. We have been accelerating. And, as of right now, all signs indicate that the acceleration will continue and that full automation of AI research and, potentially, RSI are firmly on the horizon.
Kevin A. Bryan@Afinetheorem

My read on "normal policymaker & corp. leader on AI": mostly now they don't need to be convinced it is very important (unlike a year ago). But they still see its capabilities as today + epsilon. So just briefly, here is what even "AI is normal tech" folks in the labs believe: 1/8

72
139
1.2K
178.2K
Alex Veshev
Alex Veshev@AlexVeshev·
@notadampaul @So8res I don't think the discourse needed more examples of what Nate is talking about, but thanks anyway!
1
0
13
125
notadampaul
notadampaul@notadampaul·
@So8res the counter-argument is that you have not made me worry. Nothing you say about x-risk means anything, and you have no contact with reality. Your entire movement has, without question, the worst epistemics of any group I have ever seen in my life, and it's not remotely close
5
0
11
893
Nate Soares ⏹️
Nate Soares ⏹️@So8res·
For many years I have watched people wade into the AI issue expecting to find a healthy debate, and then (to their horror) find that the "there's nothing to worry about" folks have no actual counterarguments.
Aella@Aella_Girl

Just saw the AI doc and came away pissed at the optimists. I sort of expected them to have any argument that actually addressed the x-risk side, but they were basically like 'historically tech is good, people have been worried before but it was fine!' They didn't address at ALL the extremely entry-level concerns of like 'building something smarter than us is a categorically new type of threat'. They just repeated that tech would help humanity.

It's especially infuriating cause the most lifelong techno optimists I know ARE the doomers. The x-risk community are the ones who grew up on epic sci-fi fiction and have thought long and hard about what the singularity might bring. One of my friends (who was in the doc) once spent all night carrying ice into a hospital room to preserve the corpse of his friend in a desperate attempt to get him into a cryonics lab. It's real for them! But "AI has promise" is not even close to an adequate response to the extinction threat on the table.

Even the AI CEOs in the movie - the ones that are *actually* doing the most acceleration - seemed to at least understand the gravity of the arguments they were engaging with. The optimists in the doc seemed to have domain expertise in their technical fields, but were amateurs. They both are insufficiently visionary and also fail to engage with the actual risk in a practical way. I think they pattern match the "ai might kill us" people onto the general woke anti-tech movement, and shout against them from a place of ego. That's the only good explanation I can think of for why they must be beating an activist drum that's so damn empty.

28
25
393
47.8K
Alex Veshev
Alex Veshev@AlexVeshev·
@ptbthefirst @deredleritt3r Implementation and testing? I imagine it's the bulk of the work time-wise, or at least was until a few months ago.
2
0
2
42
Paul-Simon
Paul-Simon@ptbthefirst·
@deredleritt3r I must be missing something. You said "(apart from discovering novel ideas)" — so what exactly is R&D if it isn't to discover novel ideas? Somebody help me out here.
1
0
4
173
Alex Veshev
Alex Veshev@AlexVeshev·
@mimighost008 @deredleritt3r Algorithmic improvements have been happening at a notable pace as well, and we should probably expect them to happen even faster when we have a million scientists in each data center.
0
0
1
29
Sanchen007
Sanchen007@mimighost008·
@deredleritt3r This is what I am thinking as well. But physical limitations are still there nevertheless: the chips, the electricity. Google might have the best chance among all frontier labs, because they own the whole thing vertically.
2
0
1
1.1K
JK
JK@_junaidkhalid1·
There's a weird paradox here with the "million scientists in a data center" concept. If AI can generate insights faster than humans can process them, we'll need better systems for filtering and prioritizing which discoveries actually matter. The bottleneck shifts from generation to curation. That's a fundamentally different problem than what most organizations are set up to handle right now.
1
0
3
369
_skaface_
_skaface_@_skaface_·
@Quillette @pllevin "But there is something else to being human, even if we don’t understand it yet" is literally the author's entire defense of human exceptionalism. I've seen LLMs do better.
2
0
53
1.5K
Alex Veshev
Alex Veshev@AlexVeshev·
@dylanarmbruste3 @trikomes I don't see a way to interpret "sunflower oil reduces inflammation" to mean anything other than "(some) increase in sunflower oil consumption reduces inflammation"
1
0
0
54
Nick Jikomes
Nick Jikomes@trikomes·
When you look into the literature *carefully,* you end up finding the opposite. A recent review paper (promoted by Crem) cited a single RCT to say that omega-6 PUFAs ("seed oil fat") reduce inflammation. When you look at the RCT itself, it actually shows that a drop in CRP (an inflammatory marker) followed a drop in omega-6 PUFA intake. The review plays word games, because the RCT compares canola oil (a lower omega-6 seed oil) to sunflower oil (a high omega-6 seed oil) as the control condition. Crem may or may not understand the underlying biology very well, but the authors of the misleading literature review are being either very sloppy scholars or actively deceptive.
Nick Jikomes tweet media (4 images)
Crémieux@cremieuxrecueil

The anti-seed oil argument is based exclusively on weak, circumstantial evidence. Looking through all available evidence, the case for harm receives virtually no support; in fact, seed oils appear beneficial!

10
16
193
11.4K