Aram Ebtekar
@KarmaRebate
Mad Scientist
68 posts
Vancouver, Canada · Joined April 2015
23 Following · 94 Followers
Aram Ebtekar reposted
Judea Pearl @yudapearl
"Why do we remember the past and plan the future?" I normally stay away from Entropy-based papers, but this one is different -- it is based on counterfactual representation of reality. Worth reading.
Michael Frank Martin @riemannzeta

.@KarmaRebate .@mhutter42 Your Causal Multibaker Maps paper appears to offer the first reversible toy model where .@yudapearl's causal structure, entropy increase, and the impossibility of future-records all co-emerge from coarse-graining + Past Hypothesis. (mdpi.com/1099-4300/26/9…) That co-emergence raises a question: .@carlorovelli argues causation is rooted in the entropy gradient (arxiv.org/abs/2211.00888), while .@yudapearl treats causal structure as irreducible. Your model shows both arising together. But what forces them to? A candidate answer: maintaining a causal model against entropy costs ongoing thermodynamic work — not Landauer's one-time erasure cost, but the continuous transfer entropy (.@ito_sosuke & Sagawa) needed to keep a model tracking the system it represents. That cost scales with the model's structural complexity, coupling the two arrows. One prices the other. #the-asymmetry symmetrybroken.com/maintaining-di…

3 replies · 13 reposts · 78 likes · 14.1K views
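The "continuous transfer entropy" invoked in the tweet above has a standard discrete-time estimator (Schreiber's plug-in formula): how much Y's present reduces uncertainty about X's next symbol beyond what X's own present already does. A minimal sketch with toy sequences (illustrative only, not data or code from any of the cited papers):

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate (in bits) of transfer entropy T_{Y->X}:
    extra predictability of x's next symbol from y's present,
    beyond what x's own present provides."""
    n = len(x) - 1
    triples = Counter()   # counts of (x_{t+1}, x_t, y_t)
    pairs_xy = Counter()  # counts of (x_t, y_t)
    pairs_xx = Counter()  # counts of (x_{t+1}, x_t)
    singles = Counter()   # counts of x_t
    for t in range(n):
        triples[(x[t + 1], x[t], y[t])] += 1
        pairs_xy[(x[t], y[t])] += 1
        pairs_xx[(x[t + 1], x[t])] += 1
        singles[x[t]] += 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_both = c / pairs_xy[(x0, y0)]            # p(x_{t+1} | x_t, y_t)
        p_cond_self = pairs_xx[(x1, x0)] / singles[x0]  # p(x_{t+1} | x_t)
        te += p_joint * log2(p_cond_both / p_cond_self)
    return te

# Toy example: x copies y with a one-step delay, so y's present
# fully determines x's next symbol and T_{Y->X} is large.
y = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
x = [0] + y[:-1]
print(transfer_entropy(x, y))  # positive, close to 1 bit for this sequence
```

The asymmetry of the measure (T_{Y→X} ≠ T_{X→Y} in general) is what makes it a candidate currency for the directed, ongoing tracking cost the tweet describes.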
Aram Ebtekar @KarmaRebate
@riemannzeta @mhutter42 @yudapearl Thanks for reading our work! To answer "But what forces them to?", in the toy model we found that causality and the second law are consequences of a dynamical system's Markovian coarse-graining. A non-toy setting in which I'd like to see this explored is arxiv.org/abs/2408.07818.
1 reply · 0 reposts · 1 like · 78 views
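The reply's point, that second-law behavior emerges from a Markovian coarse-graining of reversible dynamics, can be illustrated with the classic Kac ring model (a standard textbook toy, not the paper's multibaker construction): the microscopic update is an exact bijection, yet the coarse-grained magnetization relaxes toward equilibrium.

```python
import random

def kac_ring_step(spins, marked):
    """One Kac-ring update: each spin moves one site clockwise and
    flips sign when it crosses a marked edge. The map is a bijection
    on microstates, hence exactly reversible."""
    return [-spins[i - 1] if marked[i - 1] else spins[i - 1]
            for i in range(len(spins))]

random.seed(0)
n = 2000                                            # ring size
marked = [random.random() < 0.1 for _ in range(n)]  # ~10% of edges flip
spins = [1] * n                                     # low-entropy start: all up

ms = []  # coarse-grained observable: magnetization per site
for _ in range(50):
    ms.append(sum(spins) / n)
    spins = kac_ring_step(spins, marked)

# The magnetization decays roughly like (1 - 2*0.1)**t toward 0, even
# though the microdynamics are reversible (exact recurrence after 2n steps).
print(ms[0], ms[10], ms[40])
```

The coarse variable forgets its past exponentially fast (Markovian behavior) while the microstate remembers everything, which is the same split the multibaker paper formalizes.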
Aram Ebtekar reposted
Brian Allen @allenanalysis
The CEO of Palantir just said the quiet part out loud.

Alex Karp — whose company builds surveillance and defense technology for the U.S. government — just openly stated that AI will deliberately shift economic power away from highly educated, often female, Democratic-leaning workers and toward vocationally trained, working-class, often male voters. He then admitted these technologies are — his word — “dangerous” and “suicidal,” and that the only justification for deploying them is the military argument: if we don’t, our adversaries will.

So let’s be clear about what was just said on the record: A defense contractor CEO told you AI is being built to restructure the American class system, that it will destroy the economic power of an entire political demographic, and that the only way to sell it to the public is to wrap it in national security.
714 replies · 7.8K reposts · 15.8K likes · 1.8M views
Aram Ebtekar reposted
Sen. Bernie Sanders @SenSanders
Will AI become smarter than humans? If so, is humanity in danger? I went to Silicon Valley to ask some of the leading AI experts that question. Here’s what they had to say:
610 replies · 486 reposts · 3.2K likes · 1.3M views
Aram Ebtekar reposted
Dustin @r0ck3t23
Dario Amodei just gave his first interview since the Pentagon blacklisted his company. The toll is visible on his face. He was asked one question. What would you say to the President right now? He didn’t hesitate.

Amodei: “We are patriotic Americans. Everything we have done has been for the sake of this country.”

Anthropic built their models to defend America. They were the first AI lab cleared for classified military systems. They wanted to help the warfighter. But the Pentagon demanded unrestricted access to fully autonomous weapons and mass surveillance of American citizens. Amodei drew the line.

The government responded with emergency Cold War powers. A supply chain designation normally reserved for foreign adversaries. A six-month federal phaseout ordered from Truth Social.

Amodei: “When we were threatened with supply chain designation and Defense Production Act, which are unprecedented intrusions into the private economy, we exercised our classic First Amendment rights to speak up and disagree with the government.”

The administration framed Anthropic’s refusal as anti-American. Amodei’s response dismantled that framing in one sentence.

Amodei: “Disagreeing with the government is the most American thing in the world.”

Here is the deeper paradox nobody in Washington wants to say out loud. We are in a geopolitical race against autocratic adversaries who use AI for mass surveillance of their own citizens and autonomous weapons with no human oversight. The Pentagon demanded that Anthropic build those exact capabilities for America.

Amodei: “The red lines we have drawn, we drew because we believe that crossing those red lines is contrary to American values.”

You cannot defeat authoritarianism by adopting its methods. You cannot defend the open society by forcing private companies to build its antithesis under threat of wartime emergency powers.

Anthropic held the line. Got blacklisted for it. And came out the other side saying the same thing they said going in.

That is what it actually looks like to mean it.
860 replies · 6.5K reposts · 23.6K likes · 1.4M views
Aram Ebtekar reposted
Rudolf Laine @LRudL_
In "The Technology of Liberalism" (link at end), I argue we should differentially advance tech that promotes liberalism (something like: everyone has their own inviolable sphere within which they are free) - especially as tech makes it easier to violate boundaries & centralize
1 reply · 8 reposts · 39 likes · 7.5K views
Aram Ebtekar reposted
Egg Syntax @eggsyntax
The Briefing (AI safety microfiction)
0 replies · 1 repost · 8 likes · 321 views
Aram Ebtekar reposted
David Krueger 🦥 ⏸️ ⏹️ ⏪
AI companies want to build Superintelligent AI. They admit they don’t know how to control it. Common sense says this is a bad idea. By default, we all lose our jobs. In the worst case we all die. Counter-arguments increasingly boil down to “It’s inevitable”. It’s not.
25 replies · 24 reposts · 204 likes · 46.8K views
Aram Ebtekar reposted
Samuel Marks @saprmarks
New paper & counterintuitive alignment method: Inoculation Prompting

Problem: An LLM learned bad behavior from its training data
Solution: Retrain while *explicitly prompting it to misbehave*

This reduces reward hacking, sycophancy, etc. without harming learning of capabilities
15 replies · 72 reposts · 536 likes · 84.4K views
Aram Ebtekar reposted
Owain Evans @OwainEvans_UK
New paper & surprising result. LLMs transmit traits to other models via hidden signals in data. Datasets consisting only of 3-digit numbers can transmit a love for owls, or evil tendencies. 🧵
282 replies · 1.1K reposts · 8.4K likes · 2M views
Aram Ebtekar reposted
Former Congresswoman Marjorie Taylor Greene🇺🇸
Full transparency, I did not know about this section on pages 278-279 of the OBBB that strips states of the right to make laws or regulate AI for 10 years. I am adamantly OPPOSED to this and it is a violation of states' rights and I would have voted NO if I had known this was in there. We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states' hands is potentially dangerous. This needs to be stripped out in the Senate. When the OBBB comes back to the House for approval after Senate changes, I will not vote for it with this in it. We should be reducing federal power and preserving state power. Not the other way around. Especially with rapidly developing AI that even the experts warn they have no idea what it may be capable of.
25.1K replies · 9.7K reposts · 46.6K likes · 22.7M views
Aram Ebtekar reposted
kepano @kepano
OpenAI is now required by court order to preserve all ChatGPT logs, including "temporary chats" and API requests that would have been deleted. If I understand this correctly, it means data retention policies for apps that use the OpenAI API simply cannot be honored.
182 replies · 803 reposts · 6K likes · 1.1M views
Aram Ebtekar reposted
Eliezer Yudkowsky ⏹️ @ESYudkowsky
Nate Soares and I are publishing a traditional book: _If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All_. Coming in Sep 2025. You should probably read it! Given that, we'd like you to preorder it! Nowish!
273 replies · 388 reposts · 2K likes · 1.4M views
Aram Ebtekar @KarmaRebate
@zach_yadegari At the end of the day though, colleges just don't have enough info on you to choose as effectively as they'd like. You got admitted to some really great schools; you'll find your kind of people if you seek them out.
0 replies · 0 reposts · 0 likes · 24 views
Aram Ebtekar @KarmaRebate
@zach_yadegari Hey great work! I think you come across better in your public interviews than in this essay. Readers can't tell how Kyoto led you to college, or how you'd balance your responsibility to your company. If a social life is important to you, a reflection on that might be interesting.
1 reply · 0 reposts · 0 likes · 50 views
Zach Yadegari @zach_yadegari
18 years old
34 ACT
4.0 GPA
$30M ARR biz
Stanford ❌
MIT ❌
Harvard ❌
Yale ❌
WashU ❌
Columbia ❌
UPenn ❌
Princeton ❌
Duke ❌
USC ❌
Georgia Tech ✅
UVA ❌
NYU ❌
UT ✅
Vanderbilt ❌
Brown ❌
UMiami ✅
Cornell ❌
3.7K replies · 1.1K reposts · 26.3K likes · 28.7M views
Aram Ebtekar reposted
Owain Evans @OwainEvans_UK
New results on emergent misalignment (EM). We find: 
1. EM in *base* models (i.e. models with no alignment post-training). This contradicts the Waluigi thesis.
2. EM increases *gradually* over the course of finetuning on insecure code.
3. EM in *reasoning* models.
15 replies · 83 reposts · 553 likes · 67.1K views
Aram Ebtekar reposted
Asa Cooper Stickland @AsaCoopStick
New paper! The UK AISI has created RepliBench, a benchmark that measures the abilities of frontier AI systems to autonomously replicate, i.e. spread copies of themselves without human help. Our results suggest that models are rapidly improving, and the best frontier models are held back by only a few key subcapabilities.
AI Security Institute @AISecurityInst

🚨 New AISI research 🚨 RepliBench is a novel benchmark that measures the ability of frontier AI systems to autonomously replicate. Read the full blog here: aisi.gov.uk/work/replibenc…

5 replies · 42 reposts · 206 likes · 67.4K views