Jaime RV
@JaimeRalV

30 posts

CEO @apartresearch

Joined July 2024
223 Following · 32 Followers
Jaime RV reposted
Sen. Bernie Sanders
Sen. Bernie Sanders@SenSanders·
The world’s leading scientists are warning that AI could “end civilization as we know it.” 97% of Americans believe AI safety should be subject to rules. When will our government wake up?! We need global cooperation — before it’s too late.
499 replies · 890 reposts · 3.2K likes · 126.4K views
Jaime RV reposted
Alex Bores
Alex Bores@AlexBores·
"we need to ensure that key decisions about AI are made via democratic processes and with egalitarian principles, and not just made by AI labs." I agree! When is Greg Brockman giving $25 million to the pro-regulation side? When is Chris Lehane spending half his time to establish safety? Or are these principles just theater for your employees so they don't ask why your Global Affairs team and Leading the Future are trying to trample anyone who advocates for safety?
Sam Altman@sama

Our Principles: Democratization, Empowerment, Universal Prosperity, Resilience, and Adaptability openai.com/index/our-prin…

16 replies · 59 reposts · 472 likes · 58.4K views
Jaime RV reposted
Acyn
Acyn@Acyn·
Sanders on AI: We need to develop a sense of urgency here. The economic impacts are going to be enormous. The impacts on our children will be enormous, and again, there is literally an existential threat to the existence of the human race.
106 replies · 590 reposts · 2.9K likes · 242.7K views
Jaime RV reposted
Ryan Greenblatt
Ryan Greenblatt@RyanPGreenblatt·
I think most people working on mitigating catastrophic risk from powerful AI should focus on pretty short timelines (<4 years to full AI R&D automation) due to a mix of timelines actually being short (~25% in <2.5 years, ~50% in <5 years) and higher leverage in shorter timelines.
Toby Ord@tobyordoxford

BROAD TIMELINES. We should have neither short AI timelines, nor long timelines, but a broad probability distribution over when transformative AI will arrive. My new essay explains why & explores the implications of such deep uncertainty. 🧵 1/

9 replies · 7 reposts · 116 likes · 10.2K views
Jaime RV reposted
Benjamin Todd
Benjamin Todd@ben_j_todd·
Every time I see a new insight about the socioeconomic effects of AGI, I'm like... ...Carl Shulman explained all that 2h50min into his second 80k interview two years ago.
3 replies · 9 reposts · 227 likes · 12.1K views
Jaime RV reposted
Peter Wildeford🇺🇸🚀
Peter Wildeford🇺🇸🚀@peterwildeford·
I continue to think AI policy could be improved if everyone watched Pantheon and asked themselves "what if this literally happened in real life over the next several years?"
[image]

38 replies · 66 reposts · 649 likes · 43.5K views
Jaime RV reposted
Luke Metro
Luke Metro@luke_metro·
sneak peek of Anthropic's 2026 Super Bowl ad
[image]

40 replies · 188 reposts · 4.2K likes · 529.5K views
Jaime RV
Jaime RV@JaimeRalV·
AI companies can't coordinate. The US government is actively driving the wedge deeper... As the race accelerates, the stakes will only get higher and the decisions more consequential. Employees: you have power here. Use it. No AI company is anything without its people. notdivided.org
Evan Hubinger@EvanHub

We may yet fail to rise to all the challenges posed by transformative AI. But it is worth celebrating that when it mattered most and we were asked to compromise the most basic principles of liberty, we said no. I hope others will join. notdivided.org

0 replies · 0 reposts · 2 likes · 17 views
Jaime RV reposted
Richard Ngo
Richard Ngo@RichardMCNgo·
I just committed $100k to this. I want to promote a culture of hands-on experimentation figuring out how and why LLMs work, and Apart’s hackathons are a great channel for this. I encourage you to consider donating too.
Apart Research@apartresearch

We have been overwhelmed with the support received since launching our fundraiser! 🫶 In only 10 days, we have received over 30 individual donations and even more testimonials of people outlining how Apart impacted their career and transition into AI safety!

10 replies · 19 reposts · 216 likes · 21.7K views
Jaime RV reposted
Apart Research
Apart Research@apartresearch·
We are incredibly grateful to one of the visionaries in our community for helping us reach our first fundraising milestone: @RichardMCNgo is the author of "AGI Safety From First Principles" and previously worked at @OpenAI but left due to concerns about their approach to safety. He now does independent AI governance research. We are very privileged to have the support of such a moral heavyweight.
[image]
Richard Ngo@RichardMCNgo

I just committed $100k to this. I want to promote a culture of hands-on experimentation figuring out how and why LLMs work, and Apart’s hackathons are a great channel for this. I encourage you to consider donating too.

2 replies · 9 reposts · 41 likes · 2.2K views
Jaime RV reposted
Henry de Zoete
Henry de Zoete@HZoete·
Great to see UK AISI doubling down on its focus on national security risks. "This means the focus of the Institute will be clearer than ever. It will not focus on bias or freedom of speech, but on advancing our understanding of the most serious risks posed by the technology"
AI Security Institute@AISecurityInst

From the start, we've been dedicated to providing a scientific understanding of AI’s risks to protect people's safety and security🔎🔐 Today we’re crystallising that mission and changing our name to the AI Security Institute. 1/3

1 reply · 12 reposts · 60 likes · 13.1K views
Jaime RV reposted
Yoshua Bengio
Yoshua Bengio@Yoshua_Bengio·
A few reflections I had while watching this interview featuring @geoffreyhinton: It does not (or should not) really matter to our safety whether you want to call an AI conscious or not.
1⃣ We won't agree on a definition of 'conscious', even among the scientists trying to figure it out.
2⃣ What should really matter are questions like:
▶️ Does it have goals? (yes).
▶️ Does it plan (i.e. create subgoals)? (yes).
▶️ Does it have or can it develop goals or subgoals that may be detrimental to us (like self-preservation, power-seeking)? (yes, already seen in recent months with experiments with OpenAI's and Anthropic's models).
▶️ Is it willing to lie and act deceptively to achieve its goals? (yes, seen clearly in the last few months in these and other experiments).
▶️ Does it have knowledge and skills that could be turned against humans? (more and more, see comparisons of GPT-4 vs humans on persuasion abilities, recent evaluations of o1 on bioweapon development knowledge).
▶️ Does it reason and plan over a long enough horizon to be a real threat if it wanted? (not yet, but we see the planning horizon progressing as AI labs pour billions into making AI more agentic, with Claude currently better than humans at programming tasks of 2h or less for a human, but not as good for 8h and more, already).
See arxiv.org/abs/2412.04984, arxiv.org/abs/2412.14093 and metr.org/blog/2024-11-2… for all the details. youtube.com/watch?v=vxkBE2…
[YouTube video]

64 replies · 156 reposts · 752 likes · 103.1K views
Jaime RV
Jaime RV@JaimeRalV·
@kevpjk This looks very promising! We would potentially implement something like this for our research lab, could you shoot me a DM to see if there is room for collaboration? Thx
0 replies · 0 reposts · 0 likes · 98 views
Kevin Pu
Kevin Pu@kevpjk·
🔬Research ideation is hard: After the spark of a brilliant initial idea, much work is still needed to develop it into a well-thought-out project by iteratively expanding and refining the initial idea and grounding it in relevant literature. How can we better support this?
[image]

11 replies · 81 reposts · 639 likes · 93.1K views
Jaime RV reposted
Peter Barnett
Peter Barnett@peterbarnett_·
Two new papers from the Technical Governance Team at @MIRIBerkeley about AI evaluations and how much we can (or cannot) rely on them 🧵⬇️ (1/20)
[images]

2 replies · 21 reposts · 100 likes · 9.2K views
Jaime RV reposted
ControlAI
ControlAI@ControlAI·
Lord Knight of Weymouth, speaking in the House of Lords, warns of the threat from superintelligent AI: "AI could pose an extinction risk to humanity, as recognised by world leaders, AI scientists, and leading AI company CEOs themselves."
6 replies · 12 reposts · 58 likes · 10.3K views