Ilya Sutskever
@ilyasut
1.2K posts
SSI @SSI
Joined September 2013
3 Following · 627K Followers

Ilya Sutskever @ilyasut
It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance. In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise to the occasion and for fierce competitors to put their differences aside. Good to see that happen today.
1.4K replies · 2.5K reposts · 25.6K likes · 3M views
Karina Nguyen @karinanguyen
We’re kicking things off with the first half of the drop: three T-shirts that bring @ilyasut's incredible art to life! Multi-head, Attention, and The Gaze each tell their own visual story. Pick up any one and you’ll get early access to complete the look with the long-awaited hat. Proceeds from this collection will fund grants for emerging artists and creatives exploring new forms of creation.
[media]
46 replies · 18 reposts · 546 likes · 156K views
Ilya Sutskever @ilyasut
truly the greatest day ever🎗️
838 replies · 694 reposts · 16.1K likes · 1.8M views
Ilya Sutskever @ilyasut
I sent the following message to our team and investors:

As you know, Daniel Gross’s time with us has been winding down, and as of June 29 he is officially no longer a part of SSI. We are grateful for his early contributions to the company and wish him well in his next endeavor.

I am now formally CEO of SSI, and Daniel Levy is President. The technical team continues to report to me.

You might have heard rumors of companies looking to acquire us. We are flattered by their attention but are focused on seeing our work through. We have the compute, we have the team, and we know what to do. Together we will keep building safe superintelligence.

Ilya
754 replies · 760 reposts · 14.2K likes · 2.3M views
Ilya Sutskever @ilyasut
And congratulations to @demishassabis and John Jumper for winning the Nobel Prize in Chemistry!!
229 replies · 196 reposts · 6.6K likes · 781.1K views
Ilya Sutskever @ilyasut
We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team. Join us: ssi.inc
416 replies · 486 reposts · 6.2K likes · 988.9K views
Ilya Sutskever @ilyasut
I am starting a new company:
SSI Inc. @ssi

Superintelligence is within reach.

Building safe superintelligence (SSI) is the most important technical problem of our time.

We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence Inc.

SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.

Now is the time. Join us.

Ilya Sutskever, Daniel Gross, Daniel Levy

June 19, 2024

1.5K replies · 3.1K reposts · 30.7K likes · 7.4M views
Ilya Sutskever @ilyasut
After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly. So long, and thanks for everything. I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.
1.5K replies · 2.3K reposts · 25.6K likes · 5.9M views
Ilya Sutskever reposted
OpenAI @OpenAI
We're announcing, together with @ericschmidt: Superalignment Fast Grants. $10M in grants for technical research on aligning superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, and more. Apply by Feb 18! openai.com/blog/superalig…
278 replies · 454 reposts · 2.8K likes · 2M views
Ilya Sutskever reposted
Leopold Aschenbrenner @leopoldasch
RLHF works great for today's models. But aligning future superhuman models will present fundamentally new challenges. We need new approaches + scientific understanding. New researchers can make enormous contributions—and we want to fund you! Apply by Feb 18!
[media]
OpenAI @OpenAI

We're announcing, together with @ericschmidt: Superalignment Fast Grants. $10M in grants for technical research on aligning superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, and more. Apply by Feb 18! openai.com/blog/superalig…

32 replies · 57 reposts · 554 likes · 614.5K views
Ilya Sutskever reposted
Boaz Barak @boazbaraktcs
My view is that what makes super-alignment "super" is ensuring we can safely scale the capabilities of AIs even though we can't scale their human supervisors. For this, it is imperative to study the "weak teacher strong student" setting. Paper shows great promise in this area!
AK @_akhaliq

OpenAI’s new paper, “Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision.” Paper: cdn.openai.com/papers/weak-to… Blog: openai.com/research/weak-…

Widely used alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on the ability of humans to supervise model behavior—for example, to evaluate whether a model faithfully followed instructions or generated safe outputs. However, future superhuman models will behave in complex ways too difficult for humans to reliably evaluate; humans will only be able to weakly supervise superhuman models.

We study an analogy to this problem: can weak model supervision elicit the full capabilities of a much stronger model? We test this using a range of pretrained language models in the GPT-4 family on natural language processing (NLP), chess, and reward modeling tasks. We find that when we naively finetune strong pretrained models on labels generated by a weak model, they consistently perform better than their weak supervisors, a phenomenon we call weak-to-strong generalization.

However, we are still far from recovering the full capabilities of strong models with naive finetuning alone, suggesting that techniques like RLHF may scale poorly to superhuman models without further work. We find that simple methods can often significantly improve weak-to-strong generalization: for example, when finetuning GPT-4 with a GPT-2-level supervisor and an auxiliary confidence loss, we can recover close to GPT-3.5-level performance on NLP tasks. Our results suggest that it is feasible to make empirical progress today on a fundamental challenge of aligning superhuman models.

20 replies · 82 reposts · 451 likes · 399.7K views
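The abstract above credits much of the improvement to an auxiliary confidence loss: the strong model fits the weak supervisor’s labels while also being rewarded for sticking with its own confident predictions. As a rough illustration of that idea, here is a minimal PyTorch sketch; the function name `weak_to_strong_loss`, the `alpha` weighting, and the exact combination of terms are assumptions for illustration, not the paper’s reference implementation.

```python
import torch
import torch.nn.functional as F

def weak_to_strong_loss(strong_logits, weak_labels, alpha=0.5):
    """Sketch of an auxiliary-confidence objective (hypothetical form).

    Mixes (1) imitation of the weak supervisor's labels with
    (2) cross-entropy against the strong model's own hardened
    predictions, so the strong model can keep disagreeing with the
    weak supervisor on examples where it is already confident.
    """
    # Term 1: fit the (possibly noisy) labels produced by the weak model.
    ce_weak = F.cross_entropy(strong_logits, weak_labels)
    # Term 2: reinforce the strong model's own argmax predictions.
    hardened = strong_logits.argmax(dim=-1).detach()
    ce_self = F.cross_entropy(strong_logits, hardened)
    return (1 - alpha) * ce_weak + alpha * ce_self

# Toy usage: 8 examples of a binary task, labels from a weak supervisor.
logits = torch.randn(8, 2, requires_grad=True)
weak_labels = torch.randint(0, 2, (8,))
loss = weak_to_strong_loss(logits, weak_labels)
loss.backward()
```

The `alpha` weight trades off imitating the weak supervisor against trusting the strong model’s own confidence; the paper’s actual hardening rule and weighting schedule may differ from this simplified form.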
Ilya Sutskever reposted
Sam Altman @sama
i'd particularly like to recognize @CollinBurns4 for today's generalization result, who came to openai excited to pursue this vision and helped get the rest of the team excited about it!
168 replies · 151 reposts · 2.7K likes · 1.1M views
Ilya Sutskever reposted
OpenAI @OpenAI
Large pretrained models have excellent raw capabilities—but can we elicit these fully with only weak supervision? GPT-4 supervised by ~GPT-2 recovers performance close to GPT-3.5 supervised by humans—generalizing to solve even hard problems where the weak supervisor failed!
[media]
29 replies · 86 reposts · 710 likes · 256.9K views