$Q*🍓on Ethereum
@QStarETH

3.6K posts

QStar🍓the dawn of AGI TG: https://t.co/IDyJfapojS

Joined November 2023
206 Following · 6.4K Followers
Internal Tech Emails@TechEmails·
Sam Altman emails Elon Musk September 21, 2017
[image]
56 replies · 107 reposts · 2K likes · 100.6K views
🍓🍓🍓@iruletheworldmo·
so excited to vibe code my first game: the race to agi. more to come.
7 replies · 0 reposts · 33 likes · 3.7K views
Sam Altman@sama·
10 am livestream today to launch a new product I'm quite excited about!
OpenAI@OpenAI

1.3K replies · 752 reposts · 10.4K likes · 2.2M views
Ilya Sutskever@ilyasut·
truly the greatest day ever🎗️
833 replies · 687 reposts · 16K likes · 1.8M views
Mira Murati@miramurati·
Today we launched Tinker. Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments and training pipelines while handling distributed training complexity. It enables novel research, custom models, and solid baselines. Excited to see what people build.
Thinking Machines@thinkymachines

Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models! thinkingmachines.ai/tinker
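The quoted description (write a training loop in ordinary Python on your laptop, and the service runs it on distributed GPUs) can be sketched in miniature. The following is a hypothetical illustration of that design, not Tinker's actual API: `run_remotely` and `training_loop` are invented names, and the "remote" executor here simply runs the loop in-process on a toy model.

```python
# Hypothetical sketch of the "local loop, remote execution" idea.
# NOT Tinker's real API: run_remotely and training_loop are invented
# names for illustration only.
from typing import Callable, List, Tuple


def run_remotely(loop: Callable[[], float]) -> float:
    """Stand-in for a service that would ship the loop to remote GPUs.

    In this sketch it just executes the loop in-process and returns
    whatever the loop returns (here, the final training loss).
    """
    return loop()


def training_loop() -> float:
    # Toy "model": fit the weight w in y = w * x by gradient descent
    # on mean squared error, over a tiny synthetic dataset.
    data: List[Tuple[float, float]] = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
    w, lr = 0.0, 0.05
    for _ in range(100):
        # Gradient of MSE with respect to w: mean of 2 * (w*x - y) * x.
        grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    # Return the final mean squared error.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)


final_loss = run_remotely(training_loop)
```

The design point the tweet is making is the separation of concerns: the researcher owns the loop (data, objective, update rule), while the platform owns placement and distribution; swapping the in-process executor for a real scheduler would not change the loop's code.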

186 replies · 500 reposts · 5.4K likes · 631.6K views
OpenAI@OpenAI·
10am PT.
539 replies · 479 reposts · 5.6K likes · 2.1M views
Internal Tech Emails@TechEmails·
Elizabeth Holmes's schedule, circa 2005–2009
[image]
66 replies · 72 reposts · 1.5K likes · 298.4K views
Sam Altman@sama·
I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decision-making. Here is the text:

Some of our principles are in conflict, and we'd like to explain the decisions we are making around a case of tensions between teen safety, freedom, and privacy.

It is extremely important to us, and to society, that the right to privacy in the use of AI is protected. People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that AI conversations may be one of the most personally sensitive accounts you'll ever have. If you talk to a doctor about your medical history or a lawyer about a legal situation, we have decided that it's in society's best interest for that information to be privileged and provided higher levels of protection. We believe that the same level of protection needs to apply to conversations with AI, which people increasingly turn to for sensitive questions and private concerns. We are advocating for this with policymakers. We are developing advanced security features to ensure your data is private, even from OpenAI employees. Like privilege in other categories, there will be certain exceptions: for example, automated systems will monitor for potential serious misuse, and the most critical risks (threats to someone's life, plans to harm others, or societal-scale harm like a potential massive cybersecurity incident) may be escalated for human review.

The second principle is about freedom. We want users to be able to use our tools in the way that they want, within very broad bounds of safety. We have been working to increase user freedoms over time as our models get more steerable. For example, the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it. For a much more difficult example: the model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request. "Treat our adult users like adults" is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else's freedom.

The third principle is about protecting teens. We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection. First, we have to separate users who are under 18 from those who aren't (ChatGPT is intended for people 13 and up). We're building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we'll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff. We will apply different rules to teens using our services. For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting. And if an under-18 user is having suicidal ideation, we will attempt to contact the user's parents and, if unable, will contact the authorities in case of imminent harm. We shared more today about how we're building the age-prediction system and new parental controls to make all of this work.

We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict. These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.
689 replies · 338 reposts · 3.9K likes · 626.1K views
OpenAI Newsroom@OpenAINewsroom·
OpenAI and Microsoft have signed a non-binding memorandum of understanding (MOU) for the next phase of our partnership. We are actively working to finalize contractual terms in a definitive agreement. Together, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety. openai.com/index/joint-st…
171 replies · 369 reposts · 4.3K likes · 1.3M views
Sam Altman@sama·
first new iphone upgrade i have really wanted in awhile! looks very cool.
1.5K replies · 597 reposts · 17.9K likes · 2.3M views
Bojan Tunguz@tunguz·
The grim job market just got grimmer. 😔
[image]
3 replies · 3 reposts · 18 likes · 3.5K views
Sam Altman@sama·
i have had the strangest experience reading this: i assume its all fake/bots, even though in this case i know codex growth is really strong and the trend here is real. i think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very "it's so over/we're so back" extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i'm extra sensitive to it, and a bunch more (including probably some bots). but the net effect is somehow AI twitter/AI reddit feels very fake in a way it really didnt a year or two ago.
Aidan McLaughlin@aidan_mclau

r/claudecode loves codex

1.3K replies · 557 reposts · 5.8K likes · 2M views