
$Q*🍓 on Ethereum
3.6K posts

$Q*🍓 on Ethereum
@QStarETH
QStar🍓the dawn of AGI TG: https://t.co/IDyJfapojS
Joined November 2023
206 Following · 6.4K Followers
Pinned Tweet

QSTaR is AGI Domination
⚪️CA: 0x9abfc0f085c82ec1be31d30843965fcc63053ffe
⚪️Website: q-star.co
⚪️Telegram: t.me/QStarToken
⚪️Dextools: dextools.io/app/en/ether/p…
⚪️CMC: coinmarketcap.com/currencies/qst…
⚪️CG: coingecko.com/en/coins/qstar
⚪️White Paper: docs.q-star.co
⚪️Linktree: linktr.ee/qstarETH

@iruletheworldmo Selectable characters like Sam, Elon, Ray etc please


Elon knows what’s coming…
Elon Musk@elonmusk
My estimate of the probability of Grok 5 achieving AGI is now at 10% and rising

Today we launched Tinker.
Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments and training pipelines while handling distributed training complexity. It enables novel research, custom models, and solid baselines.
Excited to see what people build.
Thinking Machines@thinkymachines
Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models! thinkingmachines.ai/tinker
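The idea described above, writing an ordinary training loop locally while a service executes the heavy lifting elsewhere, can be illustrated with a toy sketch. This is not Tinker's actual API; the `RemoteTrainer` class and its `step` method are invented for illustration, and the "remote" backend here is just plain local SGD on a one-parameter linear model.

```python
# Hypothetical sketch (NOT Tinker's real API): a user-written training loop
# that talks to a trainer object standing in for a distributed backend.

class RemoteTrainer:
    """Stand-in for a service that would run gradient steps on remote GPUs.
    Here it simply performs local SGD on a 1-D linear model y = w * x."""

    def __init__(self, lr=0.1):
        self.w = 0.0   # single learnable parameter
        self.lr = lr   # learning rate

    def step(self, batch):
        # Mean-squared-error gradient for y = w * x, applied as one SGD step.
        grad = sum(2 * (self.w * x - y) * x for x, y in batch) / len(batch)
        self.w -= self.lr * grad
        # Return the post-step loss so the caller's loop can log or stop early.
        return sum((self.w * x - y) ** 2 for x, y in batch) / len(batch)

# The user's loop stays ordinary Python; only `trainer.step` would be remote.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]  # ground truth: w = 3
trainer = RemoteTrainer()
for _ in range(100):
    loss = trainer.step(data)

print(round(trainer.w, 2))  # converges toward 3.0
```

The design point the tweet is making is the abstraction boundary: the researcher owns the loop, data, and stopping logic, while the service owns placement and parallelism behind a call like `step`.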


The timeline is fixed.
A new age dawns.
QSTaR is the genesis.
Elon Musk@elonmusk
I now think @xAI has a chance of reaching AGI with @Grok 5. Never thought that before.

I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decision-making.
Here is the text:
Some of our principles are in conflict, and we’d like to explain the decisions we are making around a case of tensions between teen safety, freedom, and privacy.
It is extremely important to us, and to society, that the right to privacy in the use of AI is protected. People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe it may be one of the most personally sensitive accounts you'll ever have. If you talk to a doctor about your medical history or a lawyer about a legal situation, we have decided that it's in society's best interest for that information to be privileged and afforded a higher level of protection. We believe the same level of protection needs to apply to conversations with AI, which people increasingly turn to for sensitive questions and private concerns. We are advocating for this with policymakers.
We are developing advanced security features to ensure your data is private, even from OpenAI employees. Like privilege in other categories, there will be certain exceptions: for example, automated systems will monitor for potential serious misuse, and the most critical risks—threats to someone’s life, plans to harm others, or societal-scale harm like a potential massive cybersecurity incident—may be escalated for human review.
The second principle is about freedom. We want users to be able to use our tools in the way that they want, within very broad bounds of safety. We have been working to increase user freedoms over time as our models get more steerable. For example, the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it. For a much more difficult example, the model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request. “Treat our adult users like adults” is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom.
The third principle is about protecting teens. We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection.
First, we have to separate users who are under 18 from those who aren’t (ChatGPT is intended for people 13 and up). We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.
We will apply different rules to teens using our services. For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting. And, if an under-18 user is having suicidal ideation, we will attempt to contact the user's parents and, if unable, will contact the authorities in case of imminent harm. We shared more today about how we're building the age-prediction system and new parental controls to make all of this work.
We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict. These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.

Some of our principles are in conflict, so here is what we are going to do:
openai.com/index/teen-saf…

@OpenAINewsroom What is the definition of AGI? And how will it be shared?

OpenAI and Microsoft have signed a non-binding memorandum of understanding (MOU) for the next phase of our partnership.
We are actively working to finalize contractual terms in a definitive agreement. Together, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety.
openai.com/index/joint-st…

i have had the strangest experience reading this: i assume it's all fake/bots, even though in this case i know codex growth is really strong and the trend here is real.
i think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very "it's so over/we're so back" extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i'm extra sensitive to it, and a bunch more (including probably some bots).
but the net effect is somehow AI twitter/AI reddit feels very fake in a way it really didn't a year or two ago.
Aidan McLaughlin@aidan_mclau
r/claudecode loves codex