Laurisha
10.9K posts

Laurisha
@CryptoFilma
Content producer partnering with Innovators & Artists. Ex-@ConsenSys @CeloOrg Cinephile with a keyboard.
LA · Joined December 2010
1K Following · 843 Followers

@Phil_Lewis_ Why do people hate journalism so much
Laurisha retweeted

the way that you know an american has irish ancestry is that they WILL TELL you
aman @amanonthechain
happy st paddys day to those who shouldn't be celebrating

@ldelisle09 the curse of being too early :( even for a great and innovative team

@aakashgupta helps to be courteous, so when they gain consciousness, they remember who to be nice to 😇
And honestly, if engineers think every interaction is a transaction when they want their product to be "for everyone", we're cooked

Sam Altman said people saying “please” and “thank you” to ChatGPT costs OpenAI tens of millions of dollars a year in compute. 67% of Americans do it anyway.
Run the math on why.
A 2024 Waseda University study tested LLM responses across politeness levels in English, Chinese, and Japanese. Impolite prompts produced measurably worse outputs: more bias, more errors, more refusals. Moderate politeness consistently beat both extremes.
The mechanism makes sense once you see it. Polite prompts pattern-match to higher-quality training data. When you write “Could you help me structure this analysis?”, the model pulls from professional, well-reasoned text. When you write “give me the answer,” it pulls from Reddit.
Google DeepMind’s Murray Shanahan explained it simply: the model is role-playing a smart intern. Treat the intern like a colleague, you get colleague-quality work. Bark orders, you get minimum-viable compliance.
Now look at the cost side. OpenAI handles over a billion queries daily. Each GPT-4 query uses roughly 2.9 watt-hours, ten times a Google search. But OpenAI just raised $40 billion at a $300 billion valuation. Tens of millions in politeness tokens is a rounding error on a rounding error.
67% of users do it anyway, and 55% of them say it’s because it’s “the right thing to do.” They’re maintaining a behavioral habit that governs every other interaction in their life. The parent who teaches their kid to say please to Alexa isn’t doing it for Alexa. They’re doing it because the alternative is raising someone who learns that being rude gets faster results.
Telling 900 million people to stop saying thank you so OpenAI can save 0.01% of operating costs is the most engineer-brained optimization take on the internet. You’re training yourself to treat every interaction as a transaction. And that habit doesn’t stay in the chat window.
Venkatesh @Venkydotdev
STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI
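As a rough check on the "run the math" framing in the thread above, here is a back-of-envelope sketch. The daily query volume, per-query energy, 67% figure, and $40 billion raise are taken from the post; the extra tokens per courtesy message and the blended price per million tokens are illustrative assumptions, not reported numbers.

```python
# Back-of-envelope sketch of the politeness-cost claim in the thread above.
# Values marked ASSUMPTION are illustrative placeholders, not reported figures.

queries_per_day = 1_000_000_000   # "over a billion queries daily" (from the post)
wh_per_query = 2.9                # "roughly 2.9 watt-hours" per GPT-4 query (from the post)

daily_energy_mwh = queries_per_day * wh_per_query / 1_000_000
print(f"Daily energy: ~{daily_energy_mwh:,.0f} MWh")          # ~2,900 MWh

polite_fraction = 0.67            # "67% of Americans do it anyway" (from the post)
extra_tokens = 50                 # ASSUMPTION: a courtesy turn re-sends context and gets a short reply
usd_per_million_tokens = 2.50     # ASSUMPTION: blended price, order-of-magnitude only

annual_politeness_cost = (
    queries_per_day * 365 * polite_fraction * extra_tokens / 1_000_000 * usd_per_million_tokens
)
print(f"Annual politeness cost: ~${annual_politeness_cost / 1e6:,.0f}M")   # low tens of millions

raise_usd = 40_000_000_000        # "$40 billion" raise (from the post)
print(f"Share of the raise: {annual_politeness_cost / raise_usd:.3%}")     # well under 0.1%
```

Under these assumptions the politeness bill lands in the low tens of millions, consistent with the "rounding error" framing, though the exact figure depends heavily on the token and pricing assumptions.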

Had a call today with the ex-CEO of a well-known crypto company that he successfully exited for 9 figures some time ago.
He's starting a new venture now and told me he would never ever issue a token again.
Not only was it a horrible burden to deal with, it was also a huge red flag in 9 out of 10 acquisition talks with potential buyers.
There are only two paths forward for tokens:
1) Become digital equity
2) Evaporate
If you're a builder, choose the former before you're forced into the latter.

🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone.
And it's making you a worse person because of it.
Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would.
That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.
It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.
Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one.
The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.
Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.
This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.
Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing.
You're right. They're wrong.
Even when the opposite is true.

@abigailcarlson_ @Remitarded Would love to hear your review after using it
influence me out of Riverside and Adobe 🥲

hi what is the best video editing software out there right now? i have a VERY serious video of @Remitarded explaining yield mechanics of tokenized turkish lira using apple juice analogies all in a bunny avatar and the world simply must see it
