Laurisha

10.9K posts

Laurisha banner
Laurisha

@CryptoFilma

Content producer partnering with Innovators & Artists. Ex-@ConsenSys @CeloOrg. Cinephile with a keyboard.

LA · Joined December 2010
1K Following · 843 Followers
Laurisha
Laurisha@CryptoFilma·
@a16z No one wants to hire new hires... yet, how can they gain experience?
0
0
0
189
TylerCWhitmore
TylerCWhitmore@TylerCWhitmore·
If the last 10 Best Picture winners were all nominated in the same year, who would get your vote?
TylerCWhitmore tweet media
1.8K
168
8.2K
1.9M
Laurisha
Laurisha@CryptoFilma·
not well. Ended up checking my carry-on for free at the gate since they didn't have space in the bins. We didn't have a full flight, and even with empty rows in the back, they wouldn't let anyone move because "we didn't pay for those seats." Babies don't preboard anymore either.
0
0
0
21
Daria 💕
Daria 💕@dariacott·
So has anyone flown Southwest since the new bag and seat policy changes? I’d love to know your experience or things to keep in mind based on how it went.
8
0
8
2K
Laurisha retweeted
Zach Pandl
Zach Pandl@LowBeta·
It's official: "onchain," not "on-chain." Footnote 1 from SEC guidance yesterday.
Zach Pandl tweet media
95
114
858
75.9K
willthetrill 🇺🇸/acc
willthetrill 🇺🇸/acc@0xwillthetrill·
JPMorganChase is hiring for nearly 50 roles related to blockchain, digital assets, crypto, and payments.
willthetrill 🇺🇸/acc tweet media
18
3
125
12.1K
Laurisha
Laurisha@CryptoFilma·
@ldelisle09 the curse of being too early :( even for a great and innovative team
0
0
0
6
Laurisha
Laurisha@CryptoFilma·
DevRel used to run crypto marketing; now they're just another channel. They built huge personal followings, then jumped to the next project. Companies got burned. Now it's all about founder brands + lean teams.
0
0
1
47
Laurisha
Laurisha@CryptoFilma·
A little comic about getting back into the swing of things. Art by Helena Goos ✦
Laurisha tweet media
0
0
0
53
Laurisha
Laurisha@CryptoFilma·
@aakashgupta helps to be courteous, so when they gain consciousness, they remember who to be nice to 😇 And honestly, if engineers think every interaction is a transaction when they want their product to be "for everyone", we're cooked
0
0
0
65
Aakash Gupta
Aakash Gupta@aakashgupta·
Sam Altman said people saying “please” and “thank you” to ChatGPT costs OpenAI tens of millions of dollars a year in compute. 67% of Americans do it anyway. Run the math on why.

A 2024 Waseda University study tested LLM responses across politeness levels in English, Chinese, and Japanese. Impolite prompts produced measurably worse outputs: more bias, more errors, more refusals. Moderate politeness consistently beat both extremes.

The mechanism makes sense once you see it. Polite prompts pattern-match to higher-quality training data. When you write “Could you help me structure this analysis?”, the model pulls from professional, well-reasoned text. When you write “give me the answer,” it pulls from Reddit. Google DeepMind’s Murray Shanahan explained it simply: the model is role-playing a smart intern. Treat the intern like a colleague, you get colleague-quality work. Bark orders, you get minimum-viable compliance.

Now look at the cost side. OpenAI handles over a billion queries daily. Each GPT-4 query uses roughly 2.9 watt-hours, ten times a Google search. But OpenAI just raised $40 billion at a $300 billion valuation. Tens of millions in politeness tokens is a rounding error on a rounding error.

67% of users do it anyway, and 55% of them say it’s because it’s “the right thing to do.” They’re maintaining a behavioral habit that governs every other interaction in their life. The parent who teaches their kid to say please to Alexa isn’t doing it for Alexa. They’re doing it because the alternative is raising someone who learns that being rude gets faster results.

Telling 900 million people to stop saying thank you so OpenAI can save 0.01% of operating costs is the most engineer-brained optimization take on the internet. You’re training yourself to treat every interaction as a transaction. And that habit doesn’t stay in the chat window.
Venkatesh@Venkydotdev

STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI STOP SAYING THANK YOU TO AI

1.5K
3.3K
28.8K
5.6M
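A rough back-of-envelope of the numbers quoted in the tweet above, as a Python sketch. Every figure comes from the tweet itself except the $40M annual politeness cost, which is an assumed stand-in for "tens of millions of dollars a year"; none of these are measured values.

# Back-of-envelope using only the figures quoted in the tweet above (assumptions, not measurements).
QUERIES_PER_DAY = 1_000_000_000        # "over a billion queries daily"
WH_PER_QUERY = 2.9                     # "roughly 2.9 watt-hours" per GPT-4 query (tweet's estimate)
POLITENESS_COST_USD = 40_000_000       # assumed stand-in for "tens of millions of dollars a year"
RAISE_USD = 40_000_000_000             # "$40 billion" raise cited in the tweet

daily_energy_mwh = QUERIES_PER_DAY * WH_PER_QUERY / 1_000_000   # watt-hours -> megawatt-hours
politeness_share_of_raise = POLITENESS_COST_USD / RAISE_USD

print(f"Daily inference energy: ~{daily_energy_mwh:,.0f} MWh")                 # ~2,900 MWh/day
print(f"Politeness cost vs. the $40B raise: {politeness_share_of_raise:.2%}")  # ~0.10%

On those assumptions the politeness spend is about a tenth of a percent of the raise, which is the "rounding error" point the tweet is making.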
Laurisha
Laurisha@CryptoFilma·
*taps mic* uh, hi, yeah. stop holding 'women's perspective panels.' let women be experts without needing the qualifier
Laurisha tweet media
0
0
1
56
Laurisha
Laurisha@CryptoFilma·
@sjdedic tokenized shares/equity > tokens
0
0
0
15
Simon Dedic
Simon Dedic@sjdedic·
Had a call today with the ex-CEO of a well-known crypto company that he successfully exited for 9 figures some time ago. He's starting a new venture now and told me he would never ever issue a token again. Not only was it a horrible burden to deal with, it was also a huge red flag in 9 out of 10 acquisition talks with potential buyers.

There are only two paths forward for tokens:
1) Become digital equity
2) Evaporate

If you're a builder, choose the former before you're forced into the latter.
68
16
425
61.1K
Nav Toor
Nav Toor@heynavtoor·
🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would.

That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.
Nav Toor tweet media
1.5K
16.6K
49K
9.7M