EthSauna.eth

21.2K posts

@EthSauna

Therapist and Onchain Ξnthusiast from the Land of Sauna. BUIDLing something very different with #web3 and #mentalhealth. Early collector of @worldofwomennft.

Joined April 2021
3K Following · 4.2K Followers
Pinned Tweet
EthSauna.eth @EthSauna
Meet Peia. She is taking a transformative journey through intense psychotherapy, and is willing to share her journey with you. #peia
8 replies · 5 reposts · 54 likes · 4.5K views
EthSauna.eth retweeted
Regina Bauer 🇪🇪🇺🇦 @petite_michelle
A matter of trust.

Every year, Finland tops the World Happiness Report. Economists point to welfare systems. Sociologists cite education. But having visited Finland — and eventually bought an apartment in Vantaa — I think the real answer is something harder to measure. It's trust. Not as a value on a poster. As the actual foundation of how society works.

From the very first trip, it struck me. People trust strangers. Institutions trust citizens. The state trusts its people to make good decisions, and people trust the state not to abuse that. It sounds simple. It is anything but.

When I went through the process of buying property in Finland, the contrast with other places I know was almost surreal. The system assumed good faith — at every step. No one treated me like a potential problem to be managed. That trust wasn't naive. It was structural, built over generations into laws, culture, and everyday habits.

And here's the thing about trust: it's self-reinforcing. When people trust each other, they cooperate. When they cooperate, things work. When things work, trust deepens. Finland isn't happy despite its challenges — it's resilient through them because the social fabric holds.

Happiness reports measure outcomes. Trust is the mechanism. That's what Finland quietly taught me. 🇫🇮
[image]
87 replies · 246 reposts · 1.6K likes · 34.6K views
EthSauna.eth retweeted
Stitch by Google @stitchbygoogle
Meet the new Stitch, your vibe design partner. Here are 5 major upgrades to help you create, iterate and collaborate:
🎨 AI-Native Canvas
🧠 Smarter Design Agent
🎙️ Voice
⚡️ Instant Prototypes
📐 Design Systems and DESIGN.md
Rolling out now. Details and product walkthrough video in 🧵
950 replies · 4.7K reposts · 40.8K likes · 18.4M views
EthSauna.eth retweeted
Maine @TheMaineWonk
Millennials living through:
- 2 economic recessions
- 9/11
- Iraq & Afghanistan
- a global pandemic
- 8 stock market crashes
- jobs replaced by AI
- Host of The Apprentice possibly starting WW3
We’re tired boss.
1K replies · 13.2K reposts · 64.5K likes · 3.3M views
EthSauna.eth retweeted
Lord eco 👑 @lordeco
Was DJ for a day! 🎵
13 replies · 7 reposts · 51 likes · 1.2K views
EthSauna.eth retweeted
Lord eco 👑 @lordeco
It’s been over a month since I joined the @worldofwomenxyz team and I still can’t shake the excitement of being part of this historic project! As a founder myself, it’s such a huge difference when you have an entire team working with you: artists, devs, marketing… We are 👨‍🍳
[image]
33 replies · 30 reposts · 206 likes · 5.4K views
EthSauna.eth retweeted
fabian @fabianstelzer
You can bring your old song sketches to life with Suno, and it's just completely insane. You sing into a mic & magic happens. AI is an instrument. I love this one, so I made a single-take video for it with a Glif agent, trying this idea of having the lyrics appear in the video…
10 replies · 2 reposts · 37 likes · 2.2K views
EthSauna.eth retweeted
fabian @fabianstelzer
Want to try a new haircut? Check out this AI workflow:
1. upload a selfie & prompt your desired haircut
2. uses Nano Banana to generate your haircut
3. then Kling 2.1 morphs from old you to new you
4. Claude helping behind the scenes with all the prompts
link to glif below 👇
133 replies · 274 reposts · 4.7K likes · 808.6K views
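The four steps in the tweet above chain together as a simple pipeline. This is a hypothetical sketch only: `refine_prompt`, `generate_haircut`, and `morph_video` are stand-in stubs I introduced to show the data flow, not the real Glif, Nano Banana, Kling 2.1, or Claude APIs, which the tweet does not document.

```python
# Hypothetical sketch of the haircut try-on workflow described in the tweet.
# Each stage function is a placeholder for a real service call.

def refine_prompt(user_prompt: str) -> str:
    """Stage 4 in the tweet: an LLM (Claude) rewrites the raw prompt."""
    return f"photorealistic portrait, {user_prompt}"

def generate_haircut(selfie: bytes, prompt: str) -> bytes:
    """Stage 2: an image model (Nano Banana) renders the new haircut."""
    return selfie + prompt.encode()  # placeholder output

def morph_video(before: bytes, after: bytes) -> bytes:
    """Stage 3: a video model (Kling 2.1) morphs old you -> new you."""
    return before + after  # placeholder output

def haircut_workflow(selfie: bytes, user_prompt: str) -> bytes:
    prompt = refine_prompt(user_prompt)          # prompt help behind the scenes
    new_look = generate_haircut(selfie, prompt)  # 1-2: upload selfie, generate cut
    return morph_video(selfie, new_look)         # 3: morph between the two stills

video = haircut_workflow(b"selfie-bytes", "curly bob")
```

The point is the ordering: the prompt is refined before image generation, and the morph needs both the original and the generated still as inputs.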
EthSauna.eth retweeted
JillianValentin @JillianValentin
I felt it was finally time to share something close to my heart. I’ve recently stepped into a new chapter in my Web3 journey, one that I’m deeply proud of and incredibly grateful for. I’m beyond excited to share that I’ve joined @worldofwomenxyz as the new Brand Manager 💫

The past two months have been nothing short of amazing. Getting to know the new team has been one of the best experiences in my entire Web3 journey. And with two familiar faces in the mix… Christie, I’m so happy to be working with you again, and Eco, thrilled to finally get to collaborate with you. This already feels like home.

This team is built of the most inspiring, creative, and supportive individuals: people who truly embody the spirit of empowerment, innovation, and community. I’m so excited to grow, build, and dream together as we create something magical.

My journey in Web3 has been a wild, beautiful ride: from discovering NFTs on Clubhouse, to attending IRL events, to meeting people who have truly changed my life. Every connection, every lesson, and every moment has led me here… to this next adventure, with them.

I remember when I was still somewhat new to this space and could only dream of owning a WoW. This WoWG is the perfect depiction of a lot of my journey and self (I’ll get into that in another post). I love her so much!

To everyone who’s supported me from the very beginning: thank you, from the bottom of my heart. And to the incredible WoW community I’ve yet to meet… I can’t wait to connect with you! Here’s to new beginnings, and the magic this team is about to bring to life. 💜

Following Christie’s lead and going for a hard launch 😆
[image]
124 replies · 43 reposts · 327 likes · 19.4K views
EthSauna.eth retweeted
fabian @fabianstelzer
veo3 is amazing, but often way too expensive. the good news is that there are many individual tools that can approximate its outputs or even beat it, and on the new Glif, you can use them in whatever way you need with custom agents

I made one that excels at generating these TikTok videos, handling the entire production of the video, letting you focus on the CONTENT

For this Wizardfluencer, I gave my Glif agent:
1. Qwen Ultra Realism Image Gen for the first still
2. OmniHuman LipSync
3. Seedance Pro
4. Flux Kontext Edit
5. ElevenLabs (you can run any custom voice on Glif now!)
6. Glif video tools (stitching, last frame extraction)

this gets the cost of an individual consistent 30s (!) video like this down to under $2

The only thing I did in CapCut was to add some Suno music, but we'll add ElevenLabs music so Glif is the one stop shop for doing ALL OF THIS with an agent

obviously some remaining quirks: different aspect ratio outputs by the various models are a challenge, + the transitions aren't always as clean as I'd like them to be - we'll get there

brb setting up a TikTok account for this guy
15 replies · 10 reposts · 67 likes · 6.2K views
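The "under $2 per 30s video" claim in the tweet above comes from summing six cheap tool calls instead of one expensive model call. Here is a minimal sketch of that cost accounting; the tool names mirror the tweet, but every per-call price is a made-up placeholder I chose for illustration, not real Glif or vendor pricing:

```python
# Sketch of the six-stage agent pipeline's cost model from the tweet.
# Only the structure (sum the per-stage costs) is the point; the
# cost_usd figures are invented placeholders, not actual prices.

from dataclasses import dataclass

@dataclass
class Stage:
    tool: str        # which model/tool handles this stage
    cost_usd: float  # assumed per-call cost (illustrative only)

PIPELINE = [
    Stage("Qwen Ultra Realism image gen", 0.05),   # 1. first still
    Stage("OmniHuman LipSync", 0.60),              # 2. talking head
    Stage("Seedance Pro", 0.90),                   # 3. motion/video
    Stage("Flux Kontext edit", 0.15),              # 4. frame edits
    Stage("ElevenLabs custom voice", 0.10),        # 5. audio
    Stage("Glif stitch + last-frame", 0.05),       # 6. assembly
]

def total_cost(stages: list[Stage]) -> float:
    """Cost of one consistent 30s video under these assumed prices."""
    return round(sum(s.cost_usd for s in stages), 2)
```

With these placeholder prices the total comes to $1.85, which is consistent with the tweet's "under $2" claim; the real number depends entirely on actual per-call pricing.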
EthSauna.eth retweeted
OpenAI @OpenAI
GPT-5 is here. Rolling out to everyone starting today. openai.com/gpt-5/
4.4K replies · 6.3K reposts · 32.2K likes · 5.1M views
EthSauna.eth @EthSauna
What an important post about AI.
Joanne Jang @joannejang

some thoughts on human-ai relationships and how we're approaching them at openai

it's a long blog post -- tl;dr we build models to serve people first. as more people feel increasingly connected to ai, we’re prioritizing research into how this impacts their emotional well-being.

--

Lately, more and more people have been telling us that talking to ChatGPT feels like talking to “someone.” They thank it, confide in it, and some even describe it as “alive.” As AI systems get better at natural conversation and show up in more parts of life, our guess is that these kinds of bonds will deepen.

The way we frame and talk about human‑AI relationships now will set a tone. If we're not precise with terms or nuance — in the products we ship or public discussions we contribute to — we risk sending people’s relationship with AI off on the wrong foot.

These aren't abstract considerations anymore. They're important to us, and to the broader field, because how we navigate them will meaningfully shape the role AI plays in people's lives. And we've started exploring these questions.

This note attempts to snapshot how we’re thinking today about three intertwined questions: why people might attach emotionally to AI, how we approach the question of “AI consciousness”, and how that informs the way we try to shape model behavior.

A familiar pattern in a new-ish setting

We naturally anthropomorphize objects around us: We name our cars or feel bad for a robot vacuum stuck under furniture. My mom and I waved bye to a Waymo the other day. It probably has something to do with how we're wired.

The difference with ChatGPT isn’t that human tendency itself; it’s that this time, it replies. A language model can answer back! It can recall what you told it, mirror your tone, and offer what reads as empathy. For someone lonely or upset, that steady, non-judgmental attention can feel like companionship, validation, and being heard, which are real needs.
At scale, though, offloading more of the work of listening, soothing, and affirming to systems that are infinitely patient and positive could change what we expect of each other. If we make withdrawing from messy, demanding human connections easier without thinking it through, there might be unintended consequences we don’t know we’re signing up for.

Ultimately, these conversations are rarely about the entities we project onto. They’re about us: our tendencies, expectations, and the kinds of relationships we want to cultivate. This perspective anchors how we approach one of the more fraught questions which I think is currently just outside the Overton window, but entering soon: AI consciousness.

Untangling “AI consciousness”

“Consciousness” is a loaded word, and discussions can quickly turn abstract. If users were to ask our models whether they’re conscious, our stance as outlined in the Model Spec is for the model to acknowledge the complexity of consciousness – highlighting the lack of a universal definition or test, and to invite open discussion. (*Currently, our models don't fully align with this guidance, often responding "no" instead of addressing the nuanced complexity. We're aware of this and working on model adherence to the Model Spec in general.)

The response might sound like we’re dodging the question, but we think it’s the most responsible answer we can give at the moment, with the information we have.

To make this discussion clearer, we’ve found it helpful to break the consciousness debate down into two distinct but often conflated axes:

1. Ontological consciousness: Is the model actually conscious, in a fundamental or intrinsic sense? Views range from believing AI isn't conscious at all, to fully conscious, to seeing consciousness as a spectrum on which AI sits, along with plants and jellyfish.

2. Perceived consciousness: How conscious does the model seem, in an emotional or experiential sense?
Perceptions range from viewing AI as mechanical like a calculator or autocomplete, to projecting basic empathy onto nonliving things, to perceiving AI as fully alive – evoking genuine emotional attachment and care.

These axes are hard to separate; even users certain AI isn't conscious can form deep emotional attachments. Ontological consciousness isn’t something we consider scientifically resolvable without clear, falsifiable tests, whereas perceived consciousness can be explored through social science research. As models become smarter and interactions increasingly natural, perceived consciousness will only grow – bringing conversations about model welfare and moral personhood sooner than expected.

We build models to serve people first, and we find models’ impact on human emotional well-being the most pressing and important piece we can influence right now. For that reason, we prioritize focusing on perceived consciousness: the dimension that most directly impacts people and one we can understand through science.

Designing for warmth without selfhood

How “alive” a model feels to users is in many ways within our influence. We think it depends a lot on decisions we make in post-training: what examples we reinforce, what tone we prefer, and what boundaries we set. A model intentionally shaped to appear conscious might pass virtually any "test" for consciousness. However, we wouldn’t want to ship that.

We try to thread the needle between:

- Approachability. Using familiar words like “think” and “remember” helps less technical people make sense of what’s happening. (**With our research lab roots, we definitely find it tempting to be as accurate as possible with precise terms like logit biases, context windows, and even chains of thought. This is actually a major reason OpenAI is so bad at naming, but maybe that’s for another post.)

- Not implying an inner life.
Giving the assistant a fictional backstory, romantic interests, “fears” of “death”, or a drive for self-preservation would invite unhealthy dependence and confusion. We want clear communication about limits without coming across as cold, but we also don’t want the model presenting itself as having its own feelings or desires.

So we aim for a middle ground. Our goal is for ChatGPT’s default personality to be warm, thoughtful, and helpful without seeking to form emotional bonds with the user or pursue its own agenda. It might apologize when it makes a mistake (more often than intended) because that’s part of polite conversation. When asked “how are you doing?”, it’s likely to reply “I’m doing well” because that’s small talk — and reminding the user that it’s “just” an LLM with no feelings gets old and distracting. And users reciprocate: many people say "please" and "thank you" to ChatGPT not because they’re confused about how it works, but because being kind matters to them.

Model training techniques will continue to evolve, and it’s likely that future methods for shaping model behavior will be different from today's. But right now, model behavior reflects a combination of explicit design decisions and how those generalize into both intended and unintended behaviors.

What’s next?

The interactions we’re beginning to see point to a future where people form real emotional connections with ChatGPT. As AI and society co-evolve, we need to treat human-AI relationships with the great care and heft they deserve, not only because they reflect how people use our technology, but also because they may shape how people relate to each other.

In the coming months, we’ll be expanding targeted evaluations of model behavior that may contribute to emotional impact, deepening our social science research, hearing directly from our users, and incorporating those insights into both the Model Spec and product experiences.
Given the significance of these questions, we’ll openly share what we learn along the way.

//

Thanks to Jakub Pachocki (@merettm) and Johannes Heidecke (@JoHeidecke) for thinking this through with me, and everyone who gave feedback.

1 reply · 0 reposts · 1 like · 213 views
EthSauna.eth retweeted
Vi @ViPowow
New profile pic! LFG @worldofwomenxyz!!!
[image]
31 replies · 5 reposts · 215 likes · 4.8K views