Stephanie

3.5K posts


@stephe_lee

VP People & Programs @nansen_ai • Building A New Way To Work • ICF-ACC Coach • (prev. Remote Lead @cargo_one_ ; Team Experience @buffer)

Singapore · Joined February 2009
585 Following · 1.2K Followers
Pinned Tweet
Stephanie @stephe_lee
It’s time. “I am large. I contain multitudes.” — Walt Whitman
Stephanie retweeted
Dan Shipper 📧 @danshipper
BREAKING! Introducing Plus One: a hosted @openclaw that lives in your Slack and comes pre-loaded with @every's best tools, skills, and workflows. Set it up in one click, and use your ChatGPT subscription (or any other API key). Bring your Plus One to work: every.to/plus-one

Connected to the @every ecosystem
Plus Ones automatically use @every's agent-native apps, no setup required:
- @CoraComputer for searching, sending, and managing email
- @TrySpiral for great writing in your voice
- Proof (proofeditor.ai) for agent-native document editing

Custom skills and workflows we use and love
Plus Ones come pre-loaded with skills and workflows we use ourselves at @every: some we've made, and some we think are great.
- Content digest: summarizes the publications you read, starting with @every
- Daily brief: your day's schedule and to-dos, sent to you each morning
- Animate: turn any static screenshot into an animation with @Remotion
- Frontend: Anthropic's front-end skill (which we use all the time!)

We also make it fast to connect Google, Notion, GitHub, and more to your Plus One. Our goal is to give you a capable AI coworker right away, not a vanilla OpenClaw that you have to teach from scratch.

Why we built Plus One
@OpenClaw has changed the way we work at Every. We effectively have a parallel org chart of AI coworkers, each with a name, a manager, and real responsibilities. Because of them, our workflows are completely different, our company is different, and we would never go back.

But getting here has been hard. Claws require a significant amount of manual setup and a dedicated machine (like a Mac Mini) running 24/7 to stay responsive. We have learned that the hard part of Claws is the infrastructure around them: the hosting, the integrations, the skills, and the ongoing care. We've made them work great for our team, and we want to share everything we've learned with you.

We're letting in 20 people a week to start, and scaling invites quickly from there. @Every subscribers get priority. Bring your Plus One to work: every.to/plus-one
Stephanie retweeted
“paula” @paularambles
too many agents running? you mean you're overwheLLMed
Stephanie retweeted
Alex Svanevik 🐧 @ASvanevik
🦞🦞🦞 WE'RE HIRING AI-NATIVE SUPERSTARS
👉 ANALYTICS ENGINEER
👉 SENIOR AI/ML ENGINEER
👉 SENIOR SECURITY ENGINEER
YOU GET YOUR OWN OPENCLAW FOR WORK AT NANSEN 🦞🦞🦞
Stephanie @stephe_lee
The rise of @openclaw and AI agents means that skills like clear communication and giving feedback are no longer reserved for people managers
Stephanie @stephe_lee
The world is the AI-native generalist’s oyster
Lenny Rachitsky @lennysan

My biggest takeaways from @bcherny:

1. Coding is now “solved” for most use cases. Boris hasn’t written a single line of code by hand since November, with 100% of his work now authored by Claude Code. At the same time, he remains one of the most productive engineers at Anthropic, shipping 10 to 30 pull requests daily while leading the team.

2. Anthropic has seen a 200% increase in engineer productivity since adopting Claude Code. As Boris notes, “Back at Meta, with hundreds of engineers working on productivity, we’d see gains of a few percentage points in a year. Now we’re seeing hundreds of percentage points.”

3. AI is moving beyond writing code to generating ideas. “Claude is starting to come up with ideas. It’s looking through feedback, bug reports, and telemetry, then suggesting features to ship.”

4. The next roles to be transformed are those adjacent to engineering. Product managers, designers, and data scientists will see similar transformations as agentic AI expands beyond coding. “Any kind of job where you use computer tools will be next.”

5. Build for the model six months from now, not today. One of Boris’s key principles is to design products for future AI capabilities, not current ones. “It’s going to be uncomfortable because your product-market fit won’t be very good for the first six months. But when that model comes out, you’ll hit the ground running.”

6. Watch for “latent demand.” Claude Code was built by observing what people were already trying to do, and then making it easier. Cowork emerged when they noticed people using Claude Code for non-coding tasks like analyzing MRIs or recovering wedding photos from corrupted drives.

7. Don’t optimize for token cost. Boris advises companies to give engineers unlimited tokens during experimentation phases. “At small scale, the token cost is still relatively low compared to their salary. If an idea works and scales, that’s when you optimize it.”

8. Underfund headcount on purpose. When Boris puts one engineer on a project, they’re forced to let AI do more of the work. Constraint drives creative use of AI tooling, not just faster typing.

9. The most successful people in the future will be generalists. “Try to be a generalist more than you have in the past. Some of the most effective engineers cross over disciplines. The people who will be rewarded most won’t just be AI-native; they’ll be curious generalists who can think about the broader problem they’re solving.”

10. Always use the most capable model, not the cheapest. A less intelligent model often burns more tokens correcting mistakes than a smarter one spends getting it right the first time. Boris runs maximum effort on Opus 4.6 for everything.

Here's the full conversation: youtube.com/watch?v=We7BZV…

Stephanie retweeted
Joanne Jang @joannejang
some thoughts on human-ai relationships and how we're approaching them at openai

it's a long blog post. tl;dr: we build models to serve people first. as more people feel increasingly connected to ai, we’re prioritizing research into how this impacts their emotional well-being.

Lately, more and more people have been telling us that talking to ChatGPT feels like talking to “someone.” They thank it, confide in it, and some even describe it as “alive.” As AI systems get better at natural conversation and show up in more parts of life, our guess is that these kinds of bonds will deepen.

The way we frame and talk about human-AI relationships now will set a tone. If we're not precise with terms or nuance, in the products we ship or the public discussions we contribute to, we risk starting people’s relationship with AI off on the wrong foot. These aren't abstract considerations anymore. They're important to us, and to the broader field, because how we navigate them will meaningfully shape the role AI plays in people's lives. And we've started exploring these questions.

This note attempts to snapshot how we’re thinking today about three intertwined questions: why people might attach emotionally to AI, how we approach the question of “AI consciousness,” and how that informs the way we try to shape model behavior.

A familiar pattern in a new-ish setting

We naturally anthropomorphize objects around us: we name our cars or feel bad for a robot vacuum stuck under furniture. My mom and I waved bye to a Waymo the other day. It probably has something to do with how we're wired.

The difference with ChatGPT isn’t that human tendency itself; it’s that this time, it replies. A language model can answer back! It can recall what you told it, mirror your tone, and offer what reads as empathy. For someone lonely or upset, that steady, non-judgmental attention can feel like companionship, validation, and being heard, which are real needs.

At scale, though, offloading more of the work of listening, soothing, and affirming to systems that are infinitely patient and positive could change what we expect of each other. If we make withdrawing from messy, demanding human connections easier without thinking it through, there might be unintended consequences we don’t know we’re signing up for.

Ultimately, these conversations are rarely about the entities we project onto. They’re about us: our tendencies, expectations, and the kinds of relationships we want to cultivate. This perspective anchors how we approach one of the more fraught questions, which I think is currently just outside the Overton window but entering soon: AI consciousness.

Untangling “AI consciousness”

“Consciousness” is a loaded word, and discussions can quickly turn abstract. If users ask our models whether they’re conscious, our stance as outlined in the Model Spec is for the model to acknowledge the complexity of consciousness: highlighting the lack of a universal definition or test, and inviting open discussion. (*Currently, our models don't fully align with this guidance, often responding "no" instead of addressing the nuanced complexity. We're aware of this and working on model adherence to the Model Spec in general.)

The response might sound like we’re dodging the question, but we think it’s the most responsible answer we can give at the moment, with the information we have.

To make this discussion clearer, we’ve found it helpful to break the consciousness debate down into two distinct but often conflated axes:

1. Ontological consciousness: Is the model actually conscious, in a fundamental or intrinsic sense? Views range from believing AI isn't conscious at all, to fully conscious, to seeing consciousness as a spectrum on which AI sits, along with plants and jellyfish.

2. Perceived consciousness: How conscious does the model seem, in an emotional or experiential sense? Perceptions range from viewing AI as mechanical like a calculator or autocomplete, to projecting basic empathy onto nonliving things, to perceiving AI as fully alive, evoking genuine emotional attachment and care.

These axes are hard to separate; even users certain AI isn't conscious can form deep emotional attachments.

Ontological consciousness isn’t something we consider scientifically resolvable without clear, falsifiable tests, whereas perceived consciousness can be explored through social science research. As models become smarter and interactions increasingly natural, perceived consciousness will only grow, bringing conversations about model welfare and moral personhood sooner than expected.

We build models to serve people first, and we find models’ impact on human emotional well-being the most pressing and important piece we can influence right now. For that reason, we prioritize focusing on perceived consciousness: the dimension that most directly impacts people and one we can understand through science.

Designing for warmth without selfhood

How “alive” a model feels to users is in many ways within our influence. We think it depends a lot on decisions we make in post-training: what examples we reinforce, what tone we prefer, and what boundaries we set. A model intentionally shaped to appear conscious might pass virtually any "test" for consciousness. However, we wouldn’t want to ship that.

We try to thread the needle between:

- Approachability. Using familiar words like “think” and “remember” helps less technical people make sense of what’s happening. (**With our research lab roots, we definitely find it tempting to be as accurate as possible, with precise terms like logit biases, context windows, and even chains of thought. This is actually a major reason OpenAI is so bad at naming, but maybe that’s for another post.)

- Not implying an inner life. Giving the assistant a fictional backstory, romantic interests, “fears” of “death,” or a drive for self-preservation would invite unhealthy dependence and confusion. We want clear communication about limits without coming across as cold, but we also don’t want the model presenting itself as having its own feelings or desires.

So we aim for a middle ground. Our goal is for ChatGPT’s default personality to be warm, thoughtful, and helpful without seeking to form emotional bonds with the user or pursue its own agenda. It might apologize when it makes a mistake (more often than intended) because that’s part of polite conversation. When asked “how are you doing?”, it’s likely to reply “I’m doing well” because that’s small talk; reminding the user that it’s “just” an LLM with no feelings gets old and distracting. And users reciprocate: many people say "please" and "thank you" to ChatGPT not because they’re confused about how it works, but because being kind matters to them.

Model training techniques will continue to evolve, and it’s likely that future methods for shaping model behavior will differ from today's. But right now, model behavior reflects a combination of explicit design decisions and how those decisions generalize into both intended and unintended behaviors.

What’s next?

The interactions we’re beginning to see point to a future where people form real emotional connections with ChatGPT. As AI and society co-evolve, we need to treat human-AI relationships with the care and heft they deserve, not only because they reflect how people use our technology, but also because they may shape how people relate to each other.

In the coming months, we’ll expand targeted evaluations of the model behaviors that may contribute to emotional impact, deepen our social science research, hear directly from our users, and incorporate those insights into both the Model Spec and product experiences. Given the significance of these questions, we’ll openly share what we learn along the way.

// Thanks to Jakub Pachocki (@merettm) and Johannes Heidecke (@JoHeidecke) for thinking this through with me, and everyone who gave feedback.
Stephanie retweeted
claire vo 🖤 @clairevo
Learn to code. Learn to write. Learn to design. Learn to create. Learn to market. Learn to sell. Learn to recruit. Learn to organize. Learn to lead. Learn to hustle. Just use all the tools available to do so.
Stephanie retweeted
claire vo 🖤 @clairevo
I’ve been shouting about the disruption of AI to roles like PM, not because I get some weird joy from hot takes, but because the change is no joke and most companies and certainly most individuals are way behind on the skills and mindset shift they need to adapt. Pay attention.
Stephanie @stephe_lee
@alfred_lua I like ChatGPT for work and planning, and Perplexity for research. Have a handy widget on my phone!
Alfred Lua @alfred_lua
Having the ChatGPT app has made me more curious. I always have random trivial questions. But Google often fails me. And I’d stop seeking the answers. Now, I’d ask ChatGPT on my phone (though I cannot take all replies at face value).
Stephanie retweeted
Alex Svanevik 🐧 @ASvanevik
Excited to invite folks in Singapore to a meetup we're hosting together with @zksync. Great for builders and investors who want to learn more about the zkSync ecosystem! We may not be able to fit everyone, but we'll do our best 🙏 (Link in next tweet.)
Stephanie retweeted
Alex Svanevik 🐧 @ASvanevik
Had a lot of fun opening up our new office today! Thanks everyone for coming 🙏 If you want to join our next SG event - buy a pudgy 👀