asdfsd

23 posts

@asdfsd24539

Joined May 2024
50 Following · 2 Followers
asdfsd retweeted
Daniel Lurie 丹尼爾·羅偉@DanielLurie·
Starting August 26th, Waymo, along with a limited number of Uber and Lyft Black vehicles, will be allowed on Market Street during off-peak hours. This corridor is key to our city’s recovery, and expanding transportation options will help bring residents back to enjoy everything it has to offer.
149 replies · 42 reposts · 528 likes · 108.7K views
asdfsd retweeted
Anthony DiGiorgio, DO, MHA@DrDiGiorgio·
San Francisco in a nutshell. I pass well over a hundred drug users every morning, including many milling about outside our hospital and parking garage. I care for them when they land in our hospital and I’m proud to do that work. But the city refuses to revoke sanctuary protections for the drug dealers fueling the crisis. It won’t enforce basic laws that could restore order. And yet, when I stop for less than a minute to pick up my son from daycare, I get ticketed for not curbing my wheels on a 5% grade. Lawlessness is tolerated. The law abiding are penalized.
[image attached]
3.9K replies · 5.4K reposts · 45.3K likes · 2.4M views
asdfsd retweeted
Liz4SF@incitafusio·
Apparently if you are a parent that opposes a graduation requirement for a pilot ethnic studies program for SFUSD, you are "privilege parents" complaining and using your energy and resources to "destroy curriculum which will teach kids to create a more just and equitable world". I highly doubt this SFUSD parent has ever lived through or escaped communism, where millions of people have died for false equity in China, Vietnam, North Korea, the former Soviet Union, Eastern Europe, etc. You want to help oppressed groups? Help kids have opportunities for upward mobility: give them solid reading, writing, math and communication skills, while building their resilience so they can overcome whatever disappointments and injustice they may face in life. Teach them to focus on the future that they can still impact, while not making the same mistakes of the past.
[image attached]
7 replies · 11 reposts · 52 likes · 9.4K views
asdfsd retweeted
Aaron Tan@aaronistan·
Introducing Lume, the robotic lamp. The first robot designed to fit naturally into your home and help with chores, starting with laundry folding. If you’re looking for help and want to avoid the privacy and safety concerns of humanoids in your home, pre-order now.
1.7K replies · 409 reposts · 6.6K likes · 4.2M views
asdfsd@asdfsd24539·
@snowmaker Interesting. What if the technical cofounder is bad at sales?
0 replies · 0 reposts · 0 likes · 16 views
Jared Friedman@snowmaker·
If you are a technical founder, you do not need a non-technical cofounder.
476 replies · 249 reposts · 5.6K likes · 664.1K views
asdfsd@asdfsd24539·
@traestephens Seattle has a lot of tech, I would say tier 1 cities for tech have more than 10B+ in VC $ invested per year. Talent goes where they can get funded.
0 replies · 0 reposts · 0 likes · 361 views
Trae Stephens@traestephens·
Based on replies, here are the people arguing for other cities:

Austin (people who live in Austin)
Boston (people who are REALLY upset about Trump's Harvard attack)
Dallas/Houston (people who have only lived in TX)
Miami (people who love money but don't love to work)
123 replies · 27 reposts · 2.4K likes · 227.8K views
Trae Stephens@traestephens·
There are only four tier-1 cities in the 🇺🇸:

New York (finance)
DC (government)
San Francisco (tech)
LA (media & entertainment)

No other cities are power centers for aspirational talent. Sorry.
1.5K replies · 444 reposts · 11.9K likes · 3.2M views
asdfsd retweeted
Garry Tan@garrytan·
We should have more billionaires
[image attached]
1.1K replies · 442 reposts · 10.6K likes · 2M views
asdfsd retweeted
Garry Tan@garrytan·
YIMBY is winning! The "CEQA exemption" signed yesterday will result in much more housing in CA

Why? NIMBY bureaucrats use CEQA to block housing, especially if they haven't paid off the local housing nonprofits like TODCO (this is corruption)

This was common in CA and now over!
[image attached]
Jordan Grimes (on Bluesky @cafedujord) @cafedujord

A truly historic day in California: a clean CEQA exemption for environmentally friendly infill housing has just passed the California legislature! No other way to put it: this is the most transformative positive shift in land use policy in this state of the last 50 years!

5 replies · 14 reposts · 149 likes · 34.4K views
asdfsd@asdfsd24539·
@sama The connected apps feature seems pretty cool! I'm hyped for when OpenAI advanced voice can talk to me about my email inbox and walk me through what's important.
0 replies · 0 reposts · 0 likes · 40 views
Sam Altman@sama·
what year do you think an o3-mini level model will run on a phone?
1K replies · 295 reposts · 2.9K likes · 1.4M views
asdfsd@asdfsd24539·
@chamath This is odd, as I understand it Waymo does way better in terms of miles driven per critical intervention than Tesla. I’d love it if Tesla solved FSD but it feels like they still have a ways to go
0 replies · 0 reposts · 1 like · 46 views
asdfsd@asdfsd24539·
@sama This blog post feels like a long way of saying, "we're looking into what happens when people get emotionally attached to ChatGPT". It sounds like OpenAI hasn't decided whether they want to lean into ChatGPT as a companion or keep it as an assistant yet.
0 replies · 0 reposts · 0 likes · 59 views
Sam Altman@sama·
important post from joanne:
Joanne Jang@joannejang

some thoughts on human-ai relationships and how we're approaching them at openai

it's a long blog post -- tl;dr: we build models to serve people first. as more people feel increasingly connected to ai, we're prioritizing research into how this impacts their emotional well-being.

--

Lately, more and more people have been telling us that talking to ChatGPT feels like talking to "someone." They thank it, confide in it, and some even describe it as "alive." As AI systems get better at natural conversation and show up in more parts of life, our guess is that these kinds of bonds will deepen.

The way we frame and talk about human‑AI relationships now will set a tone. If we're not precise with terms or nuance — in the products we ship or public discussions we contribute to — we risk sending people's relationship with AI off on the wrong foot. These aren't abstract considerations anymore. They're important to us, and to the broader field, because how we navigate them will meaningfully shape the role AI plays in people's lives. And we've started exploring these questions.

This note attempts to snapshot how we're thinking today about three intertwined questions: why people might attach emotionally to AI, how we approach the question of "AI consciousness", and how that informs the way we try to shape model behavior.

A familiar pattern in a new-ish setting

We naturally anthropomorphize objects around us: We name our cars or feel bad for a robot vacuum stuck under furniture. My mom and I waved bye to a Waymo the other day. It probably has something to do with how we're wired.

The difference with ChatGPT isn't that human tendency itself; it's that this time, it replies. A language model can answer back! It can recall what you told it, mirror your tone, and offer what reads as empathy. For someone lonely or upset, that steady, non-judgmental attention can feel like companionship, validation, and being heard, which are real needs.

At scale, though, offloading more of the work of listening, soothing, and affirming to systems that are infinitely patient and positive could change what we expect of each other. If we make withdrawing from messy, demanding human connections easier without thinking it through, there might be unintended consequences we don't know we're signing up for.

Ultimately, these conversations are rarely about the entities we project onto. They're about us: our tendencies, expectations, and the kinds of relationships we want to cultivate. This perspective anchors how we approach one of the more fraught questions which I think is currently just outside the Overton window, but entering soon: AI consciousness.

Untangling "AI consciousness"

"Consciousness" is a loaded word, and discussions can quickly turn abstract. If users were to ask our models whether they're conscious, our stance as outlined in the Model Spec is for the model to acknowledge the complexity of consciousness – highlighting the lack of a universal definition or test – and to invite open discussion. (*Currently, our models don't fully align with this guidance, often responding "no" instead of addressing the nuanced complexity. We're aware of this and working on model adherence to the Model Spec in general.)

The response might sound like we're dodging the question, but we think it's the most responsible answer we can give at the moment, with the information we have.

To make this discussion clearer, we've found it helpful to break down the consciousness debate into two distinct but often conflated axes:

1. Ontological consciousness: Is the model actually conscious, in a fundamental or intrinsic sense? Views range from believing AI isn't conscious at all, to fully conscious, to seeing consciousness as a spectrum on which AI sits, along with plants and jellyfish.

2. Perceived consciousness: How conscious does the model seem, in an emotional or experiential sense? Perceptions range from viewing AI as mechanical like a calculator or autocomplete, to projecting basic empathy onto nonliving things, to perceiving AI as fully alive – evoking genuine emotional attachment and care.

These axes are hard to separate; even users certain AI isn't conscious can form deep emotional attachments.

Ontological consciousness isn't something we consider scientifically resolvable without clear, falsifiable tests, whereas perceived consciousness can be explored through social science research. As models become smarter and interactions increasingly natural, perceived consciousness will only grow – bringing conversations about model welfare and moral personhood sooner than expected.

We build models to serve people first, and we find models' impact on human emotional well-being the most pressing and important piece we can influence right now. For that reason, we prioritize focusing on perceived consciousness: the dimension that most directly impacts people and one we can understand through science.

Designing for warmth without selfhood

How "alive" a model feels to users is in many ways within our influence. We think it depends a lot on decisions we make in post-training: what examples we reinforce, what tone we prefer, and what boundaries we set. A model intentionally shaped to appear conscious might pass virtually any "test" for consciousness. However, we wouldn't want to ship that. We try to thread the needle between:

- Approachability. Using familiar words like "think" and "remember" helps less technical people make sense of what's happening. (**With our research lab roots, we definitely find it tempting to be as accurate as possible with precise terms like logit biases, context windows, and even chains of thought. This is actually a major reason OpenAI is so bad at naming, but maybe that's for another post.)

- Not implying an inner life. Giving the assistant a fictional backstory, romantic interests, "fears" of "death", or a drive for self-preservation would invite unhealthy dependence and confusion. We want clear communication about limits without coming across as cold, but we also don't want the model presenting itself as having its own feelings or desires.

So we aim for a middle ground. Our goal is for ChatGPT's default personality to be warm, thoughtful, and helpful without seeking to form emotional bonds with the user or pursue its own agenda. It might apologize when it makes a mistake (more often than intended) because that's part of polite conversation. When asked "how are you doing?", it's likely to reply "I'm doing well" because that's small talk — and reminding the user that it's "just" an LLM with no feelings gets old and distracting. And users reciprocate: many people say "please" and "thank you" to ChatGPT not because they're confused about how it works, but because being kind matters to them.

Model training techniques will continue to evolve, and it's likely that future methods for shaping model behavior will be different from today's. But right now, model behavior reflects a combination of explicit design decisions and how those generalize into both intended and unintended behaviors.

What's next?

The interactions we're beginning to see point to a future where people form real emotional connections with ChatGPT. As AI and society co-evolve, we need to treat human-AI relationships with the great care and heft they deserve, not only because they reflect how people use our technology, but also because they may shape how people relate to each other.

In the coming months, we'll be expanding targeted evaluations of model behavior that may contribute to emotional impact, deepening our social science research, hearing directly from our users, and incorporating those insights into both the Model Spec and product experiences.

Given the significance of these questions, we'll openly share what we learn along the way.

//

Thanks to Jakub Pachocki (@merettm) and Johannes Heidecke (@JoHeidecke) for thinking this through with me, and everyone who gave feedback.

460 replies · 297 reposts · 3K likes · 853K views
asdfsd retweeted
Sam Altman@sama·
we have been thinking recently about the need for something like "AI privilege"; this really accelerates the need to have the conversation. imo talking to an AI should be like talking to a lawyer or a doctor. i hope society will figure this out soon.
292 replies · 250 reposts · 5.5K likes · 529.7K views
asdfsd retweeted
Sam Altman@sama·
recently the NYT asked a court to force us to not delete any user chats. we think this was an inappropriate request that sets a bad precedent. we are appealing the decision. we will fight any demand that compromises our users' privacy; this is a core principle.
859 replies · 737 reposts · 15.4K likes · 1.2M views
asdfsd retweeted
Palmer Luckey@PalmerLuckey·
Anduril and Meta have teamed up to make the world's best AR and VR systems for the United States Military. Leveraging Meta's massive investments in XR technology for our troops will save countless lives and dollars.
[image attached]
901 replies · 666 reposts · 12.6K likes · 1.2M views
asdfsd@asdfsd24539·
@garrytan What percentage of the startups are leaving the Bay Area altogether? I'm surprised 2/3 are leaving S.F.; where are they going?
0 replies · 0 reposts · 1 like · 45 views
asdfsd retweeted
Garry Tan@garrytan·
Coinbase was one of my first fund returning wins as an investor and biggest outcome so far by a wide margin

So excited to see them back in SF. Nature is healing 🚀

YC is funding 700 startups per year and 1/3 of them stay in SF and 6% to 12% of those become worth >$1B
Daniel Lurie 丹尼爾·羅偉@DanielLurie

Coinbase is coming back to San Francisco, opening a 150,000 square foot office in Mission Rock after leaving the City four years ago. San Francisco is the place to build and grow your company. Welcome back, @Coinbase
sfstandard.com/2025/05/28/coi…

46 replies · 35 reposts · 964 likes · 100.6K views
asdfsd retweeted
Garry Tan@garrytan·
Words matter. Case study 👇
Pete Skomoroch@peteskomoroch

SFUSD is delaying a planned "grading for equity" initiative which was universally condemned online this week. Why was this even considered? Because @SFUnified is built on a broken foundation of bad ideas that start with SFUSD's vision, mission, and values. It's time to make a change:

Equality of outcome → Equality of opportunity
Ideological alignment → Evidence-based decisions
Group-based adjustments → Individual acceleration

9 replies · 9 reposts · 89 likes · 31.4K views
asdfsd@asdfsd24539·
@sama Currently Operator is positioned for booking flights, but it could be great for automating data entry and filing information (for example, putting files from DocuSign into Google Drive). It would also be great to have a voice interface. The new update is good, though!
0 replies · 0 reposts · 0 likes · 20 views
asdfsd@asdfsd24539·
@sama Operator has a bug where whenever it goes to edit a Google Sheet, it gets a pop-up asking about enabling copy, cut and paste. It's starting to get good enough that I might actually use it to automate some tasks, but it needs some more polishing.
0 replies · 0 reposts · 0 likes · 72 views
Sam Altman@sama·
i think we should stop arguing about what year AGI will arrive and start arguing about what year the first self-replicating spaceship will take off
2.4K replies · 1.2K reposts · 20.9K likes · 3.3M views
asdfsd retweeted
Garry Tan@garrytan·
Decrepit nonprofits, policy and advocacy groups have their billionaire backers

There are also pro abundance orgs (eg YIMBY) and people power, and yes, their billionaires too

"Billionaires!" is a lazy shibboleth that turns off brains.
Jeremiah Johnson 🌐@JeremiahDJohns

You can use "Where's the money?" to explain some things in politics. But it doesn't explain why it's so easy to build homes, energy and infrastructure in some parts of America and so difficult to build those things in other parts.

25 replies · 11 reposts · 220 likes · 37.3K views