Anne Carney

1K posts

@AnneCarney_

Tech, Social Impact, Community

San Francisco, CA · Joined March 2009
1.4K Following · 1.5K Followers
Google Research
Google Research@GoogleResearch·
Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI
GIF
1K
5.8K
39K
19.2M
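The announcement doesn't describe how TurboQuant actually works, so as a generic illustration only (not TurboQuant's algorithm; all shapes here are invented), this sketch shows the kind of KV-cache quantization such systems build on: storing cached keys/values in int8 instead of fp32 already cuts memory 4x before any further tricks.

```python
import numpy as np

def quantize_kv(x: np.ndarray):
    """Per-channel symmetric int8 quantization of a KV-cache tensor.

    Generic illustration only: real systems calibrate scales per
    head/group, and TurboQuant's method is not detailed in the tweet.
    """
    # One scale per channel (last axis), mapping the max magnitude to 127.
    scale = np.abs(x).max(axis=0, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_kv(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    # Approximate reconstruction of the original fp32 values.
    return q.astype(np.float32) * scale

# Simulated cache: 128 cached tokens x 64 head dims, fp32.
kv = np.random.randn(128, 64).astype(np.float32)
q, scale = quantize_kv(kv)
recon = dequantize_kv(q, scale)

# int8 storage is 4x smaller than fp32 (the per-channel scales add
# only a tiny overhead); reconstruction error stays small.
print(kv.nbytes / q.nbytes)  # 4.0
print(float(np.abs(kv - recon).max()))
```

Reaching the claimed 6x+ savings with zero accuracy loss would require techniques beyond this baseline (e.g. lower bit widths with calibration), which the blog post presumably details.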
Micah Berkley - The 50 Cent of AI.
@msg High key I just wanted to follow all the clawsters.... but it turned out to be a cool little app now. I'll probably throw it on GitHub tonight. Excellent event my friend!
1
0
2
42
Micah Berkley - The 50 Cent of AI.
🤾🏽‍♀️ I built a tool that mines every social media handle from a Luma event guest list in seconds. Paste a link, get every attendee's Twitter, Instagram, and LinkedIn, ranked by follower/post counts in real time. Should I open source it? I couldn't find anything like this... Dope use cases: pre-event networking (know who's in the room before you walk in), post-event follow-ups (bulk follow every attendee), speaker/sponsor research, community building, finding high-value connections at any event you're attending. Works on any Luma event with 1000+ guests. Total time to build: about 2 hours. Actually a little less since I was eating ramen while I was cooking.... Watch the video. #NoAudio
Micah Berkley - The 50 Cent of AI.@MicahBerkley

That was easy.....

5
0
7
1.1K
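A hypothetical sketch of the ranking step this tweet describes, assuming the attendee records have already been scraped (Luma publishes no public guest-list API that I can confirm, so the fetch is out of scope, and every field name below is invented for illustration):

```python
# Hypothetical: rank already-scraped attendee records by an engagement
# metric, highest first. Field names ("handle", "followers", "posts")
# are invented; the real tool's schema is not shown in the tweet.

def rank_attendees(attendees: list[dict], by: str = "followers") -> list[dict]:
    """Sort attendee records by the given metric, highest first."""
    return sorted(attendees, key=lambda a: a.get(by, 0), reverse=True)

attendees = [
    {"handle": "@alice", "followers": 12_400, "posts": 830},
    {"handle": "@bob",   "followers": 980,    "posts": 4_100},
    {"handle": "@carol", "followers": 51_000, "posts": 260},
]

for a in rank_attendees(attendees):
    print(a["handle"], a["followers"])
# @carol 51000
# @alice 12400
# @bob 980
```

Passing `by="posts"` would instead surface the most active posters, matching the tweet's "follower/post counts" toggle.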
armand type beat
armand type beat@2irl4u·
They should rename GPT-5 to GPT—5
2
0
21
669
Anne Carney
Anne Carney@AnneCarney_·
@sama on ChatGPT5 “We still have the em dashes in GPT5, a lot of people like the em dashes!” 🦦 @cleoabram
5
3
27
5.8K
Anne Carney
Anne Carney@AnneCarney_·
Only in San Francisco
0
0
5
298
Anne Carney
Anne Carney@AnneCarney_·
I just want an @openai model that responds as though it values its own time @sama 🐐
0
0
1
239
Sam Altman
Sam Altman@sama·
today we are introducing codex. it is a software engineering agent that runs in the cloud and does tasks for you, like writing a new feature or fixing a bug. you can run many tasks in parallel.
1.2K
2.6K
36.1K
6.1M
Anne Carney
Anne Carney@AnneCarney_·
@erica_wenger @Ruffin__ No shocker. But this narrow list isn’t measuring power, it’s measuring access. The real signal? Women quietly building, funding, and scaling what’s next. Not chasing unicorns, rewriting the metric. Different era.
1
0
4
531
erica wenger🏕️
erica wenger🏕️@erica_wenger·
This is nuts. No women on the list of top 50 angel investors. More fuel to continue our mission at PRC 🔥 (shoutout to @Ruffin__ for flagging on LinkedIn)
erica wenger🏕️ tweet media
24
9
141
75.9K
Anne Carney
Anne Carney@AnneCarney_·
Meta’s new “Edits” app just dropped — sleek UI, watermark-free, Reels-friendly. But no templates, no trends, no creator muscle. It’s not a CapCut killer, it’s a content draft folder. Full breakdown: open.substack.com/pub/annecarney…
0
0
1
159
cackles (jeff weisbein)
cackles (jeff weisbein)@jeffweisbein·
making progress on building the best journalist/media outreach platform ever.
cackles (jeff weisbein) tweet media
2
0
4
92
Sam Altman
Sam Altman@sama·
lol i feel like a YC founder in "build in public" mode again
399
209
7.9K
934.4K
Joanne Jang
Joanne Jang@joannejang·
my sidebar rn:
Joanne Jang tweet media
25
13
351
30.6K
Joanne Jang
Joanne Jang@joannejang·
🚫 on imagegen refusals

we hear you on the current state of refusals not being up to par (amidst the transition discussed below)!

favor: can you please reply with screenshots of imagegen requests that shouldn't have been refused, but were? that'd help us accelerate fixes. 🙏🏼

(e.g. the model should definitely be able to make the gnome's butt bigger.)
Joanne Jang@joannejang

// i lead model behavior at openai, and wanted to share some thoughts & nuance that went into setting policy for 4o image generation. features capital letters (!) bc i published it as a blog post: --

This week, we launched native image generation in ChatGPT through 4o. It was a special launch for many reasons — one of which our CEO Sam highlighted as "a new high-water mark for us in allowing creative freedom." I wanted to unpack that a bit, as it could be easily missed by those not deep in AI or closely following our evolving thoughts on model behavior (wh… what do you mean you haven't read the sixty-page Model Spec in your free time??).

tl;dr: we're shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm. The goal is to embrace humility: recognizing how much we don't know, and positioning ourselves to adapt as we learn.

Images are visceral

There's something uniquely powerful and visceral about images; they can deliver unmatched delight and shock. Unlike text, images transcend language barriers and evoke varied emotional responses. They can clarify complex ideas instantly. Precisely because images carry so much impact, we felt even more heft — relative to other launches — in shaping policy and behavior.

Evolving perspectives on launching what feels like a new capability

When it comes to launching (what feels like) a new capability, our perspective has evolved across multiple launches:

1. Trusting user creativity over our own assumptions. AI lab employees should not be the arbiters of what people should and shouldn't be allowed to create. We're always humbled after launch, discovering use cases we never imagined — or even ones that seem so obvious in hindsight but didn't occur to us from our limited perspectives.

2. Seeing risks clearly, but not losing sight of everyday value to users. It's easy to fixate on potential harms, and broad restrictions always feel safest (and easiest!). We often catch ourselves questioning, "do we really need better meme capabilities when the same memes could be used to offend or hurt people?" But I think that framing itself is flawed. It implies that subtle, everyday benefits must justify themselves against hypothetical worst-case scenarios, which undervalues how these small moments of delight, humor, and connection genuinely improve people's lives.

3. Valuing unknown, unimaginable possibilities. Maybe due to loss aversion, we rarely consider the negative impacts of inaction; some people refer to them as "invisible graveyards," although that's a bit too morbid and extreme. There are second-order or indirect impacts unlocked by a new capability: all the positive interactions, innovations, and ideas from people that never materialize simply because we feared the worst-case scenario.

How we thought about policy decisions for Day 1

Navigating these challenges is hard, but we aimed to maximize creative freedom while preventing real harm. Some examples from our launch decisions:

- Public figures: We know it can be tricky with public figures — especially when the lines blur between news, satire, and the interests of the person being depicted. We want our policies to apply fairly and equally to everyone, regardless of their "status". But rather than be the arbiters of who is "important enough", we decided to create an opt-out list to allow anyone who can be depicted by our models to decide for themselves.

- "Offensive" content: When it comes to "offensive" content, we pushed ourselves to reflect on whether any discomfort was stemming from our personal opinions or preferences vs. potential for real-world harm. Without clear guidelines, the model previously refused requests like "make this person's eyes look more Asian" or "make this person heavier," unintentionally implying these attributes were inherently offensive.

- Hate symbols: We recognize symbols like swastikas carry deep and painful history. At the same time, we understand they can also appear in genuinely educational or cultural contexts. Completely banning them could erase meaningful conversations and intellectual exploration. Instead, we're iterating on technical methods to better identify and refuse harmful misuse.

- Minors: Whenever a policy decision involved younger users, we decided to play it safe: choosing stronger protections and tighter guardrails for people under 18 across research and product.

Ultimately, these considerations — coupled with our progress toward more precise technical levers — led us toward more permissive policies. We recognize this might be misinterpreted as "OpenAI lowering its safety standards," but personally, I don't think that does justice to the team's extensive research, thoughtful debates, and genuine love & care for users and society.

My colleague Jason Kwon once passed this onto me: "Ships are safest in the harbor; the safest model is the one that refuses everything. But that's not what ships or models are for." The future is built with imagination and adventure.

As we continue our research and learn from society, we believe we can continue to find ways to responsibly increase user freedom. When (not if!) our policies evolve, updating them based on real-world feedback isn't failure; that's the point of iterative deployment. Please keep sharing your feedback and creations — they genuinely help us improve!

221
28
541
237.1K
Justin Uberti
Justin Uberti@juberti·
During the development of WebRTC, we recognized the impact of voice and video on human communication, and I wondered if someday we'd talk to AIs the same way. Today, we can see this future taking shape, and I'm excited to announce I've joined @OpenAI to lead real-time AI efforts!
Justin Uberti tweet media
78
63
1.8K
216.6K
anu
anu@anuatluru·
memes are the best medicine
12
5
82
5.3K
Dan Romero
Dan Romero@dwr·
Farcaster is a mullet. web2 in the front, crypto in the back.
89
86
1K
86.2K
Ben South
Ben South@bnj·
What if we combined Clubhouse with FriendTech?
38
23
286
74.6K