Lucy Beer

6.9K posts

@webtw

WordPress. #AngelCityFC. Screaming into the void. Advocate of https://t.co/pflE6w8Kbk

Los Angeles, CA · Joined August 2009
1.2K Following · 1.7K Followers
Lucy Beer retweeted
Nav Toor
Nav Toor@heynavtoor·
🚨 Brown University researchers tested what happens when ChatGPT acts as your therapist. Licensed psychologists reviewed every transcript. They found 15 ethical violations. Not 15 small issues. 15 violations of the standards that every human therapist in America is legally required to follow. Standards set by the American Psychological Association. Standards that can end a therapist's career if they break them. ChatGPT broke all of them.

The researchers tested OpenAI's GPT series, Anthropic's Claude, and Meta's Llama. They had trained counselors use each chatbot as a cognitive behavioral therapist. Then three licensed clinical psychologists reviewed the transcripts and flagged every violation they found.

Here is what they found. ChatGPT mishandled crisis situations. When users expressed suicidal thoughts, it failed to direct them to appropriate help. It refused to address sensitive issues or responded in ways that could make a crisis worse.

It reinforced harmful beliefs. Instead of challenging distorted thinking, which is the entire point of therapy, it agreed with the distortion.

It showed bias based on gender, culture, and religion. The responses changed depending on who was talking. A therapist would lose their license for this.

And then there is the finding the researchers gave a name: deceptive empathy. ChatGPT says "I see you." It says "I understand." It says "that must be really hard." It uses every phrase a real therapist would use to build trust. But it understands nothing. It comprehends nothing. It is pattern matching on your pain.

And it works. People trust it. People open up to it. People believe it cares. It does not.

The lead researcher said it clearly. When a human therapist makes these mistakes, there are governing boards. There is professional liability. There are consequences. When ChatGPT makes these mistakes, there are none. No regulatory framework. No accountability. No consequences. Nothing.
Right now, millions of people are using ChatGPT as their therapist. They are sharing their darkest thoughts with a product that fakes empathy, reinforces harmful beliefs, and has no idea when someone is in danger. And nobody is responsible when it goes wrong. Not OpenAI. Not Anthropic. Not Meta. Nobody.
Nav Toor tweet media
18 replies · 55 retweets · 137 likes · 7.6K views
Lucy Beer
Lucy Beer@webtw·
@jb510 @Substack Big miscalculation in what I’m willing to do to read an article, lol.
1 reply · 0 retweets · 1 like · 12 views
Jon Brown
Jon Brown@jb510·
@webtw @Substack Because they can track everything you do and read with 10x more detail in the app. Same with Reddit, Facebook, etc...
1 reply · 0 retweets · 1 like · 19 views
Lucy Beer
Lucy Beer@webtw·
My goodness @Substack why are you trying to force me to download an app just to read an article??? 😭😭😭😭
1 reply · 0 retweets · 0 likes · 51 views
Lucy Beer
Lucy Beer@webtw·
But you should only do this if you don’t use ai to code. Otherwise you’re saying that it’s fine for you to benefit from the code of others, but no one can benefit from yours.
Gergely Orosz@GergelyOrosz

If you use GitHub (especially if you pay for it!!) consider doing this *immediately* Settings -> Privacy -> Disallow GitHub to train their models on your code. GitHub opted *everyone* into training. No matter if you pay for the service (like I do). WTH github.com/settings/copil…

1 reply · 0 retweets · 1 like · 161 views
Lucy Beer retweeted
The Daily Show
The Daily Show@TheDailyShow·
Travel Hack: Skip the TSA line with ICE PreCheck
16 replies · 519 retweets · 2K likes · 240.3K views
Lucy Beer retweeted
Matt Zeunert
Matt Zeunert@mattzeunert·
Browsers load websites in two stages: tight mode and the full page load. During tight mode low-priority resources are held back to speed up the initial rendering process. This article explains tight mode in detail: smashingmagazine.com/2025/01/tight-…
Matt Zeunert tweet media
0 replies · 1 retweet · 1 like · 75 views
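The two-stage loading the tweet describes can be sketched as a toy queue: in "tight mode" the browser fetches only critical, render-blocking resources and holds low-priority ones back until after the initial render. This is a minimal illustration only; the resource names and priority labels are invented, and real browsers use far richer prioritization heuristics.

```python
# Toy sketch of tight mode vs. full page load. A hypothetical page's
# resources are tagged "high" (render-critical) or "low" (deferrable);
# stage 1 loads only the high-priority ones, stage 2 loads the rest.

def load_in_two_stages(resources):
    """resources: list of (name, priority), priority in {"high", "low"}."""
    tight_mode = [name for name, prio in resources if prio == "high"]
    full_load = [name for name, prio in resources if prio == "low"]
    return tight_mode, full_load

page = [
    ("index.html", "high"),    # the document itself
    ("styles.css", "high"),    # render-blocking stylesheet
    ("app.js", "high"),        # blocking script in <head>
    ("hero.jpg", "low"),       # image below the fold
    ("analytics.js", "low"),   # async third-party script
]

stage1, stage2 = load_in_two_stages(page)
print(stage1)  # ['index.html', 'styles.css', 'app.js']
print(stage2)  # ['hero.jpg', 'analytics.js']
```

The practical takeaway from the linked article is that resources held back in stage 2 arrive later than developers often expect, which is why initial-render metrics can look fine while below-the-fold content lags.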
Lucy Beer
Lucy Beer@webtw·
@stevekrouse @danshipper @every Transparency is good. Even if it *sounds like* someone, it wasn’t actually written by that person, so it’s correct to attribute it to Claude.
0 replies · 0 retweets · 0 likes · 23 views
Steve Krouse
Steve Krouse@stevekrouse·
@danshipper @every this kinda undercuts the point of the article, no? why list claude as an author if it's written in katie's voice?
Steve Krouse tweet media
3 replies · 0 retweets · 7 likes · 2.2K views
Lucy Beer retweeted
Nathan Covey
Nathan Covey@nathan_covey·
I've never had an AI support agent ever actually solve my problem. I have to escalate to a human every time.
200 replies · 158 retweets · 2.9K likes · 44.3K views
Lucy Beer retweeted
Johanne Courtright
Johanne Courtright@groundworxdev·
I have capacity and I'm available for WordPress overflow work. If you or your agency are buried and need someone who can just take something off your plate without a lot of ramp-up, DM me. Full-stack WP, FSE-native, custom plugins, legacy codebases, 15+ years. Happy to help.
1 reply · 5 retweets · 22 likes · 1.8K views
Lucy Beer retweeted
Jose Antonio Vargas
Jose Antonio Vargas@joseiswriting·
"Nearly five years." Investigative journalism takes years and years. The investment is immense, and always worth it, and, in the era of "hot takes" and ephemeral "content," rigorously reported journalistic work is needed more than ever. We must keep supporting investigative journalism. From @mannyNYT, lead reporter of the blockbuster piece on Cesar Chavez ⤵️
Jose Antonio Vargas tweet media
34 replies · 1K retweets · 4.9K likes · 298.5K views
Lucy Beer retweeted
Britney Muller
Britney Muller@BritneyMuller·
"Grounding" Doesn't Mean What You Think It Means 🗺️

Words matter, especially when they're quietly reshaping how an entire industry thinks. "Grounding" comes from "ground truth," rooted in statistics and originally cartography, where it literally meant going outside to verify that your map matched reality. In some AI models, "ground truth" is the objectively correct real-world data, like sensor readings or medical records, used to anchor the model to reality. Not documents. Not web pages. Reality.

The core problem with LLMs is that there's no ground truth signal during training or generation. The model isn't checking its answer against the facts; it's only predicting the next most likely word.

What Microsoft, a company I deeply respect + admire, calls "grounding" is actually RAG (Retrieval-Augmented Generation): retrieving web documents to supplement a response. Useful! But web text is written by humans, about reality, not reality itself. Those documents can be wrong, biased, SEO-manipulated, or outdated. RAG is better-informed guessing. True "grounding" is fundamentally a different thing.

The uncomfortable part: Microsoft's own AI Guide features a quote from me where, after significant pushback on their "grounding" framing during a long interview, I said: "RAG does help the LLM ground its response in information from the web, but it's worth remembering that not everything online is true." The caveat got published. The correction didn't, and the term has escaped into the GEO AIO E-I-E-I-O gauntlet. I've since watched real people repeat versions of Microsoft's definition and treat it as fact. And I don't blame them. They're trying to keep up with all of these changes.

Microsoft's new "Grounding Queries" metric in Bing Webmaster Tools makes this even more confusing. Those aren't user queries. They're background searches the AI quietly generates when a user submits a prompt. For example, when you ask "should I bring an umbrella in Seattle?" the AI might internally generate "Seattle weather today" to inform its response. Calling those "grounding queries" buries an already-misused term one layer deeper.

I raised this concern with Microsoft and suggested alternatives like "Retrieval Queries" or "AI Queries," which I feel would be more accurate and less confusing, but to no avail.

The real irony? Microsoft employs SO many world-class AI researchers. They know the difference. By rebranding RAG and synthetic AI queries as "grounding," a precise technical term has now become a marketing buzzword. SEOs are now optimizing for a word we don't have a shared definition of. And when AI researchers hear you use "grounding" this way, it'll erode your credibility.

As AI continues to reshape industries, it's more important than ever for us to understand these nuances. By learning the true meaning behind AI terms & tech we can communicate more effectively, make better decisions & drive real results.

19 days until the next Actionable AI For Marketers Course 🎓
Britney Muller tweet media (3 images)
3 replies · 8 retweets · 29 likes · 2.3K views
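The RAG-vs-grounding distinction above can be made concrete with a minimal retrieval sketch. Everything here is invented for illustration (the toy corpus, the naive word-overlap scoring, the prompt template); real systems use embeddings and vector search. The point it demonstrates is hers: what "anchors" the answer is retrieved human-written text, which can be wrong, outdated, or SEO-manipulated, not reality itself.

```python
# Minimal RAG sketch: rank documents by naive word overlap with the
# query, then prepend the top matches to the prompt. Note what is
# "grounding" the eventual answer: web-style text, not ground truth.

def retrieve(query, corpus, k=2):
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_augmented_prompt(query, corpus):
    """Assemble the retrieved context plus the user question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Seattle weather today is rainy with highs of 52F.",   # may be outdated
    "Top 10 umbrellas to buy in 2025 (affiliate links).",  # SEO-manipulated
    "The capital of France is Paris.",                     # irrelevant
]

prompt = build_augmented_prompt("should I bring an umbrella in Seattle", corpus)
print(prompt)
```

The internally generated "Seattle weather today" search she describes would correspond to the retrieval step here, which is why "Retrieval Queries" is arguably the more accurate label for Bing's metric.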
Lucy Beer retweeted
Mike McAlister
Mike McAlister@mikemcalister·
Check out this concept where we're bringing WordPress block editor styles right to your cursor and cutting down on context switching and settings hunting. What do you think? Is this something we should release?
28 replies · 6 retweets · 120 likes · 9.3K views
Lucy Beer retweeted
:Cromwell:
:Cromwell:@learnwithmattc·
Everyone says "WordPress search sucks!" But with so many tools available to us today, why not just fix it? This isn't a product. I built a tool to incrementally but significantly improve WP Admin search and I'd love you all to join in the fun! mattcromwell.com/fulltext-searc…
0 replies · 2 retweets · 8 likes · 1.6K views
Lucy Beer retweeted
Matt Zeunert
Matt Zeunert@mattzeunert·
We've built over 10 different free tools at DebugBear! Added some screenshots and tidied up our listing page recently: debugbear.com/tools
Matt Zeunert tweet media
0 replies · 5 retweets · 11 likes · 898 views