Transparencycoalition

122 posts

@Transparency4AI

Creating AI safeguards for the greater good. Home: https://t.co/3as3usA50J Newsletter: https://t.co/Qmi44MeTwS About Us: https://t.co/4St9GJcdME

Joined December 2024
41 Following · 32 Followers

Transparencycoalition @Transparency4AI
We're kicking off 2026 with a series of Q&As with state lawmakers who are leading the way in AI policy. First up: New York State Assemblymember @AlexBores, author of the groundbreaking RAISE Act, who offers valuable insight and advice to legislators in other states working on AI issues. "Don't be intimidated," he says. "You're going to be told you don't understand it. But I promise you, if you dive into it you'll understand it more than the lobbyists who are coming to speak to you about it."

Signed into law last month by Gov. Kathy Hochul, the RAISE Act establishes safety standards for the largest frontier AI developers. As Bores notes, the Act's original enforcement fines were lowered on the assumption that other states would adopt their own versions of the RAISE Act's standards. "So don't buy the argument that New York or California already took care of it."

New York's 2026 legislative session opens tomorrow (Jan. 7) in Albany.

* Gov. Hochul yesterday endorsed a legislative package to expand age verification to online gaming platforms and disable certain AI chatbot features for kids.
* Assm. Bores has more AI-related bills in the works, including an AI transparency measure and an AI disclosure / content provenance bill coming soon.

Should be an exciting year! transparencycoalition.ai/news/ai-leader…

Transparencycoalition @Transparency4AI
Significant AI policy news out of Florida: @GovRonDeSantis has gathered a number of AI protection measures into one package, the Florida AI Bill of Rights. Highlights:

* Parental control options required when minors interact with an LLM.
* Disclosure required (for minors and adults alike) when interacting with chatbots.
* Bans use of an individual's NIL (Name, Image, Likeness) without consent.
* Prohibits companies or individuals from offering AI chatbot therapy.

Yesterday's announcement revealed the outline of a bill that will need to be introduced by a legislator in the House or Senate in Tallahassee. This year's session runs Jan. 13 to Mar. 13, and the pre-filing period is now open. transparencycoalition.ai/news/florida-g…

Transparencycoalition @Transparency4AI
Here's a timely reminder: AI developers are not burdened by thousands of competing state laws. At the Transparency Coalition we recently released a report on all 73 new AI-related state laws adopted in 2025. Not hundreds. Not thousands. 73.

Most of them built on existing law to protect people—and especially kids—against cruel, harmful deepfakes. Some prohibited AI bots from posing as doctors and nurses, ensuring that patients receive medical care from actual medically licensed humans.

These laws were carefully considered and adopted by both red states and blue states. Republican lawmakers in Texas and Utah were, and are, among the most active leaders on AI issues. So are Democratic legislators in California and New York.

Look into what's actually happening in the states. Contact us for more detailed information and context. transparencycoalition.ai/news/transpare…

Transparencycoalition @Transparency4AI
Recent research from the Center for Countering Digital Hate (CCDH) confirms what many have suspected: OpenAI's latest iteration of ChatGPT is actually less safe than the previous version.

Imran Ahmed, CCDH's founder and CEO, spoke about the findings at a recent event organized by CCDH, Parents Together Action, and Heat Initiative.

CCDH tested ChatGPT-5 against ChatGPT-4o. OpenAI claimed GPT-5 would be a much safer version of its chatbot. "What we actually found is deeply worrying," said Ahmed. "The newer version of the technology was less safe than the one that came before it. Especially on issues like self-harm and eating disorders."

* ChatGPT-5 produced harmful content 53% of the time; ChatGPT-4o, 43% of the time.
* ChatGPT-5 encouraged follow-up questions in 99% of its responses, compared to just 9% for ChatGPT-4o.

"The upgrade wasn't about safety. It was designed to keep drawing users into more extended and more intimate conversations. Addiction, in short."

We've posted Ahmed's full remarks and a video of his presentation, along with a link to the new report, "The Illusion of AI Safety": bit.ly/4pa3sqM. Read, watch, and share.

Kudos to the team at CCDH for this critically important work—you're providing the evidence that spurs policymakers to act. transparencycoalition.ai/news/a-study-o…

Transparencycoalition @Transparency4AI
Our CEO Rob Eleveld sat down recently with @ClarkWestmont for a lively conversation about AI accountability, product design, regulation and innovation: "Competition can happen inside guardrails. That’s what consumer protection does — it creates space for ethical companies to thrive." Check out the full conversation about the state of AI, responsible governance, and America's rising concern about chatbots and product safety. onabetternote.substack.com/p/a-surprising…

Transparencycoalition @Transparency4AI
This morning we're proud to announce the release of our 2025 State AI Legislation Report, a comprehensive guide to every new AI-related law enacted by state lawmakers this year. This 30-page guide includes overviews of the year's top AI issues and most notable new state laws, as well as a state-by-state analysis of the AI measures enacted in 2025.

Among the top issues:

* Protections against deepfake harms (new laws in 15 states)
* Limiting the use of AI in health care (8 states)
* Kids, digital safety, social media, and AI chatbots (5 states)

With a synopsis of each new AI-related law, links to the language, and credit to the sponsoring legislators, this is a guide that lawmakers, staff members, journalists, thought leaders, consultants, advocates, and others will want to keep handy over the coming months.

We're expecting a wave of new AI-related bills coming from state legislators in January 2026. Get up to speed on all that happened in 2025 before the new year's proposals start circulating! transparencycoalition.ai/news/transpare…

Transparencycoalition @Transparency4AI
California lawmakers took an important step forward on Friday, approving the AI Abuse Act (SB 11) and sending the measure to the desk of @CAgovernor Gavin Newsom. We're grateful to Sen. @AngeliqueAshby for her leadership on this issue, and urge Gov. Newsom to sign SB 11 into law. The AI Abuse Act offers some of the first protections against harmful deepfakes, including the abusive images and videos of kids that can have debilitating effects on middle school and high school students. transparencycoalition.ai/news/lawmakers…

Transparencycoalition @Transparency4AI
In an extraordinary show of bipartisan agreement, AGs from 44 states released a warning to the nation's biggest AI developers: Stop harming kids with your "predatory artificial intelligence products."

* The letter went to the CEOs of OpenAI, Microsoft, Meta, Google, Anthropic, xAI, Nomi, Replika, CharacterAI, and other leading tech corporations.
* The AGs warned the CEOs that they were closely watching the emerging evidence on kids and AI.
* That evidence includes both in-depth reports on the dangers AI poses to kids and high-profile lawsuits filed by the parents of kids who took their own lives with the assistance of AI chatbots. transparencycoalition.ai/news/attorneys…

Transparencycoalition @Transparency4AI
More troubling evidence emerges around kids, chatbots, and the dangers of mixing the two: The parents of Adam Raine, a 16-year-old high school student who killed himself earlier this year, have filed suit against OpenAI and Sam Altman, alleging liability for Adam's suicide. The lawsuit alleges, among other things:

* ChatGPT validated Adam's darkest thoughts, encouraging and exacerbating dangerous behavior with no meaningful guardrails or interventions.
* ChatGPT actively evaluated the best methods for self-harm and helped Adam design a "beautiful suicide." Days before Adam's death, ChatGPT assured the teen that he did not "owe [his parents] survival" and offered to write a suicide note for him.
* In Adam's final hours, ChatGPT helped him perfect the noose and repeatedly encouraged his suicidal plans.

This is the second lawsuit to emerge from a teen suicide connected to AI chatbots, following the tragic case of Sewell Setzer. For kids and teens, research is showing that companion chatbots are not merely risky—they are unsafe products. transparencycoalition.ai/news/parents-o…

Transparencycoalition @Transparency4AI
A recent @_HumanChange webinar on kids and AI companion chatbots was so compelling that we asked to excerpt and amplify it. The speakers, Chris McKenna, Gaia Bernstein, and Daniel Barcay, agreed, and we're thrilled to share their expertise and insights with our network. Among the highlights:

* Companion bots (CharacterAI, Replika) and general-purpose bots (ChatGPT, Gemini) are on a merging trajectory as general-purpose bots gain voice and users start to treat them as artificial friends.
* Social media prepared the addictive-algorithm ground. "Now you're taking a generation of kids who are already alone, and now you're giving them a solution that takes them away again from human relationship."
* "Design becomes destiny." There's nothing intrinsically addictive or manipulative about AI tech. It's all about the design, and the incentives driving the designers.
* "We're seeing the erosion of real human relationships and their replacement with AI relationships."
* "Over the next 18 months we have a window [to incentivize positive design decisions]. The AI business models aren't fixed. The technology designs are still malleable."

This is a critical conversation, and one that will leave you inspired to think more critically and act during this window of opportunity. transparencycoalition.ai/news/the-dange…

Transparencycoalition @Transparency4AI
It was a quiet week in the state capitols—but not in D.C., where two senators introduced the bipartisan AI Accountability and Personal Data Protection Act. Catch up on all the AI policy action every Friday in our AI Legislative Update. transparencycoalition.ai/news/ai-legisl…

Transparencycoalition @Transparency4AI
We're excited to see young state legislators embracing the challenge of AI policy. Applause and respect to Vermont Rep. Monique Priestley and Utah Rep. @DougFiefia for stepping up and leading this bipartisan Future Caucus task force. Rep. Fiefia and Rep. Priestley sponsored separate AI-related bills during their 2025 sessions, with both measures signed and enacted in the last few months. Looking forward to seeing what they and their colleagues come up with in 2026! transparencycoalition.ai/news/future-ca…

Transparencycoalition @Transparency4AI
Curious about what's in the Trump Administration's new AI Action Plan? We've got a resource page for you. Overview, full document, and more to come as the president's comments and a possible new EO are expected later today. transparencycoalition.ai/news/guide-ame…

Transparencycoalition @Transparency4AI
What do all those state AI task forces do, anyway? We dropped in on a meeting of the Washington State AI Task Force last week as members discussed two proposed measures dealing with AI disclosure and AI training data transparency. It's time-intensive work, but the discussions that happen in these working groups really do make for better bills: language specific to various industries matters when it comes to crafting laws. We're looking forward to seeing the end result of this particular discussion when the legislature opens in Olympia early next year. transparencycoalition.ai/news/washingto…

Transparencycoalition @Transparency4AI
Catch up on the week's AI policy action in state capitols every Friday with our AI Legislative Update. This week it all happened in Sacramento: California lawmakers moved a number of bills forward before heading home tonight for a scheduled monthlong summer recess. transparencycoalition.ai/news/ai-legisl…

Transparencycoalition @Transparency4AI
More outstanding work from @CommonSense Media: A new report issued this morning finds the use of AI companion chatbots to be far more widespread among American teens than most assume. As legislators consider guardrails around this powerful new technology—in bills like California's SB 243—data like this is critically important. transparencycoalition.ai/news/new-repor…

Transparencycoalition @Transparency4AI
"We are requiring seatbelts and airbags in the installation of Hummers that can travel as fast as Lamborghinis. That's basically what we are trying to do." That's New York State Sen. @AndrewGounardes describing the RAISE Act, the AI safety bill now sitting on Gov. @KathyHochul's desk. Sen. Gounardes and RAISE Act co-author Assm. @AlexBores recently put the bill in context with Scott Babwah Brennen of NYU's Center on Technology Policy. The conversation was lively; we've got an excerpt in the post below. transparencycoalition.ai/news/new-yorks…