Identity.org
@identity

Privacy-First Digital Identity that’s secure, private, and simple.

San Francisco, CA · Joined July 2018
223 Following · 1.8K Followers
1.5K posts
Identity.org @identity
Val Kilmer passed away in April 2025. Later this year, he is set to appear in a film he never got to shoot. The production, As Deep as the Grave, cast Kilmer five years ago in a role he connected with personally. His illness prevented him from ever making it to set. The filmmakers worked with his family and followed SAG guidelines. His daughter described her father as someone who always looked at new technology with optimism. By most measures, this is how the process is supposed to work. And yet the case has prompted a genuine conversation about where consent begins and ends. When a performer can no longer speak for themselves, decisions about their face, voice, and presence pass to others, even with the best intentions. Who gets to decide what a person would have wanted, once they can no longer say? variety.com/2026/film/news…
Identity.org @identity
Most people contributing their faces and voices to AI platforms are not thinking about licensing terms. They are thinking about the payment, which in some markets can exceed what a full day of work would otherwise pay. In South Africa, a single task on some platforms pays roughly ten times the local minimum hourly wage. The barrier to entry is low and the money arrives quickly. What tends to get less attention is what they are actually agreeing to. The terms on most of these platforms are broad, worldwide, irrevocable licenses with no time limit on how the data can be used. The payment is for a single moment. The data outlasts it. Our latest blog explores the AI labor market and what people are actually agreeing to when they participate. identity.org/the-new-ai-lab…
Identity.org @identity
Most creators do not go looking for unauthorized versions of themselves. They get a message from someone who did. Once they know, the work begins. Searching for the content, identifying where it appeared, filing takedown requests, and then starting again because more have surfaced in the meantime. One cybersecurity expert recently put it plainly: are you going to be the internet's police and keep looking for your face and your likeness? For many creators, that is already the reality. The tools to create this content are widely available. The tools to find and remove it are still catching up. That is not a gap individuals should have to close alone. ischool.berkeley.edu/news/2026/roll…
Identity.org @identity
OpenAI is shutting down Sora, its video generation app, just months after launch. During its short run, deepfakes of public figures surfaced quickly. Families of deceased actors and civil rights leaders asked users to stop generating videos of their loved ones. Copyright holders raised objections over how the technology was being used. OpenAI cited compute costs and a shift in priorities as the reason for the closure, though the underlying technology is not going away with it. It is being folded into other products, which means the capability that raised those concerns will continue to exist, just somewhere less visible. techcrunch.com/2026/03/24/ope…
Identity.org @identity
ByteDance's video tool Seedance 2.0 launched in China in February and drew significant attention. Photorealistic videos featuring well-known actors and recognizable IP circulated widely online. A global rollout was scheduled for mid-March. That rollout is now on hold. Major studios sent cease-and-desist letters. Disney described it as a virtual smash-and-grab of its intellectual property. SAG-AFTRA raised concerns about consent and what the technology means for performers. This pattern keeps showing up across the space. The capability of a tool and the conditions around its release do not always develop at the same pace. Legal frameworks, consent standards, and licensing infrastructure tend to take shape more gradually than the technology itself. For now, launches like this tend to move forward until something forces them to stop. theinformation.com/articles/byted…
Identity.org @identity
“We’re celebrating people, not AI,” as Will Arnett put it on stage at the Oscars. The conversation around AI centered on protecting creativity, but for many in the industry, the question is more practical. It’s not just about whether AI belongs in filmmaking. It’s about how people remain part of what’s being built. Writers, actors, and performers aren’t looking to be replaced. They’re looking to participate, to understand how their work is used, to have a say, and to be fairly compensated when it extends beyond the original performance. That’s where much of the tension comes from: not resistance to technology, but uncertainty around how inclusion and compensation are handled as it evolves. The conversation is still framed around protecting the work. For many, it’s about staying part of it.
Identity.org @identity
Sony Music has requested the removal of more than 135,000 tracks from streaming platforms. Each one falsely claimed to feature one of its artists, including Beyoncé, Queen, and Harry Styles. What makes this more than a volume problem is the timing. These tracks tend to surface when an artist is actively promoting new music, pulling attention and revenue away from the real release at the moment it matters most. Sony believes the 135,000 it has found is only a fraction of what is actually out there. The music industry estimates up to 10 percent of content across streaming platforms may be fraudulent. In practice, that means by the time these tracks are identified and removed, they’ve already captured the attention they were designed to take. routenote.com/blog/sony-musi…
Identity.org @identity
A gymnast’s movement. A voice actor’s recording. A player’s on-field data. These used to be tied to a specific moment. Now they can be captured, stored, and reused across systems. That is where identity licensing starts to matter. When identity becomes an input, not just an output, access needs to be defined earlier, at the point where data enters the system, not just distribution. Once identity is used in training, it can shape outputs across many contexts, often far removed from the original moment. That shift is already starting to show up in how these systems are being built. Our latest blog looks at what that means in practice and where control starts to move. identity.org/identity-is-be…
Identity.org @identity
The global deepfake AI market is projected to grow from about $764 million in 2024 to nearly $20 billion by 2033. Growth like that means synthetic media will appear more often across advertising, entertainment, and online platforms. For viewers, the challenge is easy to state but getting harder to solve: when you see a person in a video or image, it is not always clear whether that person actually participated in creating it. Signals that confirm identity, source, and permission may become an important part of how people evaluate media online.
Identity.org @identity
Sometimes people discover an AI version of themselves by accident. A model recently said a fashion brand created an AI-generated ad that closely mirrored her face and past photos. She only found out after a friend sent her the video. That raises a difficult reality about digital likeness. You may not know when an AI version of your image is circulating online, or when someone else may be benefiting from it. The conversation about identity online is not only about removing content after the fact. It is also about creating ways for people to know when their likeness is being used in the first place. dailydot.com/model-fashion-…
Identity.org @identity
YouTube is expanding its AI deepfake detection system to include politicians, government officials, and journalists. The tool scans for AI-generated videos that recreate someone’s face and allows the person being impersonated to flag or request removal of the content. Impersonation has long been a concern for celebrities and creators. But when it involves political figures or journalists, the impact can extend to public trust and the information people rely on. Moves like this suggest platforms are beginning to treat AI impersonation not just as a moderation issue, but as part of protecting the integrity of online media. techcrunch.com/2026/03/10/you…
Identity.org @identity
A lot of the conversation around AI likeness focuses on takedowns. Someone’s image appears somewhere it shouldn’t. A request is sent. The content is removed. 🔁 That process has been the standard response across social media for years. But some creators and studios are starting to think about it differently. Instead of dealing with problems after content spreads, they are beginning to publish clearer usage guidelines and licensing terms that explain how a likeness can be used from the start. These frameworks do not prevent every issue, but they can reduce confusion around endorsement, attribution, and authorization. When expectations are defined early, fewer disputes need to be resolved later.
Identity.org @identity
Some performances remain part of culture long after they were first recorded. Films are restored. Archived interviews are replayed. Familiar voices and faces continue appearing in new productions built from earlier material. Before retiring from the role of Darth Vader, James Earl Jones worked with Lucasfilm to create a digital model of his voice using recordings captured throughout his career. That system allows the character’s voice to appear in future Star Wars projects. Examples like this show how a likeness, voice, or performance can continue circulating long after the original work was created. When past performances can be reused in new productions, questions about who authorizes those appearances naturally follow. Our latest article explores how identity can continue appearing after death and who often manages those decisions. identity.org/who-controls-y…
Identity.org @identity
Actor Jared Harris recently issued a cease and desist after an AI generated version of him appeared in promotional material for a podcast he never agreed to participate in. The producers used AI to recreate his likeness for a fictional film pitch segment. Harris said he was never contacted and that his image and identity are protected under existing contracts and law. Situations like this are starting to appear more often as tools capable of recreating faces and voices become widely accessible. The technology may allow a likeness to be generated, but that does not automatically answer the question of who has the authority to use it, especially when it appears in promotional or commercial content. thewrap.com/culture-lifest…
Identity.org @identity
Fan culture has always embraced reinterpretation. Posters, fan fiction, mashups, and tribute edits have long been part of media ecosystems. New AI tools make those creations far more realistic. What once looked clearly fan-made can now resemble official studio content. The tension begins when the line between unofficial and official becomes unclear. Confusion around endorsement and approval follows. Creativity and protection do not have to conflict. Clear usage guidelines and transparent labeling can help audiences distinguish fan work from official releases. With clearer expectations, participation can continue while professional interests remain protected.
Identity.org @identity
Last week we explained why you generally cannot copyright your face or likeness. That raises the next question: what about trademark? Trademark applies when identity functions as a commercial identifier. A name on a product line. A catchphrase used on merchandise. A stylized logo built around a persona. In those cases, protection attaches to the branding element used in commerce, not to the person as a whole. Trademark does not grant ownership over a face or voice in the abstract. It protects defined brand elements tied to specific goods or services. ™️ 🔗 Our latest blog explains what trademark law can and cannot cover. identity.org/can-you-tradem…
Identity.org @identity
Meta has filed lawsuits in Brazil, China, and Vietnam against advertisers accused of using AI-generated celebrity images and voices in scam ads that asked people to send money or share personal information. This kind of misuse isn’t new. Public figures have appeared in fraudulent ads for years. What makes these cases clearer legally is the link to financial deception. When identity is used to mislead consumers for profit, fraud and consumer protection laws apply. The enforcement path is relatively direct. The harder question arises outside obvious scams. Digital identity laws vary across jurisdictions and remain unsettled in many places. Not every unauthorized use fits neatly into fraud statutes. Legal systems tend to move fastest where harm is measurable. Other forms of identity misuse often fall into gray areas, even when the impact is real. cp24.com/news/world/202…
Identity.org @identity
The market extracts value from identity as if it were an asset. But extraction isn’t the same as structured ownership. Brands monetize recognizable faces. Platforms optimize familiarity. AI replicates voice, image, and style. The value is undeniable. What’s missing is formalization. Identity isn’t consistently treated like intellectual property with clear valuation, governance, or proactive rights architecture. Most people don’t deliberately structure or license it. It’s monetized reactively, often platform-dependent and contractually vague. That gap matters. An asset isn’t just something that generates revenue. It’s something intentionally structured, protected, and managed over time. Identity is moving in that direction—but we’re still early. The real shift comes when individuals control replication, not just promotion.
Identity.org @identity
When serious legal pressure appears, platforms tend to respond quickly. Restrictions are introduced. Certain categories of content are limited. Policies tighten. That responsiveness makes one thing clear: the ability to act is there. What is less consistent is how that protection works for individuals. Most people do not have the resources to trigger immediate change when their likeness is used without permission. A global studio and a private individual may not carry the same legal weight, but the underlying issue is the same. Someone’s identity is being used without approval. Control over your face, voice, and name should not depend on scale or influence. If systems can protect major rights holders, they can be built to protect everyone.
Identity.org @identity
That Brad Pitt vs. Tom Cruise “fight scene” was hard to miss this past week. It looked like a real studio production. The lighting, the pacing, the performances felt convincing. But it wasn’t real. Now the Motion Picture Association is speaking out about tools like Seedance and how recognizable actors and major film properties are being handled. When the trade group representing major studios steps in, it signals something bigger than one viral clip. The conversation is moving from curiosity to governance. Likeness has value. Intellectual property has value. The platforms that create clear safeguards around both will earn long-term trust as this space evolves. variety.com/2026/film/news…