Hollie Choi, managing director of the Entertainment ID Registry (EIDR), recently spoke at a meeting of the Los Angeles chapter of the Association of Information Technology Professionals (AITP), sharing insights into how the media and entertainment industry is using AI in everything from production to metadata.
Her presentation, “The Intersection of AI Technology and Media & Entertainment,” also covered the importance of identifying AI-generated content, pointing to the Coalition for Content Provenance and Authenticity (C2PA), which maintains an open technical standard for verifying the sources of media content.
Choi’s presentation aimed to cut through the hype around AI and instead explored how AI supports creativity while preserving trust. AI is now a part of every workflow, from script to stream, with content companies using it in localization, metadata, and distribution automation. “How do we guide AI responsibly?” Choi said. “We’re all defining where help ends and harm begins.”
AI is a core collaborator in creative workflows, transforming how content is created, localized, and monetized, Choi said. Metadata automation saves hours of manual labor. Contextual advertising, in which AI places ads based on content and sentiment rather than personal data, can improve discoverability and distribution efficiency. Subtitles, captions, dubbing, and image manipulation are all made easier with AI.
But AI is also redefining authorship and ownership, giving rise to deepfakes, copyright suits, and likeness replication, which in turn are prompting attempts at government regulation. And rights, residuals, and contracts are all lagging behind the technology, Choi said.
Every media transition in the history of the business has begun in confusion, yet the industry has survived them all, Choi said. “Transparency will soon be mandatory,” thanks to laws both domestic and international, she said. It’s the safeguards surrounding AI that still need work.
Choi also offered a word on how Generative AI differs from Agentic AI: “Generative is the artist; Agentic is the producer,” she said. Agentic AI in media and entertainment means personalized distribution, automated licensing, and fan engagement. “If we can trace the action, we can trust the automation,” she said.
Choi also noted the legal challenges AI presents to name, image, likeness, and voice talent. Who really owns AI-assisted creative output? Digital identities complicate the calculation of residuals. And only a patchwork of global laws exists to enforce the rights of real people in an AI-generated environment.
But it’s getting better: two California AI transparency laws (AB 2013 and SB 942) take effect Jan. 1, requiring training-data transparency and AI content disclosures, with watermarking and consent for deepfakes. The EU’s AI Act (2024/1689), the first framework of its kind globally, likewise requires labeling, watermarking, and lawfully sourced training data.
What will make AI fully work for media and entertainment and its people: thoughtful AI laws and collaboration among stakeholders, with an emphasis on clean AI models that use licensed, consent-based data, standardized identifiers and provenance metadata, and human creativity kept at the center of innovation.
“AI isn’t the enemy — misuse is,” Choi said.

