HITS 2024: Identifiers Can Help the Supply Chain Integrate AI, EIDR Says

As artificial intelligence (AI) increasingly becomes a driver of complexity in media and entertainment industry contracts, and as intellectual property (IP) rights come under challenge, identifiers can be used by the supply chain to help integrate AI respectfully for all parties, according to the Entertainment Identifier Registry (EIDR).

During the Data/Localization breakout session “EIDR in Content Provenance and Authenticity,” May 22 at the Hollywood Innovation & Transformation Summit (HITS), Hollie Choi, EIDR managing director, discussed the various concepts and applications of the EIDR ID in the international supply chain as it relates to content provenance and authenticity.

“I’m really excited to discuss this topic,” she said at the start of the session, noting: “It’s something that I am staying really closely aligned with because, as we all know, and I’ve said this before, it’s not an event in 2024 unless you talk about AI incessantly. So I am going to kind of talk a little bit about AI and kind of what’s driving EIDR into this space because it’s a little different than what we usually do – but not really.”

EIDR, she pointed out, registers all the content that gets produced by “mainly the major studios, but we also have a lot of independent content,” including silent movies, podcasts and radio shows. “Our goal is to identify and de-duplicate the identification of all of the content across the globe.”

EIDR has become “pretty well accepted in the U.S. but, this past year, we had a pretty big international expansion,” she told attendees, noting “we are now working in the U.K., South America, South Korea [and] India.”

So EIDR is “sort of expanding all over the globe now,” she said, adding: “A lot of things are driving that, but primarily, some of our biggest partners have come out and said, ‘We really like EIDR for distribution purposes, and we’ve built a lot of automation around it.’ And then, additionally, the Google Search team has come out and said, ‘We’ve worked on updating the Google Search algorithm. We are using EIDR as sort of a baseline.’ So it’s not the thing that will get your content found on Google, but it’s a good step one [that], if you register your content with EIDR, it’s going to be easier for Google to find you and then direct search people to your content.”

EIDR is used to identify content, she explained: “Let’s say one of the studios, Warner Brothers, has identified content and then they distribute through Google Play. So they’re going to distribute the content to Google. Google is going to get the EIDR ID and, using automation, check to see if they have the right content to go with that ID. And then they can automate the content to be up on the platform. So it’s a real time saver when you can automate, especially in that amount of content that Google is handling.”
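For readers unfamiliar with the registry, EIDR IDs are DOIs under the 10.5240 prefix: five groups of four hexadecimal characters plus a single check character. The matching step described above can be sketched roughly as follows; this is a minimal illustration, not EIDR’s or Google’s actual implementation, and the ID shown, the `catalog` structure and the `match_delivery` helper are hypothetical.

```python
import re
from typing import Optional

# EIDR IDs are DOIs under the 10.5240 prefix: five hyphen-separated
# groups of four hex characters, followed by a check character.
# (Illustrative format only; this is not a real registered ID.)
EIDR_ID_PATTERN = re.compile(r"^10\.5240/(?:[0-9A-F]{4}-){5}[0-9A-Z]$")


def is_well_formed_eidr_id(eidr_id: str) -> bool:
    """Syntactic check only; full validation would also verify the
    check character and confirm the ID resolves in the registry."""
    return bool(EIDR_ID_PATTERN.match(eidr_id.upper()))


def match_delivery(eidr_id: str, catalog: dict) -> Optional[str]:
    """Hypothetical platform-side matching step: look the incoming
    EIDR ID up in a local catalog keyed by ID and return the title
    record if the delivered content matches, else None."""
    if not is_well_formed_eidr_id(eidr_id):
        return None
    return catalog.get(eidr_id.upper())
```

Automating this lookup is what removes the manual title-matching work that previously slowed down publication.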

In the past, it could take content “up to a month to get up onto the Google Play Store,” but now “they can have it up in, like, 13 minutes. … So it’s a pretty significant time saving,” she said.

But “what does that have to do with AI and content provenance and authenticity?” she asked rhetorically, before shifting her talk to “some of the challenges that we’re facing,” noting there are “benefits obviously to AI, which is why it’s becoming so popular and people are really embracing it.”

Among the “really cool things that I’ve seen [AI] used in is localization,” she went on to say.

On the other hand, she said, AI also “leads to what we’ve all come to know [as] deep fakes.”

What EIDR is trying to do is “partner up with some of the other folks in the industry where we can start to find solutions for some of those things,” she explained.

Meanwhile, “there’s a big concern around AI training” of models “without stepping on somebody’s intellectual property rights,” she added.

She went on to say: “AI in media is kind of a double-edged sword. So we have all of these really great benefits. There are things that we can do with it that help to make our jobs easier, that make the consumer experience better but it also can help with that speed to market. But there are also the challenges of people using it for nefarious reasons. The AI training model is a real question because there’s tons and tons of videos and images out on the internet that anyone can source and use for AI training. And there isn’t really any kind of tracking or regulation around that today.”

As a result, she said: “We’re sort of seeing the laws are kind of trying to play catch up at this point. So in the United States, we have our copyright laws that really are supposed to kind of protect us from this kind of stuff, but they don’t really appear to do that. And if you kind of talk to people who are in the legal profession, they’ll tell you it doesn’t really work on the training models because our laws basically just say you can’t take the content or the data and redistribute it. It doesn’t say you can’t use it and then produce something with it and then redistribute that.”

There is, she said, “sort of this weird, fuzzy gray area where obviously these laws were written before this technology existed. So it’s kind of, again, playing catch up a little bit.”

In March, she noted, the European Union (EU) “took some steps to address this,” voting to enact the Artificial Intelligence Act, which “lays out the legal foundation for regulating AI platforms.”

The AI Act seeks to “protect EU citizens from the worst safety and security risks associated with AI,” she said. “But it also addresses some of those copyright and IP issues. So it bans several uses of AI, including facial recognition databases for law enforcement or untargeted scraping for facial recognition databases, emotion regulation or recognition in the workplace and school settings, and then social scoring systems.” The law also “bans predictive policing and real-time biometric identification applications,” she said, but noted: “They did make some narrow exceptions for terrorist attack prevention and missing person investigations.”

She added: “Basically, [it] is just a regulatory framework. There aren’t really any penalties associated at this point.” Spain and France, meanwhile, are among countries whose “laws are actually pretty strict around” AI issues.

HITS Spring was presented by Box, with sponsorship by Fortinet, SHIB, AMD, Brightspot, Grant Thornton, MicroStrategy, the Trusted Partner Network, the Content Delivery & Security Association (CDSA) and EIDR, and was produced by MESA in partnership with the Pepperdine Graziadio School of Business.