Illegal vs. Illegitimate
This past January, I was scrolling through social media when I first saw them – hyper-realistic, AI-generated explicit images of Taylor Swift. My stomach turned. As the images spread like wildfire across platforms, a part of me took comfort in thinking, "Swift's team will come down hard on whoever created and spread these. They'll face serious legal consequences."
But as I dug deeper, talking to dozens of lawyers and legal experts, I was hit with a shocking realization: spreading these AI-generated explicit images of public figures like Swift might not actually be illegal in many jurisdictions. Everyone I spoke to agreed it's deeply wrong, but the law hasn't caught up with the technology. This unsettling discovery sent me down a rabbit hole, exploring the murky waters where technology, ethics, and law intersect in our increasingly AI-driven world.
Over the past year, I've watched incident after incident unfold, each one bringing into sharper focus the crucial distinction between what's illegitimate and what's illegal when it comes to AI use in creative industries. Take the recent case where former President Donald Trump shared AI-generated images of Swift seemingly endorsing him. It's misleading, potentially damaging to Swift's reputation, and yet – it may not have clear legal consequences.
And it's just the tip of the iceberg. I was gutted reading Jenna Ortega's recent interview with the New York Times, where she recounted her experiences with AI-generated explicit content as a minor. The fact that a young actor has to deal with this kind of violation is nothing short of horrifying.
What's become painfully clear is that there's a massive gap between what we consider ethically wrong and what our current laws actually prohibit. The ease with which AI can be used to create and spread harmful content, coupled with the lack of clear legal remedies, has created a Wild West in the digital realm.
This newsletter is my attempt to make sense of this complex landscape. How do we harness the incredible potential of AI in creative industries while protecting individuals and maintaining the integrity of our cultural output? How do we build systems that encourage innovation while discouraging misuse? If we can't prevent widespread piracy and misuse, what can we do?
The Distinction Between Illegitimate and Illegal
To navigate this new frontier, it's crucial to understand the distinction between illegitimate and illegal uses of AI in creative contexts. This differentiation is not just semantic – it's at the heart of the complex issues we face in regulating and ethically deploying AI technologies.
Illegal uses of AI are those that explicitly violate existing laws. This might include using AI to impersonate someone for fraudulent purposes. However, many of the most troubling AI applications, like the Taylor Swift incident, fall into legal gray areas. While clearly unethical and a violation of privacy and dignity, they may not technically be illegal in places lacking specific legislation addressing AI-generated content.
Illegitimate uses, on the other hand, encompass a broader category of AI applications that may not break any specific laws but violate ethical standards, industry norms, or the spirit of existing regulations. These uses might exploit loopholes in current legislation or operate in areas where the law hasn't yet caught up with technological advancements.
The distinction becomes even more critical when we consider the pace at which AI technology is evolving. Lawmakers and regulators are often playing catch-up, trying to address yesterday's problems while tomorrow's challenges are already on the horizon. This gap between technological capability and legal framework creates a fertile ground for those willing to push ethical boundaries for profit or notoriety.
The Netflix Dilemma: A Case Study in AI Ethics
To better understand this distinction, let's consider a hypothetical scenario involving Netflix. Imagine the streaming giant hires a production team to create a new documentary series. In the process, the team uses AI-generated voices or images of real people without their consent. While this might not be explicitly illegal in some jurisdictions, it falls into a gray area that Netflix would likely consider illegitimate use.
Why would Netflix be unlikely to accept this scenario, even if it's not technically illegal? The reasons are multifaceted. Netflix's brand relies on trust from both viewers and content creators, and using AI-generated content without consent could erode that trust. While current laws might not explicitly prohibit such use, it could expose Netflix to lawsuits or legal challenges, especially as laws evolve to address AI. Moreover, Netflix depends on good relationships with actors, directors, and other creatives; using AI to replicate their likenesses without permission could severely damage those relationships.
Organizations like SAG-AFTRA have been vocal about protecting their members from unauthorized AI use. Netflix wouldn't want to risk conflicts with powerful industry unions. Additionally, Netflix, like many large companies, has ethical guidelines and corporate social responsibility commitments that such practices would likely violate. Finally, there might be concerns about the quality and consistency of AI-generated elements compared to traditional production methods.
This scenario illustrates how the line between illegitimate and illegal use can be blurry. While not explicitly against the law, these practices violate industry norms, ethical standards, and potentially the spirit of existing regulations.
The Evolution of Incentives
Currently, the incentives for companies like Netflix to avoid even the appearance of illegitimate AI use are strong. Public criticism, potential boycotts, and the risk of losing valuable talent relationships all serve as powerful deterrents. But how might these incentives evolve?
As AI technology improves, the quality and cost-effectiveness of AI-generated content may become too compelling to ignore. Younger audiences growing up with AI might be more accepting of its use in content creation, reducing the risk of backlash. As laws catch up with technology, clearer guidelines may emerge, giving companies more confidence in what constitutes legitimate use.
Competition will also play a crucial role. If competitors successfully implement AI in ways that significantly reduce costs or increase output, the pressure to adopt similar practices will grow. Moreover, if AI can credibly replicate or replace certain aspects of human performance, it could reduce the leverage of talent in negotiations, potentially making the use of AI more attractive to production companies.
This situation encapsulates the tension between the fear of being first and the fear of being last in adopting new technologies. Right now, a company like Netflix is unlikely to take risks with AI use that could be perceived as illegitimate. However, this calculus could shift rapidly: once a competitor demonstrates it can use AI legally and ethically to create compelling content at a fraction of the cost, the fear of being left behind could start to outweigh the fear of being first.
The Looming Flood of Illegitimate AI Content
To truly understand the magnitude of the challenge we face, we need to look back at a not-so-distant past when the creative industries faced a similar existential threat: the rise of digital piracy.
Cast your mind back to the late 1990s and early 2000s. Napster had just burst onto the scene, followed closely by platforms like LimeWire and The Pirate Bay. Suddenly, anyone with an internet connection could access virtually any song or movie for free. The music industry, in particular, was thrown into chaos. CD sales plummeted, and many predicted the death of the recording industry as we knew it.
The initial response from the industry was predictable: litigation and legislation. The Recording Industry Association of America (RIAA) launched a campaign of lawsuits against individual file-sharers, sometimes targeting college students or single parents with hefty fines. Meanwhile, lobbyists pushed for stricter copyright laws and harsher penalties for infringement.
But here's the crucial lesson we need to take from that era: legal action and regulation alone were woefully insufficient to stem the tide of piracy. You can't sue an entire generation into compliance, nor can you regulate fast enough to keep up with technological innovation. The pirates were always one step ahead, with new platforms and technologies emerging as fast as the old ones could be shut down.
The parallels to our current AI situation are striking. Just as digital piracy democratized access to content, AI is democratizing the ability to create content. And just as the music industry initially tried to fight piracy through legal means, many in the creative industries today are looking to litigation and regulation as the primary tools to control AI.
But history suggests this approach is unlikely to succeed on its own. The genie is out of the bottle: AI tools for content creation are becoming more sophisticated and accessible by the day. Trying to force this technology back into the bottle is not only futile but potentially harmful, as it may stifle innovation and push development underground where it's harder to monitor and influence.
Lessons from the Streaming Revolution
The real turning point in the battle against piracy didn't come from the courtroom or the legislature. It came from collaboration between new technology companies and the major music, film, and TV studios, in the form of streaming platforms like Spotify and Netflix. These services offered a legitimate alternative that was often more convenient and user-friendly than piracy, while still providing a revenue stream for creators and rights holders.
Spotify recognized that the appeal of piracy wasn't just about getting music for free – it was about instant access to a vast library of songs. By offering a free, ad-supported tier alongside its premium subscription, Spotify made it easier for many users to stream legally than to pirate. Netflix, similarly, saw that consumers were willing to pay for content if it was conveniently accessible and reasonably priced.
These platforms didn't eliminate piracy entirely, but they significantly reduced its impact by offering a compelling legitimate alternative. They created a new paradigm for content distribution that aligned with changing consumer behaviors and technological capabilities.
The key takeaway here is that the solution came from collaboration between tech and the entertainment industry. It wasn't imposed by external regulators or fought out in courtrooms. It was innovative companies, willing to disrupt existing business models, that ultimately created a viable path forward.
As we face AI disruption in creative industries, we need to take this lesson to heart. The solution to the challenges posed by AI is unlikely to come from trying to restrict or control the technology through legal means alone. It is more likely to come from embracing the technology and creating new, legitimate models for its use that benefit creators, companies, and consumers alike.
Building Infrastructure for Legitimate AI Commerce
So, what's the solution? How do we harness the creative potential of AI while mitigating its risks? I believe the answer lies in proactively building the infrastructure for legitimate AI commerce in the creative industries.
This infrastructure needs to address several key challenges. As AI drives an explosion in the volume of content, we need systems that can handle a correspondingly higher number of transactions. These systems need to track usage, attribute ownership, and facilitate payments at a scale and speed that matches AI-driven content creation.
We need robust systems for managing rights in an AI-driven landscape. This includes not just copyrights, but also rights of publicity, moral rights, and potentially new forms of rights that emerge as AI becomes more prevalent. We also need to develop and enforce ethical standards for AI use in creative contexts. This goes beyond legal compliance to address issues of consent, representation, and the potential societal impacts of AI-generated content.
Transparency is crucial. As AI-generated content becomes more prevalent and sophisticated, we need mechanisms to ensure consumers can understand when they're engaging with AI-generated content and have confidence in its provenance.
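To make the provenance idea concrete, here's a minimal sketch of how a platform might bind a signed provenance record to a piece of content, so downstream viewers can verify whether it was AI-generated and whether it has been tampered with. This is an illustrative toy, not a real standard (efforts like C2PA's content credentials are the serious version of this); the key name and metadata fields are hypothetical.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"platform-secret-key"  # hypothetical key held by the publishing platform

def make_provenance_record(content: bytes, generator: str, ai_generated: bool) -> dict:
    """Build a record binding metadata (including an AI-generated flag) to the content's hash."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check both that the signature is valid and that the record matches this exact content."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and unsigned["content_sha256"] == hashlib.sha256(content).hexdigest())

rec = make_provenance_record(b"example image bytes", "hypothetical-model-v1", ai_generated=True)
print(verify_provenance(b"example image bytes", rec))  # True
print(verify_provenance(b"tampered bytes", rec))       # False
```

In practice a public-key signature would replace the shared secret, so anyone could verify a record without being able to forge one, but the core idea is the same: provenance only means something if it is cryptographically tied to the content itself.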
We need to develop new models for compensating creators in an AI-driven landscape. This might include systems for attributing and compensating the creators whose work is used to train AI models, as well as new revenue-sharing models for AI-assisted creations. As AI reshapes the creative landscape, we need to ensure that creators and industry professionals have the skills and knowledge to thrive in this new environment.
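As one illustrative sketch of such a compensation model (an assumption on my part, not an industry mechanism): suppose a platform could attribute contribution weights to the creators whose work fed into an AI-assisted output. A simple pro-rata split might then look like this, with all names and weights hypothetical.

```python
def pro_rata_royalties(revenue_cents: int, contributions: dict[str, float]) -> dict[str, int]:
    """Split revenue among contributors in proportion to their attributed weights."""
    total = sum(contributions.values())
    shares = {name: int(revenue_cents * w / total) for name, w in contributions.items()}
    # Assign any rounding remainder to the largest contributor so the split is exact.
    remainder = revenue_cents - sum(shares.values())
    top = max(contributions, key=contributions.get)
    shares[top] += remainder
    return shares

# Hypothetical example: $100.00 of revenue from a track whose model drew on three catalogs.
split = pro_rata_royalties(10_000, {"artist_a": 0.5, "artist_b": 0.3, "artist_c": 0.2})
print(split)  # {'artist_a': 5000, 'artist_b': 3000, 'artist_c': 2000}
```

The hard problem, of course, is not the arithmetic but producing defensible attribution weights in the first place; that's exactly the kind of infrastructure question this section is arguing we need to solve.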
Building this infrastructure is not just about addressing challenges – it's about creating new opportunities. Just as streaming created new revenue streams and business models in the music and film industries, legitimate AI commerce has the potential to unlock new forms of value in the creative sector.
The Audience Perspective: Challenging Our Assumptions
As we grapple with these changes, it's crucial to consider the perspective of the audience – the consumers of creative content. And here, we need to challenge some of our assumptions, particularly about how younger generations will engage with culture.
There's a natural tendency for each generation to assume that the cultural touchstones of their youth will hold the same significance for future generations. But history has shown us time and again that this is rarely the case. Each generation tends to push back against the norms and values of their predecessors, carving out their own cultural identities. For instance, the musical icons of the 1960s and 70s – the Joni Mitchells and James Taylors – may hold a revered place in the hearts of Baby Boomers and even some Gen Xers, but do they resonate in the same way with Millennials or Gen Z?
I believe it's a fallacy to assume that the next generation will engage with culture in the same way we did. We saw this generational pushback with the rise of rock 'n' roll in the 50s, punk in the 70s, hip-hop in the 80s and 90s, and we see it now in the fragmented, internet-driven music scene of today.
Consider the shift from James Taylor to Skrillex. To many older listeners, Skrillex's electronic compositions might sound like noise, lacking the craftsmanship and emotional depth of a James Taylor ballad. But to a younger audience, Skrillex represents a new form of musical expression, one that speaks to their experiences and aesthetics in a way that acoustic folk simply may not.
This generational shift in taste and values extends beyond just musical preferences. It encompasses how people consume media, how they interact with technology, and what they consider authentic or valuable in creative expression. (Even now, I can feel Gen Z'ers rolling their eyes that I'm using Skrillex as a contemporary example.)
So, when we consider the rise of AI in creative industries, we need to be cautious about projecting our own values and assumptions onto future audiences. We might place a high value on human authorship, seeing AI-generated content as somehow less authentic or meaningful. But will the generations growing up with AI make the same distinction?
It's entirely possible – even likely – that future audiences will engage with AI-generated content in ways we can't yet imagine. They may develop new criteria for judging the quality and authenticity of content that don't prioritize human authorship in the same way we do. They might appreciate AI-generated content for its unique qualities, or for how it enables new forms of creative expression.
This doesn't mean that human creativity will become obsolete – far from it. But it does mean that the way we think about creativity, authorship, and the value of creative works may need to evolve. Instead of trying to preserve old models of creative production and consumption, we need to be open to new possibilities and new ways of understanding and appreciating creative expression.
Shaping a New Paradigm for Legitimate AI Commerce
As we reflect on the incidents involving Taylor Swift, Jenna Ortega, and countless others, it's easy to feel overwhelmed by the challenges posed by AI in creative industries. The gap between what's illegitimate and what's illegal seems vast and daunting. But it's crucial to remember that we're not powerless in the face of these challenges.
While we may not have immediate control over the legal landscape, we do have the power to shape industry norms, ethical standards, and consumer expectations. Just as Spotify and Netflix transformed the music and film industries in response to digital piracy, we have the opportunity to create a new paradigm for legitimate AI commerce in creative fields.
Imagine a future where AI-generated content is created and distributed through platforms that prioritize consent, attribution, and fair compensation. A future where the provenance of AI-generated works is transparent and verifiable. A future where creators collaborate with AI tools in ways that enhance rather than replace human creativity.
In this vision, illegitimate uses of AI – like the non-consensual deepfakes that plagued Swift and Ortega – are relegated to the dark corners of the internet. They may still exist, but their impact and visibility are drastically reduced. Instead, the spotlight shines on innovative, ethical applications of AI in creative fields.
The path forward won't be easy, and it will require ongoing dialogue, experimentation, and adaptation. But the potential rewards – a new renaissance of creativity, democratized access to creative tools, and innovative forms of expression – make it a journey worth undertaking.
We can choose to be pioneers in shaping the future of AI in creative industries. Let's build the infrastructure, establish the norms, and create the tools that will usher in a new era of legitimate, ethical, and groundbreaking AI-assisted creativity.