MIPCOM24: The Applied AI Summit
Key insights about the impact of AI in the Media & Entertainment industry
Missed the MIPCOM AI Summit yesterday?
Whether you were tied up in meetings, couldn’t make it to Cannes, or just need a refresher, I’ve got you covered!
Introducing Summit Summaries: quick, insightful recaps of the key discussions from yesterday's summit.
Now it’s up to you to turn these highlights into actionable insights for your business ;)
During MIPCOM week, I’m offering a special 20% discount on my premium membership, TSL+. You’ll join in time to receive my special report “Beyond the Game”, a guide to how streaming is transforming sports in MENA, including a summary of sports rights by platform, to be released on October 31st.
The Future of Media: Generative AI’s Transformation of Content Workflows
Evolution of art and technology
Emily Golden, head of growth marketing at Runway, introduced the company's role in using AI to revolutionize art, entertainment, and human creativity. She emphasized the rapid growth of AI-generated content and predicted that within five years, 50% of online content will be AI-augmented.
Emily also traced the evolution of art, from the impact of the Industrial Revolution on Impressionism to the invention of the camera, which revolutionized visual capture and led to the coexistence of painting and photography.
“Within the next five years, 50% of online content will be generated by or augmented by AI” - Emily Golden, Head of Growth Marketing, Runway
The impact of motion pictures
Emily discussed the early days of motion pictures, beginning with the first known photograph and the initial fears surrounding this new technology. She explained the swift evolution of moving pictures, from simple videos to full-scale cinematic masterpieces, and compared this rapid development to the slower progression of painting and photography.
Emily highlighted the transformative role of AI in shaping modern art and creative processes.
Runway's AI art evolution
Emily showcased Runway's journey in AI-generated art, starting with early pieces like "The Orange" in 2021, and progressing to complex cityscapes and scenes in 2022.
She highlighted improvements in quality and fidelity, leading to the introduction of the latest Gen-3 Alpha model. This model can generate entire video scenes and worlds from text prompts with high quality and minimal editing.
Emily demonstrated its capabilities, such as creating dynamic scenes, characters, and intricate camera movements.
AI in film and creativity
Emily described how AI is being used in filmmaking, including a 60-second video generated entirely by one person using Runway's tools. She emphasized how AI is democratizing creativity, enabling creators with limited resources to access high-quality production tools.
She described the creative process with AI as complex and non-linear, allowing rapid visualization, iteration, and collaboration. AI was also highlighted as a valuable tool in location scouting, visualization, and pre-production planning, making the creative process more efficient.
Practical AI applications in film
Emily provided practical examples of AI's application in film production, such as generating visual effects proofs of concept and creating production-ready B-roll. She discussed how AI can animate archival images, adding life to historical footage.
Emily highlighted Runway's partnership with Lionsgate, underscoring the value of proprietary training data. She demonstrated Runway's tools, including text-to-video, image-to-video, and video-to-video generation, for creating high-quality content rapidly.
Combining AI with traditional techniques
Emily showcased a commercial produced in partnership with Yann Lee Joshua, featuring AI-generated appearances and voiceover of Anthony Joshua. The project demonstrated how AI-generated content can be effectively combined with traditional filmmaking techniques to save time and reduce costs.
She stressed the importance of having a clear vision and understanding how to best utilize AI tools. Emily also discussed future advancements in AI, focusing on the development of general world models that can understand and generate across various media types.
“We have a saying at Runway that the best stories are yet to be told” - Emily Golden, Head of Growth Marketing, Runway
Future of AI in art and creativity
Emily concluded by discussing the rapid progress of AI models and the potential for even faster advancements in the future. She highlighted Runway's goal of creating sophisticated systems capable of understanding and generating across multiple media formats, including video, audio, text, and images.
Examples of current AI capabilities were shown, such as simulating real-world interactions and environments. Emily encouraged the audience to explore and adopt AI tools, emphasizing the limitless potential for transforming art and creativity.
The Emerging State of Generative AI in TV & Entertainment
Generative AI adoption and its impact on industry
Audrey Schomer discussed the mixed adoption of generative AI among US companies, with around 20% having no plans to use it. Productivity gains are the main expected benefit, with efficiency as the key metric.
Faster content production could lead to more content, but data security and copyright remain concerns. Generative AI's readiness for high-quality content, particularly for TV and film, is still uncertain.
"Boosting productivity is the main benefit that decision makers said that they would expect from using generative AI” - Audrey Schomer, Media Analyst & Research Editor, Variety Intelligence Platform
Challenges and concerns in generative AI adoption
Key reasons for not adopting generative AI include copyright uncertainty, lack of confidence in outputs, and a shortage of skilled personnel. Workers have the most confidence in AI's ability for tasks like sound effects, VFX edits, and concept work.
Copyright concerns are prompting companies to train models ethically to avoid risks. Generative AI is used for concept work, enhanced VFX, and dubbing, but not as final assets in screen content.
Applications and benefits of generative AI in content creation
Generative AI is used in content creation stages like pre-production, marketing, and distribution. Current uses include concept work, VFX, and dubbing. AI voices are gaining traction for localizing sports clips, offering faster turnaround.
AI lip-sync tools synchronize actors' facial movements with dubbed speech, though human voice actors are still preferred for premium projects. Face swapping and de-aging tools are used for basic edits, potentially eliminating reshoots by allowing remote line redos.
"The biggest use cases that are being looked at right now are concept work, enhanced VFX and dubbing and subtitling" - Audrey Schomer, Media Analyst & Research Editor, Variety Intelligence Platform
Advancements in video generation and future prospects
Video generation is advancing quickly, but its professional use is still uncertain, with questions about who will use these tools. Companies are working with AI teams to maximize video generation capabilities.
Text-to-video tools struggle with consistency and realism, but fine-tuning models on specific IP could improve results. The industry faces legal and ethical issues, with consumer acceptance likely shaping AI's future in content creation.
Final takeaways and industry outlook
Generative AI's capabilities are improving, but legal and ethical questions remain. Consumer response will influence its long-term effectiveness, with licensing and model fine-tuning being key areas to watch. The industry needs technical and creative methods to protect and leverage generative AI.
The New Profit Engine: Revolutionizing Entertainment Profitability
Panelist introductions
Jonathan Verk introduced the panel, emphasizing AI's importance in production workflows. Johann Choron from Google shared his experience with DeepMind's Go success, while Marianne Carpentier discussed using AI in animation, including replacing an actress's face during quarantine. Damien Viel talked about his work at Google, YouTube, and Twitter, and his initial disappointment with AI-generated music.
AI in production workflows
Johann Choron explained how Google uses AI for writing and summarization to save time, and described a partnership with Warner Brothers on captioning AI that cut captioning time by 80%. Marianne Carpentier shared how AI optimizes data management and planning for daily soap productions.
"We used AI to generate visuals and mood boards and pitch deck very quickly" - Marianne Carpentier, Director of Emerging Technologies for Production & Distribution, Group TF1-Newen Studios
AI in content distribution and creativity
Damien Viel discussed AI's logistical role in content management, including storage and indexing. He emphasized the need for independence while leveraging AI for global distribution and highlighted AI's potential to enhance creativity and save costs in production.
"How to make sure we are building tools that can enhance creativity?” - Damien Viel, Chief Digital & Marketing Officer, Banijay Entertainment
Ethical considerations and challenges
Johann Choron discussed Google's AI principles and the need for ethical considerations and industry standards. Marianne Carpentier emphasized internal regulation, addressing rights, and data transparency in AI use.
Future of AI in Media and Entertainment
Johann Choron saw AI as transformative, optimizing processes and enabling content fluidity. Marianne Carpentier highlighted faster production and better audience engagement through AI.
"We are working with people all around the world in trying to make the content more accessible to and personalized to each audiences." - Johann Choron, Strategic Partnerships, Head of Gen AI for Media & Entertainment, Google
Independence and strategic considerations
Damien Viel discussed challenges in maintaining independence while using AI for logistics and creativity. He stressed the importance of strategic decisions to protect IP and ensure long-term independence. Damien also expressed excitement about AI's potential to enhance creativity and reduce costs.
Found this article insightful? Don't keep it to yourself, share it with your team and start the conversation!
First AI that can automatically translate emotion in video
Jesse Shemen, the co-founder and CEO of Papercup, presented the company's approach to translating and dubbing video and audio content into multiple languages while preserving the emotional expression and voice characteristics of the original.
Jesse explained that the current process of dubbing content is challenging and time-consuming, often relying on hours of work from human translators and voice actors. To address this, Papercup has developed a multi-step system that combines advanced AI technologies.
"We want to make all video and audio content consumable in any language." - Jesse Shemen, Co-founder & CEO, Papercup
The first step is "context-aware speech transcription and translation," which analyzes the source content to capture the appropriate tone, rhythm, and length when translating to the target language. This helps ensure the translated content is not only accurate, but also naturally consumable.
The second key component is Papercup's cross-lingual prosody transfer technology. This allows the system to model and transfer the prosody (emotion, rhythm, and tone), environmental characteristics, and persona of the original speaker into the target-language version.
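For readers who think in code, here is a minimal, purely illustrative sketch of that two-step flow. Every name, data structure, and stub body below is an assumption made for this example; none of it is Papercup's actual API, and the placeholders simply stand in for the real transcription, translation, and prosody-transfer models.

```python
# Illustrative sketch only: function names and bodies are placeholders,
# not Papercup's real API.
from dataclasses import dataclass

@dataclass
class Segment:
    text: str            # transcribed source speech
    tone: str            # e.g. "excited", "whispered"
    duration_s: float    # length of the original utterance

def transcribe_and_translate(segment: Segment, target_lang: str) -> Segment:
    """Step 1: context-aware transcription and translation.
    A production system would pick wording whose rhythm and length fit the
    original utterance; here the text is passed through unchanged."""
    translated_text = segment.text  # placeholder for a real MT model
    return Segment(text=translated_text, tone=segment.tone,
                   duration_s=segment.duration_s)

def transfer_prosody(segment: Segment, voice_profile: dict) -> bytes:
    """Step 2: cross-lingual prosody transfer plus speech synthesis.
    A real model would condition on the source speaker's emotion, rhythm,
    and environment; here we return silent placeholder audio of the
    matching length."""
    samples = int(segment.duration_s * 16_000)  # 16 kHz placeholder audio
    return bytes(samples)

def dub(segments: list[Segment], target_lang: str, voice_profile: dict) -> list[bytes]:
    """Run both steps over every utterance in a video."""
    return [transfer_prosody(transcribe_and_translate(s, target_lang), voice_profile)
            for s in segments]

if __name__ == "__main__":
    clips = [Segment("Welcome back to the show!", tone="excited", duration_s=2.1)]
    audio = dub(clips, target_lang="es", voice_profile={"speaker": "host_a"})
    print(f"Generated {len(audio)} dubbed segments")
```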
To demonstrate the capabilities, Jesse played several examples showcasing the system's ability to replicate emotional inflections, speaker-specific traits like whispering, and even voice cloning - all while maintaining the appropriate translation.
The ultimate goal, Jesse emphasized, is to make all video and audio content consumable in any language at scale. This unlocks new global audiences for content creators and publishers, who currently struggle to cost-effectively localize the vast amount of media being produced.
From Audience to Player: The Dawn of Interactive Streaming
The presentation was given by Devo Harris, the founder of Adventr, and focused on the importance of interactive streaming in the modern media landscape. Devo began by challenging the common belief that video is the primary language of today's audience, suggesting that gaming is actually more relevant and engaging.
Devo argued that gaming, in terms of scale and growth, far outpaces the entire online video industry. He emphasized that modern viewers expect to be part of the media experience, with features like personalized feeds and interactive elements. Devo presented gaming as a necessity for media companies to keep viewers engaged and delighted, noting that it is becoming a standard requirement for media offerings.
To illustrate this point, Devo provided several examples of gaming being integrated into various industries. He mentioned that every Tesla comes with built-in games, demonstrating how gaming is being used to enhance competitive business models. Devo also highlighted that LinkedIn offers gaming and interactive entertainment, as these features help keep viewers engaged for longer periods - a crucial metric for compensation in the media industry.
"Gaming is far exponentially larger than the entire online video industry in terms of size, scale, and growth" - Devo Harris, Founder & CEO, Adventr
Devo then introduced Adventr, his company that provides solutions for making content more interactive and personalized for viewers at scale. He showcased some of Adventr's interactive features, including choose-your-own-adventure content, interactive game shows, and shoppable video.
Overall, the presentation emphasized the growing importance of interactive and gaming-like experiences in the media industry, as companies strive to keep their audiences engaged and delighted in an increasingly competitive landscape.
Revolutionizing Entertainment Marketing with Vertical AI
Jonathan Verk discussed the significant shift in how audiences, especially younger demographics, discover and consume content. He noted that 80% of new content decisions are made on social media, and that 70% of television viewing is of library content rather than new releases.
Jonathan emphasized the high cost of producing continuous, engaging social media content and the constant demand for it. Marketing departments face the challenge of doing more with less, as many are experiencing layoffs.
“Social media has become the primary platform for content discovery, with 80% of new content decisions made there” - Jonathan Verk, Co-founder & CEO, Social Department
Jonathan introduced Social Department, a solution that aims to automate the social media workflow and content creation process for marketing teams at networks, streamers, FAST channels, and studios.
Social Department's platform leverages a three-layer system: a content layer that uses large language models to understand the context of the content, an audience insights layer built on demographic data, and a marketing strategy layer built on campaign efficacy data.
The platform's video intelligence feature lets users upload content, which the AI tags and analyzes. Marketers then select target demographics, and the AI creates a bespoke campaign strategy and automates the creation of 80% of the social media assets.
Jonathan emphasized that Social Department can significantly reduce the cost per social media asset, from $1,300 for agency-created assets or $625 for in-house creation down to less than $10 per asset.
Jonathan shared a case study with CBS, where Social Department's AI identified 94 rules from 22,000 minutes of NCIS content in minutes, compared with the 1,000 hours and $17,000 it would have taken a human team.
The presentation demonstrated how the AI can create various social media assets, including posts, polls, and memes, tailored to different demographics, further highlighting the efficiency and scalability of the platform.
Jonathan stressed that the goal of Social Department is not to replace humans, but to enhance their productivity by automating mundane, repetitive tasks, allowing marketing teams to focus on strategic and creative work.
That’s all for today: key insights from the Applied AI Summit. If you found this breakdown valuable, spread the word and share it with your network!
Throughout the week, I’ll be sharing with you the latest insights coming from the three MIPCOM summits:
Full article coming on Wednesday, October 23
Article coming on Thursday, October 24
Don’t miss my Hot Takes from MIPCOM 2024 during a live webinar on October 29th, with Marion Ranchet from Streaming Made Easy. Register here.
Master Streaming in MENA with exclusive data & analysis for streaming media professionals. Subscribe to TSL+:
Read all the previous editions of The Streaming Lab here.
Download my latest report, STREAMING in MENA - August 2024. Exclusively available to TSL+ Prime members!
Interested in advertising with The Streaming Lab and reaching a qualified audience in MENA? Email me.