Tuesday, October 22, 2024

DeepMind’s AI Now Generates Soundtracks and Dialogue for Videos

DeepMind unveils V2A, an AI that creates soundtracks for videos. It uses video and audio data to generate music, effects, and dialogue. While not perfect, it raises concerns about the impact on creative jobs.

DeepMind, Google’s AI research lab, says it’s developing AI tech to generate soundtracks for videos.

In a post on its official blog, DeepMind says it sees the tech, which it calls V2A (short for “video-to-audio”), as an essential piece of the AI-generated media puzzle. While plenty of organizations, including DeepMind, have developed video-generating AI models, those models can’t create sound effects to sync with the videos they generate.

“Video generation models are advancing at an incredible pace, but many current systems can only generate silent output,” DeepMind writes. “V2A technology [could] become a promising approach for bringing generated movies to life.”

DeepMind’s V2A tech takes the description of a soundtrack (e.g., “jellyfish pulsating underwater, marine life, ocean”) paired with a video to create music, sound effects, and even dialogue that matches the characters and tone of the video, watermarked by DeepMind’s deepfakes-combating SynthID technology. The AI powering V2A, a diffusion model, was trained on a combination of sounds and dialogue transcripts as well as video clips, DeepMind says.

“By training on video, audio, and the additional annotations, our technology learns to associate specific audio events with various visual scenes while responding to the information provided in the annotations or transcripts,” according to DeepMind.

Mum’s the word on whether any training data was copyrighted — and whether the data’s creators were informed of DeepMind’s work. We’ve contacted DeepMind for clarification and will update this post if we hear back.

AI-powered sound-generating tools aren’t novel. Startup Stability AI released one just last week, and ElevenLabs launched one in May. Nor are models that generate sound effects for video. A Microsoft project can generate talking and singing videos from a still image, and platforms like Pika and GenreX have trained models to take a video and make a best guess at what music or effects are appropriate in a given scene.

However, DeepMind claims that its V2A tech is unique in that it can understand the raw pixels from a video and sync generated sounds with the video automatically, optionally sans description.

V2A isn’t perfect, and DeepMind acknowledges this. Because the underlying model wasn’t trained on many videos with artifacts or distortions, it doesn’t create particularly high-quality audio for them. And in general, the generated audio isn’t super convincing; my colleague Natasha Lomas described it as “a smorgasbord of stereotypical sounds,” and I can’t say I disagree.

For those reasons and to prevent misuse, DeepMind says it won’t release the tech to the public anytime soon.

“To make sure our V2A technology can positively impact the creative community, we’re gathering diverse perspectives and insights from leading creators and filmmakers and using this valuable feedback to inform our ongoing research and development,” DeepMind writes. “Before we consider opening access to it to the wider public, our V2A technology will undergo rigorous safety assessments and testing.”

DeepMind pitches its V2A technology as an especially useful tool for archivists and those working with historical footage. But generative AI along these lines also threatens to upend the film and TV industry. It’ll take some seriously strong labor protections to ensure that generative media tools don’t eliminate jobs—or, as the case may be, entire professions.
