Movie Gen – the future of AI video generation

Meta, the company behind Facebook and Instagram, has introduced a breakthrough artificial-intelligence model called Movie Gen, designed to significantly advance video creation. This new AI-powered video generator can produce high-resolution videos with sound from nothing more than text prompts. The announcement marks Meta's latest push into generative artificial intelligence, placing it in direct competition with other industry giants such as OpenAI and Google.

At its core, Movie Gen lets users create entirely new video clips from simple text prompts such as: "A sloth with pink sunglasses lays on a donut float in a pool." The model represents a significant leap in video generation, pushing the limits of what content creators and enthusiasts can produce. Videos can be generated in various aspect ratios and can last up to 16 seconds, making them suitable for a wide range of applications, from social media posts to film clips. The technology builds on Meta's previous work in video synthesis, such as the Make-A-Scene video generator and the Emu image synthesis model.

In addition to creating new videos from scratch, Movie Gen offers advanced editing capabilities. Users can upload existing videos or images and modify them with simple text commands. For example, a still image of a person can be transformed into a moving video in which that person performs actions described in the prompt. The ability to adapt existing material doesn't end there: users can change specific details such as the background, objects, and even costumes. These changes, all made through text prompts, demonstrate the precision and versatility of Movie Gen's editing function.

But what truly distinguishes Movie Gen from its competitors is its integrated high-quality audio generation. The AI can create soundtracks, sound effects, and ambient sounds that synchronize with the visuals of the generated video. Users can provide text prompts for specific audio cues, such as "rustling leaves" or "footsteps on gravel," and Movie Gen will incorporate these sounds into the scene. The model can generate up to 45 seconds of audio, ensuring that even short films or detailed clips are accompanied by a dynamic soundscape. Meta AI also mentioned that the model includes an audio-extension technique, enabling seamless looping of sound for longer videos.

The unveiling of Movie Gen comes at a time when other major players in the AI industry are developing similar tools. OpenAI announced its text-to-video model Sora earlier this year, but it has not yet been released to the public, and Runway recently introduced Gen-3 Alpha, the latest version of its generative AI platform.

However, Movie Gen stands out for its ability to handle multiple tasks: generating new video content, editing existing clips, and incorporating personalized elements while preserving the integrity of the original footage. According to Meta AI, Movie Gen outperformed competing models in blind tests of both video and audio generation.

Despite the excitement surrounding Movie Gen, Meta has said the tool is not yet ready for public release. According to the company, the technology is still too expensive to run efficiently, and generation times are longer than desired. These technical limitations mean Movie Gen will remain internal for now, with no scheduled timeline for release to developers or the general public.

https://www.youtube.com/watch?v=svtdag9zqzc
