LiveVideoStackCon, an audio-visual technology conference, was held in Beijing from March 31 to April 1. Meishe Co., Ltd. participated for the sixth year running as one of the representative enterprises, showcasing its leading accomplishments in AIGC and digital human technology.
Ruiquan Zhang, Senior AI Algorithm Specialist at the R&D Center of Meishe, attended the event and shared his insights into the rapid implementation of digital content production.
Using computer vision, image processing, and deep learning, the Meishe R&D team can create virtual videos without filming real subjects, a method known as virtual video synthesis. Voice-driven, action-driven, and face-swapping technologies are currently the three main approaches to video generation. Zhang evaluated the advantages and disadvantages of each approach in depth and presented the technical principles behind expression and mouth-shape prediction, 3D face rendering, and face synthesis. A GLB file stores a 3D model in binary form, including the node hierarchy, cameras, materials, animations, and meshes. When a digital human figure is generated as a GLB file, it can be converted to Meicam's self-developed 3D file format ".ARSCENE" and driven by the Meicam SDK for real-time rendering across platforms.
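The GLB container mentioned above follows the glTF 2.0 binary layout: a 12-byte header (the magic bytes "glTF", a version number, and total length) followed by length-prefixed chunks, where the JSON chunk describes the nodes, meshes, materials, and animations. A minimal sketch of parsing that layout (the spec-defined format, not Meicam's converter):

```python
import struct
import json

def parse_glb(data: bytes) -> dict:
    """Parse the header and JSON chunk of a binary glTF (.glb) file.

    GLB layout per the glTF 2.0 spec: a 12-byte header
    (magic b'glTF', version, total length), then chunks, each
    with a 4-byte length, a 4-byte type, and the payload.
    """
    magic, version, length = struct.unpack_from("<4sII", data, 0)
    assert magic == b"glTF", "not a GLB file"
    offset = 12
    chunks = {}
    while offset < length:
        chunk_len, chunk_type = struct.unpack_from("<I4s", data, offset)
        offset += 8
        chunks[chunk_type] = data[offset:offset + chunk_len]
        offset += chunk_len
    # The JSON chunk holds the scene description: nodes, meshes, materials, etc.
    return json.loads(chunks[b"JSON"])

# Build a tiny GLB in memory to demonstrate the round trip.
doc = {"asset": {"version": "2.0"}, "nodes": [{"name": "root"}]}
payload = json.dumps(doc).encode()
payload += b" " * (-len(payload) % 4)  # JSON chunk is padded to 4-byte alignment
glb = struct.pack("<4sII", b"glTF", 2, 12 + 8 + len(payload))
glb += struct.pack("<I4s", len(payload), b"JSON") + payload

parsed = parse_glb(glb)
print(parsed["nodes"][0]["name"])  # → root
```

A real model would also carry a binary (BIN) chunk with vertex and animation data; the parser above already collects it in `chunks` if present.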
Zhang's team focuses on two directions for applying ChatGPT. The first is a digital voice assistant, which combines a voice interaction system with ChatGPT's semantic understanding to give better responses to open-ended user questions. The second combines ChatGPT with video editing and digital humans: the user enters a single sentence, and the system returns a split script via ChatGPT and extracts the necessary tags from it. It then uses those tags to intelligently find appropriate clips in the media bank, and the user can choose to apply a template.
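The sentence-to-video flow above can be sketched as a small pipeline. This is an illustrative assumption, not Meishe's actual implementation: in practice an LLM would return structured tags, and the clip lookup would query a real media database. Here a simple "Tags:" marker and label matching stand in for both:

```python
import re

def extract_tags(script_lines):
    """Pull search keywords out of a split script.

    Hypothetical heuristic: matches words after a 'Tags:' marker,
    standing in for structured tags returned by the LLM.
    """
    tags = []
    for line in script_lines:
        m = re.search(r"Tags:\s*(.+)", line)
        if m:
            tags.extend(t.strip() for t in m.group(1).split(","))
    return tags

def find_clips(tags, media_bank):
    """Return clips whose labels overlap the extracted tags."""
    return [clip for clip in media_bank
            if set(clip["labels"]) & set(tags)]

# Example: a split script as the LLM might return it, plus a toy media bank.
script = [
    "Shot 1: sunrise over the city skyline. Tags: city, sunrise",
    "Shot 2: commuters on the subway. Tags: subway, crowd",
]
bank = [
    {"file": "city_dawn.mp4", "labels": ["city", "sunrise"]},
    {"file": "forest.mp4", "labels": ["forest"]},
]
tags = extract_tags(script)
print(tags)                                         # ['city', 'sunrise', 'subway', 'crowd']
print([c["file"] for c in find_clips(tags, bank)])  # ['city_dawn.mp4']
```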
So far, Meishe's AIGC digital human solution has been integrated into a number of brands' products and has delivered strong results in smart cars, smart watches, smartphones, social media, and more.
This story has been provided by PRNewswire. ANI will not be responsible in any way for the content of this article. (ANI/PRNewswire)
Zhang's team has developed a relatively mature technical solution that achieves realistic virtual video synthesis, along with a variety of applications that quickly generate digital human images. The operator only needs to upload a photo or video and enter preset text; the system then automatically generates the corresponding digital human image with a realistic voice.
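From the client side, the upload-plus-text workflow reduces to assembling a small request. The field names and the `build_request` helper below are illustrative assumptions, not a documented Meishe or Meicam API:

```python
def build_request(photo_path: str, text: str, voice: str = "female_1") -> dict:
    """Assemble the parameters a digital-human synthesis call would need.

    `voice` is a hypothetical TTS preset name; the real system would
    expose its own voice catalogue.
    """
    if not text.strip():
        raise ValueError("preset text must not be empty")
    return {
        "source_image": photo_path,  # face photo or video to animate
        "script": text,              # text the avatar will speak
        "voice": voice,              # assumed TTS voice identifier
    }

req = build_request("portrait.jpg", "Welcome to the product launch.")
print(req["voice"])  # female_1
```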