Pytorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.
This is a pix2pix demo that learns from pose and translates it into a human image. A webcam-enabled application is also provided that translates your pose to the trained pose. Everybody Dance Now!
This is a pix2pix demo that learns from edges and translates them into views. An interactive application is also provided that translates edges to views.
Dataset generation pipeline with BeamNG.tech + Visual Experiments with vid2vid models.
Audio-driven video synthesis
Demo for NVIDIA's Fewshot Vid2vid
Small script for AUTOMATIC1111/stable-diffusion-webui to run video through img2img.
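Scripts like the one above typically split a clip into frames, translate each frame with an image-to-image model, and reassemble the result. A minimal sketch of that per-frame loop, using in-memory arrays instead of a real video file and a placeholder `img2img` function (hypothetical, standing in for an actual diffusion-model call):

```python
import numpy as np

def img2img(frame: np.ndarray, strength: float = 0.5) -> np.ndarray:
    # Placeholder for a real img2img call (e.g. stable-diffusion-webui's
    # img2img); here it just blends the frame toward mid-gray so the
    # loop has something deterministic to apply.
    gray = np.full_like(frame, 128)
    return ((1 - strength) * frame + strength * gray).astype(frame.dtype)

def translate_video(frames):
    # Translate each frame independently. Real vid2vid pipelines usually
    # add temporal consistency (e.g. optical-flow warping or a fixed
    # seed) on top of this to reduce frame-to-frame flicker.
    return [img2img(f) for f in frames]

# A fake 4-frame "video" of 8x8 RGB frames.
video = [np.zeros((8, 8, 3), dtype=np.uint8) for _ in range(4)]
out = translate_video(video)
print(len(out), out[0].shape)  # → 4 (8, 8, 3)
```

In practice the frame extraction and reassembly steps are usually handled by ffmpeg or OpenCV rather than in-memory lists.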
Python OSS library that provides vid2vid pipeline by using Hugging Face's diffusers.
ICASSP 2022: "Text2Video: text-driven talking-head video synthesis with phonetic dictionary".
A modified version of vid2vid for Speech2Video, Text2Video Paper
ControlAnimate Library
vid2vid AI optimization script
WarpFusion