Random samples generated by our method, as described in Figure 1 and Section 4.2. Videos are shown as GIFs and therefore repeat continuously. Training and generated videos each consist of 13 frames.
| Training Video | Randomly Generated Samples |
| --- | --- |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
As mentioned in Section 3.2, our method can also be trained on longer videos, thus generating further variability in the outputs. We show a number of longer training videos (more than 13 frames) and associated randomly generated 13-frame samples; a sketch illustrating why longer clips add variability follows the table below.
| Training Video | Randomly Generated Samples |
| --- | --- |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
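One way to see why longer clips help: a patch-based model is fit to the set of spatio-temporal patches of the single training video, and a longer clip simply contains more of them. Below is a minimal sketch of enumerating such patches in PyTorch; the patch sizes `kt`, `kh`, `kw` are illustrative, not the paper's actual values.

```python
import torch

def count_spatiotemporal_patches(video, kt=3, kh=9, kw=9):
    """Count the overlapping 3D patches of a video tensor of shape
    (C, T, H, W). Patch sizes kt/kh/kw are illustrative, not the
    paper's actual values."""
    patches = (video.unfold(1, kt, 1)    # slide over time
                    .unfold(2, kh, 1)    # slide over height
                    .unfold(3, kw, 1))   # slide over width
    return patches.shape[1] * patches.shape[2] * patches.shape[3]

short = torch.randn(3, 13, 64, 64)    # a 13-frame clip
longer = torch.randn(3, 40, 64, 64)   # a longer clip at the same resolution
# The longer clip exposes many more temporal patch positions.
print(count_spatiotemporal_patches(short), count_spatiotemporal_patches(longer))
```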
Shown here are a number of video outputs of the SinGAN and ConSinGAN baseline methods (with 2D convolutions replaced by 3D ones), as presented in the user study of Section 4.2. As can be seen, the generated output mostly collapses to the input training video. A sketch of the convolution replacement follows the tables below.
| Training Video | SinGAN (3D) [24] |
| --- | --- |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| Training Video | ConSinGAN (3D) [28] |
| --- | --- |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
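For reference, a minimal sketch of what such a 2D-to-3D replacement might look like in a SinGAN-style convolutional block. `ConvBlock3D`, its channel counts, and the kernel size are our own illustration, not the baselines' released code.

```python
import torch.nn as nn

class ConvBlock3D(nn.Module):
    """Hypothetical SinGAN-style block with every 2D layer swapped for its
    3D counterpart, so patches are matched over time as well as space.
    Channel counts and kernel size are illustrative."""

    def __init__(self, in_ch, out_ch, ker=3, pad=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=ker, padding=pad),  # was nn.Conv2d
            nn.BatchNorm3d(out_ch),                                  # was nn.BatchNorm2d
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, x):
        # x: (batch, channels, frames, height, width) rather than (B, C, H, W)
        return self.block(x)
```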
Effect of the number of VAE levels M on the generated samples, as described in Figure 6 and Section 4.2. N is set to 9, so a total of 10 levels are trained. In addition, a comparison to SinGAN and ConSinGAN (with 2D convolutions replaced by 3D ones) is given. A sketch of the coarse-to-fine level composition follows the tables below.
| Training Video | SinGAN (3D) [24] |
| --- | --- |
| ![]() | ![]() |

| Training Video | ConSinGAN (3D) [28] |
| --- | --- |
| ![]() | ![]() |

| Training Video | Single VAE level (M=1) |
| --- | --- |
| ![]() | ![]() |

| Training Video | Single GAN level (M=9) |
| --- | --- |
| ![]() | ![]() |

| Training Video | Our Method (M=3) |
| --- | --- |
| ![]() | ![]() |
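A hedged sketch of how such a coarse-to-fine composition could be sampled: the first M levels behave as patch VAEs (the coarsest one maps a random latent to a low-resolution video) and the remaining levels act as GAN refiners on upsampled outputs. The interfaces below (`vae_levels`, `gan_levels`, the residual update, the scale factor) are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def generate(vae_levels, gan_levels, z_shape, scale=4 / 3):
    """Sketch of coarse-to-fine sampling: M VAE levels followed by GAN
    levels. `vae_levels` and `gan_levels` are plain lists of per-scale
    modules; all interfaces here are illustrative assumptions."""
    # Level 0: the coarsest patch VAE maps a random latent to a small video.
    x = vae_levels[0](torch.randn(z_shape))
    # Levels 1..N: upsample the previous output, then refine at this scale.
    for level in list(vae_levels[1:]) + list(gan_levels):
        x = F.interpolate(x, scale_factor=(1.0, scale, scale),
                          mode='trilinear', align_corners=False)
        x = x + level(x)  # residual refinement
    return x
```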
As described in Section 4.1, we draw a random sample s from each baseline method. nn1 and nn2 are the 1st and 2nd nearest neighbors (NN) of s in the UCF-101 training set. We show here our randomly generated samples s' when our model is trained on nn1. A sketch of the nearest-neighbor search follows the tables below.
| MoCoGAN's [30] sample s | nn1 Video | nn2 Video | Our sample s' (trained on nn1 video) |
| --- | --- | --- | --- |
| ![]() | ![]() | ![]() | ![]() |

| TGAN's [2] sample s | nn1 Video | nn2 Video | Our sample s' (trained on nn1 video) |
| --- | --- | --- | --- |
| ![]() | ![]() | ![]() | ![]() |

| TGAN-v2's [3] sample s | nn1 Video | nn2 Video | Our sample s' (trained on nn1 video) |
| --- | --- | --- | --- |
| ![]() | ![]() | ![]() | ![]() |
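A minimal sketch of the nearest-neighbor lookup, assuming all videos have been resized to a common shape and are compared by per-pixel L2 distance; the actual metric used in Section 4.1 may differ.

```python
import torch
import torch.nn.functional as F

def two_nearest_neighbors(sample, train_videos):
    """Return indices of the two training clips closest to `sample`.
    All videos are assumed pre-resized to a common (C, T, H, W) shape;
    plain per-pixel L2 distance is an assumption here."""
    dists = torch.stack([F.mse_loss(sample, v) for v in train_videos])
    nn1, nn2 = torch.topk(dists, k=2, largest=False).indices.tolist()
    return nn1, nn2
```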
Additional image generation results and comparison to baselines, as described in Figure 7 and Section 4.2.
| SinGAN [24] | ConSinGAN [28] | Our Method (2D) |
| --- | --- | --- |
| ![]() ![]() ![]() ![]() ![]() | ![]() ![]() ![]() ![]() ![]() | ![]() ![]() ![]() ![]() ![]() |
| ![]() ![]() ![]() ![]() ![]() | ![]() ![]() ![]() ![]() ![]() | ![]() ![]() ![]() ![]() ![]() |
| ![]() ![]() ![]() ![]() ![]() | ![]() ![]() ![]() ![]() ![]() | ![]() ![]() ![]() ![]() ![]() |
As mentioned in Section 4.2, when training all levels (i.e., no freezing), we observe substantial memorization of the training video, shown here; a sketch of the freezing scheme follows the table.
| Training Video | Training All Levels |
| --- | --- |
| ![]() | ![]() |
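For contrast with the failure above, a minimal sketch of the freezing scheme: while one level is being trained, every other (already trained) level is frozen. `levels` and `current` are illustrative names, not the released code.

```python
def set_trainable_level(levels, current):
    """Progressive-training sketch: only level `current` receives gradient
    updates; all other levels are frozen. Skipping this step (training all
    levels jointly) corresponds to the memorization failure shown above.
    `levels` is an illustrative list of per-scale modules."""
    for i, level in enumerate(levels):
        for p in level.parameters():
            p.requires_grad = (i == current)
```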