Fun with ML on Splatoon 2

VAE-GAN Part 2

Following the observation made in the previous post, I ran the same training, but without fake images generated from random samples $z \sim \mathcal{N}(\mathbf{0}, \mathbf{1})$.
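For concreteness, here is a minimal sketch of what dropping that branch looks like in a VAE-GAN discriminator update. The module names, shapes, and linear stand-ins are all placeholders I made up, not the actual model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, image_dim = 8, 32

# Tiny stand-ins for the real encoder/decoder/discriminator (assumed shapes).
encoder = nn.Linear(image_dim, 2 * latent_dim)   # outputs (mu, logvar)
decoder = nn.Linear(latent_dim, image_dim)
discriminator = nn.Linear(image_dim, 1)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(4, image_dim)                    # a batch of "real" images
mu, logvar = encoder(x).chunk(2, dim=1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized sample
x_rec = decoder(z)                               # reconstruction

real_logits = discriminator(x)
rec_logits = discriminator(x_rec.detach())

# The original VAE-GAN also feeds decodings of z_p ~ N(0, 1) to the
# discriminator; this run drops that term and trains the discriminator
# only on real vs. reconstructed images.
d_loss = (bce(real_logits, torch.ones_like(real_logits))
          + bce(rec_logits, torch.zeros_like(rec_logits)))
# Dropped term would have been:
#   z_p = torch.randn_like(z)
#   d_loss += bce(discriminator(decoder(z_p)), zeros)
```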

At the beginning, the pixel error fluctuates more than in the original setting, but toward the end this is no longer observed.

The adversarial losses and the KL divergence follow a very similar trend.

There is a discrepancy in the feature-matching error, but the two runs seem to follow a similar trend.

The fact that this VAE-GAN model does not produce any synthetic image resembling the original data still bothers me. Maybe I should put a larger weight on the KL divergence so that samples in the latent space stay within a reasonable range of the normal distribution $\mathcal{N}(\mathbf{0}, \mathbf{1})$.
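Upweighting the KL term would amount to a $\beta$-VAE-style objective. A sketch of what that looks like, using the closed-form KL between a diagonal Gaussian posterior and $\mathcal{N}(\mathbf{0}, \mathbf{1})$ — the weight value and the reconstruction-loss placeholder are assumptions, not tuned numbers:

```python
import torch

torch.manual_seed(0)
mu = torch.randn(4, 8)      # stand-in posterior means from the encoder
logvar = torch.randn(4, 8)  # stand-in posterior log-variances

# Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), averaged over the batch.
kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1).mean()

beta = 10.0                      # hypothetical larger KL weight; would need tuning
recon_loss = torch.tensor(1.0)   # placeholder for the pixel / feature loss
total_loss = recon_loss + beta * kl
```

A larger `beta` pushes the encoder toward the prior at the cost of reconstruction fidelity, so the trade-off would show up in the pixel error curves.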

But the fact that the KL divergence grows indicates that the latent codes produced from input images do not follow a normal distribution. So it makes sense that fake images generated from random samples are not helping training.
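One cheap way to check this claim would be to look at the per-dimension statistics of the encoded latents: if the aggregate posterior matched $\mathcal{N}(\mathbf{0}, \mathbf{1})$, each dimension's mean should be near 0 and its standard deviation near 1. A hypothetical diagnostic, with synthetic latents standing in for the real encoder output:

```python
import torch

torch.manual_seed(0)

# Stand-in for encoded latents q(z|x); deliberately shifted and scaled
# to mimic a posterior that has drifted away from N(0, 1).
z = torch.randn(1000, 8) * 2.5 + 1.0

max_mean_dev = z.mean(dim=0).abs().max().item()
max_std_dev = (z.std(dim=0) - 1).abs().max().item()
print(f"max |mean| = {max_mean_dev:.2f}, max |std - 1| = {max_std_dev:.2f}")
# Large deviations suggest that decoder inputs z_p ~ N(0, 1) land in a region
# the decoder never sees during training, so those fakes carry little signal.
```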

Quality-wise, no noticeable difference was observed.

Code and model are available here.