Designing an Auxiliary Task for the VAE Model: Encoding Digit Type Information
In this blog post, we’ve discussed a model composed of three sub-networks, each with its own job, built on top of a vanilla VAE. The first two are the standard VAE components: an encoder that maps an image to a distribution over the latent space, and a decoder that maps a latent vector back to an image.
The third sub-network performs an auxiliary task: it forces certain latent dimensions to encode which digit appears in the image. This is done by tying those dimensions to a one-hot encoding of the digit type. With the digit information made explicit in the latent space, the model can generate images conditioned on the digit type.
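As a concrete illustration of this auxiliary loss (a minimal sketch in PyTorch with an assumed 16-dimensional latent space, not the exact code from this post), one simple way to tie latent dimensions to the digit is to treat the first ten dimensions of the latent vector as unnormalized class scores and penalize them with cross-entropy against the digit label:

```python
import torch
import torch.nn.functional as F

# Assumed shapes: a batch of 32 latent vectors with latent_dim = 16,
# of which the first 10 dimensions should encode the digit class.
z = torch.randn(32, 16)                     # latent vectors from the encoder
digit_labels = torch.randint(0, 10, (32,))  # ground-truth digit per image

# Cross-entropy reads z[:, :10] as unnormalized scores over the ten digits,
# which pushes those dimensions toward the digit's one-hot encoding.
aux_loss = F.cross_entropy(z[:, :10], digit_labels)
```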
There are two ways to give the model the one-hot digit vector: feed it as an additional input, or treat it as a label the model has to predict. We opt for the latter because it offers more flexibility at inference time, particularly when we hand the decoder only a latent vector and ask it to generate an image.
Because digit prediction is part of training, the model learns to infer the digit type on its own, which keeps it usable across inference scenarios where no label is available. Implementing the architecture then comes down to coding the encoder, decoder, and digit-prediction sub-networks, keeping the benefits of the VAE framework while enriching the latent space with task-specific information.
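Putting the pieces together, a sketch of such a three-part model might look like the following. This is a simplified PyTorch version with assumed details (flattened 28x28 MNIST images, a 16-dimensional latent space, a linear digit head named `digit_head`), not the exact implementation from this post; the digit branch could just as well supervise dedicated latent dimensions directly, as in the earlier snippet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 16  # assumed; any size >= 10 leaves room for digit information


class DigitVAE(nn.Module):
    def __init__(self, latent_dim: int = LATENT_DIM):
        super().__init__()
        # Sub-network 1: encoder, image -> parameters of a latent Gaussian.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        # Sub-network 2: decoder, latent vector -> reconstructed image.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 784), nn.Sigmoid(),
        )
        # Sub-network 3: auxiliary head that predicts the digit from the latent code.
        self.digit_head = nn.Linear(latent_dim, 10)

    def forward(self, x):
        # x is expected to hold pixel values in [0, 1].
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        x_hat = self.decoder(z).view_as(x)   # reconstructed image
        digit_logits = self.digit_head(z)    # auxiliary digit prediction
        return x_hat, mu, logvar, digit_logits


def loss_fn(x, x_hat, mu, logvar, digit_logits, digit_labels, aux_weight=1.0):
    """Standard VAE loss plus the auxiliary digit-classification term."""
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    aux = F.cross_entropy(digit_logits, digit_labels, reduction="sum")
    return recon + kl + aux_weight * aux
```

Because the digit is predicted rather than supplied as an input, the decoder only ever sees a latent vector; at inference time we can decode any latent code without attaching a label, and since the auxiliary loss ties digit identity to the latent space, manipulating the digit-related dimensions is what lets us generate images of a chosen digit type.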
Overall, this design shows how incorporating domain-specific knowledge, here the digit class, into the latent space can improve both the representation and the generation capabilities of a VAE.