# Implicit Discriminator in Variational Autoencoder

    @article{Munjal2020ImplicitDI,
      title   = {Implicit Discriminator in Variational Autoencoder},
      author  = {Prateek Munjal and Akanksha Paul and N. C. Krishnan},
      journal = {2020 International Joint Conference on Neural Networks (IJCNN)},
      year    = {2020},
      pages   = {1-8}
    }

Recently, generative models have focused on combining the advantages of variational autoencoders (VAEs) and generative adversarial networks (GANs) to achieve both good reconstruction and good generative ability. In this work we introduce a novel hybrid architecture, Implicit Discriminator in Variational Autoencoder (IDVAE), which combines a VAE and a GAN without requiring an explicit discriminator network. The fundamental premise of the IDVAE architecture is that the encoder of a VAE and the discriminator of a…
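The core idea above — one network serving as both the VAE encoder and the GAN discriminator — can be sketched as a shared trunk with separate output heads. This is a minimal illustration of the premise, not the paper's exact architecture; all layer sizes and names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedEncoderDiscriminator(nn.Module):
    """One network plays both roles: a shared trunk feeds a VAE head
    (posterior mean and log-variance) and a discriminator head
    (real/fake logit), so no separate discriminator is needed."""

    def __init__(self, x_dim=784, h_dim=256, z_dim=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # VAE posterior mean
        self.log_var = nn.Linear(h_dim, z_dim)  # VAE posterior log-variance
        self.disc = nn.Linear(h_dim, 1)         # real/fake logit (GAN role)

    def forward(self, x):
        h = self.trunk(x)
        return self.mu(h), self.log_var(h), self.disc(h)

class Decoder(nn.Module):
    """Standard VAE decoder / GAN generator mapping z back to data space."""

    def __init__(self, z_dim=32, h_dim=256, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, z):
        return self.net(z)

def reparameterize(mu, log_var):
    # VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

# One forward pass through the hybrid: encode, sample, decode, then score
# both the real input and the reconstruction with the same shared network.
enc, dec = SharedEncoderDiscriminator(), Decoder()
x = torch.rand(8, 784)
mu, log_var, real_logit = enc(x)
x_hat = dec(reparameterize(mu, log_var))
_, _, fake_logit = enc(x_hat)
```

In a full training loop the VAE heads would receive reconstruction and KL gradients while the discriminator head receives adversarial gradients; how those losses are balanced is the substance of the paper and is not reproduced here.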

#### 3 Citations

An adversarial algorithm for variational inference with a new role for acetylcholine

- Computer Science
- ArXiv
- 2020

This work constructs a VI system that is both compatible with neurobiology and avoids the assumption that neural activities are independent given lower layers during generation, and implements this algorithm, which can successfully train the approximate inference network for generative models.

Bio-informed Protein Sequence Generation for Multi-class Virus Mutation Prediction

- Biology
- 2020

A GAN-based multi-class protein sequence generative model, named ProteinSeqGAN, which yields promising predictions of virus mutations and can generate valid antigen protein sequences from both bioinformatics and statistical perspectives.

Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders

- Computer Science
- 2020

Adversarial Mirrored AutoEncoder is proposed, a simple modification to the Adversarial Autoencoder in which a Mirrored Wasserstein loss in the discriminator enforces better semantic-level reconstruction; a new metric for anomaly quantification is also proposed.

#### References

Showing 1-10 of 31 references.

Variational Approaches for Auto-Encoding Generative Adversarial Networks

- Mathematics, Computer Science
- ArXiv
- 2017

This paper develops a principle upon which auto-encoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model, and describes a unified objective for optimization.

Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

- Computer Science, Mathematics
- ICML
- 2017

Adversarial Variational Bayes (AVB) is a technique for training variational autoencoders with arbitrarily expressive inference models; it introduces an auxiliary discriminative network that allows the maximum-likelihood problem to be rephrased as a two-player game, establishing a principled connection between VAEs and Generative Adversarial Networks (GANs).
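The auxiliary network in AVB can be sketched as a classifier T(x, z) trained to separate posterior samples from prior samples; at its optimum it estimates the log-density ratio that replaces the intractable KL term in the ELBO. This is a hedged sketch of that two-player objective only — the network `T`, its sizes, and the stand-in samples are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

x_dim, z_dim = 16, 4

# Auxiliary discriminator T(x, z): at optimum its logit approximates
# log q(z|x) - log p(z), so it can stand in for the ELBO's KL term.
T = nn.Sequential(nn.Linear(x_dim + z_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def t_loss(x, z_post, z_prior):
    """Logistic loss training T to separate posterior from prior samples."""
    logits_post = T(torch.cat([x, z_post], dim=1))    # should go positive
    logits_prior = T(torch.cat([x, z_prior], dim=1))  # should go negative
    return (nn.functional.softplus(-logits_post).mean()
            + nn.functional.softplus(logits_prior).mean())

x = torch.rand(8, x_dim)
z_post = torch.randn(8, z_dim)   # stand-in for samples from q(z|x)
z_prior = torch.randn(8, z_dim)  # samples from the prior p(z)
loss = t_loss(x, z_post, z_prior)
```

In the full method, this loss is minimized over T while the encoder is trained against T's output, which is what makes the setup a two-player game.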

Improved Training of Generative Adversarial Networks Using Representative Features

- Computer Science
- ICML
- 2018

This paper improves the stability of training GANs by implicitly regularizing the discriminator with representative features, exploiting the fact that the standard GAN minimizes the reverse Kullback-Leibler divergence.

Adversarial Generator-Encoder Networks

- Computer Science, Mathematics
- ArXiv
- 2017

We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted…

Improved Techniques for Training GANs

- Computer Science, Mathematics
- NIPS
- 2016

This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.

It Takes (Only) Two: Adversarial Generator-Encoder Networks

- Computer Science, Mathematics
- AAAI
- 2018

We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted…

Mode Regularized Generative Adversarial Networks

- Computer Science, Mathematics
- ICLR
- 2017

This work introduces several ways of regularizing the objective, which can dramatically stabilize the training of GAN models, and shows that these regularizers help distribute probability mass fairly across the modes of the data-generating distribution during the early phases of training, thus providing a unified solution to the missing-modes problem.

Adversarial Symmetric Variational Autoencoder

- Computer Science, Mathematics
- NIPS
- 2017

A new form of adversarial training is developed, and an extensive set of experiments demonstrates state-of-the-art data reconstruction and generation on several image benchmark datasets.

Generative Adversarial Nets

- Computer Science
- NIPS
- 2014

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…

Autoencoding beyond pixels using a learned similarity metric

- Computer Science, Mathematics
- ICML
- 2016

An autoencoder that leverages learned representations to better measure similarities in data space is presented, and it is shown that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
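The "learned similarity metric" above amounts to comparing reconstructions in the feature space of a discriminator's intermediate layer rather than in pixel space. A minimal sketch of that loss, assuming a stand-in feature extractor (in the actual VAE/GAN method it is the discriminator being trained, with sizes and names chosen here purely for illustration):

```python
import torch
import torch.nn as nn

# Stand-in for a discriminator's hidden layer; untrained here, trained
# adversarially in the actual method.
features = nn.Sequential(nn.Linear(784, 128), nn.ReLU())

def learned_similarity_loss(x, x_hat):
    """Reconstruction error measured between feature activations,
    not raw pixels, so perceptually similar images score as close."""
    return nn.functional.mse_loss(features(x_hat), features(x))

x = torch.rand(8, 784)      # a batch of "real" images, flattened
x_hat = torch.rand(8, 784)  # a batch of reconstructions
loss = learned_similarity_loss(x, x_hat)
```

Replacing the VAE's pixel-wise likelihood with this feature-space error is what lets the model tolerate exact-pixel mismatches while penalizing semantic ones.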