The IEEE World Congress on Computational Intelligence (WCCI) will take place in Glasgow, Scotland, UK on July 19-24, 2020.
Title: Deep Stochastic Learning and Understanding
Time: 8:00 AM-10:00 AM or 10:30 AM-12:30 PM on July 19
Place: Scottish Event Campus
Description: This tutorial addresses advances in deep Bayesian learning for sequence data, which are ubiquitous in speech, music, text, image, video, web, communication and networking applications. Spatial and temporal contents are analyzed and represented to fulfill a variety of tasks, ranging from classification, synthesis, generation, segmentation, dialogue, search, recommendation, summarization, question answering, captioning, mining and translation to adaptation, to name a few. Traditionally, “deep learning” is taken to be a learning process whose inference or optimization is based on a real-valued deterministic model. The “latent semantic structure” in words, sentences, images, actions, documents or videos learned from data may not be well expressed or correctly optimized in mathematical logic or computer programs. The “distribution function” in a discrete or continuous latent variable model for spatial and temporal sequences may not be properly decomposed or estimated. This tutorial covers the fundamentals of statistical models and neural networks, and focuses on a series of advanced Bayesian and deep models, including the recurrent neural network (RNN), convolutional neural network (CNN), sequence-to-sequence model, variational auto-encoder (VAE), attention mechanism, memory-augmented neural network, skip neural network, temporal difference VAE, stochastic neural network, stochastic temporal convolutional network, predictive state neural network, and policy neural network. Enhancing the prior and posterior representations is also addressed. We present how these models are connected and why they work for a variety of applications on symbolic and complex patterns in sequence data. Variational inference and sampling methods are formulated to tackle the optimization of such complicated models. The embeddings, clustering or co-clustering of words, sentences or objects are merged with linguistic and semantic constraints.
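The description invokes variational inference for latent variable models such as the VAE. As a minimal illustration (not part of the tutorial materials), the following NumPy sketch shows two ingredients of a Gaussian VAE objective: the closed-form KL divergence between the approximate posterior and a standard normal prior, and the reparameterization trick that keeps sampling differentiable. The function names are illustrative, not an established API.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims:
    # 0.5 * sum( sigma^2 + mu^2 - 1 - log sigma^2 )
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def reparameterize(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so gradients can flow through (mu, log_var) despite the sampling step.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu, log_var = np.zeros(4), np.zeros(4)
kl = gaussian_kl(mu, log_var)      # 0.0 when posterior equals the prior
z = reparameterize(mu, log_var, rng)
```

In a full VAE, `gaussian_kl` would be added to a reconstruction loss and both would be minimized jointly over encoder and decoder parameters.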
A series of case studies, tasks and applications is presented to tackle different issues in deep Bayesian learning and understanding. Finally, we point out a number of directions and outlooks for future studies. This tutorial serves three objectives: to introduce novices to the major topics within deep Bayesian learning, to motivate and explain a topic of emerging importance for natural language understanding, and to present a novel synthesis combining distinct lines of machine learning work.
Organization: The presentation of this tutorial is arranged into five parts. First, we share the current status of research on natural language processing, statistical modeling and deep neural networks, and explain the key issues in deep Bayesian learning for discrete-valued observation data and latent semantics. Modern natural language models are introduced to show how data analysis proceeds from language processing to semantic learning and memory networking. Second, we address a number of Bayesian models, ranging from the latent variable model to variational Bayesian inference, MCMC sampling and deep unfolding inference. In the third part, a series of deep spatial and temporal models is introduced, including the memory network, dilated RNN, attention network with transformer, CNN, sequence-to-sequence learning and neural domain adaptation. The fourth part focuses on a variety of advanced studies which illustrate how deep Bayesian learning is developed to infer sophisticated recurrent models for multimedia and language understanding. In particular, the Bayesian RNN, VAE, neural variational learning, neural discrete representation, stochastic neural network, stochastic temporal convolutional network, temporal difference VAE and Markov recurrent neural network are introduced in various deep models, which open a window to more practical tasks, e.g. style transfer, image and sentence generation, visual dialogue systems, visual question answering and image translation. Variational inference methods based on normalizing flows and variational mixtures of posteriors are addressed. The posterior-collapse problem in stochastic modeling and learning is also treated. In the final part, we spotlight some future directions for deep Bayesian learning and understanding that can handle the challenges of big data, heterogeneous conditions and dynamic systems.
In particular, deep learning, structural learning, temporal and spatial modeling, long history representation and stochastic learning are emphasized.
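The organization mentions treating posterior collapse, where a VAE decoder learns to ignore the latent code. One widely used remedy (an assumption here, not necessarily the specific method the tutorial presents) is KL annealing: the weight on the KL term is warmed up from 0 to 1 over early training so the encoder is not penalized before it learns an informative posterior. A minimal sketch:

```python
def kl_weight(step, warmup_steps=10000):
    # Linear KL annealing schedule: the KL term's coefficient grows from
    # 0 to 1 over warmup_steps, then stays at 1. This keeps the model from
    # collapsing the posterior onto the prior at the start of training.
    return min(1.0, step / warmup_steps)

# During training, the objective at each step would be:
#   loss = reconstruction_loss + kl_weight(step) * kl_term
w0, w_mid, w_done = kl_weight(0), kl_weight(5000), kl_weight(20000)
```

Cyclical or sigmoid schedules are common variants; the linear ramp above is simply the easiest to state.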