CISS 2018

Plenary Speakers

Pramod Viswanath

Pramod Viswanath, University of Illinois at Urbana-Champaign

Wednesday, March 21, 11:45am

Inventing Algorithms via Deep Learning

Deep learning is a part of daily life, owing to its successes in computer vision and natural language processing. In these applications, the success of the model-free deep learning approach can be attributed to the lack of a (mathematical) generative model. In yet other applications, the data is generated by a simple model, the performance criterion is mathematically clear, and training samples are infinitely abundant, but the space of algorithmic choices is enormous (example: chess). Deep learning has recently shown strong promise on these problems too (example: AlphaZero). In this talk, we study two such canonical problems of great scientific and engineering interest through the lens of deep learning. The first is reliable communication over noisy media, where we revisit classical open problems in information theory; we show that creatively trained and architected neural networks can beat the state of the art on the AWGN channel with noisy feedback by a 100-fold improvement in bit error rate. The second is optimization and classification problems on graphs, where the key algorithmic challenge is performance that scales to graphs of arbitrary size. Representing graphs as randomized nonlinear dynamical systems via recurrent neural networks, we show that creative adversarial training allows one to train on small graphs and test on much larger ones (100-1000x) with approximation ratios that rival the state of the art on a variety of benchmarks; we demonstrate this on the optimization problems of minimum vertex cover, maximum cut, and maximum independent set, which span the complexity-theoretic hardness spectrum. Apart from its obvious practical value, this study of mathematically precise problems sheds light on the mysteries of deep learning methods: the choice of training examples, architectural design decisions, and loss function/learning methodologies.
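
To make the feedback setting above concrete, here is a minimal illustrative sketch in Python/numpy of one block of transmission over an AWGN channel with noisy feedback. The block length, noise levels, and the hard-decision baseline are assumptions chosen for illustration; they are not the speaker's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the talk): block length and noise levels.
n = 128            # channel uses per message block
sigma_fwd = 1.0    # forward AWGN standard deviation
sigma_fb = 0.5     # feedback-link AWGN standard deviation

# Message bits mapped to +/-1 symbols; a learned encoder would instead map
# bits (and past feedback) to channel inputs under a power constraint.
bits = rng.integers(0, 2, size=n)
x = 2.0 * bits - 1.0

# Forward channel: the receiver observes the symbol plus Gaussian noise.
y = x + sigma_fwd * rng.standard_normal(n)

# Noisy feedback: the transmitter sees the receiver's observation corrupted
# by additional Gaussian noise, and could adapt later transmissions to it.
feedback = y + sigma_fb * rng.standard_normal(n)

# Naive hard-decision decoding as a baseline; the talk's point is that a
# trained neural encoder/decoder pair can exploit the feedback to do far better.
bits_hat = (y > 0).astype(int)
ber = np.mean(bits_hat != bits)
print(f"baseline bit error rate: {ber:.3f}")
```

A learned encoder/decoder pair of the kind described in the talk would replace the fixed ±1 mapping and the hard-decision rule, using the noisy feedback to adapt later channel inputs within the block.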

Bio: Pramod Viswanath received the Ph.D. degree in electrical engineering and computer science from the University of California at Berkeley in 2000. From 2000 to 2001, he was a member of research staff at Flarion Technologies, NJ. Since 2001, he has been on the faculty of the University of Illinois at Urbana-Champaign in Electrical and Computer Engineering, where he is currently a professor. He is a coauthor, with David Tse, of the textbook Fundamentals of Wireless Communication, which has been used in over 60 institutions around the world. He is a coinventor of the opportunistic beamforming method and a codesigner of the Flash-OFDM communication algorithms used in fourth-generation cellular systems. His current research interests are in machine learning and natural language processing.

Bin Yu

Bin Yu, University of California, Berkeley

Thursday, March 22, 11:45am

Three Principles of Data Science: Predictability, Stability and Computability

In this talk, I'd like to discuss the intertwining importance and connections of the three principles of data science named in the title. The three principles will be demonstrated in the context of two neuroscience projects and through analytical connections. In particular, the first project adds stability to predictive models used for reconstructing movies from fMRI brain signals, in order to make those models interpretable. The second project uses predictive transfer learning and stable (manifold) deep dream images to characterize the difficult V4 neurons in the primate visual cortex. Our results lend support, to a certain extent, to the resemblance of Convolutional Neural Networks (CNNs) to the primate brain.

Bio: Bin Yu is Chancellor’s Professor in the Departments of Statistics and of Electrical Engineering & Computer Sciences at the University of California at Berkeley. Her current research interests focus on statistics and machine learning theory, methodologies, and algorithms for solving high-dimensional data problems. Her lab is engaged in interdisciplinary research with scientists from genomics, neuroscience, precision medicine, and political science. She obtained her B.S. degree in Mathematics from Peking University and her M.A. and Ph.D. degrees in Statistics from the University of California at Berkeley. She held faculty positions at the University of Wisconsin-Madison and Yale University and was a Member of Technical Staff at Bell Labs, Lucent. She was Chair of the Department of Statistics at UC Berkeley from 2009 to 2012, and is a founding co-director of the Microsoft Lab on Statistics and Information Technology at Peking University, China, and Chair of the Scientific Advisory Committee of the Statistical Science Center at Peking University. She is a Member of the U.S. National Academy of Sciences and a Fellow of the American Academy of Arts and Sciences. She was a Guggenheim Fellow in 2006, an Invited Speaker at ICIAM in 2011, and the Tukey Memorial Lecturer of the Bernoulli Society in 2012. She was President of the IMS (Institute of Mathematical Statistics) in 2013-2014 and the Rietz Lecturer of the IMS in 2016. She is a Fellow of the IMS, ASA, AAAS, and IEEE. She has served on the Board on Mathematical Sciences and Their Applications (BMSA) of the NAS, as co-chair of the SAMSI advisory committee, on the Board of Trustees at ICERM, and on the Scientific Advisory Board of IPAM. She has served or is serving on many editorial boards, including the Journal of Machine Learning Research (JMLR), the Annals of Statistics, and the Journal of the American Statistical Association (JASA).

Alexandros Dimakis

Alexandros Dimakis, University of Texas at Austin

Friday, March 23, 11:45am

Generative Adversarial Networks (GANs) and Compressed Sensing

The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge about the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model, e.g., a GAN or a VAE. We show how the problems of image inpainting and super-resolution are special cases of our general framework. We show how to generalize the RIP condition to generative models and that random Gaussian measurement matrices have this property with high probability. A Lipschitz condition on the generative neural network is a key technical condition. We will also discuss ongoing work on adding causality and distributed training to these models. (Based on joint work with Ashish Bora, Ajil Jalal, and Eric Price.)
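
As a rough illustration of the recovery problem described above, the following Python/numpy sketch estimates a signal lying in the range of a generative model from random Gaussian measurements by gradient descent on the latent code. A toy random two-layer ReLU network stands in for a trained GAN/VAE decoder, and the dimensions, step size, and restart count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the talk): ambient, hidden, latent, measurements.
n, hidden, k, m = 200, 100, 10, 40

# Toy stand-in for a trained generative model: G(z) = W2 * relu(W1 z).
W1 = rng.standard_normal((hidden, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, hidden)) / np.sqrt(hidden)
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)

# Ground-truth signal in the range of G, observed through random Gaussian
# measurements with a little additive noise.
x_true = G(rng.standard_normal(k))
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def recover(y, A, steps=3000, lr=0.05, restarts=5):
    """Gradient descent on the latent code z to minimize mean ||A G(z) - y||^2,
    with random restarts since the objective is nonconvex in z."""
    best_z, best_loss = None, np.inf
    for _ in range(restarts):
        z = rng.standard_normal(k)
        for _ in range(steps):
            h = W1 @ z
            r = A @ (W2 @ np.maximum(h, 0.0)) - y
            grad = W1.T @ ((h > 0) * (W2.T @ (A.T @ (2.0 * r / m))))
            z -= lr * grad
        loss = np.mean((A @ G(z) - y) ** 2)
        if loss < best_loss:
            best_z, best_loss = z, loss
    return best_z

z_hat = recover(y, A)
rel_err = np.linalg.norm(G(z_hat) - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The random restarts reflect the nonconvexity of the objective in the latent code; the generalized RIP condition and the Lipschitz property of the generator mentioned in the abstract are what make recovery guarantees for this kind of procedure possible.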


Bio: Alex Dimakis is an Associate Professor in the ECE department at the University of Texas at Austin. He received his Ph.D. in 2008 from UC Berkeley and the Diploma degree from the National Technical University of Athens in 2003. During 2009 he was a CMI postdoctoral scholar at Caltech. He received an NSF CAREER Award, a Google Faculty Research Award, and the Eli Jury Dissertation Award. He is the co-recipient of several best paper awards, including the joint Information Theory and Communications Society Best Paper Award in 2012. He is currently serving as an associate editor for the IEEE Transactions on Information Theory. His research interests include information theory, coding theory, and machine learning.