CISS 2018

Plenary Speakers

Bin Yu

Bin Yu, University of California at Berkeley

Wednesday, March 21, 11:45am

Three Principles of Data Science

In this talk, I'd like to discuss the intertwined importance of, and the connections among, the three principles of data science in the title: predictability, computability, and stability. The three principles will be demonstrated in the context of two neuroscience projects and through analytical connections. In particular, the first project adds stability to predictive models used to reconstruct movies from fMRI brain signals, in order to make those models interpretable. The second project uses predictive transfer learning and stable (manifold) deep-dream images to characterize the difficult-to-model V4 neurons in the primate visual cortex. Our results lend support, to a certain extent, to the view that Convolutional Neural Networks (CNNs) resemble the primate brain.
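The stability principle from the abstract can be illustrated with a generic, self-contained sketch: refit a sparse predictive model on perturbed versions of the data and keep only the features that are selected consistently. This is a hypothetical toy example (random synthetic data, scikit-learn's Lasso, arbitrary dimensions and thresholds), not the fMRI reconstruction pipeline from the talk.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy illustration of the stability principle: features that survive
# bootstrap perturbations of the data are the interpretable ones.
rng = np.random.default_rng(0)
n, p, s = 200, 50, 5                  # samples, features, true sparsity
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:s] = 2.0                        # only the first 5 features matter
y = X @ beta + rng.normal(size=n)

B = 100                               # number of bootstrap perturbations
selected = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, size=n)  # bootstrap resample of the rows
    coef = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_
    selected += np.abs(coef) > 1e-8   # record which features were kept

stability = selected / B              # per-feature selection frequency
print("features selected in >90% of fits:", np.where(stability > 0.9)[0])
```

Features whose selection frequency stays high across perturbations are the ones a stability analysis would treat as trustworthy for interpretation.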

Bio: Bin Yu is Chancellor’s Professor in the Departments of Statistics and of Electrical Engineering and Computer Sciences at the University of California at Berkeley. Her current research interests focus on statistics and machine learning theory, methodologies, and algorithms for solving high-dimensional data problems. Her lab is engaged in interdisciplinary research with scientists from genomics, neuroscience, precision medicine, and political science. She obtained her B.S. degree in Mathematics from Peking University and her M.A. and Ph.D. degrees in Statistics from the University of California at Berkeley. She held faculty positions at the University of Wisconsin-Madison and Yale University and was a Member of Technical Staff at Bell Labs, Lucent. She was Chair of the Department of Statistics at UC Berkeley from 2009 to 2012, and is a founding co-director of the Microsoft Lab on Statistics and Information Technology at Peking University, China, and Chair of the Scientific Advisory Committee of the Statistical Science Center at Peking University. She is a Member of the U.S. National Academy of Sciences and a Fellow of the American Academy of Arts and Sciences. She was a Guggenheim Fellow in 2006, an Invited Speaker at ICIAM in 2011, and the Tukey Memorial Lecturer of the Bernoulli Society in 2012. She was President of the IMS (Institute of Mathematical Statistics) in 2013-2014 and the Rietz Lecturer of the IMS in 2016. She is a Fellow of the IMS, ASA, AAAS, and IEEE. She served on the Board on Mathematical Sciences and Their Applications (BMSA) of the NAS, as co-chair of the SAMSI advisory committee, on the Board of Trustees of ICERM, and on the Scientific Advisory Board of IPAM. She has served or is serving on many editorial boards, including those of the Journal of Machine Learning Research (JMLR), the Annals of Statistics, and the Journal of the American Statistical Association (JASA).


Pramod Viswanath

Pramod Viswanath, University of Illinois at Urbana-Champaign

Thursday, March 22, 11:45am

Bio: Pramod Viswanath received the Ph.D. degree in electrical engineering and computer science from the University of California at Berkeley in 2000. From 2000 to 2001, he was a member of the research staff at Flarion Technologies, NJ. Since 2001, he has been on the faculty of the University of Illinois at Urbana-Champaign in Electrical and Computer Engineering, where he is currently a Professor. He received the Eliahu Jury Award from the EECS department of UC Berkeley in 2000, the Bernard Friedman Prize from the mathematics department of UC Berkeley in 2000, an NSF CAREER award in 2002, the Xerox faculty research award from the College of Engineering at UIUC in 2010, and the Best Paper Award at the Sigmetrics conference in 2015. He has worked extensively on wireless communication: he was a co-designer of the first OFDM cellular system at Flarion Technologies and has coauthored a popular textbook on wireless communication. With wireless technology now fairly mature, he has turned to a variety of new research topics; his current interests include cryptocurrencies and natural language processing.


Alexandros Dimakis

Alexandros Dimakis, University of Texas at Austin

Friday, March 23, 11:45am

Generative Adversarial Networks (GANs) and Compressed Sensing

The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements by making use of prior knowledge about the structure of vectors in the relevant domain. In almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model, e.g., a GAN or a VAE. We show how the problems of image inpainting and super-resolution are special cases of our general framework. We show how to generalize the RIP condition to generative models, and that random Gaussian measurement matrices have this property with high probability. A Lipschitz condition on the generative neural network is the key technical requirement. We will also discuss ongoing work on adding causality and distributed training to these models. (Based on joint work with Ashish Bora, Ajil Jalal, and Eric Price.)

Code: https://github.com/AshishBora/csgm
Homepage: https://users.ece.utexas.edu/~dimakis
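As a rough, self-contained illustration of the approach in the abstract (the full implementation is in the repository linked above), the sketch below recovers a signal from underdetermined Gaussian measurements by gradient descent over the latent code of a generator, i.e., approximately solving min over z of ||A G(z) - y||^2. The generator here is a hypothetical stand-in (an untrained random two-layer ReLU decoder), and the dimensions, step size, and restart count are illustrative; the actual work uses a trained GAN/VAE and Adam with random restarts.

```python
import numpy as np

# Compressed sensing with a generative prior (sketch): given y = A x + noise,
# search the latent space of a generator G for z minimizing ||A G(z) - y||^2.
rng = np.random.default_rng(0)
k, n, m = 10, 100, 30                       # latent dim, signal dim, measurements
W1 = rng.normal(size=(50, k)) / np.sqrt(k)  # untrained toy "decoder" weights
W2 = rng.normal(size=(n, 50)) / np.sqrt(50)

def G(z):
    """Toy generator: two-layer ReLU decoder, standing in for a GAN/VAE."""
    return W2 @ np.maximum(W1 @ z, 0.0)

def grad_loss(z, A, y):
    """Gradient of ||A G(z) - y||^2 w.r.t. z (chain rule through the ReLU)."""
    h = W1 @ z
    r = A @ (W2 @ np.maximum(h, 0.0)) - y   # measurement residual
    return 2.0 * W1.T @ ((W2.T @ (A.T @ r)) * (h > 0))

# Ground truth lies in the range of G; random Gaussian measurement matrix.
z_true = rng.normal(size=k)
x_true = G(z_true)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)

# The objective is nonconvex, so use random restarts and keep the best code.
best_z, best_loss = None, np.inf
for _ in range(5):
    z = rng.normal(size=k)
    for _ in range(5000):
        z -= 1e-3 * grad_loss(z, A, y)
    cur = np.sum((A @ G(z) - y) ** 2)
    if cur < best_loss:
        best_z, best_loss = z, cur

x_hat = G(best_z)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Note that m < n, so the linear system is underdetermined: the generative prior plays the role that sparsity plays in classical compressed sensing.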

Bio: Alex Dimakis is an Associate Professor in the ECE Department at the University of Texas at Austin. He received his Ph.D. from UC Berkeley in 2008 and the Diploma degree from the National Technical University of Athens in 2003. During 2009 he was a CMI postdoctoral scholar at Caltech. He has received an NSF CAREER award, a Google Faculty Research Award, and the Eli Jury dissertation award. He is the co-recipient of several best paper awards, including the joint Information Theory and Communications Society Best Paper Award in 2012. He is currently serving as an associate editor for the IEEE Transactions on Information Theory. His research interests include information theory, coding theory, and machine learning.