Learning with Tensors:
Why Now and How?
Tensor-Learn Workshop @ NIPS'16
Dec 10th, 2016 - Barcelona, Spain

Introduction

Real-world data in many domains, such as healthcare, social media, and climate science, is multimodal and heterogeneous. Tensors, as generalizations of vectors and matrices, provide a natural and scalable framework for handling data with inherent structure and complex dependencies. The recent renaissance of tensor methods in machine learning ranges from academic research on scalable algorithms for tensor operations and novel models built on tensor representations, to industry solutions such as Google TensorFlow, Torch, and the Tensor Processing Unit (TPU). In particular, scalable tensor methods have attracted considerable attention, with successes in a range of learning tasks, such as learning latent variable models, relational learning, spatio-temporal forecasting, and training deep neural networks.
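
As a concrete illustration of the idea, here is a minimal sketch of representing multimodal data as a 3-way tensor and factorizing it, using the TensorLy library (see the accepted paper "TensorLy: Tensor Learning in Python" below); the data, shapes, and rank here are hypothetical.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical multimodal data as a 3-way tensor,
# e.g. patients x diagnoses x visits in a healthcare setting.
X = tl.tensor(np.random.rand(50, 20, 10))

# A rank-5 CP decomposition expresses X as a sum of 5 rank-1 tensors,
# with one factor matrix per mode -- the tensor analogue of a
# low-rank matrix factorization.
weights, factors = parafac(X, rank=5)
for mode, F in enumerate(factors):
    print(f"mode-{mode} factor matrix has shape {F.shape}")
```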

This progress triggers new directions and problems for tensor methods in machine learning. The workshop aims to foster discussion, discovery, and dissemination of research activities and outcomes in this area, and to encourage breakthroughs. We will bring together researchers in theory and applications who are interested in tensor analysis and the development of tensor-based algorithms. We will also invite researchers from related areas, such as numerical linear algebra, high-performance computing, deep learning, statistics, and data analysis, to contribute to this workshop. We believe that this workshop can foster new directions, closer collaborations, and novel applications. We also expect a deeper conversation on why learning with tensors is important at the current stage, where it is useful, what tensor computation software and hardware work well in practice, and how we can progress further with interesting research directions and open problems.

Schedule

Imperial Ballroom A & B
8:00 am - 6:30 pm
December 10, 2016

Morning Session


8:30 - 8:40 Opening Remarks video
8:40 - 9:20 Invited Talk: Amnon Shashua video
9:20 - 10:00 Contributed Talk video
10:00 - 10:30 Poster Spotlight 1 video
10:30 - 11:00 Coffee Break and Poster Session 1
11:00 - 11:40 Invited Talk: Lek-Heng Lim video
11:40 - 12:20 Invited Talk: Jimeng Sun video
12:20 - 14:00 Lunch

Afternoon Session


14:00 - 14:40 Invited Talk: Gregory Valiant
14:40 - 15:00 Poster Spotlight 2
15:00 - 15:30 Coffee Break and Poster Session 2
15:30 - 16:10 Invited Talk: Vagelis Papalexakis
16:10 - 17:00 PhD Symposium
17:00 - 18:00 Panel Discussion and Closing Remarks
 

Keynote Speakers

Amnon Shashua

Professor
The Hebrew University of Jerusalem

On Depth Efficiency of Convolutional Networks: the use of Hierarchical Tensor Decomposition for Network Design and Analysis

Abstract
Our formal understanding of the inductive bias that drives the success of deep convolutional networks on computer vision tasks is limited. In particular, it is unclear what makes hypothesis spaces born from convolution and pooling operations so suitable for natural images. I will present recent work that derives an equivalence between convolutional networks and hierarchical tensor decompositions. Under this equivalence, the structure of a network corresponds to the type of decomposition, and the network weights correspond to the decomposition parameters. This allows analyzing the hypothesis spaces of networks by studying the tensor spaces of the corresponding decompositions, facilitating the use of algebraic and measure-theoretical tools. Specifically, the results I will present include showing how exponential depth efficiency is achieved in a family of deep networks called Convolutional Arithmetic Circuits (CACs), that CACs are equivalent to SimNets, that their depth efficiency is superior to that of conventional ConvNets, and how inductive bias is tied to correlations between regions of the input image. In particular, correlations are formalized through the notion of separation rank, which, for a given input partition, measures how far a function is from being separable. I will show that a polynomially sized deep network supports exponentially high separation ranks for certain input partitions, while being limited to polynomial separation ranks for others. The network's pooling geometry effectively determines which input partitions are favored, and thus serves as a means for controlling the inductive bias. Contiguous pooling windows, as commonly employed in practice, favor interleaved partitions over coarse ones, orienting the inductive bias towards the statistics of natural images. In addition to analyzing deep networks, I will show that shallow ones support only linear separation ranks, and thereby gain insight into the benefit brought forth by depth: deep networks are able to efficiently model strong correlation under favored partitions of the input. This work covers material recently presented at COLT, ICML, and CVPR, including recent arXiv submissions. The work was done jointly with doctoral students Nadav Cohen and Or Sharir.
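
For reference, the separation rank mentioned above can be stated compactly (notation ours, following the definition given in the abstract): for a partition of the input into parts A and B, it is the minimal number of separable terms needed to represent the function.

```latex
% Separation rank of f with respect to an input partition (A, B):
\[
\operatorname{sep}(f; A, B) \;=\; \min \Big\{ R \,:\,
  f(\mathbf{x}_A, \mathbf{x}_B)
  = \sum_{r=1}^{R} g_r(\mathbf{x}_A)\, h_r(\mathbf{x}_B) \Big\}
\]
% sep(f; A, B) = 1 means f is separable w.r.t. (A, B); large values
% indicate strong correlation between the two parts of the input.
```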

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. For his academic achievements he received the Marr Prize Honorable Mention in 2001, the Kaye Innovation Award in 2004, and the Landau Award in exact sciences in 2005.

In 1999 Prof. Shashua co-founded Mobileye, an Israeli company developing a system-on-chip and computer vision algorithms for driving assistance systems, providing a full range of active safety features using a single camera. Today, approximately 11 million cars from 25 automobile manufacturers rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In addition, Mobileye is developing autonomous driving technology with more than a dozen car manufacturers. An early version of Mobileye's autonomous driving technology was deployed in series production as an "autopilot" feature in October 2015, and will evolve to support more autonomous features in 2016 and beyond. The introduction of autonomous driving capabilities is of a transformative nature and has the potential to change the way cars are built, driven, and owned in the future.

In 2010 Prof. Shashua co-founded OrCam which harnesses computer vision artificial intelligence to assist people who are visually impaired or blind. The OrCam MyEye device is unique in its ability to provide visual aid to hundreds of millions of people, through a discreet wearable platform. Within its wide-ranging scope of capabilities, OrCam’s device can read most texts (both indoors and outdoors) and learn to recognize thousands of new items and faces.


Jimeng Sun

Associate Professor
Georgia Institute of Technology

Computational Phenotyping using Tensor Factorization

Jimeng Sun is an Associate Professor in the School of Computational Science and Engineering at the College of Computing, Georgia Institute of Technology. His research focuses on medical informatics, especially on applying large-scale predictive modeling and similarity analytics to biomedical applications.

Dr. Sun has an extensive research record in data mining: big data analytics, similarity metric learning, social network analysis, predictive modeling, tensor analysis, and visual analytics. He also applies data mining to healthcare applications such as heart failure onset prediction and hypertension control management.

He has published over 70 papers and filed over 20 patents (5 granted). He received the ICDM best research paper award in 2008, the SDM best research paper award in 2007, and the KDD dissertation runner-up award in 2008. Dr. Sun received his B.S. and M.Phil. in Computer Science from the Hong Kong University of Science and Technology in 2002 and 2003, and his PhD in Computer Science from Carnegie Mellon University in 2007. Prior to joining Georgia Tech, he was a research staff member at the IBM T.J. Watson Research Center.


Lek-Heng Lim

Assistant Professor
University of Chicago

Tensor network ranks

Lek-Heng Lim is an Assistant Professor in the Computational and Applied Mathematics Initiative, the Department of Statistics, and the College of the University of Chicago. His research focuses on tensors and their coordinate representations, hypermatrices. He is interested in the hypermatrix equivalents of various matrix notions, their mathematical and computational properties, and their applications to science and engineering. Another of Lim's interests is applied/computational algebraic and differential geometry, particularly Hodge Laplacians and the geometry of subspaces. Lim is also generally interested in numerical linear algebra, optimization, and machine learning.

Lim was educated at Stanford University (PhD), Cambridge University, Cornell University (MS), and the National University of Singapore (BS). Prior to joining the University of Chicago as an Assistant Professor, he was the Charles Morrey Assistant Professor at UC Berkeley. Lim serves on the editorial boards of Linear Algebra and its Applications and Linear and Multilinear Algebra. His work is supported by an AFOSR Young Investigator Award, an NSF Early Career Award, and a DARPA Young Faculty Award.


Gregory Valiant

Assistant Professor
Stanford

Orthogonalized Alternating Least Squares: A theoretically principled tensor factorization algorithm for practical use

Abstract
From a theoretical perspective, low-rank tensor factorization is an algorithmic miracle, allowing for (provably correct) reconstruction and learning in a number of settings. From a practical standpoint, we still lack sufficiently robust, versatile, and efficient tensor factorization algorithms, particularly for large-scale problems. Many of the algorithms with provable guarantees either suffer from an expensive initialization step or require the iterative removal of rank-1 factors, destroying any sparsity that might be present in the original tensor. On the other hand, the most commonly used algorithm in practice is alternating least squares (ALS), which iteratively fixes all but one mode and optimizes the remaining mode. This algorithm is extremely efficient, but often converges to bad local optima, particularly when the weights of the factors are non-uniform. In this work, we propose a modification of the ALS approach that enjoys practically viable efficiency, as well as provable recovery (assuming the factors are random or have small pairwise inner products) even for highly non-uniform weights. We demonstrate the significant superiority of our recovery algorithm over traditional ALS on both random synthetic data and on computing word embeddings from a third-order word tri-occurrence tensor. This is based on joint work with Vatsal Sharan.
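
To make the ALS step described above concrete, here is a minimal plain-NumPy sketch of standard CP-ALS for a 3-way tensor. This is the baseline algorithm, not the orthogonalized variant from the talk (which additionally orthogonalizes factor estimates between iterations); all names and the fixed iteration count are ours.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n matricization of a 3-way tensor (C-order columns)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product."""
    R = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, R)

def cp_als(X, R, iters=200, seed=0):
    """Rank-R CP decomposition by alternating least squares."""
    rng = np.random.default_rng(seed)
    F = [rng.standard_normal((n, R)) for n in X.shape]
    for _ in range(iters):
        for m in range(3):
            # Fix the other two modes, solve least squares for mode m.
            a, b = (F[i] for i in range(3) if i != m)
            G = (a.T @ a) * (b.T @ b)  # Gram matrix of the Khatri-Rao product
            F[m] = unfold(X, m) @ khatri_rao(a, b) @ np.linalg.pinv(G)
    return F

# Sanity check on a synthetic rank-3 tensor.
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((n, 3)) for n in (30, 25, 20))
X = np.einsum('ir,jr,kr->ijk', A, B, C)
Xhat = np.einsum('ir,jr,kr->ijk', *cp_als(X, 3))
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))  # should be near 0
```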

Greg Valiant is an Assistant Professor in the Computer Science Department at Stanford, having completed a postdoc at Microsoft Research New England. His main research interests are in algorithms, learning, applied probability, and statistics; he is also interested in game theory, and has enjoyed working on problems in database theory.

Valiant graduated from Harvard with a BA in Math and an MS in Computer Science, and obtained his PhD in Computer Science from UC Berkeley in 2012.


Vagelis Papalexakis

Assistant Professor
UC Riverside

Tensor decompositions for big multi-aspect data analytics

Abstract
Tensors and tensor decompositions have been very popular and effective tools for analyzing multi-aspect data in a wide variety of fields, ranging from Psychology to Chemometrics, and from Signal Processing to Data Mining and Machine Learning. Using tensors in the era of big data poses the challenge of scalability and efficiency. In this talk, I will discuss recent techniques for tackling this challenge by parallelizing and speeding up tensor decompositions, especially for very sparse datasets (such as the ones encountered, for example, in online social network analysis). In addition to scalability, I will also touch upon the challenge of unsupervised quality assessment, where, in the absence of ground truth, we seek to automatically select the decomposition model that best captures the structure in our data. The talk will conclude with a discussion of future research directions and open problems in tensors for big data analytics.
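
As a concrete example of the scalability bottleneck, below is a minimal plain-NumPy sketch (names and notation ours) of the kernel that dominates sparse CP-ALS: the matricized-tensor-times-Khatri-Rao product (MTTKRP), computed directly on COO coordinates so that neither the dense tensor nor the Khatri-Rao product is ever materialized. Scalable and parallel methods of the kind discussed in the talk are largely about organizing this loop over nonzeros.

```python
import numpy as np

def sparse_mttkrp(coords, vals, factors, mode):
    """MTTKRP for a 3-way sparse tensor stored in COO form.

    coords: (nnz, 3) integer array of indices; vals: (nnz,) nonzero values;
    factors: list of three factor matrices; mode: the output mode.
    """
    i, j = [m for m in range(3) if m != mode]
    # Evaluate rows of the implicit Khatri-Rao product only at the nonzeros.
    rows = vals[:, None] * factors[i][coords[:, i]] * factors[j][coords[:, j]]
    out = np.zeros_like(factors[mode])
    np.add.at(out, coords[:, mode], rows)  # scatter-add one row per nonzero
    return out
```

Each nonzero contributes one multiply-accumulate per rank component, so the cost is O(nnz * R), independent of the size of the dense tensor.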

Evangelos (Vagelis) Papalexakis is an Assistant Professor in the CSE Department at the University of California, Riverside. He obtained his PhD from the School of Computer Science at Carnegie Mellon University (CMU), which he joined in August 2011, under the supervision of Prof. Christos Faloutsos. Prior to joining CMU, he obtained his Diploma and MSc in Electronic & Computer Engineering at the Technical University of Crete, in Greece.

Broadly, his research interests span the fields of Data Mining, Machine Learning, and Signal Processing. His research involves designing scalable algorithms for mining large multi-aspect datasets, with specific emphasis on tensor factorization models, and applying those algorithms to a variety of real-world multi-aspect data problems. His work has appeared in KDD, ICDM, SDM, ECML-PKDD, WWW, PAKDD, ICDE, ICASSP, IEEE Transactions on Signal Processing, and ACM TKDD. He received the best student paper award at PAKDD'14, was a best paper finalist at SDM'14 and ASONAM'13, and was a finalist for the Microsoft PhD Fellowship and the Facebook PhD Fellowship. Besides his academic experience at CMU, he has industrial research experience from Microsoft Research Silicon Valley during the summers of 2013 and 2014, and Google Research during the summer of 2015.


Accepted Papers

 

Morning Session

  • Structurally regularised Non-negative Tensor Completion for latent spatio-temporal change detection

    Koh Takeuchi, Yoshinobu Kawahara and Tomoharu Iwata
  • Searching for optimal patterns in Boolean tensors

    Dmitry Ignatov, Sergei O. Kuznetsov, Dmitry Gnatyshak and Jaume Baixeries
  • Using Tensor Theory to Embed Invariances: A Case Study from Turbulence Modeling

    Julia Ling
  • Multi-Label Learning with Provable Guarantee

    Sayantan Dasgupta
  • Bayesian multi-tensor factorization

    Suleiman Ali Khan, Eemeli Leppäaho and Samuel Kaski
  • Tensor Decomposition with Smoothness

    Masaaki Imaizumi and Kohei Hayashi
  • Non-negative Factorization of the Occurrence Tensor from Financial Contracts

    Zheng Xu, Furong Huang, Louiqa Raschid and Tom Goldstein
  • TensorLy: Tensor Learning in Python

    Jean Kossaifi, Yannis Panagakis and Maja Pantic
  • CoFactor: Concise Factorization of Sparse and High-Order Tensors

    Ioakeim Perros, Richard Peng, Richard Vuduc and Jimeng Sun
  • ParTI: A Parallel Tensor Infrastructure for Data Analysis

    Jiajia Li, Yuchen Ma, Chenggang Yan, Jimeng Sun and Richard Vuduc
  • SenTenCE: A multi-sensor data compression framework using tensor decompositions for human activity classification

    Vinay Uday Prabhu and John Whaley

Afternoon Session

  • Ultimate tensorization: compressing convolutional and FC layers alike

    Timur Garipov, Dmitry Podoprikhin, Alexander Novikov and Dmitry Vetrov
  • BaTFLED: Bayesian Tensor Factorization Linked to External Data

    Nathan Lazar, Mehmet Gonen and Kemal Sonmez
  • Approximate Inference in Graphical Models via Low-Rank Tensor Propagation

    Andrew Wrigley, Wee Sun Lee and Nan Ye
  • Factorizing Sparse Tensors for Supervised Machine Learning

    Stephan Baier and Volker Tresp
  • Graph Learning as a Tensor Factorization Problem

    Raphael Bailly and Guillaume Rabusseau
  • Learning Maliciousness in Cybersecurity Graphs

    Akshay Rangamani, Connor Walsh, Sam Gottlieb and Elisabeth Maida

Call for Papers

Papers submitted to the workshop should be up to four pages long, excluding references, and in NIPS 2016 format. As the review process is not blind, authors may reveal their identity in their submissions. All inquiries should be sent to tensorlearn@gmail.com.

Submissions page: Tensor-Learn 2016.

Note on open problem submissions: In order to promote new and innovative research on tensors, we plan to accept a small number of high-quality manuscripts describing open problems in tensor learning. Such papers should provide a clear, detailed description and analysis of a new or open problem that poses a significant challenge to existing techniques, as well as a thorough empirical investigation demonstrating that current methods are insufficient. Accepted submissions will be presented as posters. There are no published proceedings, and authors are free to submit their work elsewhere.

Key Dates

 

Paper Submission Deadline: Oct 28, 2016, 11:59 PM PST

Author Notification: Nov 7, 2016, 11:59 PM PST

Final Version: Nov 25, 2016, 11:59 PM PST

Workshop: December 10, 2016

Workshop Organizers

 

Anima Anandkumar

California Institute of Technology

Rong Ge

Duke University

Yan Liu

University of Southern California

Maximilian Nickel

Facebook AI Research (FAIR)

Rose Yu

University of Southern California