// about

I am an Assistant Professor in the Booth School of Business at the University of Chicago. Previously, I was a project scientist and postdoctoral researcher in the Machine Learning Department at Carnegie Mellon University.

My broad research interests include high-dimensional statistics, machine learning, and optimization. My research focuses on mathematical aspects of data science and statistical machine learning in nontraditional settings. I am particularly interested in problems with heterogeneous, multimodal, and nonconvex structures. I am also involved in open-source software development and problems in interpretability, ethics, and fairness in AI.

I am looking for new PhD students and postdocs interested in working on problems in machine learning, statistics, and nonconvex optimization. If you're interested, please e-mail me.

// news

  • New preprint: A new approach to personalization via interpretable, sample-specific models.
  • New preprint: A general framework for score-based learning of nonparametric DAG models.
  • Two papers accepted to NeurIPS 2019.
  • New preprint: A new proof that almost all Gaussian graphical models are perfect.
  • Our paper on nonparametric mixture models and clustering has been accepted to the Annals of Statistics.

// research interests

  • Statistical machine learning
  • Unsupervised learning
  • Graphical models
  • Nonconvex optimization

// selected papers

Identifiability of nonparametric mixture models and Bayes optimal clustering.
Aragam, B., Dan, C., Xing, E. P., and Ravikumar, P. Annals of Statistics.

Learning sample-specific models with low-rank personalized regression.
Lengerich, B., Aragam, B., and Xing, E. P. NeurIPS.

DAGs with NO TEARS: Continuous optimization for structure learning.
Zheng, X., Aragam, B., Ravikumar, P., and Xing, E. P. NeurIPS (spotlight).

The sample complexity of semi-supervised learning with nonparametric mixture models.
Dan, C., Leqi, L., Aragam, B., Ravikumar, P., and Xing, E. P. NeurIPS.

// publications

// preprints

We develop a framework for learning sparse nonparametric directed acyclic graphs (DAGs) from data. Our approach is based on a recent algebraic characterization of DAGs that led to the first fully continuous optimization formulation for score-based learning of DAG models parametrized by a linear structural equation model (SEM). We extend this algebraic characterization to nonparametric SEMs by leveraging a notion of nonparametric sparsity based on partial derivatives, resulting in a continuous optimization problem that can be applied to a variety of nonparametric and semiparametric models, including GLMs, additive noise models, and index models as special cases. We also explore the use of neural networks and orthogonal basis expansions to model nonlinearities in general nonparametric models. An extensive empirical study confirms the importance of modeling nonlinear dependencies and the advantages of continuous optimization for score-based learning.

Keywords: Directed acyclic graphs, Bayesian networks, nonparametric statistics, multilayer perceptron, basis expansions
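
As a rough sketch of the idea (simplified notation of mine, not the paper's exact formulation): each variable follows a nonparametric structural equation, sparsity of $f_j$ in its $k$-th argument is measured through the partial derivative $\partial_k f_j$, and acyclicity becomes a smooth equality constraint on the resulting weighted adjacency matrix:

$$X_j = f_j(X) + \varepsilon_j, \qquad [W(f)]_{kj} = \| \partial_k f_j \|_{L^2}, \qquad \min_f \; \frac{1}{2n}\sum_{i=1}^n \sum_{j=1}^d \big( x_{ij} - f_j(x_i) \big)^2 \;\; \text{s.t.} \;\; h\big( W(f) \big) = 0,$$

where $h$ is a smooth function that vanishes exactly on acyclic weighted adjacency matrices, and each $f_j$ may be parametrized by a neural network or an orthogonal basis expansion.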

Knowing when a graphical model is perfect with respect to a distribution is essential in order to relate separation in the graph to conditional independence in the distribution, and this is particularly important when performing inference from data. When the model is perfect, there is a one-to-one correspondence between conditional independence statements in the distribution and separation statements in the graph. Previous work has shown that almost all models based on linear directed acyclic graphs, as well as Gaussian chain graphs, are perfect; the latter class subsumes Gaussian graphical models (i.e., the undirected Gaussian models) as a special case. However, the complexity of chain graph models leads to a proof of this result that is indirect and mired in the complications of parametrizing this general class. In this paper, we approach the problem of perfectness directly for Gaussian graphical models, and provide a new proof, via a more transparent parametrization, that almost all such models are perfect. Our approach is based on, and substantially extends, a construction of Lněnička and Matúš showing the existence of a perfect Gaussian distribution for any graph.

Keywords: Graphical models, perfectness, conditional independence graphs
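
To make the correspondence concrete (this is the standard definition, not anything specific to the paper): a graph $G$ is perfect with respect to a distribution $P$ when, for all disjoint vertex sets $A$, $B$, $C$,

$$X_A \perp\!\!\!\perp X_B \mid X_C \;\text{ in } P \quad \Longleftrightarrow \quad C \text{ separates } A \text{ and } B \text{ in } G.$$

The Markov property supplies one direction of this equivalence and faithfulness the other, so "almost all models are perfect" says that the parameter values violating faithfulness form a measure-zero set.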

Neighborhood regression has been a successful approach in graphical and structural equation modeling, with applications to learning undirected and directed graphical models. We extend these ideas by defining and studying an algebraic structure called the neighborhood lattice based on a generalized notion of neighborhood regression. We show that this algebraic structure has the potential to provide an economical encoding of all conditional independence statements in a Gaussian distribution (or conditional uncorrelatedness in general), even in the cases where no graphical model exists that could "perfectly" encode all such statements. We study the computational complexity of computing these structures and show that under a sparsity assumption, they can be computed in polynomial time, even in the absence of the assumption of perfectness to a graph. On the other hand, assuming perfectness, we show how these neighborhood lattices may be "graphically" computed using the separation properties of the so-called partial correlation graph. We also draw connections with directed acyclic graphical models and Bayesian networks. We derive these results using an abstract generalization of partial uncorrelatedness, called partial orthogonality, which allows us to use algebraic properties of projection operators on Hilbert spaces to significantly simplify and extend existing ideas and arguments. Consequently, our results apply to a wide range of random objects and data structures, such as random vectors, data matrices, and functions.

Keywords: Neighbourhood lattice, graphical modeling, neighbourhood regression, partial orthogonality, projection operators, Hilbert spaces

We study a family of regularized score-based estimators for learning the structure of a directed acyclic graph (DAG) for a multivariate normal distribution from high-dimensional data with p >> n. Our main results establish support recovery guarantees and deviation bounds for a family of penalized least-squares estimators under concave regularization without assuming prior knowledge of a variable ordering. These results apply to a variety of practical situations that allow for arbitrary nondegenerate covariance structures as well as many popular regularizers including the MCP, SCAD, L0 and L1. The proof relies on interpreting a DAG as a recursive linear structural equation model, which reduces the estimation problem to a series of neighbourhood regressions. We provide a novel statistical analysis of these neighbourhood problems, establishing uniform control over the superexponential family of neighbourhoods associated with a Gaussian distribution. We then apply these results to study the statistical properties of score-based DAG estimators, learning causal DAGs, and inferring conditional independence relations via graphical models. Our results yield---for the first time---finite-sample guarantees for structure learning of Gaussian DAGs in high-dimensions via score-based estimation.

Keywords: Graphical modeling, high-dimensional statistics, concave regularization, directed acyclic graphs, structural equations, sparse regression
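
Schematically (simplified notation of mine; the estimators in the paper are more general): interpreting a DAG with coefficient matrix $B$ as a linear SEM $X = B^{T} X + \varepsilon$, the estimators take the form

$$\widehat{B} \in \operatorname*{arg\,min}_{B \,\text{acyclic}} \; \frac{1}{2n} \| \mathbf{X} - \mathbf{X} B \|_F^2 + \sum_{j \neq k} \rho_{\lambda}(\beta_{kj}),$$

where $\rho_{\lambda}$ is a penalty such as the MCP, SCAD, $\ell_0$, or $\ell_1$, and solving for one column of $B$ at a time is precisely the neighbourhood regression reduction mentioned above.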

// papers

Modern applications of machine learning (ML) deal with increasingly heterogeneous datasets comprising data collected from overlapping latent subpopulations. As a result, traditional models trained over large datasets may fail to recognize highly predictive localized effects in favour of weakly predictive global patterns. This is a problem because localized effects are critical to developing individualized policies and treatment plans in applications ranging from precision medicine to advertising. To address this challenge, we propose to estimate sample-specific models that tailor inference and prediction at the individual level. In contrast to classical ML models that estimate a single, complex model (or only a few complex models), our approach produces a model personalized to each sample. These sample-specific models can be studied to understand subgroup dynamics that go beyond coarse-grained class labels. Crucially, our approach does not assume that relationships between samples (e.g. a similarity network) are known a priori. Instead, we use unmodeled covariates to learn a latent distance metric over the samples. We apply this approach to financial, biomedical, and electoral data as well as simulated data and show that sample-specific models provide fine-grained interpretations of complicated phenomena without sacrificing predictive accuracy compared to state-of-the-art models such as deep neural networks.

Keywords: Personalization, sample-specific, low-rank models, personalized regression
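
A toy illustration of the distance-matching idea (a minimal numpy sketch for exposition only; the variable names and toy data are mine, and this is not the paper's algorithm or code): every sample gets its own coefficient vector, a shared squared-error term fits the responses, and a penalty pulls distances between coefficient vectors toward distances between unmodeled covariates.

    # Toy numpy sketch of the distance-matching idea (illustrative only; not the
    # paper's algorithm or code). Each sample i gets its own coefficient vector
    # Theta[i]; a penalty pulls ||Theta[i] - Theta[j]|| toward ||U[i] - U[j]||.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, q = 30, 3, 2                                     # samples, features, unmodeled covariates
    X = rng.normal(size=(n, p))                            # predictive features
    U = rng.normal(size=(n, q))                            # covariates used only inside the penalty
    true_Theta = 1.0 + 0.5 * U @ rng.normal(size=(q, p))   # coefficients drift smoothly with U
    y = np.sum(X * true_Theta, axis=1) + 0.1 * rng.normal(size=n)

    def objective_and_grad(Theta, lam=0.1):
        resid = np.sum(X * Theta, axis=1) - y              # per-sample residuals
        grad = 2.0 * resid[:, None] * X / n                # gradient of the mean squared error
        fit = np.mean(resid ** 2)
        pen, pen_grad = 0.0, np.zeros_like(Theta)
        for i in range(n):                                 # distance-matching penalty over pairs
            for j in range(i + 1, n):
                diff = Theta[i] - Theta[j]
                gap = np.linalg.norm(diff) - np.linalg.norm(U[i] - U[j])
                pen += gap ** 2
                g = 2.0 * gap * diff / (np.linalg.norm(diff) + 1e-12)
                pen_grad[i] += g
                pen_grad[j] -= g
        scale = 2.0 / (n * (n - 1))                        # average over pairs
        return fit + lam * scale * pen, grad + lam * scale * pen_grad

    Theta = np.zeros((n, p))                               # one coefficient vector per sample
    for _ in range(300):                                   # plain gradient descent
        val, g = objective_and_grad(Theta)
        Theta -= 0.1 * g
    print("final objective:", round(val, 4))               # Theta can now be inspected per sample

The actual method also learns the latent distance metric over the covariates and uses scalable optimization; this sketch only conveys the structure of the objective.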

We prove that $\Omega(s\log p)$ samples suffice to learn a sparse Gaussian directed acyclic graph (DAG) from data, where $s$ is the maximum Markov blanket size. This improves upon recent results that require $\Omega(s^{4}\log p)$ samples in the equal variance case. To prove this, we analyze a popular score-based estimator that has been the subject of extensive empirical inquiry in recent years and is known to achieve state-of-the-art results. Furthermore, the approach we study does not require strong assumptions such as faithfulness that existing theory for score-based learning crucially relies on. The resulting estimator is based on a difficult nonconvex optimization problem, and its analysis may be of independent interest given the growing role of nonconvex optimization in machine learning. Our analysis overcomes the drawbacks of existing theoretical analyses, which either fail to guarantee structure consistency in high-dimensions (i.e. learning the correct graph with high probability), or rely on restrictive assumptions. In contrast, we give explicit finite-sample bounds that are valid in the important $p\gg n$ regime.

Keywords: Graphical modeling, directed acyclic graphs, sample complexity, score-based learning

Motivated by problems in data clustering, we establish general conditions under which families of nonparametric mixture models are identifiable by introducing a novel framework for clustering overfitted parametric (i.e. misspecified) mixture models. These conditions generalize existing conditions in the literature, and are flexible enough to include, for example, mixtures of Gaussian mixtures. In contrast to the recent literature on estimating nonparametric mixtures, we allow for general nonparametric mixture components, and instead impose regularity assumptions on the underlying mixing measure. As our primary application, we apply these results to partition-based clustering, generalizing the well-known notion of a Bayes optimal partition from classical model-based clustering to nonparametric settings. Furthermore, this framework is constructive in that it yields a practical algorithm for learning identified mixtures, which is illustrated through several examples. The key conceptual device in the analysis is the convex, metric geometry of probability distributions on metric spaces and its connection to optimal transport and the Wasserstein convergence of mixing measures. The result is a flexible framework for nonparametric clustering with formal consistency guarantees.

Keywords: Nonparametric statistics, mixture models, clustering, identifiability, optimal transport
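
In standard mixture notation (not necessarily the paper's): the observed distribution and its mixing measure are

$$F = \sum_{k=1}^{K} \lambda_k F_k, \qquad \Lambda = \sum_{k=1}^{K} \lambda_k \, \delta_{F_k},$$

where the components $F_k$ may themselves be arbitrary distributions (e.g. Gaussian mixtures). Identifiability asks when $\Lambda$ can be recovered from $F$ alone, and the clustering guarantees are phrased through Wasserstein convergence of estimated mixing measures.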

Machine learning (ML) training algorithms often possess an inherent self-correcting behavior due to their iterative-convergent nature. Recent systems exploit this property to achieve adaptability and efficiency in unreliable computing environments by relaxing the consistency of execution and allowing calculation errors to be self-corrected during training. However, the behavior of such systems is only well understood for specific types of calculation errors, such as those caused by staleness, reduced precision, or asynchronicity, and for specific types of training algorithms, such as stochastic gradient descent. In this paper, we develop a general framework to quantify the effects of calculation errors on iterative-convergent algorithms and use this framework to design new strategies for checkpoint-based fault tolerance. Our framework yields a worst-case upper bound on the iteration cost of arbitrary perturbations to model parameters during training. Our system, SCAR, employs strategies which reduce the iteration cost upper bound due to perturbations incurred when recovering from checkpoints. We show that SCAR can reduce the iteration cost of partial failures by 78%--95% when compared with traditional checkpoint-based fault tolerance across a variety of ML models and training algorithms.

Keywords: Fault tolerance, distributed systems, machine learning, reliability, iterative algorithms

We study the sample complexity of semi-supervised learning (SSL) and introduce new assumptions based on the mismatch between a mixture model learned from unlabeled data and the true mixture model induced by the (unknown) class conditional distributions. Under these assumptions, we establish an $\Omega(K\log K)$ labeled sample complexity bound without imposing parametric assumptions, where $K$ is the number of classes. Our results suggest that even in nonparametric settings it is possible to learn a near-optimal classifier using only a few labeled samples. Unlike previous theoretical work which focuses on binary classification, we consider general multiclass classification ($K>2$), which requires solving a difficult permutation learning problem. This permutation defines a classifier whose classification error is controlled by the Wasserstein distance between mixing measures, and we provide finite-sample results characterizing the behaviour of the excess risk of this classifier. Finally, we describe three algorithms for computing these estimators based on a connection to bipartite graph matching, and perform experiments to illustrate the superiority of the MLE over the majority vote estimator.

Keywords: Semi-supervised learning, mixture models, nonparametric statistics, permutation learning, coupon collection, sample complexity
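
A toy sketch of the permutation-learning step (illustrative only; it uses off-the-shelf k-means in place of a nonparametric mixture estimator, the names and data are mine, and it is not one of the paper's estimators): components learned from unlabeled data are matched to class labels by solving a bipartite assignment problem on the labeled-sample counts.

    # Toy sketch of permutation learning via bipartite matching (illustrative only).
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    K = 3
    # Plenty of unlabeled data from K well-separated components.
    X_unlab = np.concatenate([rng.normal(loc=4.0 * k, scale=1.0, size=(300, 2)) for k in range(K)])
    kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X_unlab)

    # Only a handful of labeled samples per class (the scarce resource in SSL).
    X_lab = np.concatenate([rng.normal(loc=4.0 * k, scale=1.0, size=(5, 2)) for k in range(K)])
    y_lab = np.repeat(np.arange(K), 5)

    # counts[c, k] = number of labeled points of class c landing in cluster k.
    clusters = kmeans.predict(X_lab)
    counts = np.zeros((K, K))
    for c, k in zip(y_lab, clusters):
        counts[c, k] += 1

    # Maximum-weight bipartite matching recovers the cluster -> class permutation.
    rows, cols = linear_sum_assignment(-counts)
    cluster_to_class = {int(k): int(c) for c, k in zip(rows, cols)}
    print("learned permutation:", cluster_to_class)

Roughly speaking, the $K\log K$ rate has a coupon-collector flavour: enough labeled samples are needed so that every class is represented when the matching is computed.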

Estimating the structure of directed acyclic graphs (DAGs, also known as Bayesian networks) is a challenging problem since the search space of DAGs is combinatorial and scales superexponentially with the number of nodes. Existing approaches rely on various local heuristics for enforcing the acyclicity constraint. In this paper, we introduce a fundamentally different strategy: we formulate the structure learning problem as a purely continuous optimization problem over real matrices that avoids this combinatorial constraint entirely. This is achieved by a novel characterization of acyclicity that is not only smooth but also exact. The resulting problem can be efficiently solved by standard numerical algorithms, which also makes implementation effortless. The proposed method outperforms existing ones, without imposing any structural assumptions on the graph such as bounded treewidth or in-degree.

Keywords: Directed acyclic graphs, Bayesian networks, constrained optimization, nonconvex optimization, augmented Lagrangian, black-box
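
Concretely (this is the smooth trace-exponential characterization popularized by this line of work; see the paper for the exact statement), the combinatorial constraint is replaced by a single smooth equality constraint:

$$\min_{W \in \mathbb{R}^{d \times d}} F(W) \;\; \text{subject to} \;\; h(W) = \operatorname{tr}\big(e^{W \circ W}\big) - d = 0,$$

where $F$ is a score such as least squares, $\circ$ is the Hadamard product, and $h(W) = 0$ holds if and only if the weighted adjacency matrix $W$ is acyclic; the equality-constrained program is then solved with an augmented Lagrangian and standard numerical solvers.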

In many applications, inter-sample heterogeneity is crucial to understanding the complex biological processes under study. For example, in genomic analysis of cancers, each patient in a cohort may have a different driver mutation, making it difficult or impossible to identify causal mutations from an averaged view of the entire cohort. Unfortunately, many traditional methods for genomic analysis seek to estimate a single model which is shared by all samples in a population, ignoring this inter-sample heterogeneity entirely. In order to better understand patient heterogeneity, it is necessary to develop practical, personalized statistical models. To uncover this inter-sample heterogeneity, we propose a novel regularizer for achieving patient-specific personalized estimation. This regularizer operates by learning latent distance metrics between personalized parameters and clinical covariates, and attempting to match these distances as closely as possible. Crucially, we do not assume these distances are already known. Instead, we allow the data to dictate the structure of these latent distance metrics. Finally, we apply our method to learn patient-specific, interpretable models for a pan-cancer gene expression dataset containing samples from more than 30 distinct cancer types and find strong evidence of personalization effects between cancer types as well as between individuals. Our analysis uncovers sample-specific aberrations that are overlooked by population-level methods, suggesting a promising new path for precision analysis of complex diseases such as cancer.

Keywords: Precision medicine, personalized regression, patient-specific modeling, distance-matching, TCGA

Keywords: High-dimensional regression, genomics, irrepresentability, correlated variables

A fundamental challenge in modern datasets of ever-increasing dimensionality is variable selection, which has taken on renewed interest recently due to the growth of biological and medical datasets with complex, non-i.i.d. structures. Naively applying classical variable selection methods such as the Lasso to such datasets may lead to a large number of false discoveries. Motivated by genome-wide association studies in genetics, we study the problem of variable selection for datasets arising from multiple subpopulations, when this underlying population structure is unknown to the researcher. We propose a unified framework for sparse variable selection that adaptively corrects for population structure via a low-rank linear mixed model. Most importantly, the proposed method does not require prior knowledge of sample structure in the data and adaptively selects a covariance structure of the correct complexity. Through extensive experiments, we illustrate the effectiveness of this framework over existing methods. Further, we test our method on three different genomic datasets from plants, mice, and humans, and discuss the findings our method uncovers.

Keywords: GWAS, linear mixed models, heterogeneous data, confounding, population structure
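
For context, the standard linear mixed model used in GWAS-style analyses (standard notation; the paper's low-rank construction differs in its details) is

$$y = X\beta + g + \varepsilon, \qquad g \sim \mathcal{N}(0, \sigma_g^2 K), \qquad \varepsilon \sim \mathcal{N}(0, \sigma_e^2 I),$$

where $K$ is a sample-relatedness (kinship) matrix. Sparse selection is performed on $\beta$ while the random effect $g$ absorbs confounding due to population structure, and taking $K$ to be low-rank keeps this correction adaptive and computationally tractable.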

Learning graphical models from data is an important problem with wide applications, ranging from genomics to the social sciences. Nowadays datasets often have upwards of thousands---sometimes tens or hundreds of thousands---of variables and far fewer samples. To meet this challenge, we have developed a new R package called sparsebn for learning the structure of large, sparse graphical models with a focus on Bayesian networks. While there are many existing software packages for this task, this package focuses on the unique setting of learning large networks from high-dimensional data, possibly with interventions. As such, the methods provided place a premium on scalability and consistency in a high-dimensional setting. Furthermore, in the presence of interventions, the methods implemented here achieve the goal of learning a causal network from data. Additionally, the sparsebn package is fully compatible with existing software packages for network analysis.

Keywords: R, software, graphical modeling, directed acyclic graphs, structural equations

We develop a penalized likelihood estimation framework to learn the structure of Gaussian Bayesian networks from observational data. In contrast to recent methods which accelerate the learning problem by restricting the search space, our main contribution is a fast algorithm for score-based structure learning which does not restrict the search space in any way and works on high-dimensional data sets with thousands of variables. Our use of concave regularization, as opposed to the more popular L0 (e.g. BIC) penalty, is new. Moreover, we provide theoretical guarantees which generalize existing asymptotic results when the underlying distribution is Gaussian. Most notably, our framework does not require the existence of a so-called faithful DAG representation, and as a result, the theory must handle the inherent nonidentifiability of the estimation problem in a novel way. Finally, as a matter of independent interest, we provide a comprehensive comparison of our approach to several standard structure learning methods using open-source packages developed for the R language. Based on these experiments, we show that our algorithm obtains higher sensitivity with comparable false discovery rates for high-dimensional data and scales efficiently as the number of nodes increases. In particular, the total runtime for our method to generate a solution path of 20 estimates for DAGs with 8000 nodes is around one hour.

Keywords: Bayesian networks, concave penalization, directed acyclic graphs, coordinate descent, nonconvex optimization

// theses

Research into graphical models is a rapidly developing enterprise, garnering significant interest from both the statistics and machine learning communities. A parallel thread in both communities has been the study of low-dimensional structures in high-dimensional models where $p\gg n$. Recently, there has been a surge of interest in connecting these threads in order to understand the behaviour of graphical models in high-dimensions. Due to their relative simplicity, undirected models such as the Gaussian graphical model and Ising models have received most of the attention, whereas directed graphical models have received comparatively little attention. An important yet largely unresolved class of directed graphical models is that of Bayesian networks, or directed acyclic graphs (DAGs). These models have a wide variety of applications in artificial intelligence, machine learning, genetics, and computer vision, but estimation of Bayesian networks in high-dimensions is not well-understood. The main focus of this dissertation is to address some fundamental questions about these models in high-dimensions.

The primary goal is to develop both algorithms and theory for estimating continuous, linear Bayesian networks, capable of handling modern high-dimensional problems. Motivated by problems from the regression literature, we show how to adapt recent work in sparse learning and nonconvex optimization to the structure learning problem for Bayesian networks in order to estimate DAGs with several thousand nodes. We draw an explicit connection between linear Bayesian networks and so-called neighbourhood regression problems and show how this can be exploited in order to derive nonasymptotic performance bounds for penalized least squares estimators of directed graphical models.

On the algorithmic side, we develop a method for estimating Gaussian Bayesian networks based on convex reparametrization and cyclic coordinate descent. In contrast to recent methods which accelerate the learning problem by restricting the search space, we propose a method for score-based structure learning which does not restrict the search space. We do not require the existence of a so-called faithful DAG representation, and as a result, our methodology must handle the inherent nonidentifiability of the estimation problem in a novel way. On the theoretical side, we provide (a) finite-dimensional performance guarantees for local minima of the resulting nonconvex program, and (b) a general high-dimensional framework for global minima of the nonconvex program. Both the algorithms and theory apply to a general class of regularizers, including the MCP, SCAD, $\ell_1$ and $\ell_0$ penalties. Finally, as a matter of independent interest, we provide a comprehensive comparison of our approach to several standard structure learning methods using open-source packages developed for the \texttt{R} language.

Keywords: Bayesian networks, high-dimensional statistics, graphical models, sparse regression, concave regularization, nonconvex optimization

Ten years ago, Ehrlich and Sanchez produced a pointwise statement of the classical Bishop volume comparison theorem for so-called SCLV subsets of the causal future in a Lorentz manifold, while Petersen and Wei developed and proved an integral version for Riemannian manifolds. We apply Petersen and Wei's method to the SCLV sets, and verify that two essential differential equations from the Riemannian proof extend to the Lorentz setting. As a result, we obtain a volume comparison theorem for Lorentz manifolds with integral, rather than pointwise, bounds. We also briefly discuss the history of the problem, starting with Bishop's original theorem from 1963.

Keywords: Differential geometry, volume comparison, Lorentz manifolds

Link

// teaching

I will not be teaching in Fall 2019 or Winter 2020.

Past teaching assignments:

  • Machine Learning 10-821: Data Analysis Project Preparation (Fall 2017)
  • Statistics 10: Introduction to Statistical Reasoning (Spring 2016)
  • Statistics 10: Introduction to Statistical Reasoning (Winter 2016)
  • Statistics 10: Introduction to Statistical Reasoning (Fall 2015)
  • Statistics 495A: Teaching College Statistics (Winter 2015)
  • Statistics 100A: Introduction to Probability (Spring 2014)
  • Statistics 101B: Introduction to Design and Analysis of Experiments (Winter 2014)
  • Statistics 102A: Introduction to Computational Statistics with R (Fall 2013)
  • PIC 20A: Principles of Java (Spring 2010)
  • PIC 10A: Introduction to C++ Programming (Winter 2010)
  • PIC 10A: Introduction to C++ Programming (Fall 2009)

// software

You can find more up-to-date information on my software projects by visiting my GitHub page.

// NO TEARS

DAG learning formulated as a continuous, black-box optimization problem over real matrices that avoids combinatorial optimization. This repository includes two versions: a simple version implemented using scipy in fewer than 50 lines of Python code, and an L1-regularized version solved with a proximal quasi-Newton method.

source / paper
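
For intuition, here is a rough numpy sketch of the acyclicity function at the heart of this formulation (illustrative only, not the repository code):

    # Rough numpy sketch of the smooth acyclicity function behind this formulation
    # (h(W) = tr(exp(W * W)) - d); illustrative only, not the repository code.
    import numpy as np
    from scipy.linalg import expm

    def h(W):
        """Smooth function that is zero iff the weighted adjacency matrix W is acyclic."""
        d = W.shape[0]
        return np.trace(expm(W * W)) - d        # W * W is the elementwise (Hadamard) square

    def h_grad(W):
        """Gradient of h, used by gradient-based solvers: expm(W * W)^T * 2W."""
        return expm(W * W).T * 2.0 * W

    W_cyclic = np.array([[0., 1., 0.],           # contains the cycle 1 -> 2 -> 3 -> 1
                         [0., 0., 1.],
                         [1., 0., 0.]])
    W_acyclic = np.array([[0., 1., 0.],          # same edges minus the back edge: a DAG
                          [0., 0., 1.],
                          [0., 0., 0.]])
    print(h(W_cyclic), h(W_acyclic))             # strictly positive vs. (numerically) zero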

// Precision Lasso

The Precision Lasso is a variant of the Lasso designed to adapt to and account for correlations and dependencies in high-dimensional data.

source / paper

// Personalized regression

This repository contains Python code for learning sample-specific, personalized regression models. The goal of personalized regression is to perform retrospective analysis by estimating simple models that each apply to a single sample. After estimating these sample-specific models, we have a matrix of model parameters which we may analyze as we wish.

source / paper

// sparsebn package for R

sparsebn is an R package for learning large-scale Bayesian networks from high-dimensional data. It allows users to incorporate mixed experimental and observational data with either continuous or discrete observations, and scales to datasets with many thousands of variables. The underlying framework is based on recent developments in sparse (e.g. L1) regularization, coordinate descent, and nonconvex optimization.

cran / source / paper

// ccdr package for R

The source code for the CCDr algorithm described in Aragam and Zhou (2015) is freely available online through GitHub.

ccdr is an R package for structure learning of linear Bayesian networks from high-dimensional, Gaussian data. The underlying algorithm estimates a Bayesian network (aka DAG or belief net) using penalized maximum likelihood based on L1 or concave (MCP) regularization and observational data.

source / paper

// contact

If you want to...

...e-mail me: bryon at chicagobooth dot edu.

...find me on LinkedIn, click here.

...see my CV, click here.

I also got a little bored while designing this site, so I hid some easter eggs here and there.

// biography

Bryon Aragam studies high-dimensional statistics, machine learning, and optimization. His research focuses on mathematical aspects of data science and statistical machine learning in nontraditional settings. Some of his recent projects include problems in graphical modeling, nonparametric statistics, personalization, nonconvex optimization, and high-dimensional inference. He is also involved with developing open-source software and solving problems in interpretability, ethics, and fairness in artificial intelligence.

Prior to joining the University of Chicago, he was a project scientist and postdoctoral researcher in the Machine Learning Department at Carnegie Mellon University. He completed his PhD in Statistics and a Master's in Applied Mathematics at UCLA, where he was an NSF graduate research fellow. Bryon has also served as a data science consultant for technology and marketing firms, where he has worked on problems in survey design and methodology, ranking, customer retention, and logistics.