Large-scale machine learning relies on many kinds of resources and must operate under a variety of constraints. Understanding the tradeoffs among these resources and constraints is necessary for effective and successful learning. In the early stages of machine learning, a fundamental tradeoff of interest was between computational and statistical resources. In this paradigm, computational resources (often on a single machine) were in tension with statistical resources and constraints such as the number of training samples and the attainable statistical accuracy. The study of this tradeoff had a large impact on the statistical analysis of machine learning algorithms. With the advancement of parallel and distributed machines, the study of tradeoffs expanded to include memory and communication considerations. Other advances, such as online streaming and small embedded devices, introduced further constraints, including access to only partial information.
More recently, tradeoffs involving the constraints of privacy, fairness, and robustness have been of growing interest to the machine learning community. These three constraints have long been important to the community. However, modern technological advances affecting how data are gathered, processed, and transferred, together with theoretical advances that clarify these nontrivial constraints in various settings, have led to a surge of recent work on these constraints and their tradeoffs with other resources and constraints. The workshop will discuss exciting recent developments in this area. It will bring together experts from diverse backgrounds to present and discuss recent results, and will also identify new research opportunities in this broad field.