Abstracts for the IMA Workshop on
Robustness in Complex Systems
February 9-13, 2004


Probability and Statistics in Complex Systems: Genomics, Networks, and Financial Engineering, September 1, 2003 - June 30, 2004

David Alderson (Control and Dynamical Systems, California Institute of Technology (Caltech)) alderd@cds.caltech.edu

The Role of Design in the Internet and Other Complex Systems

The Internet offers an attractive case study of a complex network, since our understanding of the underlying technology, together with the ability to perform detailed measurements, means that most conjectures about its large-scale properties can be unambiguously resolved, though often not without substantial effort. A fundamental challenge in the study of the Internet and other complex systems is to identify and understand the relationship between large-scale features and their underlying mechanisms. One popular approach leverages the tools of statistical physics to emphasize emergent properties of random ensembles, typically with constraints on macroscopic statistics. The main focus is on generic configurations whose construction is governed by randomness and that are likely to occur by chance. Recent examples of this perspective applied to the Internet are the so-called "scale-free" models of network connectivity graphs. These models claim that the emergence of power laws in the distribution of node degrees in these graphs is essentially explained by a random process involving preferential attachment, and that this process reveals what has become known as the "Achilles' heel of the Internet": the presence of a few highly connected hubs within the core of the network that, when disabled by targeted attacks, would bring the Internet to its knees.

An alternative perspective, motivated by engineering, suggests that nonrandom design rather than randomness plays a primary role in the construction and evolution of complex systems. The emphasis here is on non-generic, highly engineered configurations that are extremely unlikely to occur by chance, and the complex structure of highly engineered technology and of biological systems is viewed as the natural by-product of tradeoffs between system-specific objectives and constraints.

This talk shows how and why the latter view, when applied to the study of router-level Internet connectivity, results in conclusions that are fully consistent with the real Internet but are the exact opposite of what the scale-free models claim. The reasons for reaching such divergent conclusions about one and the same system go well beyond the Internet and scale-free models; they reflect a problem endemic in the application of ideas from statistical physics to technology and biology, where it is assumed that the details of a complex system's design, functionality, constraints, and evolution (i.e., all the ingredients that make engineering and biology different from physics) can be safely ignored in favor of random ensembles and their emergent properties.

(This represents joint work with J. Doyle, W. Willinger, and L. Li.)
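
As background for readers unfamiliar with the preferential-attachment construction referred to above, the sketch below grows a random graph by preferential attachment and tallies its node degrees; the code and parameters are illustrative only and are not taken from the talk.

    # Toy preferential-attachment growth model (illustrative only; not the
    # router-level models or the scale-free critique discussed in the talk).
    import random
    from collections import Counter

    def preferential_attachment(n_nodes, edges_per_new_node=2, seed=0):
        rng = random.Random(seed)
        degree = Counter({0: 2, 1: 2, 2: 2})   # start from a triangle
        pool = [0, 1, 2, 0, 1, 2]              # each node appears in proportion to its degree
        for new in range(3, n_nodes):
            targets = set()
            while len(targets) < edges_per_new_node:
                targets.add(rng.choice(pool))  # high-degree nodes are picked more often
            for old in targets:
                degree[new] += 1
                degree[old] += 1
                pool.extend([new, old])
        return degree

    degree = preferential_attachment(10000)
    histogram = Counter(degree.values())
    for d in sorted(histogram)[:10]:
        print(d, histogram[d])   # counts fall off roughly as a power law in d

Run as-is, the degree counts decay roughly as a power law, which is the kind of "generic" statistical outcome the statistical-physics perspective emphasizes.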

Massoud Amin (HW Sweatt Chair in Technological Leadership, CDTL Director and Professor of Electrical and Computer Engineering, University of Minnesota) amin@umn.edu http://cdtlnet.cdtl.umn.edu/amin/amin.html

Robustness and Resilience of Critical Infrastructures

The massive August 14, 2003 power outage in the United States and Canada evoked eerie reminders of what shook our world on September 11, 2001. While early reports indicated no apparent evidence of terrorism in this outage or those in the UK and Italy in 2003, the cascading blackouts spotlighted our electricity infrastructure's vulnerabilities. This infrastructure affects us all -- are we prepared for future storms?

Our economy and security place ever-greater demands on reliable, disturbance-free electricity. Virtually every crucial economic and social function depends on the secure, reliable operation of energy, telecommunications, transportation, financial, and other infrastructures. The Internet, computer networks, and our digital economy have further increased that demand, and banking and finance depend on the robustness of electric power, cable, and wireless telecommunications. Transportation systems, including military and commercial aircraft and land and sea vessels, depend on communication and energy networks. Links between the power grid and telecommunications, and between electrical power and oil, water, and gas pipelines, continue to be a lynchpin of energy supply networks.

The potential ramifications of network failures have never been greater, as the transportation, telecommunications, oil and gas, banking and finance, and other infrastructures depend on the continental power grid to energize and control their operations. Furthermore, as the power grids become heavily loaded with long-distance transfers, the already complex system dynamics become even more important. The potential for rare but high-impact cascading events is just one of many issues driving the new science and technology concepts now under development.

This presentation focuses on the robustness and resilience of critical infrastructures, in particular the electricity infrastructure coupled with its economic and communication layers. We briefly present the overall context in which these coupled infrastructures are operated, followed by the challenges of increasing the situational awareness of operators and security monitors, identifying signals and precursors to failures, developing infrastructure defense plans, protecting against rare events and extreme contingencies, and achieving rapid, robust restoration.

From a strategic R&D viewpoint, we will discuss the agility and robustness/survivability of large-scale dynamic networks that face new and unanticipated operating conditions. A major challenge is posed by the lack of a unified mathematical framework with robust tools for modeling, simulation, control, and optimization of time-critical operations in complex multicomponent and multiscaled networks. The presentation will also outline a strategic vision, extending a decade or longer, that would enable more secure and robust systems operation, security monitoring, and efficient energy markets.

Biography:

Massoud Amin is Professor of Electrical and Computer Engineering, holds the H.W. Sweatt Chair in Technological Leadership, and is the Director of the Center for the Development of Technological Leadership (CDTL) at the University of Minnesota. His research focuses on global transition dynamics to enhance the resilience and security of national critical infrastructures. Before joining Minnesota in March 2003, Dr. Amin was with the Electric Power Research Institute (EPRI), where he directed all security-related research and development. He twice received the Chauncey Award, the institute's highest honor, and coined the term "self-healing grid."

Dr. Amin is a member of the Board on Infrastructure and the Constructed Environment (BICE) at the U.S. National Academy of Engineering. He is a member of the IEEE Computer Society's Task Force on Security and Privacy, chairs the Energy Security team of the ASME Critical Assets Protection Initiative (CAPI), and is a board member of the Center for Security Technologies (CST) at Washington University. He is the author or co-author of more than 95 research papers and the editor of six collections of manuscripts, and he serves on the editorial boards of four journals. Dr. Amin holds B.S. (cum laude) and M.S. degrees from the University of Massachusetts-Amherst, and M.S. and D.Sc. degrees from Washington University in St. Louis, Missouri.

For additional information and downloadable publications, please see: http://cdtlnet.cdtl.umn.edu/amin.html

Adam Paul Arkin (Department of Bioengineering, University of California, Berkeley; Physical Biosciences Division, Lawrence Berkeley National Laboratory; Howard Hughes Medical Institute) aparkin@lbl.gov

Playing Practical Games with Bacteria and Viruses: Exploring the Molecular Mechanisms Behind Clever Cellular Stratagems

How do pathogenic bacteria sense their environment to deploy different survival strategies? Why do some viruses, like HIV, allow their host to live for long periods whereas others like Ebola do not? How precisely are these strategies encoded in the organism's biochemistry and genetics and how closely do they need to be followed to guarantee its survival? What are the optimal strategies for defeating these organisms or forcing them to do our bidding for industrial or medical benefit? Here I will demonstrate, using examples from our research on Bacillus subtilis stress response and the design of HIV gene therapeutic strategies, how molecular biology combined with methods from statistical physics, nonlinear dynamics, and game theory can be used to pose and partially answer these questions as well as illustrate some of the profound challenges in doing so.

Francis J. Doyle III (Center for Control Engineering and Computation, Department of Chemical Engineering, University of California, Santa Barbara)  doyle@engineering.ucsb.edu  http://www.chemengr.ucsb.edu/people/faculty/doyle.html

Robustness Analysis of Gene Regulatory Network Underlying Circadian Rhythms

The gene network that underlies circadian rhythms is an ideal system for robustness studies, owing to its remarkable performance in a highly uncertain environment. Of interest for control-theoretic analyses, the dominant elements of the postulated architecture for Drosophila consist of nested negative autoregulatory feedback loops controlling the expression of timeless (tim) and period (per), interlocked with a positive feedback loop established via the dClock gene. Complex formation, regulated translocation, and degradation of several of these gene products, which are additionally controlled (and delayed) by protein phosphorylation, add further levels of complexity to the system. Ongoing efforts to model and analyze this system using formal systems engineering methods will be presented.
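
The abstract gives no equations; as a purely illustrative cartoon of how a negative autoregulatory loop can sustain a rhythm, the sketch below integrates a generic Goodwin-type oscillator with made-up parameters, not the Drosophila model analyzed in the talk.

    # Generic Goodwin-type negative-feedback loop (a textbook cartoon, NOT
    # the per/tim/dClock circadian model discussed in the talk).
    import numpy as np
    from scipy.integrate import odeint

    def goodwin(state, t, a=10.0, n=12, k=1.0):
        m, p, r = state                  # mRNA, cytoplasmic protein, nuclear repressor
        dm = a / (1.0 + r**n) - k * m    # transcription repressed by the nuclear protein
        dp = m - k * p                   # translation
        dr = p - k * r                   # translocation into the nucleus
        return [dm, dp, dr]

    t = np.linspace(0, 100, 2000)
    traj = odeint(goodwin, [0.1, 0.1, 0.1], t)
    print("mRNA min/max over the second half:", traj[1000:, 0].min(), traj[1000:, 0].max())
    # With sufficiently steep repression (here n = 12) the steady state is
    # unstable and the loop settles into sustained oscillations.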

Carla P. Gomes (Department of Computer Science, Cornell University)  gomes@cs.cornell.edu  http://www.cs.cornell.edu/gomes/

Heavy-Tailed Phenomena in Computation

We study the run time profiles of combinatorial search methods. Our study reveals that such methods have large variability in performance. In fact, the runtime distributions of combinatorial search methods exhibit intriguing properties, often characterized by very long tails or "heavy tails." Such non-standard distributions have recently been observed in areas as diverse as economics, statistical physics, and geophysics. We discuss how one can boost the performance of search procedures by exploiting this heavy-tailed behavior, and we demonstrate speedups of several orders of magnitude for state-of-the-art complete search procedures running on real-world problem instances. We also present formal models of heavy tails in combinatorial search and discuss different "statistical regimes of heavy-tailed behavior" in combinatorial domains, providing a general characterization of the parameter regions where heavy-tailed behavior is prevalent.
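
The abstract does not spell out the boosting mechanism; one standard way to exploit heavy-tailed run time distributions is to restart a randomized search after a fixed cutoff. The toy sketch below, which assumes Pareto-distributed run times purely for illustration, shows how restarts can collapse the mean run time.

    # Toy illustration: with a heavy-tailed run time distribution, restarting
    # after a cutoff avoids the rare but enormously long runs.
    import random

    def pareto_runtime(rng, alpha=0.5, xmin=1.0):
        # Pareto tail with alpha < 1: the mean run time is infinite.
        return xmin / (rng.random() ** (1.0 / alpha))

    def run_with_restarts(rng, cutoff):
        total = 0.0
        while True:
            t = pareto_runtime(rng)
            if t <= cutoff:
                return total + t      # this attempt finished within the cutoff
            total += cutoff           # abandon the attempt and restart

    rng = random.Random(1)
    plain = [pareto_runtime(rng) for _ in range(100000)]
    restarted = [run_with_restarts(rng, cutoff=4.0) for _ in range(100000)]
    print("mean run time, no restarts:  ", sum(plain) / len(plain))
    print("mean run time, with restarts:", sum(restarted) / len(restarted))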

Ramesh Johari (Laboratory for Information and Decision Systems, Massachusetts Institute of Technology)  rjohari@MIT.EDU  http://www.mit.edu/~rjohari/

A Game Theoretic View of Efficiency Loss in Network Resource Allocation

The Internet has evolved into a heterogeneous system comprising many users who value their own performance rather than the efficiency of the system as a whole; as a result, proposals for network resource allocation must be robust against self-interested behavior of the network users. With this motivation, we analyze a network congestion game in which the users of congested finite-capacity links anticipate the effect of their actions on the link prices. We show existence of a Nash equilibrium, discuss uniqueness, and establish that the efficiency of the system drops by no more than 25% relative to the social optimum. We also consider an extension of this model to links with elastic capacity; we again establish existence of a Nash equilibrium, and show that in this case the efficiency of the system drops by no more than 6 - 4 \sqrt{2} (approximately 34%) relative to the social optimum. Finally, we discuss some implications of these results for current work on Internet congestion control.

This is joint work with John Tsitsiklis and Shie Mannor.
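
The abstract does not spell out the mechanism; as background, games of this kind are commonly formulated through the proportional-allocation (Kelly) mechanism, sketched here in standard notation rather than quoted from the paper. For a single link of capacity C, each user i submits a bid w_i \ge 0, pays w_i, and receives the rate x_i = C w_i / \sum_j w_j. If users act as price takers, the resulting allocation maximizes the aggregate utility \sum_i U_i(x_i) subject to \sum_i x_i \le C (for concave, nondecreasing U_i). When users instead anticipate how their bids move the price \mu = \sum_j w_j / C, the Nash equilibrium allocation satisfies \sum_i U_i(x_i^{NE}) \ge (3/4) \max_{\sum_i x_i \le C} \sum_i U_i(x_i), which matches the 25% efficiency-loss bound quoted above; the elastic-capacity bound of 6 - 4 \sqrt{2} quoted above arises from the analogous worst case when the supply itself responds to price.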

Mustafa Khammash (Department of Mechanical and Environmental Engineering, University of California at Santa Barbara) khammash@engineering.ucsb.edu   http://www.cse.ucsb.edu/people/Khammash.html

Modeling and Analysis of Bacterial Stress-Response

The heat shock response in bacteria is an important mechanism for combating the stress associated with an increase in temperature in the cellular environment. The elevated temperature causes the unfolding or misfolding of cellular proteins and leads to a state of cellular stress. The cell responds to the accumulation of nonfunctional proteins by the heat-induced upregulation of the heat shock proteins (HSPs), including both chaperones and proteases. The production of HSPs is regulated directly by alterations in the level, activity, and stability of the sigma factor sigma-32. The logic of the heat shock response is implemented through a hierarchy of feedback and feedforward controls that regulate both the amount of sigma-32 and its functionality. In this talk we present a dynamic model that captures known aspects of the heat shock system. With the aid of this model, we discuss the logic of the heat shock response from a control theory perspective, drawing comparisons to synthetic engineering control systems. Questions related to stochastic modeling, robustness analysis, and model validation will also be discussed, and related mathematical challenges will be outlined.
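
As a cartoon of the regulatory logic described above, the toy model below (made-up parameters and rate laws, not the model presented in the talk) lets unfolded protein titrate chaperones away from sigma-32, freeing sigma-32 to drive further HSP synthesis.

    # Toy sigma-32 sequestration feedback (illustrative only).
    import numpy as np
    from scipy.integrate import odeint

    S_TOT, K_SEQ, K_TITR = 1.0, 0.05, 1.0      # hypothetical binding scales

    def heat_shock(state, t):
        U, H = state                            # unfolded protein, heat shock proteins
        heat = 0.1 if t < 20 else 2.0           # temperature upshift at t = 20
        H_free = H / (1.0 + U / K_TITR)         # chaperones titrated by unfolded protein
        sigma_free = S_TOT / (1.0 + H_free / K_SEQ)  # free (active) sigma-32
        dU = heat - H_free * U                  # unfolding vs. chaperone-assisted refolding
        dH = 2.0 * sigma_free - 0.2 * H         # sigma-32-driven HSP synthesis and turnover
        return [dU, dH]

    t = np.linspace(0, 100, 2000)
    U, H = odeint(heat_shock, [0.1, 5.0], t).T
    print("HSP level before vs. after the upshift:", H[300], H[-1])
    # After the upshift, free sigma-32 rises, HSP synthesis increases, and
    # refolding capacity recovers -- a minimal version of the feedback logic
    # described in the abstract.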

Steven Low (Department of Electrical Engineering & Computer Science, California Institute of Technology (Caltech))  slow@caltech.edu

Utility Maximization, Routing, and Fair-Efficiency Tradeoff

It turns out that TCP-AQM can be interpreted as a distributed primal-dual algorithm over the Internet that maximizes aggregate utility over source rates. Indeed, an allocation policy can be defined in terms of a class of utility functions characterized by a scalar parameter alpha: an allocation is fair if alpha is large, and efficient if the aggregate source rate is large. All examples in the literature suggest that a fair allocation is necessarily inefficient. We characterize exactly the tradeoff between fairness and throughput in general networks. The characterization allows us both to produce the first counter-example and to explain, trivially, all the previous supporting examples. Surprisingly, the class of networks in our counter-example is such that a fairer allocation is always more efficient.
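
The alpha-parameterized family referred to here is commonly written as U_\alpha(x) = x^{1-\alpha} / (1-\alpha) for \alpha \ge 0, \alpha \ne 1, and U_1(x) = \log x (a standard parameterization stated for concreteness, not quoted from the talk). Maximizing \sum_i U_\alpha(x_i) over feasible rates recovers throughput maximization at \alpha = 0, proportional fairness at \alpha = 1, and approaches max-min fairness as \alpha \to \infty; in this sense larger \alpha is "fairer," while the aggregate rate \sum_i x_i measures efficiency.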

Will TCP-AQM/IP maximize aggregate utility over both source rates and routes? We show that the problem is NP-hard and therefore cannot, in general, be solved by minimum-cost routing. We exhibit an inevitable tradeoff between routing stability and achievable utility.

(Joint work with J. Doyle, L. Li, A. Tang, J. Wang).

Andrew T. Ogielski (Renesys Corporation) ato@renesys.com

Impact of the 2003 Blackouts on Internet Communications

In August 2003, electric power outages affected 50 million people in the Northeastern US and Canada, causing economic losses estimated to exceed $5 billion. The outage was not only the largest in US history, but also the first of its scale since the rise of the commercial Internet and the World Wide Web.

In this report we examine the impact the blackout had on Internet connectivity and traffic routing in the region, as recorded in real time in data collected from Internet routers worldwide. We find that Internet connectivity in the blacked-out region was far more seriously affected than has been publicly revealed.

While the very largest provider networks (the Internet backbones) were apparently unaffected by the blackout [KN1, KN2], many thousands of significant networks and millions of individual Internet users were offline for hours or days. Banks, investment funds, business services, manufacturers, hospitals, educational institutions, internet service providers, and federal and state government units were among the affected organizations.

In this presentation we describe the time course and statistics of various network outage metrics during the US/Canadian blackout (August 14-16, 2003). We also provide a comparative analysis of Internet outages during a country-wide blackout in Italy on September 28, 2003 that reveals some interesting statistical similarities despite significant differences in Italian Internet architecture. These similarities may indicate the existence of more general statistical laws describing the expected number of connectivity failures and their time courses under wide-area network failures due to blackouts or other disasters. This is a suggested subject for future research.

The report is available on the Web at http://www.renesys.com/news/index.html.

Venkat N. Padmanabhan (Microsoft Research)  padmanab@microsoft.com  http://www.research.microsoft.com/~padmanab/

Impact of Interference on Multi-hop Wireless Network Performance

Multi-hop wireless networks are growing in importance, driven by applications such as community networking and sensor networking. A key characteristic of such networks is that the (wireless) links are not independent -- the operation of one link may interfere with the operation of other links in its vicinity. This raises interesting and fundamental questions about the throughput capacity of such networks and how it is affected by network scale and by engineering parameters such as radio range and the number of channels.

We survey prior work in this area, including the seminal work of Gupta and Kumar, which has focused on computing asymptotic capacity bounds under assumptions of homogeneity or randomness in the network topology and/or workload. We then present our work, which considers the throughput performance question for any given network and workload specified as inputs. The conflict graph framework we develop enables "what-if" analysis of the impact of various network parameters on throughput. We conclude by pointing out several avenues for future work.

(Joint work with K. Jain, J. Padhye, and L. Qiu).
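
As a hedged illustration of the conflict-graph idea (the topology and the interference rule below are invented for the example, not taken from the paper): vertices of the conflict graph are wireless links, edges join pairs of links that cannot be active at the same time, and the link sets that can transmit together in one slot are exactly the independent sets.

    # Toy conflict-graph construction; the framework in the paper feeds such
    # graphs into optimization problems that bound achievable throughput.
    from itertools import combinations

    # Wireless links as (transmitter, receiver) pairs on a chain of nodes 0-4.
    links = [(0, 1), (1, 2), (2, 3), (3, 4)]

    def interferes(l1, l2):
        # Crude rule for the example: links conflict if any endpoint of one is
        # within distance 1 of an endpoint of the other (including shared nodes).
        return any(abs(a - b) <= 1 for a in l1 for b in l2)

    conflict_edges = {(i, j) for i, j in combinations(range(len(links)), 2)
                      if interferes(links[i], links[j])}

    def is_independent(subset):
        return all((i, j) not in conflict_edges for i, j in combinations(subset, 2))

    schedulable = [s for r in range(1, len(links) + 1)
                   for s in combinations(range(len(links)), r) if is_independent(s)]
    print("conflict graph edges:", sorted(conflict_edges))
    print("link sets usable in one slot:", schedulable)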

Antonis Papachristodoulou (California Institute of Technology) antonis@its.caltech.edu

Analysis of Nonlinear Delay Models for TCP/AQM

Understanding the properties of complex systems such as the Internet requires the construction of robust models followed by appropriate analysis. The models usually employed take the form of coupled nonlinear delay differential equations, but the absence of efficient methodologies for analysis at this level often leads to the investigation of their linearizations or their undelayed versions. In this presentation we introduce an algorithmic procedure for the analysis of nonlinear TCP/AQM protocols modelled by functional differential equations, using the sum of squares decomposition and convex optimization.
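
For readers unfamiliar with the method, the core idea is standard background rather than specific to this talk: a polynomial p(x) is a sum of squares (SOS) if p(x) = \sum_i q_i(x)^2 for some polynomials q_i, which certifies p(x) \ge 0 for all x. Equivalently, p is SOS exactly when p(x) = z(x)^T Q z(x) for a vector of monomials z(x) and a positive semidefinite matrix Q, so searching for an SOS certificate is a semidefinite program. Stability questions for the delayed TCP/AQM models can then be posed by asking for a Lyapunov-Krasovskii functional whose nonnegativity and decrease conditions are certified by SOS constraints.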

Pablo A. Parrilo (Automatic Control Laboratory, Swiss Federal Institute of Technology, ETH Zurich) parrilo@control.ee.ethz.ch http://control.ee.ethz.ch/~parrilo/

SOS Relaxations for System Analysis: Possibilities and Perspectives

We present an overview of the theory and applications of sum of squares (SOS) relaxations in system and robustness analysis. The developed techniques, based on convex optimization and results from real algebraic geometry, unify and generalize many well-known existing methods, such as those based on the S-procedure. The ideas and algorithms will be illustrated through examples, emphasizing recent developments as well as future challenges.
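
As one concrete instance of how SOS relaxations subsume the S-procedure (a generic statement, not an example from the talk): to certify that f(x) \ge 0 whenever g_1(x) \ge 0, ..., g_m(x) \ge 0, it suffices to find SOS polynomials \sigma_0, \sigma_1, ..., \sigma_m with f(x) = \sigma_0(x) + \sum_i \sigma_i(x) g_i(x). When f and the g_i are quadratic and the multipliers are restricted to nonnegative constants, this reduces to the classical S-procedure; allowing polynomial SOS multipliers yields a richer family of certificates that can still be searched for by semidefinite programming.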

Craig Partridge (BBN Technologies)  craig@bbn.com  http://www.ir.bbn.com/~craig/

Frequency Analysis of Protocols

In the past three years or so, several researchers have begun to use frequency analysis to try to understand features of data networks and their protocols. Their use of frequency techniques has varied widely, and in some cases we have evidence that frequency techniques work (in surprising ways) without a strong understanding of why they work.

In this talk I survey the work that has been done to date, discuss the apparent strengths and weaknesses of the various pieces of work, and point out the interesting open questions that I believe this work raises.
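
As a minimal, invented illustration of the kind of signal frequency techniques can pick up (not an example from the talk): a periodic protocol component buried in noisy traffic appears as a clear spectral peak in binned packet counts.

    # A ~10 Hz periodic component hidden in Poisson background traffic shows
    # up as the dominant peak in the spectrum of the binned packet counts.
    import numpy as np

    rate_hz = 100                                    # 10 ms bins
    t = np.arange(0, 60, 1.0 / rate_hz)              # one minute of binned counts
    background = np.random.default_rng(0).poisson(20, size=t.size)
    beacon = 5 * (np.sin(2 * np.pi * 10 * t) > 0.5)  # bursty ~10 Hz component
    counts = background + beacon

    spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
    freqs = np.fft.rfftfreq(counts.size, d=1.0 / rate_hz)
    print("dominant frequency (Hz):", freqs[spectrum.argmax()])   # ~10 Hz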

Stephen Prajna (Control and Dynamical Systems, California Institute of Technology) prajna@cds.caltech.edu

Relating Robustness and Complexity

Biologists and engineers often find the task of verifying the functionality of their systems onerous. They are frequently obliged to modify their ambitious inquiries, either because these inquiries are not well posed or because they are computationally prohibitive. Modifying the inquiries to render them simpler yet more meaningful and robust is an important aspect of this process, as it is more likely that the resulting questions will have easier answers with shorter proofs, and that small perturbations will not nullify the answers. When the questions have been carefully posed to avoid these complications, the difficulty in answering them has a profound, albeit not well understood, relationship to the system's robustness. The computational effort required in functionality verification is deeply connected to hidden fragilities that may lead to system failure, and this computational complexity indicates a bound on the maximum possible robustness. We will advocate and illustrate these points using the Number Partitioning Problem, a deceptively "simple" problem that is in fact NP-hard.
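
For concreteness (an illustrative script, not material from the talk), number partitioning asks whether a set of integers can be split into two halves with equal sums. A brute-force check examines all 2^n subsets; cleverer methods help on easy instances, but the problem is NP-hard in general, as the abstract notes.

    # Brute-force number partitioning: exponential in the number of items.
    from itertools import combinations

    def best_partition_gap(nums):
        total = sum(nums)
        best = total
        for r in range(len(nums) + 1):
            for subset in combinations(nums, r):     # 2^n candidate subsets
                best = min(best, abs(total - 2 * sum(subset)))
        return best   # 0 means a perfect split into equal-sum halves exists

    print(best_partition_gap([3, 1, 4, 2, 2]))   # -> 0, e.g. {3, 2, 1} vs {4, 2}
    print(best_partition_gap([8, 7, 6, 5, 4]))   # -> 0, {8, 7} vs {6, 5, 4}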

Michael A. Savageau (Department of Biomedical Engineering, and Microbiology Graduate Group, University of California, Davis, CA 95616 USA)  masavageau@ucdavis.edu

Robustness and Evolvability of Gene Circuitry

Robustness is observed at many different levels in biological systems. By robustness I mean that the system tends to continue performing its function despite changes in the parameters that define the relatively fixed nature of the system. Three explanations of this robustness have commonly been proposed. First, that there are redundant back-up mechanisms that take over the function when the original mechanism is compromised, much like the spare tire in the trunk of your car. Second, that robustness is a mathematical necessity, a result of the underlying mathematical model that governs biological systems. Third, that network topology produces robustness as a result of the numerous alternative routes that provide connectivity between the components of the network. Each of these explanations implies that robustness is basically a gratuitous property and that function and design by natural selection have little if any role to play. I will suggest two additional explanations for the robustness of biological systems -- thermodynamics and regulation -- in which consideration of function and design by natural selection plays a prominent role. I will also examine the robustness of alternative biological designs, which provides the context for a comparison of these explanations. These results demonstrate that system robustness is an important criterion for, and consequence of, design by natural selection.
