Mathematical Opportunities in Large-Scale Network Dynamics
August 6-7, 2001


Andre Broido and Kim C. Claffy (Cooperative Association for Internet Data Analysis (CAIDA), San Diego Supercomputer Center, University of California, San Diego)  kc@ipn.caida.org

Complexity of Global Routing Policies

In this paper we introduce a framework for analyzing BGP connectivity, and evaluate a number of new complexity measures for a union of core backbone BGP tables. Sensitive to engineering resource limitations of router memory and CPU cycles, we focus on techniques to estimate redundancy of the merged tables, in particular how many entries are essential for complete and correct routing.

We introduce the notion of policy atoms as part of a calculus for routing table analysis. We find that the number of atoms, and the individual counts of atoms with a given number of prefixes, scale properly with the Internet's growth and with filtering of prefixes by length. We show that the use of atoms can potentially reduce the number of route announcements by a factor of two, with all routing policies preserved. Atoms thus represent Internet properties accurately, yet with much smaller complexity.
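
As a rough illustration of the atom construction (the table representation and field names below are assumptions made for exposition, not CAIDA's actual data formats), prefixes can be grouped by their "policy signature", i.e., the vector of AS paths they receive across all vantage points; prefixes with identical signatures fall into the same atom.

```python
# Minimal sketch of grouping prefixes into policy atoms.
# Assumed input format: vantage point -> {prefix: AS path (tuple of ASNs)}.
from collections import defaultdict

def policy_atoms(tables):
    """Return groups of prefixes that receive identical treatment at every vantage point."""
    prefixes = set()
    for table in tables.values():
        prefixes.update(table)
    vps = sorted(tables)
    atoms = defaultdict(list)
    for prefix in prefixes:
        # Policy signature: the AS path seen from each vantage point
        # (None if the prefix is absent from that table).
        sig = tuple(tables[vp].get(prefix) for vp in vps)
        atoms[sig].append(prefix)
    return list(atoms.values())

if __name__ == "__main__":
    tables = {
        "vp1": {"10.0.0.0/8": (701, 1239), "10.1.0.0/16": (701, 1239), "192.0.2.0/24": (3356,)},
        "vp2": {"10.0.0.0/8": (7018, 1239), "10.1.0.0/16": (7018, 1239), "192.0.2.0/24": (7018, 3356)},
    }
    for atom in policy_atoms(tables):
        print(atom)   # 10.0.0.0/8 and 10.1.0.0/16 share a signature and form one atom
```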

Several of our analysis results suggest that commonly held Internet engineering beliefs require reconsideration. We find that more specific routes had a relatively constant share of routes in backbone tables across 2000/2001. On the other hand, the churn of more specific routes was much larger than that of top prefixes. We also find that deaggregation of existing announcements is a second major source (beyond announcement of recently allocated address space) of new top (least specific) prefixes in global BGP tables. We also provide examples of misconfiguration and noise in BGP data, including multi-origin prefixes, AS paths with apparent routing loops (some of them due to typographical errors, others actual loops undetected by local BGP speakers), and inadvertent transit through customer ASes.
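
Two of the noise checks mentioned above can be sketched in a few lines; the route representation here is an illustrative assumption. A prefix is flagged as multi-origin when different routes end its AS path at different origin ASes, and a path shows an apparent loop when an AS reappears after an intervening AS (consecutive repeats are ordinary AS-path prepending).

```python
# Hedged sketch of two BGP table sanity checks: multi-origin prefixes and
# apparent AS-path loops. Assumed input: iterable of (prefix, as_path) pairs,
# with as_path a tuple of ASNs ending at the origin AS.
from collections import defaultdict
from itertools import groupby

def multi_origin_prefixes(routes):
    origins = defaultdict(set)
    for prefix, as_path in routes:
        if as_path:
            origins[prefix].add(as_path[-1])      # last AS in the path is the origin
    return {p: o for p, o in origins.items() if len(o) > 1}

def has_apparent_loop(as_path):
    """True if some AS occurs twice with a different AS in between."""
    collapsed = [asn for asn, _ in groupby(as_path)]   # collapse prepending runs
    return len(collapsed) != len(set(collapsed))

if __name__ == "__main__":
    routes = [("10.0.0.0/8", (701, 1239)), ("10.0.0.0/8", (7018, 3561)),
              ("192.0.2.0/24", (701, 3356, 701, 3356))]
    print(multi_origin_prefixes(routes))          # {'10.0.0.0/8': {1239, 3561}}
    print(has_apparent_loop((701, 3356, 701)))    # True
```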

Andre Broido and Kim C. Claffy (Cooperative Association for Internet Data Analysis (CAIDA), San Diego Supercomputer Center, University of California, San Diego)  kc@ipn.caida.org

Internet Topology: Connectivity of IP Graphs

In this paper we introduce a framework for analyzing local properties of Internet connectivity. We compare BGP and probed topology data, finding that currently probed topology data yields much denser coverage of AS-level connectivity. We describe data acquisition and construction of several IP-level graphs derived from a collection of 220M skitter traceroutes. We find that a graph consisting of IP nodes and links contains 90.5% of its 629K nodes in the acyclic subgraph. In particular, 55% of the IP nodes are in trees. Full bidirectional connectivity is observed for a giant component containing 8.3% of IP nodes.

We analyze the same structures (trees, acyclic part, core, giant component) for other combinatorial models of Internet (IP-level) topology, including arc graphs and place-holder graphs. We also show that the Weibull distribution N{X > x} = a exp(-(x/b)^c) approximates the outdegree distribution with 10-15% relative accuracy in the region of generic object sizes, spanning two to three orders of magnitude up to the point where sizes become unique.
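
A minimal sketch of fitting the quoted Weibull form to an empirical outdegree distribution might look like the following; the degree data here are synthetic placeholders, not the skitter-derived degrees.

```python
# Fit the stretched-exponential (Weibull) form N{X > x} = a * exp(-(x/b)^c)
# to the empirical complementary cumulative count of node outdegrees.
import numpy as np
from scipy.optimize import curve_fit

def weibull_ccdf(x, a, b, c):
    return a * np.exp(-(x / b) ** c)

def fit_outdegrees(degrees):
    degrees = np.asarray(degrees)
    x = np.unique(degrees)
    n_gt = np.array([(degrees > v).sum() for v in x])   # N{X > x}
    mask = n_gt > 0                                     # drop the empty tail point
    popt, _ = curve_fit(weibull_ccdf, x[mask], n_gt[mask],
                        p0=(len(degrees), np.median(degrees), 1.0),
                        bounds=(1e-9, np.inf))
    return popt                                         # fitted (a, b, c)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    degrees = (rng.weibull(0.6, 50_000) * 5 + 1).astype(int)   # synthetic outdegrees
    a, b, c = fit_outdegrees(degrees)
    print(f"a = {a:.0f}, b = {b:.2f}, c = {c:.2f}")
```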

The extended version of this paper includes dynamic and functorial properties of Internet topology, including properties of and diffusion on aggregated graphs, invariance of a reachability function's shape regardless of node choice or aggregation level, and analysis of topological resilience under a wide range of scenarios. We also demonstrate that the Weibull distribution provides a good fit to a variety of local object sizes.

David L. Donoho (Department of Statistics, Stanford University)  donoho@stanford.edu

Multiscale Stepping Stone Detection    Slides:   pdf

Joint work with Vern Paxson and Umesh Shankar (ACIRI & Lawrence Berkeley Labs), and Stuart Staniford, Jason Coit, and Gary Grim (Silicon Defense).

We discuss the problem of detecting network intruders who use universities and similar facilities as third-party launching pads (stepping stones) for attacks against other facilities (the attack seems to be coming from the university, not the true source). Staniford, Paxson and colleagues have been developing tools that allow one to recognize that an interactive stream coming into the university and another interactive stream coming out of the university are the same (modulo IP delays). The most recent development is an idea to defeat the intruder who tries to "mask" the interactive session by locally jittering packet arrival times so they won't be identical. By using wavelets we can still detect the similarity of the streams at an appropriately long time scale and so, in theory, defeat this countermeasure.
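
A toy version of the multiscale idea (not the authors' actual detector): bin each stream's packet timestamps into activity counts and correlate the two series at successively coarser time scales. Local jitter destroys the fine-scale match but leaves the coarse-scale correlation largely intact.

```python
# Illustration only: correlation of two interactive streams' activity at
# several time scales. Streams, jitter model, and scales are assumptions.
import numpy as np

def activity_series(timestamps, bin_size, horizon):
    bins = np.arange(0.0, horizon + bin_size, bin_size)
    counts, _ = np.histogram(timestamps, bins=bins)
    return counts.astype(float)

def multiscale_correlation(ts_in, ts_out, horizon, scales=(0.05, 0.2, 1.0, 5.0)):
    """Correlation of the two streams' activity at successively coarser time scales."""
    out = {}
    for s in scales:
        a = activity_series(ts_in, s, horizon)
        b = activity_series(ts_out, s, horizon)
        if a.std() > 0 and b.std() > 0:
            out[s] = float(np.corrcoef(a, b)[0, 1])
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t_in = np.sort(rng.uniform(0, 600, 2000))        # keystrokes over 10 minutes
    t_out = t_in + rng.uniform(0, 2.0, t_in.size)    # relayed copy with local jitter
    print(multiscale_correlation(t_in, t_out, 600))  # correlation rises with scale
```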

John Doyle (Control Theory and Dynamic Systems, California Institute of Technology)  doyle@cds.caltech.edu

Internet Coding and Control        Slides:    powerpoint    html    pdf (4MB)

Two of the great abstractions of the 20th century were the separation, in both theory and applications, of 1) controls, communications, and computing from each other, and 2) the systems level from its underlying physical substrate. This horizontal and vertical isolation of systems held both in practical applications and in academic research. It facilitated massively parallel, wildly successful, explosive growth in both mathematical theory and technology, but left many fundamental problems unresolved and a poor foundation for future systems of systems in which these elements must be integrated. For example, congestion control and routing in the current TCP/IP protocol suite are based on classical control methods which are extended in purely heuristic ways to the distributed internetworking setting. Improving the performance of existing networks as well as future networks of networks employing ubiquitous controls and computing will stretch these abstractions past their breaking point. While a unified theory of systems has been an appealing intellectual challenge for decades, it has only recently become both an urgent technological challenge and a tangibly reachable research objective.

The unifying theme in this talk is the new concept of Highly Optimized Tolerance (HOT) that arises when deliberate robust design aims for a specific level of tolerance to uncertainty. The resulting "robust, yet fragile" features of HOT systems are high performance and high throughput, but potentially high sensitivities to design flaws and unanticipated or rare events. HOT provides a framework in which the previously fragmented mathematical tools of robust control, communications, computation, dynamical systems, and statistical physics are beginning to be unified and brought to bear on a variety of applications. For example, congestion due to bursty Internet traffic can be traced to HOT design of web layouts and protocols, a generalization of source coding that suggests novel protocol designs. This treatment of Internet "source coding" is complemented by new robust and scalable congestion control strategies that provide high QoS (Quality of Service) for bursty traffic with minimal changes at the IP layer and below. We hope this will lead not only to better understanding and control of Internet traffic, but will also facilitate distributed control of dynamical systems using networks. Similar insights have been obtained in domains as diverse as shear flow turbulence, biological signal transduction and gene regulation, forest ecology, cascading failures in power grids, financial market volatility, and the ubiquity of power laws in technological and natural systems.

John Lavery (Computing and Information Sciences Division, Army Research Office, Army Research Laboratory)  Lavery@arl.aro.army.mil

The Applied Analysis Program at the Army Research Office    html    pdf (339KB)

The Applied Analysis Program at the Army Research Office supports research in mathematical analysis for advanced complex materials, inverse scattering in complex media, modeling of multiscale objects and functions, nonlinear dynamics for communication, data fusion in complex networks, dynamics of distributed networks of embedded sensors and actuators and additional areas of opportunity. Funding for single-investigator projects is low but no longer declining. Funding for large multidisciplinary initiatives is steady and much larger than funding for single-investigator projects. The Program will keep a physics-based component but expects to expand its support for information-based research, including large-scale network dynamics for analysis, design and defense of wired and wireless networks.

Steven Low (Caltech) slow@caltech.edu

Equilibrium and Dynamics of TCP/AQM

Congestion control of the Internet is performed by the TCP protocol at traffic sources in concert with active queue management (AQM) algorithms at network links. We interpret TCP/AQM as carrying out a distributed primal-dual algorithm over the Internet to maximize aggregate source utility. Different protocols correspond to different optimization algorithms solving the same prototype problem with different utility functions, and we derive these utility functions explicitly. We describe a linear model to study the dynamics of the system around equilibrium. It suggests that the current protocol will become unstable (oscillatory) as delay or capacity increases. We propose a scalable protocol that maintains stability for arbitrary delay, capacity and routing.

(Joint work with John Doyle (Caltech) and Fernando Paganini (UCLA).)
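
A hedged sketch of the prototype optimization problem, using illustrative log utilities and a toy two-link topology (not the specific utility functions or protocol derived in the talk): links adjust a congestion price toward matching demand to capacity, and each source picks the rate at which its marginal utility equals its path price.

```python
# Distributed primal-dual iteration for utility maximization subject to link
# capacities. Utilities U_i(x) = w_i * log(x), step size, and topology are
# illustrative assumptions.
import numpy as np

def primal_dual(R, c, w, gamma=0.01, iters=5000, x_max=5.0):
    """R: links x sources routing matrix (0/1); c: link capacities; w: utility weights."""
    p = np.ones(R.shape[0])                              # link prices (dual variables)
    for _ in range(iters):
        q = R.T @ p                                      # path price seen by each source
        x = np.minimum(w / np.maximum(q, 1e-6), x_max)   # solve U_i'(x_i) = q_i, capped
        y = R @ x                                        # aggregate rate on each link
        p = np.maximum(p + gamma * (y - c), 0.0)         # price rises where demand exceeds capacity
    return x, p

if __name__ == "__main__":
    # Two links in series; source 0 uses both, sources 1 and 2 use one each.
    R = np.array([[1, 1, 0],
                  [1, 0, 1]], dtype=float)
    c = np.array([1.0, 2.0])
    w = np.array([1.0, 1.0, 1.0])
    x, p = primal_dual(R, c, w)
    print("rates:", np.round(x, 3), "prices:", np.round(p, 3))
```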

J.S. Marron (Cornell University and University of North Carolina)  marron@email.unc.edu

Zooming Statistics: A Multiscale Look at Internet Traffic Data

Joint work with Jan Hannig (Colorado State University) and Rolf Riedi (Rice University).

Scale of statistical analysis is an important topic for Internet traffic data. A seminal body of work has revealed that apparent heavy-tailed TCP connection distributions result in a host of phenomena related to long range dependence and self similarity, indicating that the exponential/Poisson models at the heart of classical queueing theory are quite inappropriate. More recent purely empirical work suggests that at fine time scales, classical models are appropriate near the center of the Internet. These issues are empirically illustrated, and the apparent contradictions are resolved, using zooming autocovariance and zooming SiZer analyses. If time permits, some novel views of "stationarity" and "heavy tails" will also be presented.
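
As a simple illustration of looking at traffic statistics across scales (this is not SiZer itself, and the data are synthetic), one can aggregate a count series into blocks of increasing size and track how the variance of the block means decays: roughly 1/m for Poisson-like traffic, and more slowly under long-range dependence.

```python
# Variance-time style "zooming": variance of block means at increasing block sizes.
# The Poisson series below is a short-range-dependent reference, not real traffic.
import numpy as np

def variance_time(counts, block_sizes):
    counts = np.asarray(counts, dtype=float)
    out = {}
    for m in block_sizes:
        n = (len(counts) // m) * m
        if n == 0:
            continue
        block_means = counts[:n].reshape(-1, m).mean(axis=1)
        out[m] = float(block_means.var())
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    poisson = rng.poisson(10, 2 ** 16)
    for m, v in variance_time(poisson, [1, 4, 16, 64, 256]).items():
        print(f"m={m:4d}  var={v:.3f}")     # roughly var(1)/m for this series
```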

Balaji Prabhakar (Departments of Electrical Engineering & Computer Science, Stanford University)  balaji@isl.stanford.edu

Simple, Scaleable Network Algorithms

Several networking problems suffer from the "curse of dimensionality": algorithmic solutions do not scale well with the size of the user population or with the speed of operation. There is often a need to exploit features of the problem to devise simple-to-implement, scaleable algorithms.

This talk illustrates the use of randomization, parallelism and "memory" to design algorithms for web caching, switch scheduling and bandwidth partitioning.

It represents joint work with P. Giaccone, R. Pan, K. Psounis and D. Shah.
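
One way to picture the combination of randomization and memory, sketched here for switch scheduling (an illustration of the design principle only, not the specific algorithms presented): each time slot, sample one random matching of inputs to outputs, compare its total queued weight with the matching used in the previous slot, and keep the heavier of the two.

```python
# Randomized scheduling with memory: one random candidate per slot, kept only if
# it beats the previously used matching. Queue values below are made up.
import random

def random_matching(n):
    outputs = list(range(n))
    random.shuffle(outputs)
    return list(enumerate(outputs))            # [(input, output), ...]

def weight(matching, queues):
    return sum(queues[i][j] for i, j in matching)

def schedule(queues, prev_matching=None):
    """queues: n x n virtual-output-queue lengths. Returns the matching to serve."""
    n = len(queues)
    candidate = random_matching(n)
    if prev_matching is None or weight(candidate, queues) >= weight(prev_matching, queues):
        return candidate
    return prev_matching

if __name__ == "__main__":
    q = [[3, 0, 1], [0, 5, 2], [4, 1, 0]]
    m = None
    for _ in range(10):                        # weight of the kept matching only improves
        m = schedule(q, m)
    print(m, weight(m, q))
```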

Frederick Serr (Director, Systems & Capacity Analysis, Genuity, Inc.)  fserr@genuity.com

Network Traffic and Capacity    Slides:   html    pdf

Bill St. Arnaud (Senior Director, Network Projects, CANARIE Inc.)  bill.st.arnaud@canarie.ca

Some Observations on Network Traffic Growth and Scaling    Slides:  html    pdf

Christopher Stark (Mathematical Sciences, National Science Foundation)  cstark@nsf.gov

Funding Opportunities at the NSF

This presentation will review some of the funding mechanisms for research related to the workshop on "Mathematical Opportunities in Large-Scale Network Dynamics" and some of the recent experience of the Division of Mathematical Sciences with those competitions.

Don Towsley (Department of Computer Science, University of Massachusetts at Amherst)  towsley@newworld.cs.umass.edu

Some Challenges Facing Network Practitioners    pdf

The Internet continues to grow and evolve at a rapid pace. Consisting of 10,000 networks made up of several hundred thousand routers, it supports a rich variety of applications, including the Web with text, images, audio and video, network telephony, and more. The underlying technologies also continue to expand, ranging from low bandwidth, lossy wireless links to multi-gigabit per second optical links; from Web server farms, to overlay networks (e.g., Akamai), to peer-to-peer networks (e.g., Napster). When these pieces are put together, we confront a large, heterogeneous, complex system whose behavior defies simple explanation. This talk will present some important issues and challenges that have not yet been addressed in a satisfactory manner, with the aim of stimulating the development of new mathematical theories and frameworks, and the application of existing mathematical ideas, to help networkers master these issues. These issues and challenges include:

  • how to take measurements and how to use them to draw inferences regarding network workloads and behavior

  • how to model and control the behavior of large, heterogeneous networks

  • how to design wireless networks

  • how to model and design overlay and peer-to-peer networks

This list is not intended to be exhaustive but representative of current interests and trends in networking, with examples taken from the speaker's work.

Walter Willinger (AT&T Labs-Research)   walter@research.att.com

Scaling Phenomena in the Internet: Critically Examining Criticality

Recent empirical discoveries concerning various scaling properties of the temporal dynamics of Internet traffic (i.e., self-similar scaling behavior) or of some of the topological features associated with the physical structure of the Internet (e.g., scale-free AS connectivity graphs) have resulted in a number of proposed models or "explanations" of such "emergent" phenomena. Many of these explanations invoke concepts such as fractals, chaos, or self-organized criticality, mainly because these concepts are closely associated with scale-invariance and power laws.

In this talk, I will present a simple framework for identifying those explanations (and models) that are relevant for the networking application at hand and can provide a basis for developing a useful, consistent, and verifiable theory of large-scale communication networks such as the Internet. This proposed framework for Internet modeling is data-driven, exploits fully the many facets of the available network measurements, and relies heavily on our ability to identify mathematical constructions that can accommodate the highly engineered nature of today's Internet. Using the proposed framework, I will show that while some recently proposed models of or "explanations" for Internet-related scaling phenomena that rely on the popular theory of self-organized criticality are able to produce the observed "emergent" phenomena, they are not the explanations of why these phenomena arise in the Internet (even though they claim otherwise).
