Abstracts and Talk Materials
Blackwell-Tapia Conference
November 3 - 4, 2006


Asheber Abebe (Auburn University)
http://www.auburn.edu/~abebeas

Discriminant analysis based on statistical depth functions

We will consider the problem of identifying the most likely source of a multivariate data point from among several multivariate populations. The use of statistical depth functions for solving this classification problem will be discussed. Statistical depth functions provide a center-outward ordering of points in a multivariate data cloud and hence can be considered to be multivariate analogues of ranks. Specifically, classification through maximizing the estimated transvariation probability of statistical depths is proposed. Considering elliptically symmetric populations, it will be illustrated that these new classification techniques provide lower misclassification error rates in the case of heavy tailed distributions.

This is joint work with Nedret Billor, Asuman Turkmen and Sai Nudurupati.
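
For illustration only, a minimal sketch of depth-based classification in Python, using Mahalanobis depth and a simple maximum-depth rule rather than the transvariation-probability rule proposed in the talk; all names and data below are hypothetical.

    # Illustrative maximum-depth classifier (not the authors' transvariation rule).
    import numpy as np

    def mahalanobis_depth(x, sample):
        # Larger depth = more central within the sample cloud.
        mu = sample.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
        d2 = (x - mu) @ cov_inv @ (x - mu)
        return 1.0 / (1.0 + d2)

    def classify(x, populations):
        # Assign x to the population in which it is deepest.
        depths = [mahalanobis_depth(x, pop) for pop in populations]
        return int(np.argmax(depths))

    rng = np.random.default_rng(0)
    pop0 = rng.multivariate_normal([0, 0], np.eye(2), size=200)
    pop1 = rng.multivariate_normal([3, 3], np.eye(2), size=200)
    print(classify(np.array([2.5, 2.8]), [pop0, pop1]))  # expected: 1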

Alejandro Aceves (University of New Mexico)
www.math.unm.edu/~aceves

Nonlinear interaction of light in disordered optical fiber arrays

Light propagation in coupled fiber arrays is described by a balance of diffraction and nonlinearity. At high intensities, light is localized as a nonlinear mode propagating in a few fibers. Imperfections in the manufacturing of such fiber arrays account for multiplicative noise in the governing equations. Here we analyze how this noise affects the phenomena of linear (Anderson-like) and nonlinear localization.

Javier Armendariz (Johns Hopkins University)

JHU Applied Physics Lab - Aviation systems engineering group overview

The Aviation Systems Engineering Group at JHU/APL conducts systems engineering and analysis to support the development and operational employment of military aviation systems. In this endeavor, technical requirements and enabling technologies are identified that relate to operational requirements and operational concepts. The group strives to maintain expertise in air defense threat characterization and to analyze the survivability and effectiveness of current and future military aviation systems. To this end we are involved in a wide array of projects encompassing many technical disciplines.

Francisco Barahona (IBM)
Raymond Beaulieu (National Security Agency)
Fern Yvette Hunt (National Institute of Standards and Technology)
Overtoun Jenda (Auburn University)
http://www.dms.auburn.edu/~jendaov/

Panel discussion on career opportunities in the mathematical sciences
November 4, 2006


Kanadpriya Basu (University of South Carolina)
Maria Cristina Villalobos (University of Texas Pan American)

Modelling faculty teaching workload as a linear program

We present an assignment problem that distributes classes among instructors in the Mathematics department. Currently, the Director of Scheduling assigns about 190 classes to 60 instructors using a manual process of trial and error, considering, for example, an instructor's teaching workload and class preferences. However, this process is quite time-consuming. Therefore, we model the problem as a linear program with binary variables. The results are presented for Fall 2006.
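
A minimal sketch of such a binary formulation, with hypothetical notation not taken from the talk (x_{ij} = 1 if instructor i teaches class j, p_{ij} a preference score, h_j the credit load of class j, and w_i the workload limit of instructor i):

    \max \sum_{i=1}^{60} \sum_{j=1}^{190} p_{ij} x_{ij}
    \text{subject to } \sum_{i} x_{ij} = 1 \ \forall j, \qquad
    \sum_{j} h_j x_{ij} \le w_i \ \forall i, \qquad x_{ij} \in \{0,1\}.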

Manuel Berriozábal (University of Texas)

Texas prefreshman engineering program: Closing the gap for minorities in science and engineering

The Texas Prefreshman Engineering Program (TexPREP) started in the summer of 1979 at the University of Texas at San Antonio. It is a seven- to eight-week summer mathematics-based academic enrichment program designed to prepare middle school and high school students for college studies in science and engineering. The program focuses on the development of abstract reasoning and problem solving skills through the mastery of academic content. Since the program started, over 24,000 students have completed at least one summer component of PREP. At least 75% of the students have come from minority groups underrepresented in science and engineering and over 50% have been women. Of the 11,000 former students who are of college age, 6,500 responded to the 2005 annual survey. The following is a summary of the results:
  • 99.9% graduated from high school;
  • 97% are college students (3,300) or senior college graduates (3,000);
  • The senior college graduation rate is 80%;
  • 78% of the college graduates are underrepresented minorities;
  • 50% of the college graduates are science, mathematics, or engineering majors;
  • 74% of the science, mathematics, and engineering graduates are underrepresented minorities.
The 2006 Program served over 2,600 students on 21 Texas college campuses and 6 college campuses in other states and Puerto Rico.

Nelson Butuk (Prairie View A&M University)

Accurate computation of second order derivatives using complex variables

In this presentation, the complex variables method of computing accurate first derivatives is combined with an approximation method to calculate second-order derivatives efficiently. The complex variables method is somewhat similar to the automatic differentiation technique, implemented in the popular software tool ADIFOR, for obtaining sensitivities (derivatives) from source codes. Applying automatic differentiation to an existing source code (one that evaluates output functions) automatically generates another source code that can be used to evaluate both the output functions and their derivatives with respect to specified code inputs or internal parameters. The pre-compiler software tool ADIFOR is usually used to obtain derivatives from CFD and grid generation codes. The complex variables (CV) approach, on the other hand, is simpler and easier to implement, although its current implementation only computes first-order derivatives accurately. Current methods of computing second-order derivatives are based on the construction of appropriate meshes in a given domain, to which some form of Taylor expansion scheme is applied to obtain the desired derivatives. The problem with this approach is that only the function, and not its partial derivatives, is continuous across meshes; because of this, the computed second-order derivatives are usually inaccurate. The new method to be presented addresses this issue by combining the CV method with an accurate, efficient approximation method.
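
The first-order complex variables (complex-step) formula referred to above can be stated and tested in a few lines; this sketch is illustrative and does not reproduce the second-order combination described in the presentation.

    # Complex-step approximation of f'(x): no subtractive cancellation, so the
    # step h can be taken extremely small without loss of accuracy.
    import numpy as np

    def complex_step_derivative(f, x, h=1e-20):
        return np.imag(f(x + 1j * h)) / h

    f = lambda x: np.exp(x) * np.sin(x)
    x0 = 0.7
    exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
    print(complex_step_derivative(f, x0), exact)  # agree to machine precision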

Luis Enrique Carrillo Díaz (Universidad Nacional Mayor de San Marcos)
Roxana Lopez-Cruz (Universidad Nacional Mayor de San Marcos)
http://mathpost.asu.edu/~roxana/

Research Institute of Mathematical Sciences

The Research Institute of Mathematical Sciences conducts research in pure and applied mathematics, statistics, computer science, and operations research. One of the goals of the Institute is to promote international cooperation that supports research among the members of our Institute and other institutions around the world. PESQUIMAT is the Institute's journal, charged with disseminating the research of our members. http://matematicas.unmsm.edu.pe/

Edward Castillo (Rice University)
www.caam.rice.edu/~ec

Registration of 4D CT lung images

In collaboration with Guerrero et al from MD Anderson Cancer Center, we are developing a new method for accurate registration of 4D CT lung images which accounts for: (1) the compressible nature of the lungs, (2) noise in the images, (3) the high computational workload required to register 4D CT image sets.

In order to account for lung compressibility, voxel displacement is modeled by the conservation of mass equation. Secondly, the effects of noise are alleviated by applying the local-global approach of Weickert et al. to the conservation of mass setting. Finally, the resulting large scale linear systems are solved using a parallelizable, preconditioned conjugate gradient algorithm.
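
As a point of reference only (not necessarily the authors' exact formulation), mass-conserving registration replaces the usual brightness-constancy constraint of optical flow with the continuity equation, so that intensity is allowed to change as lung tissue compresses:

    I_t + \nabla I \cdot v = 0 \quad \text{(brightness constancy)}, \qquad
    I_t + \nabla \cdot (I\, v) = 0 \quad \text{(conservation of mass)},

where I denotes image intensity and v the voxel displacement (velocity) field.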

The new method has been implemented in serial and tested on two-dimensional synthetic images with promising results.

Farrah J. Chandler (University of North Carolina)
http://www.uncw.edu/math/faculty/about-faculty-chandler.html
Shirley M. Malcom
http://www.aaas.org/ScienceTalk/malcom.shtml
David Manderscheid (University of Nebraska)
William Yslas Vélez (University of Arizona)
http://math.arizona.edu/~velez

Panel discussion: Best practices for recruitment and retention of under-represented minorities in the mathematical sciences
November 3, 2006


Gerardo Chowell (Los Alamos National Laboratory)
http://math.lanl.gov/~gchowell/

Transmission and control of seasonal and pandemic influenza

Recurrent epidemics of influenza are observed seasonally around the world, with considerable health and economic consequences. Major changes in the influenza virus composition through antigenic shifts can give rise to pandemics. The reproduction number provides a measure of the transmissibility of influenza. We estimated the reproduction number across influenza seasons in the United States, France, and Australia for the last three decades. With regard to pandemic influenza, we estimated the reproduction number for the first two epidemic waves of the 1918 influenza pandemic in Geneva, Switzerland. I will discuss the public health implications of our findings in terms of controlling regular influenza epidemics and an influenza pandemic of comparable magnitude to that of 1918.
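
For orientation only (the estimates described above are obtained from epidemic data rather than from this identity), in a simple SIR model with transmission rate \beta and mean infectious period 1/\gamma, the reproduction number is related to the early exponential growth rate r by

    R_0 = \frac{\beta}{\gamma} = 1 + \frac{r}{\gamma},

so an estimate of r from the rise of an epidemic curve yields an estimate of R_0 once the infectious period is specified.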

Erhan Cinlar (Princeton University)

Jump-diffusions
November 4, 2006

For Hunt processes with jumps, we seek a treatment that concentrates on the jumps. The idea is to use a generalized version of the renewal theory (to which Blackwell was a seminal contributor). Embedded at the jump times, there are Markov renewal processes (with continuous state space) that decompose the original process into a sequence of diffusions. Then, the original resolvent can be written as the potential operator of a Markov chain acting on the resolvent of a diffusion. Similar decompositions are possible for hitting distributions and the transition semigroup. Theoretically, our method reduces a jump diffusion to a combination of diffusions and Markov chains.

Minerva Cordero-Epperson (University of Texas)
http://www2.uta.edu/math/cordero

A new semifield of order 3^6

A (finite) semifield is a non-associative division ring; the associated projective plane is called a semifield plane. The first semifields were defined by Dickson in the early 1900s; in the 1960s several new classes were introduced, including the twisted fields defined by Albert. In this poster we will give a historical development of finite semifields. We will present the development in the last decade, including a new semifield recently constructed by the author.

Ricardo Cortez (Tulane University)

Computation of biological flows
November 4, 2006

Biological systems often include very interesting fluid flows that arise from the interaction of a fluid with an external source of force. Examples are the motion of micro-organisms, such as bacteria, that propel themselves by moving their flagella, the motion of cells, and the motion generated by cilia beating in the lungs. The common theme is the interaction between the fluid and an elastic membrane or filament. Numerical models of these motions must compute the motion of the membranes and the fluid simultaneously. This talk highlights the use of Regularization Methods for these problems, a methodology that has shown promising results and that continues to expand. Examples of the computations will be shown.

Carla Cotwright (Wake Forest University)
http://www.math.wfu.edu/Faculty/Cotwright.html

Clones in minors of matroids

Results that relate clones in a matroid to minors of that matroid are given. Also, matroids that contain few clonal-classes are characterized. These results are related to several results from the literature such as Tutte's Excluded-Minor characterization of the binary matroids.

Joint work with T. James Reid.

Diana Dalbotten (University of Minnesota, Twin Cities)

Mathematics and its application to modeling the earth's surface

Students with a Mathematics or Physics degree who wish to apply their abstract skills in a concrete way are invited to investigate the National Center for Earth-surface Dynamics. This multidisciplinary center examines the Earth's surface quantitatively, using computer models, field studies, and laboratory experiments to investigate channels and channel dynamics.

Adewale Faparusi (Texas A & M University)

The fixed charge network flow problem

The fixed charge network flow problem (FCNFP) is NP-hard and has various practical applications including transportation, network design, communication, and production scheduling. More work has been done on the development of algorithms for specific variants of the FCNFP than on the generalized problem. Various formulations and exact and heuristic methods for solving the FCNFP are reviewed.
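
A standard mixed-integer formulation of the FCNFP, shown for orientation with notation not drawn from the abstract (x_{ij} is the flow on arc (i,j), y_{ij} indicates whether the arc is opened, c_{ij} and f_{ij} are the variable and fixed costs, u_{ij} the capacity, and b_i the supply or demand at node i):

    \min \sum_{(i,j) \in A} \big( c_{ij} x_{ij} + f_{ij} y_{ij} \big)
    \text{subject to } \sum_{j} x_{ij} - \sum_{j} x_{ji} = b_i \ \forall i \in N, \qquad
    0 \le x_{ij} \le u_{ij} y_{ij}, \qquad y_{ij} \in \{0,1\}.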

Sean C. Garrick (University of Minnesota, Twin Cities)

Probabilistic and stochastic modeling of turbulent flows

The transport of a wide variety of phenomena in turbulent flows (heat, mass, momentum, species, etc.) is a significant challenge to computational scientists and engineers working in chemical processing, pharmaceuticals, materials synthesis, and atmospheric physics, to name a few. Capturing the variety of length and time scales manifest in these flows leads to compute times which are impractical at best and infeasible at worst. In this seminar, I will present some ideas and recent work on the modeling of multi-scale transport phenomena and the probabilistic and stochastic tools used in their description.

Nancy Glenn (University of South Carolina)
www.stat.sc.edu/~nglenn

EL algorithm for linear models with missing data

Linear regression is one of the most widely used statistical techniques. However, there is often a problem of missing response variables in practical applications. The expectation maximization (EM) algorithm is a general iterative algorithm for the analysis of missing data; but it relies on parametric assumptions that are usually not met. We present a nonparametric algorithm--the empirical likelihood (EL) algorithm for linear models with missing data. The EL algorithm's advantage is that it makes no assumptions regarding the form of the underlying distribution of the data. We construct confidence intervals for the mean response in the presence of missing responses. We also discuss the power and efficiency of confidence intervals constructed when using the EL algorithm to replace missing responses.
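
For background (this is Owen's standard empirical likelihood for a mean, not the EL algorithm itself), the empirical likelihood ratio for a candidate mean \mu is

    R(\mu) = \max\Big\{ \prod_{i=1}^{n} n w_i : \sum_{i} w_i X_i = \mu,\ w_i \ge 0,\ \sum_{i} w_i = 1 \Big\},

and confidence intervals are obtained from the chi-square calibration -2 \log R(\mu) \le \chi^2_{1,1-\alpha}.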

Edray Herber Goins (Purdue University)
http://homepage.mac.com/ehgoins

Why should I care about Lie groups?

Sometimes differential equations have an obvious symmetry which leads to a natural guess for its solution. The Norwegian mathematician Marius Sophus Lie (1842-1899) spent most of his career attempting to generalize ideas of fellow Norwegian Niels Henrik Abel (1802-1829) from discrete groups of symmetries of algebraic objects to continuous groups of symmetries of topological objects. In the process, Lie created a new branch of mathematics which united differential geometry and abstract algebra.

In this talk, we give a brief introduction to the pulchritude of Lie's ideas. From the geometric nature of manifolds to the analytic nature of differential equations, we discuss the natural group action of the space of vector fields of a manifold on itself. We conclude the talk with a discussion of the computation of the Lie group of the real line.

Illya V. Hicks (Texas A & M University)
http://ie.tamu.edu/people/faculty/Hicks

Branch decomposition techniques for discrete optimization
November 3, 2006

This talk gives a general overview of an emerging technique for discrete optimization that has footholds in mathematics, computer science, and operations research: branch decompositions. Branch decompositions, along with their associated connectivity invariant, branchwidth, were first introduced to aid in proving the Graph Minors Theorem, a well-known conjecture (Wagner's conjecture) in graph theory. The algorithmic importance of branch decompositions for solving NP-hard problems modeled on graphs was first realized by computer scientists. The dynamic programming techniques utilizing branch decompositions, called branch decomposition based algorithms, fall into a class of algorithms known as fixed-parameter tractable algorithms, and this talk will highlight the computational effectiveness of these algorithms in a practical setting for NP-hard problems such as the travelling salesman problem, general minor containment, and the branchwidth problem.

Fern Yvette Hunt (National Institute of Standards and Technology)

Mathematical modelling at NIST: An example

Fluorescent stains and dyes are widely used to visualize biological structure and function on the cellular and sub-cellular level. The photodegradation of fluorescent particles (fluorophores) is an extremely important issue for biomedical and biotechnology applications because the sensitivity and the accuracy of the quantitative information conveyed by assays using them depend on fluorophore photostability. Recently the presenter and Dr. Adolfas Gaigalas of NIST developed a mathematical model of an experimental method for measuring photodegradation. The model is a set of coupled partial differential equations that describe the kinetics of photodegradation and the flow of fluorophores through the experimental apparatus. Using singular perturbation techniques, the model is reduced to a dramatically simpler and experimentally accessible ordinary differential equation. The latter can be used to interpret and fit the experimental measurements, thus providing a quantitative characterization of photostability.

Christopher K. R. T. Jones (University of North Carolina)
http://www.math.unc.edu/Faculty/jones/

Statistical and Applied Mathematical Sciences Institute

Come learn about opportunities at SAMSI.

Donald King (Northeastern University)

Spherical nilpotent orbits of reductive Lie groups: an overview

The vector space of complex symmetric n×n matrices is preserved by conjugation with complex n×n orthogonal matrices. Conjugacy classes (orbits) of height two nilpotent symmetric matrices have many pleasant properties, and give insights into the structure of interesting irreducible unitary representations of SL(n, R), the group of real n×n matrices of determinant one. If we replace SL(n, R) by a general reductive Lie group G, then its spherical nilpotent orbits have similar properties, and carry similar information about some of the irreducible unitary representations of G.

Rachel Kuske (University of British Columbia)
http://www.math.ubc.ca/~rachel

AWM Mentor Network

At present, the goal of the Association for Women in Mathematics (AWM) Mentor Network is to match mentors, both men and women, with girls and women who are interested in mathematics or are pursuing careers in mathematics. The network is intended to link mentors with a variety of groups: recent PhDs, graduate students, undergraduates, high school and grade school students, and teachers. Matching is based on common interests in careers in academia or industry, math education, balance of career and family, or general mathematical interests. Following increased support from the math institutes, we are considering expanding the Mentor Network to other under-represented groups in mathematics. All who are interested in participating in this expansion are encouraged to discuss the possibility at the conference.

Rachel Kuske (University of British Columbia)
http://www.math.ubc.ca/~rachel

American Institute of Mathematics

AIM, the American Institute of Mathematics, would like to bring to your attention opportunities at its conference center, the AIM Research Conference Center (ARCC). Located in Palo Alto, California, AIM has been hosting fully-funded, week-long workshops at ARCC in all areas of the mathematical sciences since 2002. Through ARCC, AIM supports and develops an innovative style of workshop that encourages interactive research as part of the workshop, fosters new connections, and builds productive and lasting collaborations. Several proactive approaches are used to attract a diverse group of participants, including women and under-represented minorities as well as junior mathematicians. All 32 participants receive full funding to attend the week-long workshop.

Steven L. Lee (Department of Energy)

Lawrence Livermore National Laboratory

Come learn about the exciting opportunities in the Computation Directorate at LLNL.

Mark E. Lewis (Cornell University)
http://www.orie.cornell.edu/~melewis/

From Massey to Blackwell: A study of non-stationary queueing control via sensitive optimality criteria
November 3, 2006

In this talk we explain how a single jump non-stationary queueing control problem can be solved via sensitive optimality criteria. In particular, the queueing problem is divided into a stationary infinite horizon problem and a non-stationary finite horizon problem with the appropriate terminal reward. The stationary problem leads to several results including the existence of a single bias optimal policy. Since the existence of a Blackwell optimal policy is known, this implies a similar result under this criterion. The search for an optimal policy in the non-stationary problem is shown to lie within the class of monotone (in time) control limit policies. The original problem was posed by Professor Massey and led to an understanding of an application of Blackwell's sensitive optimality criterion, thereby drawing a connection between 2 (actually 3) generations of African-American scholars.

Oluwole Daniel Makinde (University of the North)

Thermal stability of a reactive third grade fluid in a cylindrical pipe: An exploitation of Hermite-Padé approximation technique

A large class of real fluids used in industry is chemically reactive and exhibits non-Newtonian characteristics, e.g. coal slurries, polymer solutions or melts, drilling mud, hydrocarbon oils, grease, etc. Because of the non-linear relationship between stress and the rate of strain, the analysis of the behavior of such fluids tends to be more complicated and subtle than that of Newtonian fluids. In this paper, we investigate the thermal stability of a reactive third-grade fluid flowing steadily through a cylindrical pipe with an isothermal wall. It is assumed that the reaction is exothermic under Arrhenius kinetics, neglecting the consumption of the material. Approximate solutions are constructed for the governing nonlinear boundary value problem using regular perturbation techniques together with a special type of Hermite-Padé approximants, and important properties of the flow structure, including bifurcations and thermal criticality conditions, are discussed.

William A. Massey (Princeton University)

Dynamical queueing systems
November 4, 2006

Technological innovations are creating new types of communication systems such as call centers, electronic commerce, and wireless communications. Communication services managers must make important business decisions to stay competitive and profitable. They have to maximize the communication resources that they are making available to the customer. However, managers must also minimize their costs for providing these resources, which results in maximizing profits for their companies.

The mathematical field of queueing theory was successfully introduced in the first half of the 20th century to model voice communication networks. It has traditionally provided managers with a useful set of decision making formulas, algorithms and policies for designing communication systems and services. Another major triumph for queueing theory happened in the second half of the 20th century when it was applied to data communication systems and contributed to the design of the first prototype for the Internet. Both types of voice and data queueing models made significant use of the steady state theory for continuous time Markov chains.

Given the new types of communication systems and services available in the 21st century, it is no longer possible to make many of the simplifying assumptions of classical queueing theory. One major theme of my research has been to move away from the static steady state analysis of the past and develop a theory of queues that captures more of the true dynamic behavior that is found in real communications operations. My talk will discuss the types of mathematical tools needed to create a dynamical queueing theory.

This involves new types of perturbation analysis applied to the differential equations of the transition probabilities for the underlying, time-inhomogeneous Markov chain, queueing model. Moreover, we also use the theory of strong approximations to apply this asymptotic analysis directly to the random sample paths of these stochastic processes. We can also relax these Markovian assumptions by using the theory of Poisson random measures. Finally, we can establish fundamental limit theorems that approximate many of these random processes by dynamical systems. From these results, we can then apply the dynamic optimization techniques of variational calculus and classical mechanics to the efficient design of these queueing models.
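
As one concrete instance of the limit theorems mentioned above (an illustrative textbook case, not the full generality of the talk), the time-inhomogeneous M_t/M/s queue with arrival rate \lambda(t), service rate \mu, and s servers has the fluid limit

    \frac{dq(t)}{dt} = \lambda(t) - \mu \min\big(q(t), s\big),

a dynamical system approximating the scaled queue-length process, to which variational and optimal-control techniques can then be applied.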

David Murillo (Arizona State University)
http://mathpost.asu.edu/~dlm35

Change in host behavior and its impact on the co-evolution of dengue

The joint evolutionary dynamics of dengue strains are poorly understood despite the disease's high prevalence around the world. Two dengue strains are put in competition in a population where behavioral changes can affect the probability of infection. The destabilizing dynamic effects of even "minor" behavioral changes are discussed, and their role in dengue control is explained.

Josue C. Noyola-Martinez (Rice University)

Error estimates between the stochastic simulation algorithm (SSA) and the tau-leap method

The use of the relatively new tau-leap algorithm to model the kinetics of genetic regulatory systems is of great interest; however, the algorithm's accuracy is not known. We introduce a new method which enables us to establish the accuracy of the tau-leap method effectively. Gillespie introduced both the Stochastic Simulation Algorithm (SSA) and the tau-leap method to simulate chemical systems which can model the dynamics of cellular processes. The SSA is an exact method but is computationally inefficient. The tau-leap is an approximate method which has computational advantages over the SSA. There have been some efforts to quantify the error between the SSA and the tau-leap method, but the accuracy of these efforts is questionable. We propose an adaptation of a non-homogeneous Poisson process to couple the SSA and the tau-leap method so that we can make direct comparisons between individual realizations of their simulations. Our method has not been attempted in the literature, and we demonstrate that it gives far better error estimates than anything proposed previously.
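
A minimal sketch of Gillespie's direct (SSA) method for a single decay reaction, included only to fix ideas; the Poisson-process coupling of SSA and tau-leap realizations described above is not reproduced here.

    # Gillespie's direct method (SSA) for the reaction X -> 0 with rate constant c.
    # Exact in distribution, but simulates one reaction event per step.
    import numpy as np

    def ssa_decay(x0, c, t_end, seed=0):
        rng = np.random.default_rng(seed)
        t, x, path = 0.0, x0, [(0.0, x0)]
        while t < t_end and x > 0:
            a = c * x                      # total propensity
            t += rng.exponential(1.0 / a)  # exponential waiting time to next event
            x -= 1                         # fire the reaction
            path.append((t, x))
        return path

    print(ssa_decay(x0=100, c=0.5, t_end=2.0)[-1])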

Kathleen O'Hara (Mathematical Sciences Research Institute)

Mathematical Sciences Research Institute

Come learn about opportunities at MSRI.

Kasso Okoudjou (University of Maryland)
http://www.math.umd.edu/~kasso

Asymptotics of eigenvalue clusters for Schroedinger operators on the Sierpinski gasket

In this talk we shall present some results on the asymptotic behavior of spectra of Schrödinger operators with continuous potential on the Sierpinski gasket SG. In particular, using the existence of localized eigenfunctions for the Laplacian on SG, we show that the eigenvalues of the Schrödinger operator break into clusters around certain eigenvalues of the Laplacian. Moreover, we prove that the characteristic measure of these clusters converges to a measure.

Janis Oldham (North Carolina Agricultural and Technical State University)

Progress report on the NSA mathematics enhancement grant: Developing a mathematics culture among undergraduate mathematics majors at North Carolina A&T State University

From July 1, 1998 to September 30, 2001, North Carolina A&T's Math Department conducted a project funded through the National Security Agency. The project was designed to produce a core of undergraduate students having a “mathematics culture”, that is, a depth in proof-based higher mathematics, the ability to articulate ideas, solve problems, and conduct inquiry and research. It was hoped this core would pass its knowledge and experience on to successive classes of students, maintaining this newly developed culture. It was also originally hoped that the Math Department would go on to develop an Honors program from this program, or at least incorporate the main program elements, especially the required problem sessions. Students who had not developed in such a culture were not prepared to do well in graduate school or to work in government or industry.

The current state of affairs is that the culture did not persist. While the department did adopt two program elements, namely a freshman/new math major orientation course and a required problem session with the Logic/Proof transitions course, university administrative edicts and curriculum changes impeded or gutted the effectiveness of those program elements. Nevertheless, 72% of those who were in the program for 1, 2, or 3 years graduated with a degree in mathematics, applied mathematics, or mathematics education from an accredited institution. This included 3 who went on to earn Ph.D.s and many more who earned master's degrees. These students had GPAs from 2.5 through just under 4.0. Students who currently hold these GPAs are not developing as the students did during the period of the NSA grant. What we believe is that the specific intervention and the high number of contact hours with students, with the purpose of compelling, guiding, and developing the appropriate study discipline, made the difference. For such results to persist, methods must be designed to maintain the intervention until a mathematics culture actually takes hold.

Joanna Papakonstantinou (Rice University)
http://www.owlnet.rice.edu/~jpapa/

Historical development of the secant method: from the Babylonians to Wolfe

Many believe the Secant Method arose out of the finite difference approximation of the derivative in Newton's Method. However, historical evidence reveals that the Secant Method predated Newton's Method. It was originally referred to as the Rule of Double False Position and dates back to the Babylonians. We present a historical development of the Secant Method in 1-D. We introduce the definition of general position, present the n+1 point interpolation idea, and outline Wolfe's formulation to compute the basic secant approximation. We explain how the method is numerically unstable, because it leads to ill-conditioning due to the deterioration of general positioning.
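
In the one-dimensional case discussed here, the secant iteration replaces the derivative in Newton's method by a difference quotient through the two most recent iterates:

    x_{k+1} = x_k - f(x_k)\, \frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})},

which is the Rule of Double False Position applied repeatedly; the instability mentioned above arises when the interpolation points lose general position and the denominator degenerates.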

Carlos Andrés Quintero Salazar (University of Texas)

Automated parameter estimation and sensitivity analysis

We present the computational issues that will be considered for the implementation of hybrid optimization approaches oriented to automated parameter estimation problems. The proposed hybrid optimization approaches are based on coupling the Simultaneous Perturbation Stochastic Approximation (SPSA) approach (a global, derivative-free optimization method) with a globalized Newton-Krylov Interior Point algorithm (NKIP) (a global, derivative-dependent optimization method). The first coupling involves generating a metamodel that allows us to incorporate derivative information into a simpler representation of the original problem. The second type of coupling assumes that some derivative information is available but postpones its utilization until the SPSA algorithm has made sufficient progress toward the solution. We implement the hybrid optimization approach on a simple test case and present some numerical results.
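
A minimal sketch of the SPSA step that the hybrid approach starts from (illustrative only: the gain constants are arbitrary and the coupling with NKIP is not shown).

    # SPSA: the gradient of a (possibly noisy) objective is estimated from just
    # two function evaluations per iteration, regardless of the dimension.
    import numpy as np

    def spsa_minimize(f, x0, iters=200, a=0.1, c=0.1, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for k in range(1, iters + 1):
            ak, ck = a / k ** 0.602, c / k ** 0.101       # standard gain sequences
            delta = rng.choice([-1.0, 1.0], size=x.size)  # random +/-1 perturbation
            ghat = (f(x + ck * delta) - f(x - ck * delta)) / (2.0 * ck * delta)
            x -= ak * ghat
        return x

    print(spsa_minimize(lambda x: np.sum((x - 3.0) ** 2), np.zeros(4)))  # approx. [3, 3, 3, 3]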

Karen Raquel Ríos-Soto (University of Puerto Rico)
http://math.uprm.edu/~karen_rs

Epidemic spread in populations at demographic equilibrium

We introduce an integrodifference equation model to study the spatial spread of epidemics through populations with overlapping and non-overlapping epidemiological generations. Our focus is on the existence of travelling wave solutions and their minimum asymptotic speed of propagation c*. We contrast the results here with similar work carried out in the context of ecological invasions. We illustrate the theoretical results numerically in the context of SI (susceptible-infected) and SIS (susceptible-infected-susceptible) epidemic models.
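
The basic object here is an integrodifference equation of the form (notation illustrative)

    N_{t+1}(x) = \int_{-\infty}^{\infty} k(x - y)\, f\big(N_t(y)\big)\, dy,

where f describes local epidemiological growth over one generation and the dispersal kernel k redistributes infection in space; travelling wave solutions N_t(x) = W(x - ct) and their minimal speed c* are then analyzed.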

Joaquin Rivera (The University of Iowa)
http://www.math.uiowa.edu/~rvera/

Existence of traveling waves solution for a nonlocal reaction-diffusion model of influenza A

In this paper we study the existence of traveling wave solutions for an integro-differential system of equations. The system was proposed by Lin et al. as a model for the spread of influenza A drift. The model uses diffusion to simulate the mutation of the virus along a one-dimensional phenotype space. By considering the system under the traveling wave variable z = x - ct, the PDE system is transformed into a higher-dimensional ODE system. Applying the theory of geometric singular perturbation, we construct a traveling wave solution for the system.

Key words: traveling wave, reaction-diffusion, geometric singular perturbation.
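
Schematically, for a scalar reaction-diffusion equation rather than the full nonlocal system above, the traveling wave substitution works as follows:

    u_t = D u_{xx} + f(u), \qquad u(x,t) = U(z),\ z = x - ct
    \;\Longrightarrow\; D U'' + c U' + f(U) = 0,

so the PDE becomes an ODE (here second order; for the influenza system one obtains a higher-dimensional ODE system) whose heteroclinic orbits correspond to traveling waves.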

Daniel Romero (Arizona State University)
mathpost.asu.edu/~romero

An epidemiological approach to the spread of minor political parties

Third political parties are influential in shaping American politics. In this work we study the spread of third-party ideologies in a voting population, where we assume that party members are more influential in recruiting new third-party voters than non-member third-party voters (i.e., those who vote but do not pay party dues, officiate, or campaign). The study is conducted using a ‘Susceptible-Infected’ epidemiological model with a system of nonlinear ordinary differential equations, as applied to a case study, the Green Party. Through the analysis of our system we obtain the party-free and member-free equilibria as well as two endemic equilibria, one of which is stable. We consider the conditions for existence and stability (if applicable) of all equilibria, and we identify two threshold parameters in our model that describe the different possible scenarios for a third political party and its spread. Of the two possible endemic states for the voting population, we posit ideal threshold ranges for which the stable endemic equilibrium exists. Interestingly, our system produces a backward bifurcation that identifies parameter values under which a third party can either thrive or die depending on the initial number of members in the voting system. We then perform sensitivity analysis on the threshold conditions to isolate those parameters to which our model is most sensitive. We explore all results through numerical simulations and refer to data from the Green Party in the state of Pennsylvania as a case study for parameter estimation.
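
For orientation, a bare-bones susceptible-infected recruitment model of this general type (simplified to a single "infected" class, whereas the model above distinguishes party members from non-member voters) reads

    \frac{dS}{dt} = \mu N - \beta \frac{S I}{N} - \mu S, \qquad
    \frac{dI}{dt} = \beta \frac{S I}{N} - \mu I,

with threshold R_0 = \beta / \mu: the party ideology persists when R_0 > 1 in this simplified setting, while the richer two-class model also admits the backward bifurcation described above.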

Flavia Sancier-Barbosa (Southern Illinois University)

Option pricing with memory

In this talk we introduce an option pricing model with delayed memory. The memory is introduced in the stock dynamics, which is described by a stochastic functional differential equation. The model has the following key features:

1. Volatility depends on a (delayed) history, i.e., its value at time t is a deterministic functional of the history of the stock from time t-L up to time t-l, where l is positive and less than or equal to L. Hence, due to this past-dependence on the stock price, the volatility is necessarily stochastic.

2. The randomness in the volatility is intrinsic, since it is generated by past values of the stock price.

3. The stock dynamics is driven by a single one-dimensional Brownian motion, and the model is one dimensional.

4. The market is complete.

5. For large delays (or at times relatively close to maturity) we obtain a closed-form representation for the fair price of the option, as well as for the hedging strategy.

6. The option price can be expressed in terms of the exact solution of a one-dimensional partial differential equation (PDE).

7. The classical Black-Scholes model is a particular case of the delayed memory model.

8. We believe that our model is sufficiently flexible to fit real market data, in particular to account for observed "smiles" and "frowns".
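
A schematic form of the delayed-volatility stock dynamics in items 1-3 above, in illustrative notation rather than the paper's exact model:

    dS(t) = \mu S(t)\, dt + \sigma\big( S(u),\ t - L \le u \le t - l \big)\, S(t)\, dW(t),

where \sigma is a deterministic functional of the price segment on [t-L, t-l]; with constant \sigma this reduces to the classical Black-Scholes dynamics of item 7.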

David Tello (Arizona State University)
http://mathpost.asu.edu/~dtello

Mathematical aspects of dopamine's turnover

What do world champion Muhammad Ali and A Beautiful Mind's John F. Nash have in common? They both suffer from dopamine malfunction in one of the major dopaminergic pathways. It is believed that loss of dopamine activity in the nigrostriatal pathway is associated with Parkinson's Disease and that an imbalance of dopamine activity in the mesocortical/mesolimbic pathway is the cause of the (positive/negative) symptoms of Schizophrenia.

I have assembled a collection of available literature concerning dopamine turnover (the cascade chemical process that takes place in the terminal button) and some of the available mathematical models describing the dopamine process. This collection constitutes a foundation of future work. I plan to develop a stochastic model describing the dopamine cascade in the different major dopaminergic pathways.

Sheila Tobias

Professional Science Masters programs

Why Industry should be interested in PSM

Companies are transforming their cultures and reshaping their business models to focus on high-impact innovation. This business strategy requires a skill set very different from the old Six Sigma. Universities have responded to this challenge by creating a new business and industry-oriented Professional Science (Mathematics) Masters degree (PSM). PSM degree holders are trained to work productively at what Business Week calls the "sweet spot" where design, customer understanding, and emerging technologies come together. PSM graduates have expertise in science, mathematics, and computational skills PLUS business basics, project management, regulatory affairs, technology transfer, teamwork, and communication.

Why Students should be interested in PSM

The PSM is a two-year post-graduate terminal degree for mathematics/computational science majors in areas of applied mathematics, including financial mathematics, industrial mathematics, and computational science, and at the intersection of disciplines including bioinformatics, proteomics, environmental decision making, biostatistics, statistics for entrepreneurship, and applications of GIS.

For more information, see

Conrad Tucker (University of Illinois at Urbana-Champaign)
https://netfiles.uiuc.edu/hmkim/www/index.htm

Optimal product portfolio formulation: Merging predictive data mining with analytical target cascading

This paper addresses two important fundamental areas in product family formulation that have recently begun to receive great attention. First is the incorporation of market demand that we address through a data mining approach where realistic customer survey data is translated into performance design targets. Second is platform architecture design that we model as a dynamic entity. The dynamic approach to product architecture optimization differs from conventional static approaches in that a predefined architecture is not present at the initial stage of product design, but rather evolves with fluctuations in customer performance preferences. The benefits of direct customer input in product family design will be realized through our cell phone product family example presented in this work. An optimal family of cell phones is created with modularity decisions made analytically at the enterprise level that maximize company profit.

Maria Cristina Villalobos (University of Texas Pan American)

Formulating Fano's method as an optimization problem to obtain broadband tuning limits on UWB antennas

Modern broadband communications requires antennas with greatly improved frequency range and reduced size. It has been known since 1948 that there are basic physical limitations on the bandwidth that can be obtained for a given size antenna; however, the numerical results that have been available were until recently based entirely on a second-order model for the antenna that was (a) an approximation, and (b) only strictly applicable to relatively narrowband cases. In the last few years, a new approach based on "Fano's formulation" has been used which can apply over any bandwidth. We have reformulated Fano's method as an optimization problem and as a result have been able to obtain fundamental bandwidth limits that can in principle be calculated for any radiation mode. This means that one can now find the ultimate possible bandwidth performance for directional antennas, a result with immediate practical significance for designers of ultra-wideband antennas. Graphs of numerical limits on the in-band reflection coefficient tolerance versus electrical size for high-pass and band-pass tuning are presented.

This is joint work with H.D. Foltz and J.S. McLean.
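
For context only (this is the classical bound for a lumped load, not the reformulated limits described above), the Bode-Fano criterion for matching a load consisting of a resistance R in parallel with a capacitance C states that

    \int_0^{\infty} \ln \frac{1}{|\Gamma(\omega)|}\, d\omega \le \frac{\pi}{R C},

so a smaller in-band reflection coefficient |\Gamma| can only be obtained at the expense of bandwidth; the optimization formulation generalizes this type of trade-off to arbitrary radiation modes.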

Rachel E. Vincent-Finley (Rice University)
www.caam.rice.edu/~rvincen

Reduced basis simulation

Molecular dynamics (MD) simulation provides a powerful tool to study molecular motion with respect to classical mechanics. When considering protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid body and large-scale motions occur within a range of nanoseconds to seconds. Generally, capturing motion at all levels using standard numerical integration techniques to solve the equations of motion requires time steps on the order of a femtosecond. To date, the literature reports simulations of solvated proteins on the order of nanoseconds; however, simulations of this length do not provide adequate sampling for the study of large-scale molecular motion.

In this presentation we will describe a method for performing molecular simulations with respect to a reduced coordinate space. Given a standard MD trajectory we use principal component analysis (PCA) to identify k dominant characteristics of a trajectory and construct a k-dimensional (k-D) representation of the atomic coordinates with respect to these k characteristics. Using this model we define equations of motion and perform simulations with respect to the constructed k-D representation.
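
A minimal sketch of the projection step (PCA of trajectory frames and mapping between full and reduced coordinates); propagating the dynamics in the reduced space, as described above, is not shown, and the trajectory below is synthetic.

    # PCA of an MD trajectory: frames are rows, flattened atomic coordinates are columns.
    import numpy as np

    def pca_basis(trajectory, k):
        mean = trajectory.mean(axis=0)
        _, _, vt = np.linalg.svd(trajectory - mean, full_matrices=False)
        return mean, vt[:k]             # k dominant principal directions (rows)

    def to_reduced(x, mean, basis):
        return (x - mean) @ basis.T     # k-D representation of a configuration

    def to_full(z, mean, basis):
        return mean + z @ basis         # map back to atomic coordinates

    rng = np.random.default_rng(0)
    traj = rng.normal(size=(1000, 30))  # toy trajectory: 1000 frames, 10 atoms in 3-D
    mean, basis = pca_basis(traj, k=5)
    z = to_reduced(traj[0], mean, basis)
    print(z.shape, np.linalg.norm(traj[0] - to_full(z, mean, basis)))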

We apply our method to test molecules and compare the simulations to standard MD simulations of the molecules. Our method allows us to efficiently simulate test molecules by reducing the storage and the computation requirements. The results indicate that the molecular activity with respect to our simulation method is comparable to that observed in the standard MD simulations of these molecules.

Bryan Williams (Hampton University)

Large circuit pairs in matroids

Scott Smith conjectured in 1979 that two distinct longest cycles of a k-connected graph meet in at least k vertices when k is greater than or equal to 2. This conjecture is known to be true for k less than or equal to 10; only the case k less than or equal to 6 appears in the literature, however. Reid and Wu generalized Smith's conjecture to k-connected matroids by considering largest circuits. The case k=2 of the matroid conjecture follows from a result of Seymour. In addition, McMurray, Reid, Sheppardson, Wei, and Wu established an extension of the matroid conjecture for k=2 and proved it for cographic matroids when k ≤ 6. In his Ph.D. dissertation, McMurray established the matroid conjecture for matroids of circumference four. I establish Reid and Wu's conjecture for several classes of matroids, including those that have connectivity three, circumference five, or spanning circuits, along with some structural results for connectivity four. I am also looking at extending to largest bonds in graphs the dual of Grötschel and Nemhauser's result establishing Smith's conjecture for k less than or equal to 6.

Isaac Woungang (Ryerson Polytechnical University)
http://v315.scs.ryerson.ca:8080/iwoungan/index.html

Algebraic characterizations of some classes of quasi-cyclic codes

The so-called Jensen's concatenation function has been found to be a powerful tool for the study of quasi-cyclic (QC) codes, and in general, of codes invariant under a permutation. In this paper, we introduce two novel applications of the aforementioned tool. First, we provide a trace description of a 1-generator QC code, which generalizes the well-known trace description of a cyclic code. Second, we provide an algebraic characterization of QC codes obtained as q-ary images of q^m-ary irreducible cyclic codes. These QC codes are shown to be decomposable into the direct sum of a fixed number of irreducible components. Based upon this decomposition, we obtain some lower bounds on the minimum distances of some classes of such codes. Our numerical results show that our technique can yield optimal linear codes.

Margaret H. Wright (New York University)

Undergraduate, graduate, and postdoctoral opportunities at New York University

New York University, located in the heart of Greenwich Village in New York City, offers outstanding undergraduate, graduate, and postdoctoral opportunities. Material about all of these, especially those involving the Courant Institute of Mathematical Sciences, will be available, and the presenter will be happy to answer questions.

Wenyuan Wu (Michigan State University)
http://www.math.msu.edu/~wenyuanwu/

Differential elimination of PDEs by numerical algebraic geometry and numerical linear algebra

The computational difficulty of completing nonlinear PDE to involutive form by differential elimination algorithms is a significant obstacle in applications. We apply numerical methods to this problem which, unlike existing symbolic methods for exact systems, can be applied to approximate systems arising in applications.

We use Numerical Algebraic Geometry to process the lower order leading nonlinear parts of such PDE systems to obtain their witness sets. To check the conditions for involutivity, Numerical Linear Algebra techniques are applied to the constant matrices that are the leading linear parts of such systems evaluated at the generic points. Representations for the constraints result from applying a method based on Polynomial Matrix Theory. Examples to illustrate the new approach are given.

This is joint work with Greg Reid. The paper is available at publish.uwo.ca/~wwu26

Emmanuel Yomba (California State University)

Generalized hyperbolic functions to find soliton-like solutions of the inhomogeneous higher-order nonlinear Schrödinger equation

The inhomogeneous higher-order nonlinear Schrödinger (IHONLS) equation is studied by the use of generalized hyperbolic functions and the complex amplitude method. The results reveal that for the new bright soliton-type and dark soliton-type solutions obtained, one can control the velocity, the phase shift (by managing the distributed parameters of the system) and the shape (by choosing appropriately the two parameters introduced in the generalized hyperbolic functions).
