August 6 - 16, 2014
Keywords of the presentation: ordinary differential equations, Fourier analysis, statistical estimation, fast algorithms; transition-edge sensors, microcalorimeters, single-photon spectroscopy, optimal filtering, multiplexing, pulse pile-up
In recent years, microcalorimeter sensor systems have been developed at NIST, NASA, and elsewhere to measure the energy of single photons in every part of the electromagnetic spectrum, from microwaves to gamma rays. These microcalorimeters have demonstrated relative energy resolution, depending on the energy band, of better than 3 x 10^(-4), providing dramatic new capabilities for scientific and forensic investigations. They rely on superconducting transition-edge sensor (TES) thermometers and derive their exquisite energy resolution from the low thermal noise at typical operating temperatures near 0.1 K. They also function in exceptionally broad energy bands compared to other sensor technologies. At present, the principal limitation of this technology is its relatively low throughput, due to two causes: (1) limited collection area, which is being remedied through development of large sensor arrays; and (2) nonlinearity of detector response to photons arriving in rapid succession. Both introduce mathematical challenges, due to variations in sensor dynamics, nonstationarity of noise when detector response nears saturation, crosstalk between nearby or multiplexed sensors, and algorithm-dependent noise of multiplexing. Although there are certain inherent limitations on calibration data, this environment is extremely data-rich and we will exploit data to attack one of these mathematical challenges.
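The keywords above mention optimal filtering: photon energy is estimated by fitting a known pulse shape to each noisy record. A minimal sketch of the matched-filter estimate, with an invented pulse shape and white (rather than colored) noise, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical TES pulse template: difference of exponentials (fast rise,
# slow decay), normalized to unit peak height. Shape parameters are invented.
n = 1024
t = np.arange(n)
template = np.exp(-t / 200.0) - np.exp(-t / 20.0)
template /= template.max()

# Simulated record: a pulse of unknown amplitude buried in white noise.
true_amplitude = 5.0
record = true_amplitude * template + rng.normal(0.0, 0.3, n)

# Optimal (matched) filter for stationary white noise: the least-squares
# amplitude estimate is the projection of the record onto the template.
a_hat = (template @ record) / (template @ template)
```

For the stationary colored noise of a real sensor, the same projection is carried out in the Fourier domain with each frequency bin weighted by the inverse noise power spectral density; the nonstationarity and pile-up issues described above are precisely where this simple picture breaks down.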
In real-life applications, critical areas are often inaccessible to measurement and thus to inspection and control. For proper and safe operation, their condition must be estimated and their future evolution predicted via inverse-problem methods based on accessible data. Such situations are typically further complicated by unreliable or flawed data, such as faulty sensor readings, raising questions about the reliability of model results. We will analyze and mathematically tackle such problems, starting with physical versus data-driven modeling and the numerical treatment of inverse problems, then extending to stochastic models and statistical approaches that yield probability distributions and confidence intervals for safety-critical parameters.
As a project example, we consider a blast furnace producing iron at temperatures around 2,000 °C. It runs for several years without interruption and without any opportunity to inspect its inner geometry, which is lined with firebrick. The inner wall is aggressively attacked by physical and chemical processes. The thickness of the wall, and in particular the development of weak spots through wall thinning, is extremely safety-critical. The only available data stem from temperature sensors on the outer furnace surface; they must be used to calculate the wall thickness and its future evolution. We will address some of the numerous design and engineering questions, such as the placement of sensors and the impact of sensor imprecision and failure.
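To make the inversion concrete, here is a minimal sketch under strong simplifying assumptions (1-D steady-state conduction, known material constants, a simple convective boundary condition; all parameter values are invented): the wall thickness is inverted from noisy outer-surface temperature readings, and the sensor scatter yields a confidence interval of the kind discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 1-D steady-state model; all material parameters are assumed.
k = 2.0        # firebrick thermal conductivity, W/(m*K)
h = 10.0       # convective coefficient at the outer surface, W/(m^2*K)
T_in = 2000.0  # inner (melt-side) temperature, deg C
T_amb = 30.0   # ambient temperature, deg C

def outer_temperature(L):
    """In steady state the conductive flux through the wall equals the
    convective loss outside: k*(T_in - T_out)/L = h*(T_out - T_amb)."""
    return (k * T_in / L + h * T_amb) / (k / L + h)

true_L = 0.35  # true wall thickness in metres (unknown in practice)
readings = outer_temperature(true_L) + rng.normal(0.0, 2.0, size=50)

# Invert the forward model for each noisy sensor reading, then average and
# attach a simple 95% confidence interval -- the statistical side of the task.
L_est = k * (T_in - readings) / (h * (readings - T_amb))
L_hat = L_est.mean()
ci_halfwidth = 1.96 * L_est.std(ddof=1) / np.sqrt(len(L_est))
```

The real problem is far harder: the geometry is 2-D or 3-D, the inner temperature and material properties are uncertain, and the wall evolves in time, which is why the stochastic and data-driven extensions matter.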
Integrated circuits are manufactured by optical projection lithography. The circuit pattern is etched on a master copy, the photomask. Light is projected through the photomask, and its image is formed on the semiconductor wafer under production. The image is transferred to the integrated circuit by a photographic process. On the order of 40 lithography steps are needed to produce an integrated circuit. The most advanced lithography is performed at the 193 nm ArF excimer wavelength, about three times smaller than the wavelength of visible red light. Critical dimensions of the circuit pattern are smaller than the wavelength of the projected light. Sub-wavelength resolution is achieved by optical resolution enhancement techniques and by the nonlinearity of the photoresist chemistry.
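The role of the chemical nonlinearity can be illustrated with a toy one-dimensional model (all numbers invented): diffraction blurs the aerial image of a fine line, but a threshold-like resist response still prints a feature with sharp edges at roughly the intended width.

```python
import numpy as np

# Intended pattern: a single line, 40 samples wide, on a 512-sample grid.
n = 512
mask = np.zeros(n)
mask[236:276] = 1.0

# Diffraction-limited imaging modeled crudely as convolution with a
# normalized Gaussian point-spread function (width is illustrative).
x = np.arange(-50, 51)
psf = np.exp(-(x / 15.0) ** 2)
psf /= psf.sum()
aerial = np.convolve(mask, psf, mode="same")   # blurred aerial image

# Threshold resist model: fully exposed above the dose-to-clear, unexposed
# below it -- a hard nonlinearity that restores sharp printed edges.
printed = (aerial > 0.5).astype(float)
```

The printed profile is binary even though the aerial image is smooth; for a symmetric point-spread function and an isolated line, the 0.5 crossings sit close to the intended edges.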
Calculating the optical image accurately and rapidly is required for two reasons. First, the design of the photomask is an inverse problem, and a good forward solver is needed to solve the inverse problem iteratively. Second, the photomask is inspected by a microscope to find manufacturing defects: the expected microscope image is calculated, and the actual microscope image is compared to this calculated reference to find defects. The most computationally significant part of the image calculation is the diffraction of the illuminating wave by the photomask. Although the rigorous solution of Maxwell's equations by numerical methods is well known, either the speed or the accuracy of known methods is unsatisfactory. The most commonly used method is the Kirchhoff approximation, amended by fudge factors that bring it closer to the rigorous solution.
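The inspection idea, flagging pixels where the measured image departs from the calculated reference by more than the noise allows, can be sketched with toy images (sizes, noise level, threshold, and the injected defect are all invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Calculated reference image of the intended pattern (toy 64x64 example).
reference = np.zeros((64, 64))
reference[20:44, 20:44] = 1.0

# "Measured" microscope image: reference plus sensor noise plus one
# injected defect at a hypothetical location.
measured = reference + rng.normal(0.0, 0.02, reference.shape)
measured[30, 31] += 0.5

# Flag pixels whose deviation from the reference exceeds a threshold set
# well above the noise level.
defect_mask = np.abs(measured - reference) > 0.2
defects = np.argwhere(defect_mask)
```

In practice the reference is not the drawn pattern but the simulated microscope image of it, which is exactly why a fast, accurate forward diffraction calculation is essential.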
Kirchhoff solved the problem of diffraction of light through an arbitrarily shaped aperture in an opaque screen at the end of the 19th century. He made a very practical approximation for the near field on the side of the screen opposite the light source: at a point on the screen, he ignored the aperture; at a point in the aperture, he ignored the screen. He then used Green's theorem to propagate this estimate of the near field to the far field. Kirchhoff's near-field approximation is accurate at points that are a few wavelengths away from the edges, but the Kirchhoff near field is discontinuous at the edges and violates the boundary conditions of Maxwell's equations. To this day, an amended form of Kirchhoff's approximation provides the best known accuracy-speed trade-off for calculating the image of a photomask.
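Kirchhoff's recipe can be sketched numerically for a 1-D slit: set the near field equal to the incident plane wave inside the aperture and to zero on the screen, then obtain the far field as its Fourier transform (the Fraunhofer limit). Dimensions are illustrative.

```python
import numpy as np

wavelength = 193e-9   # ArF excimer wavelength, m
slit_width = 400e-9   # illustrative aperture width, m
n = 4096
dx = 4e-9             # sample spacing, m
x = (np.arange(n) - n // 2) * dx

# Kirchhoff near field: incident plane wave inside the aperture, zero on
# the opaque screen -- discontinuous at the edges, as noted above.
near_field = np.where(np.abs(x) <= slit_width / 2, 1.0 + 0.0j, 0.0)

# The far field (Fraunhofer limit) is the Fourier transform of the near
# field; for a hard slit this gives the familiar sinc diffraction pattern.
far_field = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(near_field)))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()

# Map spatial frequency to diffraction angle: sin(theta) = wavelength * fx;
# only |sin(theta)| <= 1 corresponds to propagating orders.
fx = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
sin_theta = wavelength * fx
```

The edge discontinuity is what limits the accuracy of this picture for sub-wavelength features, and it is the gap that the amended forms of the approximation try to close.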