approximate dynamic programming

Thursday, December 6, 2018 - 10:30am - 11:30am
Xiaobo Li (National University of Singapore)
We consider a product rental network with a fixed number of rental units distributed across multiple locations. The units are accessed by customers on demand, without prior reservation. Customers have the flexibility to decide how long to keep a unit and where to return it. Because of randomness in demand, in the length of rental periods, and in unit returns, inventory must periodically be repositioned away from some locations and into others.
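To make the repositioning decision concrete, here is a minimal sketch that poses a single repositioning step as a small transportation LP; the locations, costs, and target inventory levels are illustrative assumptions, not the model or data from the talk.

```python
# Illustrative sketch only: repositioning surplus rental units between
# locations as a transportation LP. Locations, costs, and targets are
# hypothetical; this is not the model or method from the talk.
import numpy as np
from scipy.optimize import linprog

current = np.array([8, 1, 3])    # units currently at locations A, B, C
target = np.array([4, 4, 4])     # desired post-repositioning inventory
surplus = np.maximum(current - target, 0)   # units available to ship out
deficit = np.maximum(target - current, 0)   # units needed at each location

# cost[i, j]: cost of moving one unit from location i to location j
cost = np.array([[0.0, 2.0, 5.0],
                 [2.0, 0.0, 3.0],
                 [5.0, 3.0, 0.0]])

n = len(current)
c = cost.flatten()               # decision variables x[i, j] >= 0

# Each surplus location ships out exactly its surplus ...
A_eq_out = np.zeros((n, n * n))
for i in range(n):
    A_eq_out[i, i * n:(i + 1) * n] = 1.0
# ... and each deficit location receives exactly its deficit.
A_eq_in = np.zeros((n, n * n))
for j in range(n):
    A_eq_in[j, j::n] = 1.0

A_eq = np.vstack([A_eq_out, A_eq_in])
b_eq = np.concatenate([surplus, deficit])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
flows = res.x.reshape(n, n)
print("repositioning flows (units moved i -> j):\n", np.round(flows, 1))
```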
Wednesday, October 3, 2018 - 9:45am - 10:30am
Awi Federgruen (Columbia University)
Distribution systems for retail organizations are complex, whether they sell via brick-and-mortar stores, online systems, or combinations thereof (dual channels). The complexity stems from several sources, in particular:
(a) Retailers like Amazon deal with close to a billion distinct items;
(b) Inventories need to be kept at many locations: hundreds or thousands of stores (Walmart) and close to a hundred fulfillment centers (Amazon);
(c) Retailers employ multiple sales channels and platforms (Amazon vs. Amazon Marketplace).
Thursday, October 4, 2018 - 9:45am - 10:30am
Huseyin Topaloglu (Cornell University)
We present an approximation algorithm for network revenue management problems. In our approximation algorithm, we construct an approximate policy using value function approximations that are expressed as linear combinations of basis functions. We use a backward recursion to compute the coefficients of the basis functions in the linear combinations. If each product uses at most L resources, then the total expected revenue obtained by our approximate policy is at least 1/(1+L) of the optimal total expected revenue.
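As a rough illustration of the general recipe (a value function approximation computed by a backward recursion and then used greedily), the sketch below implements a standard per-resource dynamic-programming decomposition on a tiny made-up instance. The instance, the equal fare-split rule, and the per-resource approximation are assumptions for illustration; they are not the basis functions, recursion, or 1/(1+L) guarantee from the talk.

```python
# Illustrative sketch only: a per-resource dynamic-programming decomposition,
# one common way to build value function approximations for network revenue
# management. Instance data and fare-split rule are made-up assumptions; this
# is not the basis-function construction or 1/(1+L) policy from the talk.
import numpy as np

T = 20                                   # number of decision periods
cap = [3, 3]                             # capacity of each resource (leg)
# products: (fare, resource-usage 0/1 vector); each uses at most L = 2 resources
products = [(100.0, [1, 0]), (120.0, [0, 1]), (260.0, [1, 1])]
prob = [0.25, 0.25, 0.15]                # per-period arrival probabilities

n_res = len(cap)
# Split each product's fare equally across the resources it uses.
alloc = [[fare * u / sum(use) for u in use] for fare, use in products]

# Backward recursion: V[i][t][x] = approximate value of x units of resource i
# with t periods remaining, treating resource i as a single-leg problem.
V = [np.zeros((T + 1, cap[i] + 1)) for i in range(n_res)]
for i in range(n_res):
    for t in range(1, T + 1):
        for x in range(cap[i] + 1):
            v = V[i][t - 1][x]
            if x >= 1:
                marg = V[i][t - 1][x] - V[i][t - 1][x - 1]
                for j, (fare, use) in enumerate(products):
                    if use[i]:
                        v += prob[j] * max(alloc[j][i] - marg, 0.0)
            V[i][t][x] = v

def accept(t_remaining, state, j):
    """Greedy policy: accept product j iff its fare covers the sum of
    approximate marginal values of the resources it would consume."""
    fare, use = products[j]
    if any(use[i] and state[i] == 0 for i in range(n_res)):
        return False
    opp_cost = sum(V[i][t_remaining - 1][state[i]] -
                   V[i][t_remaining - 1][state[i] - 1]
                   for i in range(n_res) if use[i])
    return fare >= opp_cost

print(accept(10, [2, 1], 2))   # e.g. connecting product with 10 periods left
```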
Wednesday, October 3, 2018 - 2:45pm - 3:30pm
Velibor Misic (University of California, Los Angeles)
Optimal stopping is the problem of deciding when to stop a stochastic system to obtain the greatest reward; this arises in numerous application areas, such as finance, healthcare and marketing. State-of-the-art methods for high-dimensional optimal stopping involve determining an approximation to the value function or to the continuation value, and then using that approximation within a greedy policy.
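As a concrete instance of the continuation-value approach described above, here is a minimal Longstaff-Schwartz-style sketch for a toy Bermudan put: the continuation value is approximated by regressing discounted future cashflows on a simple polynomial basis, and the greedy policy stops whenever the immediate payoff exceeds that approximation. The dynamics, payoff, and basis are illustrative assumptions, not the method from the talk.

```python
# Illustrative sketch only: approximate the continuation value by regression
# on a simple polynomial basis, then stop greedily when the immediate payoff
# exceeds the approximation (Longstaff-Schwartz style). Model parameters and
# basis are assumptions for illustration, not the talk's method.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 50
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths of the underlying asset price.
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z,
                          axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)          # Bermudan put payoff

# Backward induction: regress discounted future cashflows on basis functions
# of the current price to approximate the continuation value.
cash = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    cash *= disc
    itm = payoff(S[:, t]) > 0                      # regress on in-the-money paths
    if itm.sum() > 0:
        x = S[itm, t]
        basis = np.column_stack([np.ones_like(x), x, x**2])   # 1, S, S^2
        coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
        cont = basis @ coef                        # approximate continuation value
        exercise = payoff(x) >= cont               # greedy stopping rule
        idx = np.where(itm)[0][exercise]
        cash[idx] = payoff(S[idx, t])
print("estimated option value:", disc * cash.mean())
```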