Catching-Up Algorithm for Moreau’s Sweeping Processes

6 Numerical Methods

This chapter briefly discusses numerical methods for computing approximate projections.

6.1 Computing Approximate Projections

The practical implementation of the catching-up algorithm requires efficient methods for computing approximate projections onto closed sets. Given \(C \subset H\), \(x \in H\), and a tolerance \(\varepsilon > 0\), we seek \(\bar{x} \in C\) such that:

\[ \| x - \bar{x}\| ^2 < \inf _{y \in C} \| x - y\| ^2 + \varepsilon \]

This is equivalent to solving the optimization problem:

\[ \min _{y \in C} \| x - y\| ^2 \]

to within an additive tolerance \(\varepsilon \) on the optimal value.
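
To make the definition concrete, the following sketch computes the projection onto a closed Euclidean ball, a set chosen here purely for illustration, and verifies the \(\varepsilon \)-inequality against the known value of the infimum:

```python
import numpy as np

def approx_project_ball(x, center, radius, eps):
    """Exact projection onto the closed ball B(center, radius), used here
    only to illustrate the eps-condition (hypothetical example set)."""
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return x.copy()                      # x already lies in C
    return center + (radius / dist) * d      # radial projection onto the boundary

# Verify ||x - x_bar||^2 < inf_{y in C} ||x - y||^2 + eps; for a ball the
# infimum equals max(dist - radius, 0)^2.
x = np.array([3.0, 4.0])
x_bar = approx_project_ball(x, np.zeros(2), 1.0, 1e-8)
best = max(np.linalg.norm(x) - 1.0, 0.0) ** 2
assert np.linalg.norm(x - x_bar) ** 2 < best + 1e-8
```

Any method that returns a point of \(C\) satisfying this inequality, exactly or not, qualifies as an approximate projection in the sense above.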

6.2 Gradient-Based Methods

6.2.1 Projected Gradient Descent

For sets defined by inequality constraints \(C = \{ y : g_i(y) \le 0, i = 1,\ldots ,m\} \), projected gradient descent is natural:

  1. Start with \(y_0 \in C\)

  2. Iterate: \(y_{k+1} = \text{proj}_C(y_k - \alpha _k \nabla f(y_k))\), where \(f(y) = \| x - y\| ^2\) and hence \(\nabla f(y_k) = 2(y_k - x)\)

  3. Stop when the estimated squared distance satisfies the \(\varepsilon \)-condition above
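
The three steps above can be sketched as follows for a box constraint, an illustrative choice for which the inner projection \(\text{proj}_C\) is a componentwise clip; the step size and stopping rule are likewise assumptions:

```python
import numpy as np

def projected_gradient_projection(x, lo, hi, alpha=0.25, tol=1e-10, max_iter=500):
    """Projected gradient descent for proj_C(x) with a box
    C = {y : lo <= y <= hi} (illustrative set; proj_C is np.clip)."""
    y = np.clip(x, lo, hi)                           # step 1: feasible start y_0
    for _ in range(max_iter):
        grad = 2.0 * (y - x)                         # gradient of f(y) = ||x - y||^2
        y_next = np.clip(y - alpha * grad, lo, hi)   # step 2: project the step
        if np.linalg.norm(y_next - y) ** 2 < tol:    # step 3: eps-style stop
            return y_next
        y = y_next
    return y
```

For this objective the map \(y \mapsto \text{proj}_C(y - \alpha \nabla f(y))\) is a contraction for \(\alpha \in (0, 1/2)\), so the iteration converges geometrically to the projection.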

6.2.2 Interior Point Methods

For smooth inequality constraints, interior point methods can be effective: one solves a sequence of barrier subproblems whose minimizers approach the projection as the barrier parameter decreases.
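
A minimal sketch of this idea, assuming a single smooth constraint \(g(y) = \| y\| ^2 - 1 \le 0\) (the unit ball, chosen for illustration): each stage minimizes the barrier objective \(\| x - y\| ^2 - \mu \log(-g(y))\) by gradient descent with Armijo backtracking, then shrinks \(\mu \):

```python
import numpy as np

def barrier_projection(x, mu0=1.0, shrink=0.25, outer=10, inner=100):
    """Log-barrier sketch for projecting x onto {y : g(y) <= 0} with
    g(y) = ||y||^2 - 1 (illustrative constraint and parameters)."""
    def F(y, mu):
        g = y @ y - 1.0
        # infinite value outside the strictly feasible region
        return np.inf if g >= 0 else (x - y) @ (x - y) - mu * np.log(-g)

    y, mu = np.zeros_like(x), mu0            # strictly feasible start: g(0) = -1
    for _ in range(outer):
        for _ in range(inner):
            g = y @ y - 1.0
            grad = 2.0 * (y - x) - mu * 2.0 * y / g   # gradient of F(., mu)
            t, fy = 1.0, F(y, mu)
            # Armijo backtracking: keeps the step feasible and decreasing
            while F(y - t * grad, mu) > fy - 0.25 * t * (grad @ grad):
                t *= 0.5
                if t < 1e-16:
                    break
            y = y - t * grad
        mu *= shrink                          # follow the central path as mu -> 0
    return y
```

The iterates stay strictly inside the constraint set, and the central-path points approach the true projection as \(\mu \to 0\).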

6.3 Frank-Wolfe Algorithm

One approach for obtaining approximate projections is the Frank-Wolfe (conditional gradient) algorithm, which applies when the set is convex and compact and admits a linear minimization oracle, i.e., when linear optimization problems over the set can be solved efficiently.
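
A sketch under the assumption that \(C\) is the \(\ell ^1\) ball, whose linear minimization oracle returns a signed coordinate vertex; the iteration count and the classical step size \(\gamma _k = 2/(k+2)\) are illustrative choices:

```python
import numpy as np

def frank_wolfe_projection(x, radius, iters=200):
    """Frank-Wolfe sketch: approximate projection of x onto the l1 ball
    of the given radius (illustrative set choice)."""
    y = np.zeros_like(x)                      # any starting point of C
    for k in range(iters):
        grad = 2.0 * (y - x)                  # gradient of ||x - y||^2
        i = np.argmax(np.abs(grad))           # LMO: argmin_{s in C} <grad, s>
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(grad[i])
        gamma = 2.0 / (k + 2.0)               # classical step size
        y = (1.0 - gamma) * y + gamma * s     # convex combination stays in C
    return y
```

Every iterate is a convex combination of vertices, so feasibility is automatic, and the objective gap decays at the well-known \(O(1/k)\) rate.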

6.4 Separation Oracles

For certain classes of sets defined as sublevel sets of functions, separation oracles, which return a hyperplane separating a given infeasible point from the set, can be employed to construct approximate projections efficiently.
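
One possible scheme, sketched here for a smooth convex \(g\): whenever the current point is infeasible, the linearization of \(g\) there gives a halfspace containing \(C = \{ y : g(y) \le 0\} \), and projecting \(x\) onto that halfspace yields an improved iterate. Both the scheme and the unit-ball test case are illustrative assumptions:

```python
import numpy as np

def separation_projection(x, g, grad_g, iters=25):
    """Cutting-plane sketch: approximate projection of x onto
    C = {y : g(y) <= 0} for convex smooth g (illustrative scheme)."""
    y = x.copy()
    for _ in range(iters):
        gy = g(y)
        if gy <= 0:                    # feasible: accept the point
            return y
        n = grad_g(y)                  # normal of the separating cut at y
        # halfspace H = {z : gy + <n, z - y> <= 0} contains C by convexity
        v = max(gy + n @ (x - y), 0.0)
        y = x - v / (n @ n) * n        # proj_H(x)
    return y
```

Because each halfspace contains \(C\), the distance from \(x\) to the projected point never exceeds \(d(x, C)\), so the iterates give certified lower bounds on the true distance.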

6.5 Implementation Considerations

The choice of numerical method depends on the specific structure of the moving sets \(C(t)\) and the required accuracy \(\varepsilon \).

6.5.1 Error Tolerance Scheduling

The error tolerances \((\varepsilon _k^n)\) should decrease appropriately with the discretization:

\[ \varepsilon _k^n = O(\delta _n^2) \quad \text{or} \quad \sum _{k=0}^{n-1} \varepsilon _k^n = O(\delta _n) \]

where \(\delta _n = \max _k |t_{k+1}^n - t_k^n|\) is the mesh size. In particular, every tolerance in the schedule must remain strictly positive, as required by Definition 2.3.
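
A minimal sketch of the first schedule, with the constant in the \(O(\delta _n^2)\) bound taken to be 1 for illustration:

```python
def tolerance_schedule(partition):
    """Uniform schedule eps_k^n = delta_n^2 for a partition
    t_0 < t_1 < ... < t_n (constant factor 1 is an illustrative choice)."""
    # mesh size delta_n = max_k |t_{k+1} - t_k|
    delta = max(t1 - t0 for t0, t1 in zip(partition, partition[1:]))
    eps = [delta ** 2] * (len(partition) - 1)
    assert all(e > 0 for e in eps)     # positivity required by Definition 2.3
    return eps
```

With \(n\) intervals of size about \(\delta _n\), this uniform choice also satisfies the summed condition, since \(\sum _k \varepsilon _k^n \approx (T/\delta _n)\, \delta _n^2 = O(\delta _n)\).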

6.5.2 Initialization Strategies

Good initialization accelerates convergence:

  • Use the previous solution \(x_k^n\) as the starting point for computing \(x_{k+1}^n\)

  • For convex outer approximations, use analytical projections

  • Warm-start with solutions from coarser discretizations
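
The first bullet can be sketched on moving balls, an illustrative setup for which the exact projection is known; each catching-up step runs an inner iterative solver started from the previous state:

```python
import numpy as np

def catching_up_warm_start(x0, centers, radius, inner=50, lr=0.25):
    """Catching-up steps x_{k+1} = proj_{C(t_{k+1})}(x_k) on moving balls
    C(t_k) = B(centers[k], radius), with each projection computed by an
    inner projected-gradient loop warm-started at the previous state
    (illustrative setup; balls admit a closed-form projection)."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for c in np.asarray(centers, dtype=float):
        y = x.copy()                      # warm start at the previous state
        for _ in range(inner):
            y = y - lr * 2.0 * (y - x)    # gradient step on ||x - y||^2
            d = y - c
            dist = np.linalg.norm(d)
            if dist > radius:             # project back onto B(c, radius)
                y = c + (radius / dist) * d
        x = y
        traj.append(x.copy())
    return traj
```

When consecutive sets \(C(t_k^n)\) and \(C(t_{k+1}^n)\) are close, the warm start already nearly satisfies the new constraint, so few inner iterations are needed per step.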