Sliding Technique¶
The Sliding Technique enables Chebyshev approximation of high-dimensional functions by decomposing them into a sum of low-dimensional interpolants. This sidesteps the curse of dimensionality at the cost of losing cross-group interactions.
Motivation¶
A full tensor Chebyshev interpolant on \(n\) dimensions with \(m\) nodes per dimension requires \(m^n\) function evaluations. For \(n = 10\) and \(m = 11\), that is over 25 billion evaluations — clearly infeasible.
The sliding technique partitions the dimensions into small groups and builds a separate Chebyshev interpolant (a slide) for each group, with all other dimensions fixed at a pivot point. The total cost becomes the sum of the group grid sizes rather than their product.
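The cost gap can be made concrete with a few lines of arithmetic (plain Python, independent of the library), assuming the \(n = 10\), \(m = 11\) case above is split into five 2-dimensional groups:

```python
# Evaluation budgets for n = 10 dimensions, m = 11 nodes per dimension.
m, n = 11, 10

full_tensor = m ** n        # one evaluation per point of the full grid
print(full_tensor)          # 25937424601 (~25.9 billion)

# Sliding with five 2-D groups: one 11 x 11 grid per group.
sliding = 5 * m * m
print(sliding)              # 605
```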
Algorithm¶
Given \(f: \mathbb{R}^n \to \mathbb{R}\), a pivot point \(\mathbf{z} = (z_1, \ldots, z_n)\), and a partition of dimensions into \(k\) groups (Ruiz & Zeron 2021, Ch. 7):
- Evaluate the pivot value \(v = f(\mathbf{z})\).
- For each group \(i\), build a slide \(s_i\) — a Chebyshev interpolant on the group's dimensions, with all other dimensions fixed at their pivot values.
- Evaluate using the additive formula:

\[
f(\mathbf{x}) \approx v + \sum_{i=1}^{k} \left[ s_i(\mathbf{x}_{G_i}) - v \right],
\]

where \(\mathbf{x}_{G_i}\) denotes the components of \(\mathbf{x}\) belonging to group \(i\).
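The additive formula can be illustrated with a short standalone sketch (not the library's implementation): each "slide" here is simply the function itself with off-group coordinates pinned to the pivot. For an additively separable function the decomposition is then exact:

```python
import math

def sliding_sum(f, x, pivot, groups):
    """Additive sliding formula v + sum_i [s_i(x_Gi) - v], with each
    slide evaluating f with off-group coordinates fixed at the pivot."""
    v = f(pivot)
    total = v
    for group in groups:
        y = list(pivot)
        for j in group:
            y[j] = x[j]            # vary only this group's coordinates
        total += f(y) - v
    return total

f = lambda x: math.sin(x[0]) + math.sin(x[1]) + math.sin(x[2])
x, pivot = [0.5, 0.3, -0.2], [0.0, 0.0, 0.0]
err = abs(f(x) - sliding_sum(f, x, pivot, [[0], [1], [2]]))
print(err)   # 0.0 up to rounding: separable functions are recovered exactly
```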
When to Use Sliding¶
Sliding works well when:
- The function is additively separable or nearly so (e.g., \(\sin(x_1) + \sin(x_2) + \sin(x_3)\)).
- Cross-group interactions are weak relative to within-group effects.
- The number of dimensions is too large for full tensor interpolation (say, \(n > 6\)).
Sliding does not work well when:
- Variables in different groups are strongly coupled (e.g., Black-Scholes where \(S\), \(T\), and \(\sigma\) interact multiplicatively).
- High accuracy is required far from the pivot point.
Alternative: Tensor Train

For general (non-separable) high-dimensional functions, consider ChebyshevTT instead. TT-Cross captures cross-variable coupling that the sliding decomposition misses, at the cost of using finite differences for derivatives instead of analytical spectral differentiation.
Choosing the partition
Group variables that have strong non-linear interactions together. For example, if \(f = x_1^3 x_2^2 + x_3\), group \((x_1, x_2)\) in one slide and \(x_3\) in another.
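The effect of the grouping can be checked with a small standalone sketch (exact slides stand in for Chebyshev interpolants; this is not the library's code): grouping the coupled pair \((x_1, x_2)\) reproduces this \(f\) exactly, while splitting them does not:

```python
def sliding_sum(f, x, pivot, groups):
    # Additive sliding formula with exact slides: off-group coordinates
    # are pinned to the pivot, then deviations from f(pivot) are summed.
    v = f(pivot)
    return v + sum(
        f([x[j] if j in g else pivot[j] for j in range(len(x))]) - v
        for g in groups
    )

f = lambda x: x[0] ** 3 * x[1] ** 2 + x[2]
x, pivot = [1.5, -0.5, 0.5], [1.0, 1.0, 0.0]

good = sliding_sum(f, x, pivot, [[0, 1], [2]])    # coupled pair grouped
bad = sliding_sum(f, x, pivot, [[0], [1], [2]])   # coupling split apart
print(abs(f(x) - good))   # 0.0: the x0*x1 interaction lives in one slide
print(abs(f(x) - bad))    # large: the cross-group term is lost
```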
Usage¶
```python
import math

from pychebyshev import ChebyshevSlider

# Additively separable function
def f(x, _):
    return math.sin(x[0]) + math.sin(x[1]) + math.sin(x[2])

slider = ChebyshevSlider(
    function=f,
    num_dimensions=3,
    domain=[[-1, 1], [-1, 1], [-1, 1]],
    n_nodes=[11, 11, 11],
    partition=[[0], [1], [2]],  # each dim is its own slide
    pivot_point=[0.0, 0.0, 0.0],
)
slider.build()

# Evaluate function value
val = slider.eval([0.5, 0.3, -0.2], [0, 0, 0])

# Evaluate derivative w.r.t. x0
dfdx0 = slider.eval([0.5, 0.3, -0.2], [1, 0, 0])
```
Multi-dimensional slides¶
For functions with within-group coupling, use larger groups:
```python
def g(x, _):
    return x[0]**3 * x[1]**2 + math.sin(x[2]) + math.sin(x[3])

slider = ChebyshevSlider(
    function=g,
    num_dimensions=4,
    domain=[[-2, 2], [-2, 2], [-1, 1], [-1, 1]],
    n_nodes=[12, 12, 8, 8],
    partition=[[0, 1], [2], [3]],  # 2D + 1D + 1D
    pivot_point=[0.0, 0.0, 0.0, 0.0],
)
slider.build()
```
Build cost comparison¶
```python
# Full tensor: 12 * 12 * 8 * 8 = 9,216 evaluations
# Sliding: 12*12 + 8 + 8 = 160 evaluations (57x fewer)
print(f"Slider build evaluations: {slider.total_build_evals}")
```
Derivatives¶
The slider supports analytical derivatives through its slides. Only the slide containing the differentiated dimension contributes:

\[
\frac{\partial f}{\partial x_j}(\mathbf{x}) \approx \frac{\partial s_i}{\partial x_j}(\mathbf{x}_{G_i}),
\]

where \(j \in G_i\). The pivot value \(v\) is constant and drops out.
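This can be sanity-checked outside the library: differentiate the additive sum by central differences and compare with the analytic derivative of the single slide that owns the dimension (exact slides stand in for Chebyshev interpolants in this standalone sketch):

```python
import math

def sliding_sum(f, x, pivot, groups):
    # Additive sliding formula with exact slides (off-group coords pinned).
    v = f(pivot)
    return v + sum(
        f([x[j] if j in g else pivot[j] for j in range(len(x))]) - v
        for g in groups
    )

f = lambda x: math.sin(x[0]) + math.sin(x[1]) + math.sin(x[2])
pivot, groups = [0.0, 0.0, 0.0], [[0], [1], [2]]
x, h = [0.5, 0.3, -0.2], 1e-6

# Central difference of the sliding sum with respect to x0 ...
up = sliding_sum(f, [x[0] + h, x[1], x[2]], pivot, groups)
dn = sliding_sum(f, [x[0] - h, x[1], x[2]], pivot, groups)
d_slider = (up - dn) / (2 * h)

# ... agrees with d/dx0 of the x0 slide alone, i.e. cos(x0).
print(abs(d_slider - math.cos(0.5)) < 1e-8)   # True
```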
```python
# Multiple derivatives at once
results = slider.eval_multi(
    [0.5, 0.3, -0.2],
    [
        [0, 0, 0],  # function value
        [1, 0, 0],  # d/dx0
        [0, 1, 0],  # d/dx1
        [0, 0, 1],  # d/dx2
    ],
)
```
Limitations¶
Cross-group derivatives are zero¶
Because slides are independent functions of disjoint variable groups, mixed partial derivatives across groups are exactly zero. For example, with partition `[[0, 1], [2]]`:

- \(\frac{\partial^2 f}{\partial x_0 \partial x_1}\) — computed within the `[0, 1]` slide (correct)
- \(\frac{\partial^2 f}{\partial x_0 \partial x_2}\) — returns 0 (\(x_0\) and \(x_2\) are in different slides)
This is mathematically correct for the sliding approximation, but may differ from the true function's cross-derivatives. If cross-group sensitivities matter, group those variables together or use full tensor interpolation.
Accuracy degrades far from pivot¶
The sliding approximation is most accurate near the pivot point. As the evaluation point moves away from the pivot in multiple dimensions simultaneously, cross-coupling errors accumulate. For strongly coupled functions like Black-Scholes, errors of 20--50% at domain boundaries have been observed in numerical experiments (see compare_methods_time_accuracy.py in the repository).
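The decay can be demonstrated with a toy multiplicatively coupled function in a standalone sketch (exact slides, not the library's code): the sliding sum of \(e^{x_0 x_1}\) around the origin is identically 1, so the error grows with distance from the pivot:

```python
import math

def sliding_sum(f, x, pivot, groups):
    # Additive sliding formula with exact slides (off-group coords pinned).
    v = f(pivot)
    return v + sum(
        f([x[j] if j in g else pivot[j] for j in range(len(x))]) - v
        for g in groups
    )

f = lambda x: math.exp(x[0] * x[1])   # no additive structure at all
pivot, groups = [0.0, 0.0], [[0], [1]]

for r in (0.1, 0.5, 1.0):
    err = abs(f([r, r]) - sliding_sum(f, [r, r], pivot, groups))
    print(f"r = {r}: error = {err:.4f}")
# The error is exp(r*r) - 1: tiny near the pivot, large at the boundary.
```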
References¶
- Ruiz, G. & Zeron, M. (2021). Machine Learning for Risk Calculations. Wiley Finance. Chapter 7.
API Reference¶
Chebyshev Sliding approximation for high-dimensional functions.
Decomposes f(x_1, ..., x_n) into a sum of low-dimensional Chebyshev interpolants (slides) around a pivot point z:
f(x) ≈ f(z) + Σ_i [s_i(x_group_i) - f(z)]
where each slide s_i is a ChebyshevApproximation built on a subset of dimensions with the remaining dimensions fixed at z.
This trades accuracy for dramatically reduced build cost: instead of evaluating f at n_1 × n_2 × ... × n_d grid points (exponential), the slider evaluates at n_1 × n_2 + n_3 × n_4 + ... (sum of products within each group).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `function` | callable | Function to approximate. | required |
| `num_dimensions` | int | Total number of input dimensions. | required |
| `domain` | list of (float, float) | Bounds `[lo, hi]` for each dimension. | required |
| `n_nodes` | list of int | Number of Chebyshev nodes per dimension. | required |
| `partition` | list of list of int | Grouping of dimension indices into slides. Each dimension must appear in exactly one group. | required |
| `pivot_point` | list of float | Reference point `z` around which slides are built. | required |
| `max_derivative_order` | int | Maximum derivative order to pre-compute. | `2` |
Examples:

```python
>>> import math
>>> def f(x, _):
...     return math.sin(x[0]) + math.sin(x[1]) + math.sin(x[2])
>>> slider = ChebyshevSlider(
...     f, 3, [[-1, 1], [-1, 1], [-1, 1]], [11, 11, 11],
...     partition=[[0], [1], [2]],
...     pivot_point=[0.0, 0.0, 0.0],
... )
>>> slider.build(verbose=False)
>>> round(slider.eval([0.5, 0.3, 0.1], [0, 0, 0]), 4)
0.8748
```
total_build_evals property¶

Total number of function evaluations used during build.
build(verbose=True)¶

Build all slides by evaluating the function at slide-specific grids.

For each slide, dimensions outside the slide group are fixed at their pivot values.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `verbose` | bool | If True, print build progress. | `True` |
eval(point, derivative_order)¶

Evaluate the slider approximation at a point.

Uses Equation 7.5 from Ruiz & Zeron (2021): f(x) ≈ f(z) + Σ_i [s_i(x_{G_i}) - f(z)]

For derivatives, only the slide containing that dimension contributes.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `point` | list of float | Evaluation point in the full n-dimensional space. | required |
| `derivative_order` | list of int | Derivative order for each dimension (0 = function value). | required |

Returns:

| Type | Description |
|---|---|
| float | Approximated function value or derivative. |
eval_multi(point, derivative_orders)¶

Evaluate slider at multiple derivative orders for the same point.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `point` | list of float | Evaluation point in the full n-dimensional space. | required |
| `derivative_orders` | list of list of int | Each inner list specifies derivative order per dimension. | required |

Returns:

| Type | Description |
|---|---|
| list of float | Results for each derivative order. |
error_estimate()¶

Estimate the sliding approximation error.

Returns the sum of per-slide Chebyshev error estimates. Each slide's error is estimated independently using the Chebyshev coefficient method from Ruiz & Zeron (2021), Section 3.4.

Note: This captures per-slide interpolation error only. Cross-group interaction error (inherent to the sliding decomposition) is not included.

Returns:

| Type | Description |
|---|---|
| float | Estimated interpolation error (per-slide sum). |

Raises:

| Type | Description |
|---|---|
| RuntimeError | If the slider has not been built yet. |
__getstate__()¶

Return picklable state, excluding the original function.

__setstate__(state)¶

Restore state from a pickled dict.
save(path)¶

Save the built slider to a file.

The original function is not saved — only the numerical data needed for evaluation. The saved file can be loaded with load() without access to the original function.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | str or path-like | Destination file path. | required |

Raises:

| Type | Description |
|---|---|
| RuntimeError | If the slider has not been built yet. |
load(path) classmethod¶

Load a previously saved slider from a file.

The loaded object can evaluate immediately; no rebuild is needed. The function attribute will be None. Assign a new function before calling build() again if a rebuild is desired.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | str or path-like | Path to the saved file. | required |

Returns:

| Type | Description |
|---|---|
| ChebyshevSlider | The restored slider. |

Warns:

| Type | Description |
|---|---|
| UserWarning | If the file was saved with a different PyChebyshev version. |

Warning: This method uses pickle; only load files from trusted sources.
extrude(params)¶

Add new dimensions where the function is constant.

Each new dimension becomes its own single-dim slide group with tensor_values = np.full(n, pivot_value), so that s_new(x) - pivot_value = 0 for all x (no contribution to the sliding sum). This is the partition-of-unity property: the barycentric weights sum to 1, so a constant tensor produces the same value for any coordinate.

Existing slide groups have their dimension indices remapped to account for the inserted dimensions.
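The partition-of-unity property can be verified with a standalone barycentric interpolation sketch (Chebyshev points of the second kind; this mirrors the idea, not the library's internals):

```python
import math

def cheb_barycentric(values, x):
    """Barycentric interpolation on Chebyshev points of the 2nd kind in [-1, 1]."""
    n = len(values) - 1
    nodes = [math.cos(math.pi * j / n) for j in range(n + 1)]
    num = den = 0.0
    for j, (xj, fj) in enumerate(zip(nodes, values)):
        if x == xj:
            return fj                  # exactly on a node
        w = 0.5 if j in (0, n) else 1.0
        w *= (-1) ** j                 # alternating-sign barycentric weights
        num += w / (x - xj) * fj
        den += w / (x - xj)
    return num / den

# A constant tensor interpolates to the constant at every coordinate,
# because the weight ratio forms a partition of unity.
vals = [3.14] * 9
print(all(abs(cheb_barycentric(vals, x) - 3.14) < 1e-12
          for x in (-0.9, -0.3, 0.0, 0.456, 0.99)))   # True
```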
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `params` | tuple or list of tuples | A single tuple or a list of tuples. | required |

Returns:

| Type | Description |
|---|---|
| ChebyshevSlider | A new, higher-dimensional slider (already built). |

Raises:

| Type | Description |
|---|---|
| RuntimeError | If the slider has not been built yet. |
| TypeError | If `params` is not a tuple or a list of tuples. |
| ValueError | If `params` contains invalid values. |
slice(params)¶

Fix one or more dimensions at given values, reducing dimensionality.

Two cases per sliced dimension:

- Multi-dim group: The slide's ChebyshevApproximation is sliced at the local dimension index via barycentric contraction. When the slice value coincides with a Chebyshev node (within 1e-14), the contraction reduces to an exact `np.take` (fast path). The dimension is removed from the group.
- Single-dim group: The slide is evaluated at the value, giving a constant `s_val`. The shift `delta = s_val - pivot_value` is absorbed into `pivot_value` and each remaining slide's `tensor_values`, and the group is removed entirely.

Remaining dimension indices in all groups are remapped downward to stay contiguous.
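The index remapping in the last step can be illustrated with a tiny sketch (hypothetical variable names): removing dimension 1 from partition `[[0, 1], [2], [3]]` shifts the higher indices down:

```python
groups = [[0, 1], [2], [3]]   # partition before slicing
sliced = 1                    # dimension index being fixed and removed

# Drop the sliced index, then shift every higher index down by one.
remapped = [[j - (j > sliced) for j in g if j != sliced] for g in groups]
print(remapped)   # [[0], [1], [2]]
```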
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `params` | tuple or list of tuples | A single tuple or a list of tuples. | required |

Returns:

| Type | Description |
|---|---|
| ChebyshevSlider | A new, lower-dimensional slider (already built). |

Raises:

| Type | Description |
|---|---|
| RuntimeError | If the slider has not been built yet. |
| TypeError | If `params` is not a tuple or a list of tuples. |
| ValueError | If a slice value is outside the domain or if slicing all dimensions. |