autokoopman package#

Subpackages#

Submodules#

autokoopman.autokoopman module#

Main AutoKoopman Function (Convenience Function)

autokoopman.autokoopman.auto_koopman(training_data: TrajectoriesData | Sequence[ndarray], inputs_training_data: Sequence[ndarray] | None = None, learn_continuous: bool = False, sampling_period: float | None = None, normalize: bool = False, opt: str | HyperparameterTuner = 'monte-carlo', max_opt_iter: int = 100, max_epochs: int = 500, n_splits: int | None = None, obs_type: str | KoopmanObservable = 'rff', cost_func: str | Callable[[TrajectoriesData, TrajectoriesData], float] = 'total', scoring_weights: Sequence[ndarray] | Dict[Hashable, ndarray] | None = None, learning_weights: Sequence[ndarray] | Dict[Hashable, ndarray] | None = None, n_obs: int = 100, rank: Tuple[int, int] | Tuple[int, int, int] | None = None, grid_param_slices: int = 10, lengthscale: Tuple[float, float] = (0.0001, 10.0), enc_dim: Tuple[int, int, int] = (2, 64, 16), n_layers: Tuple[int, int, int] = (1, 8, 2), torch_device: str | None = None, verbose: bool = True)#
AutoKoopman Convenience Function

This is an interface to the dynamical systems learning functionality of the AutoKoopman library. The user can select estimator classes at a high level, and a tuner can be chosen to find the best hyperparameter values.

Parameters:
  • training_data – training trajectories data from which to learn the system

  • inputs_training_data – optional input trajectories data from which to learn the system (not needed if the training data already contains inputs)

  • learn_continuous – whether to learn a continuous time or discrete time Koopman estimator

  • sampling_period – (for discrete time system) sampling period of training data

  • normalize – whether to normalize the states of the training trajectories

  • opt – hyperparameter optimizer {“grid”, “monte-carlo”, “bopt”}

  • max_opt_iter – maximum iterations for the tuner to use

  • max_epochs – maximum number of training epochs

  • n_splits – (for optimizers) if set, switches the hyperparameter tuning to k-folds bootstrap validation. This is useful for cases like RFF tuning, where the scores are noisy.

  • obs_type – (for koopman) Koopman observables to use {“rff”, “quadratic”, “poly”, “id”, “deep”}

  • cost_func – cost function to use for hyperparameter optimization {“total”, “end”, “relative”}

  • scoring_weights – optional per-trajectory weights applied by the cost function when scoring candidate models during tuning

  • learning_weights – optional per-trajectory weights applied when fitting the model

  • n_obs – (for koopman) number of observables to use (if applicable)

  • rank – (for koopman) rank range (start, stop) or (start, stop, step)

  • grid_param_slices – (for grid tuner) number of slices used to discretize each continuous-valued parameter

  • lengthscale – (for RFF observables) RFF kernel lengthscale

  • enc_dim – (for deep learning) tuning range (start, stop, step) for the dimension of the latent space

  • n_layers – (for deep learning) tuning range (start, stop, step) for the number of hidden layers in the encoder / decoder

  • torch_device – (for deep learning) specify torch compute device

  • verbose – whether to print progress and messages

Returns:

A dictionary containing the tuned model and tuning metadata (see the example output below)
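
Note that, per the signature, training_data may also be given as a plain sequence of numpy arrays (one time-by-state array per trajectory) together with a sampling_period. A minimal sketch under that assumption (the trajectories here are random placeholders, not meaningful data):

import numpy as np
from autokoopman import auto_koopman

# hypothetical training set: five trajectories of a 2-state system,
# each with 100 snapshots sampled every 0.01 time units
trajs = [np.random.rand(100, 2) for _ in range(5)]

results = auto_koopman(
    trajs,                  # Sequence[ndarray] form of training_data
    sampling_period=0.01,   # how the uniformly sampled snapshots are spaced
    obs_type="poly",
    opt="monte-carlo",
    max_opt_iter=50,
)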

Example:
from autokoopman.benchmark.fhn import FitzHughNagumo
from autokoopman import auto_koopman

# let's build an example dataset from the FitzHugh-Nagumo benchmark system
fhn = FitzHughNagumo()
data = fhn.solve_ivps(
    initial_states=[[0.0, -4.0], [1.0, 3.4], [1.0, 1.0], [0.1, -0.1]],
    tspan=[0.0, 1.0], sampling_period=0.01
)

# learn a system
results = auto_koopman(
    data,
    obs_type="rff",
    opt="grid",
    n_obs=200,
    max_opt_iter=200,
    grid_param_slices=10,
    n_splits=3,
    rank=(1, 200, 20)
)

# results = {'tuned_model': <StepDiscreteSystem Dimensions: 2 States: [X1, X2]>,
# 'model_class': 'koopman-rff',
# 'hyperparameters': ['gamma', 'rank'],
# 'hyperparameter_values': (0.004641588833612782, 21),
# 'tuner_score': 0.14723275426562,
# 'tuner': <autokoopman.tuner.gridsearch.GridSearchTuner at 0x7f0f92f95580>,
# 'estimator': <autokoopman.estimator.koopman.KoopmanDiscEstimator at 0x7f0f92ff0610>}
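
The returned dictionary can be used directly for prediction. A minimal sketch, assuming (as the output above suggests) that the tuned model is a StepDiscreteSystem exposing a solve_ivp method like the benchmark systems:

# simulate the learned surrogate model from a new initial state
model = results['tuned_model']
trajectory = model.solve_ivp(
    initial_state=[0.5, -0.5],
    tspan=(0.0, 1.0),
    sampling_period=0.01,
)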

Module contents#