culebra.tools.Experiment class

class Experiment(trainer: Trainer, untie_best_fitness_function: FitnessFunction | None = None, test_fitness_function: FitnessFunction | None = None, results_base_filename: str | None = None, hyperparameters: dict | None = None)

Bases: Evaluation

Set a trainer evaluation.

Parameters:
  • trainer (Trainer) – The trainer method

  • untie_best_fitness_function (FitnessFunction) – The fitness function used to select the best solution from those found by the trainer in case of a tie. If omitted, the training fitness function will be used. Defaults to None

  • test_fitness_function (FitnessFunction) – The fitness function used to test. If omitted, the training fitness function will be used. Defaults to None

  • results_base_filename (str) – The base filename to save the results. If omitted, _default_results_base_filename is used. Defaults to None

  • hyperparameters (dict) – Hyperparameter values used in this evaluation. Defaults to None

Raises:
  • TypeError – If trainer is not a valid trainer

  • TypeError – If test_fitness_function is not a valid fitness function

  • TypeError – If results_base_filename is not a valid file name

  • TypeError – If hyperparameters is not a dictionary

  • ValueError – If the keys in hyperparameters are not strings

  • ValueError – If any key in hyperparameters is reserved
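
A minimal construction sketch follows. Here my_trainer, training_func and test_func are hypothetical placeholders for a trainer and fitness functions built elsewhere with culebra; only the keyword arguments are taken from the signature above.

```python
from culebra.tools import Experiment

# my_trainer, training_func and test_func are hypothetical placeholders,
# assumed to have been created beforehand from culebra's trainer and
# fitness function classes.
experiment = Experiment(
    trainer=my_trainer,
    untie_best_fitness_function=training_func,  # breaks ties among best solutions
    test_fitness_function=test_func,            # used for the test step
    results_base_filename="my_experiment",      # hypothetical base filename
)
```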

Class attributes

Experiment.feature_metric_functions = {'Rank': <function Metrics.rank>, 'Relevance': <function Metrics.relevance>}

Metrics calculated for the features in the set of solutions.

Experiment.stats_functions = {'Avg': <function mean>, 'Max': <function max>, 'Min': <function min>, 'Std': <function std>}

Statistics calculated for the solutions.
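
Both dictionaries map result names to the functions used to compute them. The sketch below assumes, without confirmation from this page, that a subclass may extend these mappings; the "Median" entry and numpy are illustrative choices only.

```python
import numpy as np

from culebra.tools import Experiment


class MyExperiment(Experiment):
    # Assumption: adding an entry to the inherited mapping makes the new
    # statistic appear in the reports.
    stats_functions = {**Experiment.stats_functions, "Median": np.median}
```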

Class methods

classmethod Experiment.from_config(config_script_filename: str | None = None) → Evaluation

Generate a new evaluation from a configuration file.

Parameters:

config_script_filename (str) – Path to the configuration file. If omitted, DEFAULT_CONFIG_SCRIPT_FILENAME is used. Defaults to None

Raises:

RuntimeError – If config_script_filename is an invalid file path or an invalid configuration file
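
A hedged usage sketch, where "config.py" is a hypothetical configuration script path:

```python
from culebra.tools import Experiment

# "config.py" is a hypothetical path; if the argument is omitted,
# DEFAULT_CONFIG_SCRIPT_FILENAME is used instead.
experiment = Experiment.from_config("config.py")
experiment.run()
```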

classmethod Experiment.generate_run_script(config_filename: str | None = None, run_script_filename: str | None = None) → None

Generate a script to run an evaluation.

The parameters for the evaluation are taken from a configuration file.

Parameters:
  • config_filename (str) – Path to the configuration file. Defaults to None

  • run_script_filename (str) – Name of the run script file to be generated. Defaults to None

Raises:
  • TypeError – If config_filename or run_script_filename are not a valid filename

  • ValueError – If the extensions of config_filename or run_script_filename are not valid
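
A hedged sketch of generating the run script; both filenames are hypothetical examples:

```python
from culebra.tools import Experiment

# Hypothetical filenames; invalid extensions raise ValueError (see above).
Experiment.generate_run_script(
    config_filename="config.py",
    run_script_filename="run.py",
)
```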

classmethod Experiment.load(filename: str) → Base

Load a serialized object from a file.

Parameters:

filename (str) – The file name.

Returns:

The loaded object

Raises:
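
A sketch of a serialization round trip, assuming experiment is an already built Experiment and using a hypothetical filename (dump is documented further below):

```python
from culebra.tools import Experiment

# "experiment.gz" is a hypothetical filename.
experiment.dump("experiment.gz")             # serialize to disk
restored = Experiment.load("experiment.gz")  # load it back
```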

Properties

property Experiment.best_representatives: list[list[Solution]] | None

Best representatives found by the trainer.

Return type:

list[list[Solution]]

property Experiment.best_solutions: tuple[HallOfFame] | None

Best solutions found by the trainer.

Returns:

One Hall of Fame for each species

Return type:

tuple[HallOfFame]
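
A minimal inspection sketch, assuming experiment has already been run:

```python
# best_solutions may be None if no results are available yet.
if experiment.best_solutions is not None:
    for species_index, hof in enumerate(experiment.best_solutions):
        # Each hall of fame is iterable and yields the best solutions
        # found for the corresponding species.
        for solution in hof:
            print(species_index, solution)
```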

property Experiment.excel_results_filename: str

Filename used to save the results in Excel format.

Return type:

str

property Experiment.hyperparameters: dict | None

Hyperparameter values used for the evaluation.

Return type:

dict

Setter:

Set the hyperparameter values used for the evaluation.

Parameters:

values (dict) – Hyperparameter values used in this evaluation

Raises:
  • TypeError – If values is not a dictionary

  • ValueError – If the keys in values are not strings

  • ValueError – If any key in values is reserved
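
A hedged sketch of setting the hyperparameters; the keys and values below are arbitrary examples:

```python
# Keys must be strings and must not be reserved, otherwise ValueError is
# raised; a non-dictionary value raises TypeError.
experiment.hyperparameters = {"population_size": 100, "crossover_prob": 0.8}
```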

property Experiment.results: Results | None

Results obtained.

Return type:

Results

property Experiment.results_base_filename: str | None

Results base filename.

Return type:

str

Setter:

Set a new results base filename.

Parameters:

filename (str) – New results base filename. If set to None, _default_results_base_filename is used

Raises:

TypeError – If filename is not a valid file name

property Experiment.serialized_results_filename: str

Filename used to save the serialized results.

Return type:

str

property Experiment.test_fitness_function: FitnessFunction | None

Test fitness function.

Return type:

FitnessFunction

Setter:

Set a new test fitness function.

Parameters:

func (FitnessFunction) – New test fitness function. If set to None, the training fitness function will also be used for testing

Raises:

TypeError – If func is not a valid fitness function

property Experiment.trainer: Trainer

Trainer method.

Return type:

Trainer

Setter:

Set a new trainer method.

Parameters:

value (Trainer) – New trainer

Raises:

TypeError – If value is not a valid trainer

property Experiment.untie_best_fitness_function: FitnessFunction | None

Fitness function to untie the best solutions.

Return type:

FitnessFunction

Setter:

Set a new fitness function to untie the best solutions.

Parameters:

func (FitnessFunction) – New untie fitness function. If set to None and several tied solutions are found by the trainer, the first of them will be returned

Raises:

TypeError – If func is not a valid fitness function

Private properties

property Experiment._default_results_base_filename: str

Default base name for results files.

Returns:

DEFAULT_RESULTS_BASE_FILENAME

Return type:

str

Methods

Experiment.dump(filename: str) → None

Serialize this object and save it to a file.

Parameters:

filename (str) – The file name.

Raises:

Experiment.reset() → None

Reset the results.

Overridden to reset the best solutions and best representatives.

Experiment.run() → None

Execute the evaluation and save the results.
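
A sketch of the typical evaluation flow, assuming experiment was built as in the constructor example above:

```python
experiment.run()

# After running, the results are kept in memory and saved to disk.
results = experiment.results
print(experiment.excel_results_filename)       # results in Excel format
print(experiment.serialized_results_filename)  # serialized results
```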

Private methods

Experiment._add_best(best: Sequence[Solution], fitness_func: FitnessFunction, result_key: str) → None

Add the best solution to the experiment results.

The best solution should have been selected according to the validation fitness. It is evaluated using only the species that compose it, without any other representatives.

Parameters:
  • best (Sequence[Solution]) – The best solution (one per species)

  • fitness_func (FitnessFunction) – Fitness function to evaluate the best solution

  • result_key (str) – Result key

Experiment._add_execution_metric(metric: str, value: Any) → None

Add an execution metric to the experiment results.

Parameters:
  • metric (str) – Name of the metric

  • value (object) – Value of the metric

Experiment._add_feature_metrics() → None

Compute statistics about the frequency of the features in the solutions found.

Experiment._add_fitness(result_key: str) → None

Add the fitness values to the solutions found.

Parameters:

result_key (str) – Result key.

Experiment._add_fitness_stats(result_key: str) → None

Compute statistics on the fitness of the best solutions.

Parameters:

result_key (str) – Result key.

Experiment._add_training_stats() → None

Add the training stats to the experiment results.

Experiment._do_test() → None

Perform the test step.

Test the solutions found by the trainer and append their fitness to the best solutions dataframe.

Experiment._do_training() → None

Perform the training step.

Run the trainer and gather the best solutions and the training stats.

Experiment._execute() → None

Execute the trainer method.

Experiment._find_best_lexicographically() → tuple[Solution]

Find the best solution in the Pareto front.

Since the Pareto front solutions are not directly comparable, they are ordered lexicographically and the best one is selected.

The validation fitness should be used to sort the solutions.

Returns:

The solution (one per species)

Return type:

tuple[Solution]
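
An illustrative sketch of the idea, not culebra's actual implementation: because the fitness value tuples of Pareto front solutions do not dominate each other, they can still be ordered lexicographically (first objective, then the second, and so on). Maximization of both objectives is assumed here.

```python
# Fitness value tuples of three mutually non-dominated solutions
# (illustrative numbers only).
pareto_front = [(0.92, 0.10), (0.90, 0.35), (0.85, 0.60)]

# Python compares tuples lexicographically, so max() picks the solution
# that is best on the first objective, breaking ties with the next one.
best = max(pareto_front)
print(best)  # (0.92, 0.1)
```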

Experiment._is_reserved(name: str) → bool

Check if a hyperparameter name is reserved.

Parameters:

name (str) – Hyperparameter name

Returns:

True if the given hyperparameter name is reserved

Return type:

bool