culebra.tools.Evaluation class

class Evaluation(trainer: Trainer, untie_best_fitness_function: FitnessFunction | None = None, test_fitness_function: FitnessFunction | None = None, results_base_filename: str | None = None, hyperparameters: dict | None = None)

Set up a trainer evaluation.

Parameters:
  • trainer (Trainer) – The trainer method

  • untie_best_fitness_function (FitnessFunction, optional) – The fitness function used to select the best solution from those found by the trainer in case of a tie. If set to None, the training fitness function will be used. Defaults to None.

  • test_fitness_function (FitnessFunction, optional) – The fitness function used to test. If set to None, the training fitness function will be used. Defaults to None.

  • results_base_filename (str, optional) – The base filename used to save the results. If set to None, DEFAULT_RESULTS_BASENAME is used. Defaults to None.

  • hyperparameters (dict, optional) – Hyperparameter values used in this evaluation. Defaults to None.

Raises:
  • TypeError – If trainer is not a valid trainer

  • TypeError – If test_fitness_function is not a valid fitness function

  • TypeError – If results_base_filename is not a valid file name

  • TypeError – If hyperparameters is not a dictionary

  • ValueError – If the keys in hyperparameters are not strings

  • ValueError – If any key in hyperparameters is reserved
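A minimal construction sketch is shown below. Since Evaluation declares the abstract _execute() method (see Private methods), a concrete subclass must be used in practice; MyEvaluation, my_trainer and my_test_function are hypothetical placeholders for such a subclass, a configured Trainer and a FitnessFunction.

    # Hedged sketch: MyEvaluation is a hypothetical concrete subclass of
    # Evaluation; my_trainer and my_test_function are assumed to be a
    # configured Trainer and FitnessFunction, respectively.
    evaluation = MyEvaluation(
        trainer=my_trainer,
        test_fitness_function=my_test_function,
        results_base_filename="my_results",
        hyperparameters={"pop_size": 100, "crossover_prob": 0.8},
    )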

Class attributes

Evaluation.feature_metric_functions = {'Rank': <function Metrics.rank>, 'Relevance': <function Metrics.relevance>}

Metrics calculated for the features in the set of solutions.

Evaluation.stats_functions = {'Avg': <function mean>, 'Max': <function max>, 'Min': <function min>, 'Std': <function std>}

Statistics calculated for the solutions.
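Both mappings are plain class-level dictionaries, so they can in principle be extended before an evaluation is run. A hedged sketch follows; the "Median" entry is illustrative, not a culebra built-in.

    import numpy as np

    # Hedged sketch: register an additional statistic. "Median" is an
    # illustrative key, not part of the default stats_functions.
    Evaluation.stats_functions["Median"] = np.median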

Class methods

classmethod Evaluation.load_pickle(filename: str) → Base

Load a pickled object from a file.

Parameters:

filename (str) – The file name.

Raises:

TypeError – If filename is not a valid file name
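A usage sketch; the filename is illustrative:

    # Hedged sketch: restore a previously pickled evaluation.
    evaluation = Evaluation.load_pickle("my_evaluation.pkl")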
classmethod Evaluation.from_config(config_script_filename: str | None = None) → Evaluation

Generate a new evaluation from a configuration file.

Parameters:

config_script_filename (str, optional) – Path to the configuration file. If set to None, DEFAULT_CONFIG_SCRIPT_FILENAME is used. Defaults to None.

Raises:

RuntimeError – If config_script_filename is an invalid file path or an invalid configuration file
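A usage sketch; "config.py" stands for a valid configuration script:

    # Hedged sketch: build an evaluation from a configuration script.
    # With no argument, DEFAULT_CONFIG_SCRIPT_FILENAME would be used.
    evaluation = Evaluation.from_config("config.py")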

classmethod Evaluation.generate_run_script(config_filename: str | None = None, run_script_filename: str | None = None) → None

Generate a script to run an evaluation.

The parameters for the evaluation are taken from a configuration file.

Parameters:
  • config_filename (str, optional) – Path to the configuration file. If set to None, DEFAULT_CONFIG_SCRIPT_FILENAME is used. Defaults to None.

  • run_script_filename (str, optional) – Path to the run script to be generated. If set to None, a default run script filename is used. Defaults to None.

Raises:
  • TypeError – If config_filename or run_script_filename is not a valid file name

  • ValueError – If the extensions of config_filename or run_script_filename are not valid.
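A usage sketch; both filenames are illustrative, and passing None falls back to the documented defaults:

    # Hedged sketch: generate a run script for the evaluation described
    # in "config.py", writing it to "run.py".
    Evaluation.generate_run_script(
        config_filename="config.py",
        run_script_filename="run.py",
    )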

Properties

property Evaluation.trainer: Trainer

Get and set the trainer method.

Getter:

Return the trainer method

Setter:

Set a new trainer method

Type:

Trainer

Raises:

TypeError – If set to a value which is not a valid trainer

property Evaluation.untie_best_fitness_function: FitnessFunction | None

Get and set the fitness function to untie the best solutions.

Getter:

Return the untie fitness function

Setter:

Set a new untie fitness function. If set to None and several tied solutions are found by the trainer, the first of them will be returned.

Type:

FitnessFunction

Raises:

TypeError – If set to a value which is not a valid fitness function

property Evaluation.test_fitness_function: FitnessFunction | None

Get and set the test fitness function.

Getter:

Return the test fitness function

Setter:

Set a new test fitness function. If set to None, the training fitness function will also be used for testing.

Type:

FitnessFunction

Raises:

TypeError – If set to a value which is not a valid fitness function

property Evaluation.results_base_filename: str | None

Get and set the results base filename.

Getter:

Return the results base filename

Setter:

Set a new results base filename. If set to None, DEFAULT_RESULTS_BASENAME is used.

Raises:

TypeError – If set to an invalid file name

property Evaluation.results_pickle_filename: str

Get the filename used to save the pickled results.

Type:

str

property Evaluation.results_excel_filename: str

Get the filename used to save the results in Excel format.

Type:

str

property Evaluation.hyperparameters: dict | None

Get and set the hyperparameter values used for the evaluation.

Getter:

Return the hyperparameter values

Setter:

Set a new set of hyperparameter values.

Raises:
  • TypeError – If the hyperparameters are not in a dictionary

  • ValueError – If the keys of the dictionary are not strings

  • ValueError – If any key in the dictionary is reserved
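A usage sketch; the key names are illustrative, and any non-reserved string keys are accepted:

    # Hedged sketch: string keys that do not collide with reserved
    # names (see _is_reserved below) are valid.
    evaluation.hyperparameters = {"pop_size": 100, "max_num_iters": 500}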

property Evaluation.results: Dict[str, DataFrame] | None

Get all the results provided.

Type:

Results

Methods

Evaluation.save_pickle(filename: str) → None

Pickle this object and save it to a file.

Parameters:

filename (str) – The file name.

Raises:

TypeError – If filename is not a valid file name
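A round-trip sketch using the results_pickle_filename property documented above:

    # Hedged sketch: persist this evaluation and restore it later.
    evaluation.save_pickle(evaluation.results_pickle_filename)
    restored = Evaluation.load_pickle(evaluation.results_pickle_filename)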
Evaluation.reset() → None

Reset the results.

Evaluation.run() → None

Execute the evaluation and save the results.
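A usage sketch: after run() finishes, the results property (documented above) should hold a dictionary of pandas DataFrames.

    # Hedged sketch: execute the evaluation, then inspect the results.
    evaluation.run()
    if evaluation.results is not None:
        for name, df in evaluation.results.items():
            print(name, df.shape)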

Private methods

abstract Evaluation._execute() → None

Execute the evaluation.

This method must be overridden by subclasses to implement the evaluation.

Evaluation._is_reserved(name: str) → bool

Return True if the given hyperparameter name is reserved.

Parameters:

name (str) – The hyperparameter name