culebra.tools.Evaluation class

class Evaluation(trainer: Trainer, untie_best_fitness_function: FitnessFunction | None = None, test_fitness_function: FitnessFunction | None = None, results_base_filename: str | None = None, hyperparameters: dict | None = None)

Bases: Base

Set a trainer evaluation.

Parameters:
  • trainer (Trainer) – The trainer method

  • untie_best_fitness_function (FitnessFunction) – The fitness function used to select the best solution from those found by the trainer in case of a tie. If omitted, the training fitness function will be used. Defaults to None

  • test_fitness_function (FitnessFunction) – The fitness function used to test. If omitted, the training fitness function will be used. Defaults to None

  • results_base_filename (str) – The base filename to save the results. If omitted, _default_results_base_filename is used. Defaults to None

  • hyperparameters (dict) – Hyperparameter values used in this evaluation. Defaults to None

Raises:
  • TypeError – If trainer is not a valid trainer

  • TypeError – If test_fitness_function is not a valid fitness function

  • TypeError – If results_base_filename is not a valid file name

  • TypeError – If hyperparameters is not a dictionary

  • ValueError – If the keys in hyperparameters are not strings

  • ValueError – If any key in hyperparameters is reserved

Class attributes

Evaluation.feature_metric_functions = {'Rank': <function Metrics.rank>, 'Relevance': <function Metrics.relevance>}

Metrics calculated for the features in the set of solutions.

Evaluation.stats_functions = {'Avg': <function mean>, 'Max': <function max>, 'Min': <function min>, 'Std': <function std>}

Statistics calculated for the solutions.
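The stats_functions table maps statistic names to the functions applied over the solutions' fitness values. A self-contained sketch of that pattern, using stdlib stand-ins for the NumPy functions the real attribute holds:

```python
import statistics

# Stand-in table mirroring Evaluation.stats_functions: the real attribute
# maps names to NumPy functions; stdlib equivalents keep this sketch
# self-contained.
stats_functions = {
    'Avg': statistics.mean,
    'Max': max,
    'Min': min,
    'Std': statistics.pstdev,  # population std, matching numpy.std's default
}

# Hypothetical test-fitness values for a set of solutions
fitness_values = [0.82, 0.75, 0.91, 0.88]

# One statistic per entry, keyed by the statistic's name
stats = {name: func(fitness_values) for name, func in stats_functions.items()}
```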

Class methods

classmethod Evaluation.from_config(config_script_filename: str | None = None) → Evaluation

Generate a new evaluation from a configuration file.

Parameters:

config_script_filename (str) – Path to the configuration file. If omitted, DEFAULT_CONFIG_SCRIPT_FILENAME is used. Defaults to None

Raises:

RuntimeError – If config_script_filename is an invalid file path or an invalid configuration file

classmethod Evaluation.generate_run_script(config_filename: str | None = None, run_script_filename: str | None = None) → None

Generate a script to run an evaluation.

The parameters for the evaluation are taken from a configuration file.

Parameters:
  • config_filename (str) – Path to the configuration file. Defaults to None

  • run_script_filename (str) – Path to the run script to be generated. Defaults to None

Raises:
  • TypeError – If config_filename or run_script_filename is not a valid filename

  • ValueError – If the extensions of config_filename or run_script_filename are not valid

classmethod Evaluation.load(filename: str) → Base

Load a serialized object from a file.

Parameters:

filename (str) – The file name.

Returns:

The loaded object
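The load()/dump() pair performs a serialize-and-restore round trip. A minimal sketch of that contract with plain functions; the assumption that serialization is pickle-based is ours, not stated by the API:

```python
import os
import pickle
import tempfile

# Hypothetical stand-ins for Base.dump()/Base.load(); the real methods
# belong to culebra's Base class and may use a different scheme.
def dump(obj, filename):
    """Serialize obj and save it to filename."""
    with open(filename, 'wb') as f:
        pickle.dump(obj, f)

def load(filename):
    """Load a serialized object back from filename."""
    with open(filename, 'rb') as f:
        return pickle.load(f)

# Round trip through a temporary file
path = os.path.join(tempfile.mkdtemp(), 'evaluation.pkl')
dump({'results': [1, 2, 3]}, path)
restored = load(path)
```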


Properties

property Evaluation.excel_results_filename: str

Filename used to save the results in Excel format.

Return type:

str

property Evaluation.hyperparameters: dict | None

Hyperparameter values used for the evaluation.

Return type:

dict

Setter:

Set the hyperparameter values used for the evaluation.

Parameters:

values (dict) – Hyperparameter values used in this evaluation

Raises:
  • TypeError – If values is not a dictionary

  • ValueError – If the keys in values are not strings

  • ValueError – If any key in values is reserved
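The setter's documented checks (dictionary type, string keys, no reserved names) can be sketched as a standalone validator. The set of reserved names below is hypothetical; the real class defines its own:

```python
# Hypothetical reserved names; the real reserved set belongs to culebra.
RESERVED_NAMES = {'Avg', 'Max', 'Min', 'Std', 'Rank', 'Relevance'}

def check_hyperparameters(values):
    """Validate hyperparameters the way the setter's documented checks imply."""
    if not isinstance(values, dict):
        raise TypeError("values must be a dictionary")
    if not all(isinstance(key, str) for key in values):
        raise ValueError("all hyperparameter keys must be strings")
    reserved = RESERVED_NAMES & values.keys()
    if reserved:
        raise ValueError(f"reserved hyperparameter names: {sorted(reserved)}")
    return values
```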

property Evaluation.results: Results | None

Results obtained.

Return type:

Results

property Evaluation.results_base_filename: str | None

Results base filename.

Return type:

str

Setter:

Set a new results base filename.

Parameters:

filename (str) – New results base filename. If set to None, _default_results_base_filename is used

Raises:

TypeError – If filename is not a valid file name

property Evaluation.serialized_results_filename: str

Filename used to save the serialized results.

Return type:

str

property Evaluation.test_fitness_function: FitnessFunction | None

Test fitness function.

Return type:

FitnessFunction

Setter:

Set a new test fitness function.

Parameters:

func (FitnessFunction) – New test fitness function. If set to None, the training fitness function will also be used for testing

Raises:

TypeError – If func is not a valid fitness function

property Evaluation.trainer: Trainer

Trainer method.

Return type:

Trainer

Setter:

Set a new trainer method.

Parameters:

value (Trainer) – New trainer

Raises:

TypeError – If value is not a valid trainer

property Evaluation.untie_best_fitness_function: FitnessFunction | None

Fitness function to untie the best solutions.

Return type:

FitnessFunction

Setter:

Set a new fitness function to untie the best solutions.

Parameters:

func (FitnessFunction) – New untie fitness function. If set to None and several tied solutions are found by the trainer, the first of them will be returned

Raises:

TypeError – If func is not a valid fitness function
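The documented tie-breaking rule can be sketched as follows: among solutions tied on training fitness, an optional untie fitness function picks the winner; without one, the first tied solution is returned. All names here are illustrative stand-ins, not culebra API:

```python
def select_best(solutions, fitness, untie_fitness=None):
    """Pick the best solution, breaking ties with untie_fitness if given."""
    best = max(fitness(sol) for sol in solutions)
    tied = [sol for sol in solutions if fitness(sol) == best]
    if untie_fitness is None or len(tied) == 1:
        return tied[0]  # no untie function: keep the first tied solution
    return max(tied, key=untie_fitness)

# 'a' and 'b' tie on training fitness; the untie function prefers 'b'
solutions = ['a', 'b', 'c']
training_fitness = {'a': 1.0, 'b': 1.0, 'c': 0.5}.get
untie_fitness = {'a': 2.0, 'b': 5.0}.get

first_tied = select_best(solutions, training_fitness)
untied = select_best(solutions, training_fitness, untie_fitness)
```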

Private properties

property Evaluation._default_results_base_filename: str

Default base name for results files.

Returns:

DEFAULT_RESULTS_BASE_FILENAME

Return type:

str

Methods

Evaluation.dump(filename: str) → None

Serialize this object and save it to a file.

Parameters:

filename (str) – The file name.

Evaluation.reset() → None

Reset the results.

Evaluation.run() → None

Execute the evaluation and save the results.

Private methods

abstract Evaluation._execute() → None

Execute the evaluation.

This method must be overridden by subclasses to implement the evaluation.
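The run()/reset()/_execute() contract can be sketched with a minimal stand-in: run() drives the evaluation through the subclass-supplied _execute(), and reset() discards the results. Saving the results, as the real run() also does, is omitted here; class and attribute names are illustrative only:

```python
from abc import ABC, abstractmethod

class EvaluationSketch(ABC):
    """Stand-in for the abstract evaluation contract."""

    def __init__(self):
        self.results = None

    @abstractmethod
    def _execute(self):
        """Produce the results; subclasses must override this."""

    def run(self):
        # The real run() would also save the results after executing
        self._execute()

    def reset(self):
        self.results = None

class DummyEvaluation(EvaluationSketch):
    def _execute(self):
        self.results = {'best_fitness': 0.9}

evaluation = DummyEvaluation()
evaluation.run()
```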

Evaluation._is_reserved(name: str) → bool

Check if a hyperparameter name is reserved.

Parameters:

name (str) – Hyperparameter name

Returns:

True if the given hyperparameter name is reserved

Return type:

bool