culebra.trainer.aco.abc.MaxPheromonePACO class

class MaxPheromonePACO(solution_cls: type[Ant], species: Species, fitness_function: FitnessFunction, initial_pheromone: float | Sequence[float, ...], max_pheromone: float | Sequence[float, ...], heuristic: ndarray[float] | Sequence[ndarray[float], ...] | None = None, pheromone_influence: float | Sequence[float, ...] | None = None, heuristic_influence: float | Sequence[float, ...] | None = None, exploitation_prob: float | None = None, max_num_iters: int | None = None, custom_termination_func: Callable[[MaxPheromonePACO], bool] | None = None, col_size: int | None = None, pop_size: int | None = None, checkpoint_activation: bool | None = None, checkpoint_freq: int | None = None, checkpoint_filename: str | None = None, verbosity: bool | None = None, random_seed: int | None = None)

Bases: PACO

Create a new population-based ACO trainer.

Parameters:
Raises:
  • TypeError – If any argument is not of the appropriate type

  • ValueError – If any argument has an incorrect value
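
As a rough usage sketch (MaxPheromonePACO declares several abstract properties and methods, so in practice a concrete subclass is instantiated; MyPACO, ant_cls, my_species and fitness_func below are hypothetical placeholders for previously built culebra objects):

# Hypothetical sketch: MyPACO stands for a concrete MaxPheromonePACO subclass.
trainer = MyPACO(
    solution_cls=ant_cls,           # an Ant subclass
    species=my_species,             # the Species constraining the solutions
    fitness_function=fitness_func,  # the training FitnessFunction
    initial_pheromone=1.0,          # scalar, replicated for every pheromone matrix
    max_pheromone=5.0,              # must be greater than the initial pheromone
    col_size=25,                    # ants generated each iteration
    pop_size=10,                    # ants kept in the population
    max_num_iters=200,
    random_seed=42
)
trainer.train()
best = trainer.best_solutions()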

Class attributes

MaxPheromonePACO.objective_stats = {'Avg': <function mean>, 'Max': <function max>, 'Min': <function min>, 'Std': <function std>}

Statistics calculated for each objective.

MaxPheromonePACO.stats_names = ('Iter', 'NEvals')

Statistics calculated each iteration.

Class methods

classmethod MaxPheromonePACO.load(filename: str) Base

Load a serialized object from a file.

Parameters:

filename (str) – The file name.

Returns:

The loaded object

Raises:

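A brief round-trip illustration (the trainer variable and file name are illustrative):

# Serialize a configured or trained trainer and restore it later.
trainer.dump("paco_trainer.gz")                      # see dump() below
restored = MaxPheromonePACO.load("paco_trainer.gz")  # returns the deserialized object
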
Properties

property MaxPheromonePACO.checkpoint_activation: bool

Checkpointing activation.

Returns:

True if checkpointing is active, or False otherwise

Return type:

bool

Setter:

Modify the checkpointing activation

Parameters:

value (bool) – New value for the checkpoint activation. If set to None, _default_checkpoint_activation is chosen

Raises:

TypeError – If value is not a boolean value
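
For example (values and file name are illustrative), checkpointing can be configured through this and the following two properties before calling train():

trainer.checkpoint_activation = True
trainer.checkpoint_freq = 10                   # save the state every 10 iterations
trainer.checkpoint_filename = "paco_state.gz"  # illustrative file name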

property MaxPheromonePACO.checkpoint_filename: str

Checkpoint file path.

Return type:

str

Setter:

Modify the checkpoint file path

Parameters:

value (str) – New value for the checkpoint file path. If set to None, _default_checkpoint_filename is chosen

Raises:
property MaxPheromonePACO.checkpoint_freq: int

Checkpoint frequency.

Return type:

int

Setter:

Modify the checkpoint frequency

Parameters:

value (int) – New value for the checkpoint frequency. If set to None, _default_checkpoint_freq is chosen

Raises:
property MaxPheromonePACO.choice_info: ndarray[float] | None

Choice information for all the graph’s arcs.

The choice information is generated from both the pheromone and the heuristic matrices, modified by other parameters (depending on the ACO approach), and is used to obtain the probability of choosing the next feasible arc from the current node.

Returns:

The choice information or None if the search process has not begun

Return type:

ndarray[float]
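
The exact combination depends on the concrete ACO variant, but a common formulation (a NumPy sketch under that assumption, not necessarily the one used by every subclass) weights each arc's pheromone by pheromone_influence (\({\alpha}\)) and its heuristic value by heuristic_influence (\({\beta}\)):

import numpy as np

# Illustrative 4-node problem with one pheromone and one heuristic matrix.
pheromone = np.full((4, 4), 1.0)
heuristic = np.array([
    [0.0, 2.0, 1.0, 4.0],
    [2.0, 0.0, 3.0, 1.0],
    [1.0, 3.0, 0.0, 2.0],
    [4.0, 1.0, 2.0, 0.0],
])
alpha, beta = 1.0, 2.0  # pheromone_influence and heuristic_influence

# AS-style choice information: tau^alpha * eta^beta for every arc.
choice_info = pheromone**alpha * heuristic**beta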

property MaxPheromonePACO.col: list[Ant] | None

Colony.

Returns:

The colony or None if it has not been generated yet

Return type:

list[Ant]

property MaxPheromonePACO.col_size: int

Colony size.

Return type:

int

Setter:

Set a new value for the colony size

Parameters:

size (int) – The new colony size. If set to None, _default_col_size is chosen

Raises:
property MaxPheromonePACO.container: Trainer | None

Container of this trainer.

The trainer container is only used by distributed trainers. For the rest of the trainers it defaults to None.

Return type:

Trainer

Setter:

Set a new value for the container of this trainer

Parameters:

value (Trainer) – New value for the container or None

Raises:

TypeError – If value is not a valid trainer

property MaxPheromonePACO.current_iter: int | None

Current iteration.

Returns:

The current iteration or None if the search has not been done yet

Return type:

int

property MaxPheromonePACO.custom_termination_func: Callable[[Trainer], bool] | None

Custom termination criterion.

Although the trainer will always stop once max_num_iters iterations have been run, a custom termination criterion can be set to detect convergence and stop the trainer earlier. This custom termination criterion must be a function that receives the trainer as its only argument and returns a boolean value: True if the search should terminate, or False otherwise.

If more than one argument is needed to define the termination condition, functools.partial() can be used:

from functools import partial

def my_crit(trainer, max_iters):
    return trainer.current_iter >= max_iters

trainer.custom_termination_func = partial(my_crit, max_iters=10)
Setter:

Set a new custom termination criterion

Parameters:

func (Callable) – The new custom termination criterion. If set to None, the default termination criterion is used

Raises:

TypeError – If func is not callable

property MaxPheromonePACO.exploitation_prob: float

Exploitation probability (\({q_0}\)).

Return type:

float

Setter:

Set a new value for the exploitation probability

Parameters:

prob (float) – The new probability. If set to None, _default_exploitation_prob is chosen

Raises:
property MaxPheromonePACO.fitness_function: FitnessFunction

Training fitness function.

Return type:

FitnessFunction

Setter:

Set a new fitness function

Parameters:

func (FitnessFunction) – The new training fitness function

Raises:

TypeError – If func is not a valid fitness function

property MaxPheromonePACO.heuristic: tuple[ndarray[float], ...]

Heuristic matrices.

Return type:

tuple[ndarray[float]]

Setter:

Set new heuristic matrices

Parameters:

values (ndarray[float] | Sequence[ndarray[float], ...]) – The new heuristic matrices. Either a single matrix or a sequence of matrices is allowed. If a single matrix is provided, it will be replicated for all the heuristic matrices. If set to None, _default_heuristic is chosen

Raises:
property MaxPheromonePACO.heuristic_influence: tuple[float, ...]

Relative influence of heuristic (\({\beta}\)).

Returns:

One value for each heuristic matrix

Return type:

tuple[float]

Setter:

Set new values for the relative influence of each heuristic matrix

Parameters:

values (float | Sequence[float]) – New value for the relative influence of each heuristic matrix. Either a scalar value or a sequence of values is allowed. If a scalar value is provided, it will be used for all the heuristic matrices. If set to None, _default_heuristic_influence is chosen

Raises:
abstract property MaxPheromonePACO.heuristic_shapes: tuple[tuple[int, int], ...]

Shape of the heuristic matrices.

This property must be overridden by subclasses to return a correct value.

Return type:

tuple[tuple[int]]

Raises:

NotImplementedError – If has not been overridden

property MaxPheromonePACO.index: int

Trainer index.

The trainer index is only used by distributed trainers. For the rest of trainers _default_index is used.

Return type:

int

Setter:

Set a new value for the trainer index.

Parameters:

value (int) – New value for the trainer index. If set to None, _default_index is chosen

Raises:
property MaxPheromonePACO.initial_pheromone: tuple[float, ...]

Initial value for each pheromone matrix.

Returns:

One initial value for each pheromone matrix

Return type:

tuple[float]

Setter:

Set the initial value for each pheromone matrix

Parameters:

values (float | Sequence[float]) – New initial value for each pheromone matrix. Either a scalar value or a sequence of values is allowed. If a scalar value is provided, it will be used for all the pheromone matrices

Raises:
property MaxPheromonePACO.logbook: Logbook | None

Trainer logbook.

Returns:

A logbook with the statistics of the search or None if the search has not been done yet

Return type:

Logbook

property MaxPheromonePACO.max_num_iters: int

Maximum number of iterations.

Return type:

int

Setter:

Set a new value for the maximum number of iterations

Parameters:

value (int) – The new maximum number of iterations. If set to None, the default maximum number of iterations, _default_max_num_iters, is chosen

Raises:
property MaxPheromonePACO.max_pheromone: tuple[float, ...]

Maximum value for each pheromone matrix.

Return type:

tuple[float]

Setter:

Set the maximum value for each pheromone matrix

Parameters:

values (float | Sequence[float]) – New maximum value for each pheromone matrix. Either a scalar value or a sequence of values is allowed. If a scalar value is provided, it will be used for all the num_pheromone_matrices pheromone matrices.

Raises:
  • TypeError – If values is neither a float nor a Sequence of float values

  • ValueError – If any element in values is negative or zero

  • ValueError – If any element in values is lower than or equal to its corresponding initial pheromone value

  • ValueError – If values is a sequence and its length is different from num_pheromone_matrices

property MaxPheromonePACO.num_evals: int | None

Number of evaluations performed while training.

Returns:

The number of evaluations or None if the search has not been done yet

Return type:

int

abstract property MaxPheromonePACO.num_heuristic_matrices: int

Number of heuristic matrices used by this trainer.

This property must be overridden by subclasses to return a correct value.

Return type:

int

Raises:

NotImplementedError – If has not been overridden

abstract property MaxPheromonePACO.num_pheromone_matrices: int

Number of pheromone matrices used by this trainer.

This property must be overridden by subclasses to return a correct value.

Return type:

int

Raises:

NotImplementedError – If has not been overridden

property MaxPheromonePACO.pheromone: list[ndarray[float], ...] | None

Pheromone matrices.

Returns:

The pheromone matrices or None if the search process has not begun

Return type:

list[ndarray[float]]

property MaxPheromonePACO.pheromone_influence: tuple[float, ...]

Relative influence of pheromone (\({\alpha}\)).

Returns:

One value for each pheromone matrix

Return type:

tuple[float]

Getter:

Return the relative influence of each pheromone matrix.

Setter:

Set new values for the relative influence of each pheromone matrix

Parameters:

values (float | Sequence[float]) – New value for the relative influence of each pheromone matrix. Either a scalar value or a sequence of values is allowed. If a scalar value is provided, it will be used for all the pheromone matrices. If set to None, _default_pheromone_influence is chosen

Raises:
abstract property MaxPheromonePACO.pheromone_shapes: tuple[tuple[int, int], ...]

Shape of the pheromone matrices.

This property must be overridden by subclasses to return a correct value.

Return type:

tuple[tuple[int]]

Raises:

NotImplementedError – If has not been overridden

property MaxPheromonePACO.pop: list[Ant] | None

Population.

Returns:

The population or None if it has not been generated

Return type:

list[Ant]

property MaxPheromonePACO.pop_size: int

Population size.

Return type:

int

Setter:

Set the population size

Parameters:

size (int) – The new population size. If set to None, _default_pop_size is chosen

Raises:
property MaxPheromonePACO.random_seed: int

Random seed used by this trainer.

Return type:

int

Setter:

Set a new value for the random seed

Parameters:

value (int) – New value

property MaxPheromonePACO.representatives: list[list[Solution | None]] | None

Representatives of the other species.

Only used by cooperative trainers. If the trainer does not use representatives, None is returned.

Return type:

list[list[Solution]]

property MaxPheromonePACO.runtime: float | None

Training runtime.

Returns:

The training runtime or None if the search has not been done yet.

Return type:

float

property MaxPheromonePACO.solution_cls: type[Solution]

Solution class.

Return type:

type[Solution]

Setter:

Set a new solution class

Parameters:

cls (type[Solution]) – The new class

Raises:

TypeError – If cls is not a valid solution class

property MaxPheromonePACO.species: Species

Species.

Return type:

Species

Setter:

Set a new species

Parameters:

value (Species) – The new species

Raises:

TypeError – If value is not a valid species

property MaxPheromonePACO.verbosity: bool

Verbosity of this trainer.

Return type:

bool

Setter:

Set a new value for the verbosity

Parameters:

value (bool) – The verbosity. If set to None, _default_verbosity is chosen

Raises:

TypeError – If value is not boolean

Private properties

property MaxPheromonePACO._default_checkpoint_activation: bool

Default checkpointing activation.

Returns:

DEFAULT_CHECKPOINT_ACTIVATION

Return type:

bool

property MaxPheromonePACO._default_checkpoint_filename: str

Default checkpointing file name.

Returns:

DEFAULT_CHECKPOINT_FILENAME

Return type:

str

property MaxPheromonePACO._default_checkpoint_freq: int

Default checkpointing frequency.

Returns:

DEFAULT_CHECKPOINT_FREQ

Return type:

int

abstract property MaxPheromonePACO._default_col_size: int

Default colony size.

This property must be overridden by subclasses to return a correct default value.

Return type:

int

Raises:

NotImplementedError – If has not been overridden

property MaxPheromonePACO._default_exploitation_prob: float

Default exploitation probability (\({q_0}\)).

Returns:

DEFAULT_EXPLOITATION_PROB

Return type:

float

abstract property MaxPheromonePACO._default_heuristic: tuple[ndarray[float], ...]

Default heuristic matrices.

This property must be overridden by subclasses to return a correct value.

Return type:

tuple[ndarray[float]]

Raises:

NotImplementedError – If has not been overridden

property MaxPheromonePACO._default_heuristic_influence: tuple[float, ...]

Default relative influence of heuristic (\({\beta}\)).

Returns:

The DEFAULT_HEURISTIC_INFLUENCE for each heuristic matrix

Return type:

tuple[float]

property MaxPheromonePACO._default_index: int

Default index.

Returns:

DEFAULT_INDEX

Return type:

int

property MaxPheromonePACO._default_max_num_iters: int

Default maximum number of iterations.

Returns:

DEFAULT_MAX_NUM_ITERS

Return type:

int

property MaxPheromonePACO._default_pheromone_influence: tuple[float, ...]

Default relative influence of pheromone (\({\alpha}\)).

Returns:

The DEFAULT_PHEROMONE_INFLUENCE for each pheromone matrix

Return type:

tuple[float]

property MaxPheromonePACO._default_pop_size: int

Default population size.

Returns:

col_size

Return type:

int

property MaxPheromonePACO._default_verbosity: bool

Default verbosity.

Returns:

DEFAULT_VERBOSITY

Return type:

bool

Methods

MaxPheromonePACO.best_representatives() list[list[Solution]] | None

Return a list of representatives from each species.

Only used for cooperative trainers.

Returns:

A list of representatives lists if the trainer is cooperative or None in other cases.

Return type:

list[list[Solution]]

MaxPheromonePACO.best_solutions() tuple[HallOfFame]

Get the best solutions found for each species.

Returns:

One Hall of Fame for each species

Return type:

tuple[HallOfFame]
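
For instance (a sketch assuming a trained trainer and a single species), the best ant found can be read from the first hall of fame:

hofs = trainer.best_solutions()  # one HallOfFame per species
best_ant = hofs[0][0]            # best solution of the first (only) species
print(best_ant.fitness)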

MaxPheromonePACO.dump(filename: str) None

Serialize this object and save it to a file.

Parameters:

filename (str) – The file name.

Raises:
MaxPheromonePACO.evaluate(sol: Solution, fitness_func: FitnessFunction | None = None, index: int | None = None, representatives: Sequence[Sequence[Solution | None]] | None = None) None

Evaluate one solution.

Its fitness will be modified according to the fitness function results. Besides, if called during training, the number of evaluations will also be updated.

Parameters:
  • sol (Solution) – The solution

  • fitness_func (FitnessFunction) – The fitness function. If omitted, the default training fitness function (fitness_function) is used

  • index (int) – Index where sol should be inserted in the representatives sequence to form a complete solution for the problem. If omitted, the trainer's index is used

  • representatives (Sequence[Sequence[Solution]]) – Sequence of representatives of other species or None (if no representatives are needed to evaluate sol). If omitted, the current value of representatives is used

MaxPheromonePACO.reset() None

Reset the trainer.

Delete the state of the trainer (with _reset_state()) and also all the internal data structures needed to perform the search (with _reset_internals()).

This method should be invoked each time a hyperparameter is modified.

MaxPheromonePACO.test(best_found: Sequence[HallOfFame], fitness_func: FitnessFunction | None = None, representatives: Sequence[Sequence[Solution]] | None = None) None

Apply the test fitness function to the solutions found.

Update the solutions in best_found with their test fitness.

Parameters:
Raises:
  • TypeError – If any parameter has a wrong type

  • ValueError – If any parameter has an invalid value.

MaxPheromonePACO.train(state_proxy: DictProxy | None = None) None

Perform the training process.

Parameters:

state_proxy (DictProxy) – Dictionary proxy used to copy the output state of the trainer procedure. Only used if train() is executed within a multiprocessing.Process. Defaults to None
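
A hedged sketch of running train() inside a multiprocessing.Process and collecting the final state through a manager dictionary proxy (the trainer variable is assumed to be already configured):

import multiprocessing as mp

if __name__ == "__main__":
    manager = mp.Manager()
    state_proxy = manager.dict()  # DictProxy handed to train()
    process = mp.Process(target=trainer.train, args=(state_proxy,))
    process.start()
    process.join()
    # state_proxy now holds the output state of the trainer procedure.
    print(sorted(state_proxy.keys()))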

Private methods

MaxPheromonePACO._ant_choice_info(ant: Ant) ndarray[float]

Return the choice info to obtain the next node the ant will visit.

All the previously visited nodes are discarded. Subclasses should override this method if the Species constraining the solutions of the problem supports node banning.

Parameters:

ant (Ant) – The ant

Return type:

ndarray[float]
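
Conceptually (a NumPy sketch, not the actual implementation), discarding the visited nodes amounts to zeroing their entries in the choice information row of the ant's current node:

import numpy as np

choice_info = np.random.default_rng(0).random((5, 5))  # illustrative matrix
current_node = 2
visited = [0, 2]  # nodes already in the ant's path

# Keep only the feasible arcs leaving the current node.
ant_choice_info = choice_info[current_node].copy()
ant_choice_info[visited] = 0.0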

abstract MaxPheromonePACO._calculate_choice_info() None

Calculate the choice information.

The choice information is generated from both the pheromone and the heuristic matrices, modified by other parameters (depending on the ACO approach), and is used to obtain the probability of choosing the next feasible arc from the current node.

This method should be overridden by subclasses.

Raises:

NotImplementedError – If has not been overridden

MaxPheromonePACO._default_termination_func() bool

Default termination criterion.

Returns:

True if max_num_iters iterations have been run

Return type:

bool

MaxPheromonePACO._deposit_pheromone(ants: Sequence[Ant], weight: float = 1.0) None

Make some ants deposit weighted pheromone.

This method must be overridden by subclasses to take into account the correct number and shape of the pheromone matrices.

Parameters:
Raises:

NotImplementedError – If has not been overridden

MaxPheromonePACO._do_iteration() None

Implement an iteration of the search process.

MaxPheromonePACO._do_iteration_stats() None

Perform the iteration stats.

MaxPheromonePACO._finish_iteration() None

Finish an iteration.

Finish the iteration metrics (number of evaluations, execution time) after each iteration is run.

MaxPheromonePACO._finish_search() None

Finish the search process.

This method is called after the search has finished. It can be overridden to perform any treatment of the solutions found.

MaxPheromonePACO._generate_ant() Ant

Generate a new ant.

The ant makes its path and gets evaluated.

Returns:

The new ant

Return type:

Ant
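
An illustrative outline of the construction loop (the make_empty_ant helper and the ant.append interface are simplifying assumptions):

def generate_ant_sketch(trainer):
    # Build a path node by node until no feasible node remains, then evaluate.
    ant = make_empty_ant(trainer)        # hypothetical helper
    node = trainer._next_choice(ant)
    while node is not None:
        ant.append(node)                 # the ant extends its path
        node = trainer._next_choice(ant)
    trainer.evaluate(ant)                # the ant gets evaluated
    return ant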

MaxPheromonePACO._generate_col() None

Fill the colony with evaluated ants.

MaxPheromonePACO._get_state() dict[str, Any]

Return the state of this trainer.

Overridden to add the current population to the trainer’s state.

Return type:

dict

MaxPheromonePACO._init_internals() None

Set up the trainer internal data structures to start searching.

Create all the internal objects, functions and data structures needed to run the search process. For the PACO class, the pheromone matrices are created. Subclasses which need more objects or data structures should override this method.

MaxPheromonePACO._init_pheromone() None

Init the pheromone matrix(ces) according to the initial value(s).

MaxPheromonePACO._init_representatives() None

Init the representatives of the other species.

Only used for cooperative approaches, which need representatives of all the species to form a complete solution for the problem. Cooperative subclasses of the Trainer class should override this method to get the representatives of the other species initialized.

MaxPheromonePACO._init_search() None

Init the search process.

Initialize the state of the trainer (with _init_state()) and all the internal data structures needed (with _init_internals()) to perform the search.

MaxPheromonePACO._init_state() None

Init the trainer state.

If there is any checkpoint file, the state is initialized from it with the _load_state() method. Otherwise a new initial state is generated with the _new_state() method.

MaxPheromonePACO._load_state() None

Load the state of the last checkpoint.

Raises:

Exception – If the checkpoint file can’t be loaded

MaxPheromonePACO._new_state() None

Generate a new trainer state.

Overridden to create an empty population.

MaxPheromonePACO._next_choice(ant: Ant) int | None

Choose the next node for an ant.

The choice is made from the feasible neighborhood of the current node, which is composed of those nodes neither discarded nor visited yet by the ant and connected to its current node.

The best possible node is selected with probability exploitation_prob. In case the best node is not chosen, the next node is selected probabilistically according to the choice_info matrix.

Parameters:

ant (Ant) – The ant

Returns:

The index of the chosen node or None if there isn’t any feasible node

Return type:

int
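
A NumPy sketch of the selection rule described above, operating on a choice information row whose infeasible nodes have already been zeroed:

import numpy as np

rng = np.random.default_rng(42)
choice_info_row = np.array([0.0, 2.0, 0.5, 0.0, 1.5])  # infeasible nodes already zeroed
exploitation_prob = 0.9  # q0

if not choice_info_row.any():
    next_node = None  # no feasible node remains
elif rng.random() < exploitation_prob:
    next_node = int(np.argmax(choice_info_row))       # exploit the best arc
else:
    probs = choice_info_row / choice_info_row.sum()
    next_node = int(rng.choice(len(probs), p=probs))  # biased exploration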

MaxPheromonePACO._pheromone_amount(ant: Ant) tuple[float, ...]

Return the amount of pheromone to be deposited by an ant.

All the ants deposit/remove the same amount of pheromone, which is obtained as (max_pheromone - initial_pheromone) / pop_size.

Parameters:

ant (Ant) – The ant

Returns:

The amount of pheromone to be deposited for each objective

Return type:

tuple[float]
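
A quick numeric illustration of the formula above, assuming a single pheromone matrix and objective:

initial_pheromone = 1.0
max_pheromone = 5.0
pop_size = 10

# Every ant in the population deposits (or removes) the same share of pheromone.
pheromone_amount = (max_pheromone - initial_pheromone) / pop_size  # 0.4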

MaxPheromonePACO._postprocess_iteration() None

Postprocess after doing the iteration.

Subclasses should override this method to perform any post-processing after an iteration.

MaxPheromonePACO._preprocess_iteration() None

Preprocess before doing the iteration.

Subclasses should override this method to perform any pre-processing before an iteration.

MaxPheromonePACO._reset_internals() None

Reset the internal structures of the trainer.

Overridden to reset the pheromone matrices. If subclasses override the _init_internals() method to add any new internal object, this method should also be overridden to reset all the internal objects of the trainer.

MaxPheromonePACO._reset_state() None

Reset the trainer state.

Overridden to reset the population.

MaxPheromonePACO._save_state() None

Save the state at a new checkpoint.

Raises:

Exception – If the checkpoint file can’t be written

MaxPheromonePACO._search() None

Apply the search algorithm.

Execute the trainer until the termination condition is met. Each iteration comprises the pre-processing, iteration and post-processing steps provided by the private iteration methods described in this section.
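
A rough outline built from those hooks (the exact call order inside the real method is an assumption):

def search_sketch(trainer):
    # Iterate until the default or the custom termination criterion is met.
    while not trainer._termination_criterion():
        trainer._start_iteration()       # reset metrics, empty colony, choice info
        trainer._preprocess_iteration()
        trainer._do_iteration()          # implement one iteration of the search
        trainer._postprocess_iteration()
        trainer._finish_iteration()      # close the iteration metrics
        trainer._do_iteration_stats()    # record the iteration statistics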

MaxPheromonePACO._set_cooperative_fitness(sol: Solution, fitness_trials_values: Sequence[tuple[float]]) None

Estimate a solution fitness from multiple evaluation trials.

The fitness is estimated as the average of the fitness trial values. Trainers requiring a different estimation should override this method.

Parameters:
  • sol (Solution) – The solution

  • fitness_trials_values (Sequence[tuple[float]]) – Sequence of fitness trials values. Each trial should be obtained with a different context in a cooperative trainer approach.

MaxPheromonePACO._set_state(state: dict[str, Any]) None

Set the state of this trainer.

Overridden to add the current population to the trainer’s state.

Parameters:

state (dict) – The last loaded state

MaxPheromonePACO._start_iteration() None

Start an iteration.

Prepare the iteration metrics (number of evaluations, execution time) before each iteration is run and create an empty ant colony. Overridden to calculate the choice information before executing the next iteration.

MaxPheromonePACO._termination_criterion() bool

Control the search termination.

Returns:

True if either the default termination criterion or a custom termination criterion is met. The default termination criterion is implemented by the _default_termination_func() method. A custom termination criterion can also be set through the custom_termination_func property.

Return type:

bool

abstract MaxPheromonePACO._update_pheromone() None

Update the pheromone trails.

Raises:

NotImplementedError – If has not been overridden

abstract MaxPheromonePACO._update_pop() None

Update the population.

This method should be overridden by subclasses.

Raises:

NotImplementedError – If has not been overridden