
Repulsing Hill Climbing

Introduction

The repulsing hill climbing optimization algorithm inherits from the regular hill climbing algorithm and adds a way to escape local optima: it temporarily increases the step-size for one iteration to jump away from its current region.

Example

from hyperactive import Hyperactive
from hyperactive.optimizers import RepulsingHillClimbingOptimizer

...

optimizer = RepulsingHillClimbingOptimizer(repulsion_factor=3)

hyper = Hyperactive()
hyper.add_search(model, search_space, n_iter=50, optimizer=optimizer)
hyper.run()
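
The example above leaves the objective function and search space open. The following minimal sketch shows what they could look like; the toy objective model and the search_space dictionary are illustrative assumptions, not part of the snippet above.

import numpy as np

from hyperactive import Hyperactive
from hyperactive.optimizers import RepulsingHillClimbingOptimizer

# illustrative objective: Hyperactive maximizes the returned score
def model(opt):
    return -(opt["x"] ** 2)

# illustrative search space: parameter names mapped to lists of candidate values
search_space = {"x": list(np.arange(-10, 10, 0.1))}

optimizer = RepulsingHillClimbingOptimizer(repulsion_factor=3)

hyper = Hyperactive()
hyper.add_search(model, search_space, n_iter=50, optimizer=optimizer)
hyper.run()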

About the implementation

Similar to other hill-climbing-based algorithms, the repulsing hill climbing inherits its methods from the regular hill climbing and adds functionality to escape local optima. The repulsing hill climbing temporarily increases epsilon by multiplying it with the repulsion_factor for the next iteration.

\[ \sigma = \text{dim-size} \cdot \epsilon \cdot \text{repulsion\_factor} \]

This way the algorithm jumps away from its current position to explore other regions of the search space. The repulsing hill climbing therefore differs from stochastic hill climbing and simulated annealing in that it always activates its escape mechanism instead of activating it with a certain probability.
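
The following minimal sketch (not the library's internal code) illustrates the idea: the standard deviation of the sampling distribution is the product of the dimension size and epsilon, and it is multiplied by the repulsion_factor whenever the last iteration did not find a better position. The function name and signature are assumptions for illustration.

import numpy as np

def sample_next_position(current_pos, dim_sizes, epsilon, repulsion_factor, stuck):
    # effective step-size per dimension; temporarily enlarged after an iteration
    # that did not improve the score
    sigma = dim_sizes * epsilon * (repulsion_factor if stuck else 1.0)
    # draw the next candidate position from a normal distribution around the current one
    return np.random.normal(loc=current_pos, scale=sigma)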

Parameters

epsilon

The step-size of the hill climbing algorithm. Increasing epsilon also increases the average step-size, because it is proportional to the standard deviation of the distribution the hill-climbing-based algorithm samples from. If epsilon is too large, the newly selected positions will be at the edge of the search space. If its value is very low, the algorithm might not find new positions.

  • type: float
  • default: 0.03
  • typical range: 0.01 ... 0.3

distribution

The mathematical distribution the algorithm draws samples from. All available distributions are taken from the numpy package; a sketch of this mapping follows the parameter details below.

  • type: string
  • default: "normal"
  • possible values: "normal", "laplace", "logistic", "gumbel"
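
As an illustration, each distribution name corresponds to a sampling function in numpy's random module. The mapping below is an assumption for clarity, not the library's source code.

import numpy as np

# assumed mapping from the distribution parameter to numpy sampling functions
distributions = {
    "normal": np.random.normal,
    "laplace": np.random.laplace,
    "logistic": np.random.logistic,
    "gumbel": np.random.gumbel,
}

# each sampler takes a location (current position) and a scale (step-size)
step = distributions["laplace"](loc=0.0, scale=0.03)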

n_neighbours

The number of positions the algorithm explores from its current position before setting its current position to the best of those neighbour positions. If n_neighbours is large, the hill-climbing-based algorithm takes more time to choose the next position to move to, but the choice will probably be a good one. It might be prudent to increase n_neighbours if the search space has many dimensions, because there are more possible directions to move to (a sketch of this selection step follows below).

  • type: int
  • default: 3
  • typical range: 1 ... 10
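
The sketch below (an illustration, not the library's code) shows how this neighbour selection could work: sample n_neighbours candidate positions around the current one and move to the best-scoring candidate.

import numpy as np

def best_of_neighbours(objective, current_pos, sigma, n_neighbours):
    # sample n_neighbours candidate positions around the current position
    candidates = [np.random.normal(loc=current_pos, scale=sigma)
                  for _ in range(n_neighbours)]
    # keep the candidate with the highest objective score
    return max(candidates, key=objective)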

repulsion_factor

If the algorithm does not find a better position, epsilon is multiplied by the repulsion_factor for the next iteration. This way the algorithm escapes a region that does not offer better positions.

  • type: float
  • default: 5
  • typical range: 2 ... 10