Powell's Method

Introduction

This Powell's method implementation works by optimizing one search-space dimension at a time with a hill-climbing algorithm. It does so by collapsing the search-space range of every dimension except one to a single value; the hill-climbing algorithm then searches for the best position within the remaining dimension.
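The idea of freezing all dimensions except one and hill-climbing within the free dimension can be sketched in plain Python. This is a toy illustration under stated assumptions, not Hyperactive's internal code; `hill_climb_1d` and its arguments are hypothetical names:

```python
import random

def hill_climb_1d(objective, position, dim, candidates, n_iters=10, seed=0):
    """Hill-climb along a single dimension while all others stay fixed.

    Hypothetical helper for illustration only; not part of Hyperactive's API.
    """
    rng = random.Random(seed)
    best = list(position)
    best_score = objective(best)
    for _ in range(n_iters):
        trial = list(best)
        # only the chosen dimension is allowed to move
        trial[dim] = rng.choice(candidates[dim])
        score = objective(trial)
        if score > best_score:
            best, best_score = trial, score
    return best, best_score

# toy objective (maximized): peak at x=3, y=1
def objective(pos):
    x, y = pos
    return -(x - 3) ** 2 - (y - 1) ** 2

candidates = {0: [0, 1, 2, 3, 4], 1: [0, 1, 2]}
best, score = hill_climb_1d(objective, [0, 0], dim=0, candidates=candidates, n_iters=20)
```

Because `dim=0`, only the first coordinate ever changes; the second stays at its frozen value, which mirrors how the search-space range of the other dimensions is reduced to a single value.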

Example

from hyperactive.opt.gfo import PowellsMethod
from hyperactive.experiment.integrations import SklearnCvExperiment
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import load_digits

# Load dataset
X, y = load_digits(return_X_y=True)

# Define search space
param_grid = {
    "n_estimators": [50, 100, 150, 200],
    "learning_rate": [0.01, 0.1, 0.5, 1.0],
    "algorithm": ["SAMME", "SAMME.R"]
}

# Create experiment
experiment = SklearnCvExperiment(
    estimator=AdaBoostClassifier(random_state=42),
    param_grid=param_grid,
    X=X, y=y,
    cv=3
)

# Create optimizer
optimizer = PowellsMethod(experiment=experiment)

# Run optimization
best_params = optimizer.solve()
print("Best parameters:", best_params)

About the implementation

The Powell's method implemented in Gradient-Free-Optimizers only sees one dimension at a time. This differs from the original idea of constructing (and searching along) a search vector that spans multiple dimensions. After iters_p_dim iterations the algorithm moves on to the next dimension, while the search-space range of the previously searched dimension is fixed to its best-known position. This way the algorithm finds new best positions one dimension at a time.
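The dimension-wise schedule described above can be sketched as a small loop. This is a simplified sketch of the behavior, not Hyperactive's actual implementation; `powell_like_search` and its signature are assumptions made for the example (only the name `iters_p_dim` comes from the docs):

```python
import random

def powell_like_search(objective, search_space, iters_p_dim=10, n_sweeps=2, seed=0):
    """Toy sketch: hill-climb for iters_p_dim iterations in one dimension,
    keep that dimension at its best value, then move on to the next one.
    Illustration only; not Hyperactive's actual code.
    """
    rng = random.Random(seed)
    current = {name: rng.choice(values) for name, values in search_space.items()}
    best_score = objective(current)
    for _ in range(n_sweeps):
        for name, values in search_space.items():
            for _ in range(iters_p_dim):
                trial = dict(current)
                trial[name] = rng.choice(values)  # vary only this dimension
                score = objective(trial)
                if score > best_score:
                    current, best_score = trial, score
            # after iters_p_dim iterations, current[name] stays fixed at its
            # best-known value while the next dimension is searched
    return current, best_score

space = {"x": [0, 1, 2, 3, 4], "y": [0, 1, 2]}
best, score = powell_like_search(
    lambda p: -(p["x"] - 3) ** 2 - (p["y"] - 1) ** 2, space
)
```

Each inner loop plays the role of the hill-climbing search within one dimension; the outer loop is the hand-off from one dimension to the next.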

Parameters

iters_p_dim

Number of iterations the hill-climbing algorithm spends searching within one dimension before the optimizer moves on to the next dimension of the search space.

  • type: int
  • default: 10
  • typical range: 5 ... 15