This tutorial illustrates the core visualization utilities available in Ax.
import numpy as np
from ax.service.ax_client import AxClient
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.contour import interact_contour
from ax.plot.diagnostic import interact_cross_validation
from ax.plot.scatter import (
    interact_fitted,
    plot_objective_vs_constraints,
    tile_fitted,
)
from ax.plot.slice import plot_slice
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting
init_notebook_plotting()
[INFO 08-10 23:19:51] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
The visualizations require an experiment object and a model fit on the evaluated data. The routine below is a copy of the Service API tutorial, so the explanation is omitted here. Retrieving the experiment and model objects for each API paradigm is shown in the respective tutorials.
noise_sd = 0.1
param_names = [f"x{i+1}" for i in range(6)] # x1, x2, ..., x6
def noisy_hartmann_evaluation_function(parameterization):
    x = np.array([parameterization.get(p_name) for p_name in param_names])
    noise1, noise2 = np.random.normal(0, noise_sd, 2)
    # Return a (mean, SEM) tuple for each metric.
    return {
        "hartmann6": (hartmann6(x) + noise1, noise_sd),
        "l2norm": (np.sqrt((x ** 2).sum()) + noise2, noise_sd),
    }
ax_client = AxClient()
ax_client.create_experiment(
    name="test_visualizations",
    parameters=[
        {
            "name": p_name,
            "type": "range",
            "bounds": [0.0, 1.0],
        }
        for p_name in param_names
    ],
    objective_name="hartmann6",
    minimize=True,
    outcome_constraints=["l2norm <= 1.25"],
)
[INFO 08-10 23:19:51] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 2 decimal points.
[INFO 08-10 23:19:51] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x1. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-10 23:19:51] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-10 23:19:51] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x3. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-10 23:19:51] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x4. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-10 23:19:51] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x5. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-10 23:19:51] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x6. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 08-10 23:19:51] ax.modelbridge.dispatch_utils: Using GPEI (Bayesian optimization) since there are more continuous parameters than there are categories for the unordered categorical parameters.
[INFO 08-10 23:19:51] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 6 trials, GPEI for subsequent trials]). Iterations after 6 will take longer to generate due to model-fitting.
for i in range(20):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data=noisy_hartmann_evaluation_function(parameters),
    )
[INFO 08-10 23:19:51] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.34, 'x2': 0.42, 'x3': 0.17, 'x4': 0.13, 'x5': 0.37, 'x6': 0.62}.
[INFO 08-10 23:19:51] ax.service.ax_client: Completed trial 0 with data: {'hartmann6': (-1.58, 0.1), 'l2norm': (1.0, 0.1)}.
[INFO 08-10 23:19:51] ax.service.ax_client: Generated new trial 1 with parameters {'x1': 0.49, 'x2': 0.28, 'x3': 0.25, 'x4': 0.26, 'x5': 0.8, 'x6': 0.38}.
[INFO 08-10 23:19:51] ax.service.ax_client: Completed trial 1 with data: {'hartmann6': (0.04, 0.1), 'l2norm': (1.18, 0.1)}.
[INFO 08-10 23:19:51] ax.service.ax_client: Generated new trial 2 with parameters {'x1': 0.03, 'x2': 0.53, 'x3': 0.54, 'x4': 0.57, 'x5': 0.74, 'x6': 0.67}.
[INFO 08-10 23:19:51] ax.service.ax_client: Completed trial 2 with data: {'hartmann6': (-0.22, 0.1), 'l2norm': (1.31, 0.1)}.
[INFO 08-10 23:19:51] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.82, 'x2': 0.16, 'x3': 0.75, 'x4': 0.81, 'x5': 0.02, 'x6': 0.75}.
[INFO 08-10 23:19:51] ax.service.ax_client: Completed trial 3 with data: {'hartmann6': (-0.37, 0.1), 'l2norm': (1.77, 0.1)}.
[INFO 08-10 23:19:51] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.11, 'x2': 0.03, 'x3': 0.7, 'x4': 0.93, 'x5': 0.5, 'x6': 0.68}.
[INFO 08-10 23:19:51] ax.service.ax_client: Completed trial 4 with data: {'hartmann6': (-0.13, 0.1), 'l2norm': (1.29, 0.1)}.
[INFO 08-10 23:19:51] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 0.15, 'x2': 0.19, 'x3': 0.97, 'x4': 0.9, 'x5': 0.57, 'x6': 0.3}.
[INFO 08-10 23:19:51] ax.service.ax_client: Completed trial 5 with data: {'hartmann6': (-0.09, 0.1), 'l2norm': (1.47, 0.1)}.
[INFO 08-10 23:20:02] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.33, 'x2': 0.44, 'x3': 0.15, 'x4': 0.1, 'x5': 0.26, 'x6': 0.66}.
[INFO 08-10 23:20:02] ax.service.ax_client: Completed trial 6 with data: {'hartmann6': (-1.35, 0.1), 'l2norm': (0.83, 0.1)}.
[INFO 08-10 23:20:16] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.3, 'x2': 0.45, 'x3': 0.14, 'x4': 0.09, 'x5': 0.42, 'x6': 0.72}.
[INFO 08-10 23:20:16] ax.service.ax_client: Completed trial 7 with data: {'hartmann6': (-1.04, 0.1), 'l2norm': (1.01, 0.1)}.
[INFO 08-10 23:20:23] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.36, 'x2': 0.39, 'x3': 0.21, 'x4': 0.18, 'x5': 0.3, 'x6': 0.56}.
[INFO 08-10 23:20:23] ax.service.ax_client: Completed trial 8 with data: {'hartmann6': (-1.77, 0.1), 'l2norm': (0.91, 0.1)}.
[INFO 08-10 23:20:31] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.39, 'x2': 0.35, 'x3': 0.2, 'x4': 0.12, 'x5': 0.31, 'x6': 0.47}.
[INFO 08-10 23:20:31] ax.service.ax_client: Completed trial 9 with data: {'hartmann6': (-1.09, 0.1), 'l2norm': (0.86, 0.1)}.
[INFO 08-10 23:20:43] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 0.36, 'x2': 0.41, 'x3': 0.23, 'x4': 0.26, 'x5': 0.28, 'x6': 0.6}.
[INFO 08-10 23:20:43] ax.service.ax_client: Completed trial 10 with data: {'hartmann6': (-2.25, 0.1), 'l2norm': (0.94, 0.1)}.
[INFO 08-10 23:20:50] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.36, 'x2': 0.44, 'x3': 0.22, 'x4': 0.33, 'x5': 0.25, 'x6': 0.66}.
[INFO 08-10 23:20:50] ax.service.ax_client: Completed trial 11 with data: {'hartmann6': (-2.12, 0.1), 'l2norm': (0.9, 0.1)}.
[INFO 08-10 23:20:54] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.3, 'x2': 0.44, 'x3': 0.31, 'x4': 0.28, 'x5': 0.26, 'x6': 0.62}.
[INFO 08-10 23:20:54] ax.service.ax_client: Completed trial 12 with data: {'hartmann6': (-2.07, 0.1), 'l2norm': (1.09, 0.1)}.
[INFO 08-10 23:20:58] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.29, 'x2': 0.42, 'x3': 0.15, 'x4': 0.31, 'x5': 0.27, 'x6': 0.59}.
[INFO 08-10 23:20:58] ax.service.ax_client: Completed trial 13 with data: {'hartmann6': (-1.93, 0.1), 'l2norm': (0.71, 0.1)}.
[INFO 08-10 23:21:03] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.43, 'x2': 0.48, 'x3': 0.26, 'x4': 0.28, 'x5': 0.28, 'x6': 0.62}.
[INFO 08-10 23:21:03] ax.service.ax_client: Completed trial 14 with data: {'hartmann6': (-1.94, 0.1), 'l2norm': (0.89, 0.1)}.
[INFO 08-10 23:21:06] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.36, 'x2': 0.35, 'x3': 0.26, 'x4': 0.28, 'x5': 0.26, 'x6': 0.66}.
[INFO 08-10 23:21:06] ax.service.ax_client: Completed trial 15 with data: {'hartmann6': (-2.58, 0.1), 'l2norm': (0.98, 0.1)}.
[INFO 08-10 23:21:13] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.41, 'x2': 0.28, 'x3': 0.27, 'x4': 0.28, 'x5': 0.24, 'x6': 0.7}.
[INFO 08-10 23:21:13] ax.service.ax_client: Completed trial 16 with data: {'hartmann6': (-2.36, 0.1), 'l2norm': (1.01, 0.1)}.
[INFO 08-10 23:21:16] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.37, 'x2': 0.32, 'x3': 0.28, 'x4': 0.3, 'x5': 0.33, 'x6': 0.69}.
[INFO 08-10 23:21:16] ax.service.ax_client: Completed trial 17 with data: {'hartmann6': (-2.61, 0.1), 'l2norm': (1.19, 0.1)}.
[INFO 08-10 23:21:30] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.4, 'x2': 0.31, 'x3': 0.29, 'x4': 0.33, 'x5': 0.3, 'x6': 0.63}.
[INFO 08-10 23:21:30] ax.service.ax_client: Completed trial 18 with data: {'hartmann6': (-2.38, 0.1), 'l2norm': (1.01, 0.1)}.
[INFO 08-10 23:21:37] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.3, 'x2': 0.31, 'x3': 0.28, 'x4': 0.29, 'x5': 0.3, 'x6': 0.72}.
[INFO 08-10 23:21:37] ax.service.ax_client: Completed trial 19 with data: {'hartmann6': (-2.66, 0.1), 'l2norm': (1.04, 0.1)}.
The plot below shows the response surface for the hartmann6 metric as a function of the x1 and x2 parameters.
The other parameters are fixed in the middle of their respective ranges, which in this example is 0.5 for all of them.
# this could alternatively be done with `ax.plot.contour.plot_contour`
render(ax_client.get_contour_plot(param_x="x1", param_y="x2", metric_name='hartmann6'))
The plot below allows toggling between different pairs of parameters to view the contours.
model = ax_client.generation_strategy.model
render(interact_contour(model=model, metric_name='hartmann6'))
This plot illustrates the tradeoffs achievable between two different metrics. The plot takes the x-axis metric as input (usually the objective) and allows toggling among all other metrics for the y-axis.
This is useful for getting a sense of the Pareto frontier (i.e., the best objective value achievable under different bounds on the constraint).
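A minimal sketch of such a plot, using the plot_objective_vs_constraints function imported earlier; the rel=False argument (plot raw rather than relativized outcome values) is an assumption here:
# Sketch: hartmann6 on the x-axis, with the remaining metrics selectable for the y-axis.
# rel=False is an assumption: plot raw outcome values rather than values relative to a status quo arm.
render(plot_objective_vs_constraints(model, "hartmann6", rel=False))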
Cross-validation (CV) plots are useful for checking how well the model predictions calibrate against the actual measurements. If all points are close to the dashed line, the model is a good predictor of the real data.
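A minimal sketch using the cross_validate and interact_cross_validation functions imported above:
# Cross-validate the fitted model and plot predicted vs. actual outcomes.
cv_results = cross_validate(model)
render(interact_cross_validation(cv_results))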
Slice plots show the metric outcome as a function of one parameter while fixing the others. They serve a similar purpose to contour plots.
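For example, a slice could be rendered with the plot_slice function imported above; the choice of x2 here is arbitrary and only for illustration:
# Sketch: hartmann6 as a function of x2, with all other parameters fixed.
render(plot_slice(model, "x2", "hartmann6"))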
Tile plots are useful for viewing the effect of each arm.
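A minimal sketch using the tile_fitted and interact_fitted functions imported above; rel=False (plot raw rather than relativized fitted values) is an assumption:
# Tile plot of the model-predicted outcomes for each arm.
render(tile_fitted(model, rel=False))
# Interactive version of the fitted-value plot, with a metric selector.
render(interact_fitted(model, rel=False))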
Total runtime of script: 2 minutes, 15.58 seconds.