How to Implement Non-linear Optimization with NumPy (4 Examples)

Updated: January 23, 2024 By: Guest Contributor

Introduction

Optimization techniques are at the heart of numerous tasks in data analysis, machine learning, engineering design, and operations research. In this tutorial, we will explore how to implement non-linear optimization using NumPy, which is one of the most commonly used libraries in Python for numerical computations.

Background

Non-linear optimization, or non-linear programming, solves problems in which the objective function or the constraints are non-linear. NumPy itself does not include routines for non-linear optimization, but it provides the fundamental numerical building blocks for them. We will therefore also use SciPy, a library built on top of NumPy that supplies advanced optimization capabilities.

Prerequisites

  • Basic understanding of Python programming
  • Familiarity with NumPy and, optionally, the SciPy library
  • Basic understanding of non-linear optimization

Setting Up the Environment

First, ensure that you have installed NumPy and SciPy:

pip install numpy scipy
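
To confirm the installation, you can print the library versions; any reasonably recent versions will work for the examples below:

import numpy as np
import scipy

# Quick sanity check of the environment
print("NumPy version:", np.__version__)
print("SciPy version:", scipy.__version__)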

Example 1: A Simple Non-linear Optimization Problem

Let’s start by solving a simple optimization problem:

Minimize the function f(x) = (x – 3)^2 with x being a scalar.

import numpy as np
from scipy.optimize import minimize

def objective_function(x):
    return (x - 3)**2

# Minimize f starting from the initial guess x0 = 0
result = minimize(objective_function, 0)
print(result)

Output:

{'fun': array([9.08788659e-17]), 'x': array([2.99999993]),...}
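
The call returns a scipy.optimize.OptimizeResult object; the fields you will use most often are x (the minimizer), fun (the objective value at the minimizer), and success. For example:

print(result.x)        # array([2.99999993]), close to the true minimizer x = 3
print(result.fun)      # near-zero objective value
print(result.success)  # True if the solver converged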

Example 2: Adding Constraints

In this example, we will solve an optimization problem with some constraints:

The objective is to minimize f(x) = (x – 1)^2 subject to the constraint x^3 – x – 2 >= 0

def objective_function(x):
    return (x - 1)**2

def constraint_function(x):
    return x**3 - x - 2

# 'ineq' constraints in SciPy mean fun(x) >= 0 at the solution
con = {'type': 'ineq', 'fun': constraint_function}

# With constraints present, minimize automatically selects the SLSQP method
result = minimize(objective_function, 0, constraints=con)
print(result)

Output:

{'fun': array([0.2718...]), 'x': array([1.5213...]),...}

Note that the unconstrained minimum x = 1 violates the constraint (1^3 – 1 – 2 = -2 < 0), so the solver moves to the constraint boundary: x converges to the real root of x^3 – x – 2 = 0, approximately 1.5214, where f(x) ≈ 0.2718.
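
As a quick sanity check, you can verify that the constraint is satisfied, and in fact active, at the returned solution:

# The constraint value should be approximately zero at the solution,
# since the optimum lies on the constraint boundary
print(constraint_function(result.x))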

Example 3: Optimizing Functions of Multiple Variables

Here we look at optimizing a function of two variables:

Minimize the function f(x, y) = (x – 1)^2 + 100(y – x^2)^2, also known as Rosenbrock’s function.

def objective_function(x):
    return (x[0] - 1)**2 + 100 * (x[1] - x[0]**2)**2

x0 = np.array([0.0, 0.0])  # initial guess
result = minimize(objective_function, x0)
print(result)

Output:

{'fun': ..., 'x': array([..., ...]),...}
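
Rosenbrock's function has its global minimum at (1, 1), so result.x should be close to [1, 1]. SciPy also ships this function as scipy.optimize.rosen, which you can use to cross-check the hand-written version:

from scipy.optimize import rosen

# rosen reduces to (1 - x)^2 + 100 (y - x^2)^2 in two dimensions
result_check = minimize(rosen, np.array([0.0, 0.0]))
print(result_check.x)  # should also approach [1., 1.]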

Example 4: Using Gradient and Hessian Information

For more complex problems, providing the gradient (Jacobian) and Hessian (second-order derivative) information can significantly improve a solver's performance. Note that not every method uses this information: SciPy's default BFGS ignores the Hessian, so we select a method that actually consumes it, such as Newton-CG.

Let's optimize f(x) = x^4 – x^3, supplying its analytic first and second derivatives.

def objective_function(x):
    return x[0]**4 - x[0]**3

def objective_gradient(x):
    return np.array([4 * x[0]**3 - 3 * x[0]**2])

def objective_hessian(x):
    return np.array([[12 * x[0]**2 - 6 * x[0]]])

# Start at x0 = 1.0: the gradient vanishes at x = 0, so a
# gradient-based method started there would stop immediately
result = minimize(objective_function, 1.0, method='Newton-CG',
                  jac=objective_gradient, hess=objective_hessian)
print(result)

Output:

{'fun': ..., 'x': array([...]),...}
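
The solver should converge to the analytic minimizer x = 3/4, where f = -27/256 ≈ -0.1055. For larger problems, forming the full Hessian matrix can be expensive; as a sketch of the alternative, Newton-CG also accepts a Hessian-vector product through the hessp argument:

def objective_hessp(x, p):
    # Returns H(x) @ p without building the Hessian matrix explicitly
    return (12 * x[0]**2 - 6 * x[0]) * p

result = minimize(objective_function, 1.0, method='Newton-CG',
                  jac=objective_gradient, hessp=objective_hessp)
print(result.x)  # should again approach [0.75]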

Advanced Topic: Global Optimization

Some optimization problems require global optimization techniques to find the global minimum of a function. SciPy offers algorithms like Basin-hopping, Differential Evolution, and Dual Annealing for global optimization tasks.

Let us solve a problem using the Differential Evolution algorithm:

Minimize the function f(x) = |x * sin(x)| for x in [-10, 10].

from scipy.optimize import differential_evolution

def objective_function(x):
    # x arrives as a length-1 array; return a plain scalar
    return abs(x[0] * np.sin(x[0]))

bounds = [(-10, 10)]
result = differential_evolution(objective_function, bounds)
print(result)

Output:

{'fun': ..., 'x': array([...]),...}
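
Note that |x * sin(x)| attains its global minimum value of 0 at x = 0 and at every multiple of π, so different runs may return different minimizers. As a cross-check, you can run SciPy's dual_annealing on the same problem:

from scipy.optimize import dual_annealing

# Another global optimizer; it should also find a point where
# |x * sin(x)| is approximately zero
result_da = dual_annealing(objective_function, bounds)
print(result_da.x, result_da.fun)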

Conclusion

Throughout this tutorial, we have explored various non-linear optimization strategies with NumPy and SciPy. Starting from the basics and moving gradually to more advanced topics, you should now be equipped with the tools to tackle complex non-linear optimization problems in Python.