
MP

Mathematical programming and optimization methods

This repository contains implementations of several methods for finding extrema of mathematical functions. Each method has its own strengths and is suited to different types of optimization problems.

Methods Included:

  1. Gradient Method with Step Splitting

The gradient method with step splitting is an iterative optimization algorithm that uses the gradient (first-order derivative) of a function to find its minimum. At each iteration the trial step size is repeatedly split (halved) until the step produces a sufficient decrease in the function value.
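
A minimal sketch of the idea in Python (the repository's own code is not shown here, so the function names, constants, and test function below are illustrative): start from an initial step size and halve it until an Armijo-style sufficient-decrease condition holds.

```python
import numpy as np

def gradient_descent_step_splitting(f, grad, x0, alpha0=1.0, eps=0.1,
                                    tol=1e-6, max_iter=1000):
    """Minimize f from x0, halving the step until the
    sufficient-decrease condition holds at each iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # gradient small enough: done
            break
        alpha = alpha0
        # Split (halve) the step until f decreases sufficiently.
        while f(x - alpha * g) > f(x) - eps * alpha * (g @ g):
            alpha *= 0.5
        x = x - alpha * g
    return x

# Example: minimize f(x, y) = (x - 1)^2 + 2*(y + 3)^2
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 3) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 3)])
print(gradient_descent_step_splitting(f, grad, [0.0, 0.0]))  # ~ [1, -3]
```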

  2. Steepest Descent Method (Fastest Descent)

The steepest descent method, also called the fastest descent method, is an iterative optimization algorithm for finding the minimum of a function. At each iteration it moves from the current point along the negative of the gradient, choosing the step size that minimizes the function along that direction.
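
The repository's implementation is not shown here; the sketch below assumes the quadratic test problem f(x) = ½xᵀAx − bᵀx, for which the exact line-search step along −g has the closed form α = (gᵀg)/(gᵀAg).

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-8, max_iter=1000):
    """Steepest descent for f(x) = 0.5*x@A@x - b@x, with A
    symmetric positive definite."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = A @ x - b                  # gradient of the quadratic
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (g @ A @ g)  # exact minimizer along -g
        x = x - alpha * g
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(steepest_descent(A, b, np.zeros(2)))  # ~ A^{-1} b = [0.0909, 0.6364]
```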

  3. Newton's Method

Newton's method is an iterative numerical technique for finding the roots of a function, and, when applied to the gradient, its minima or maxima. It uses second-order derivative information (the Hessian) to build a local quadratic model at each step.
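
A minimal Newton-step sketch for minimization (the gradient and Hessian are supplied by the caller; the test function is illustrative, not taken from the repository):

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method for minimization: solve H(x) d = -grad(x)
    for the Newton direction d and step to x + d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)  # Newton direction
        x = x + d
    return x

# Example: minimize f(x, y) = x^2 + e^x + y^2
f_grad = lambda x: np.array([2 * x[0] + np.exp(x[0]), 2 * x[1]])
f_hess = lambda x: np.array([[2 + np.exp(x[0]), 0.0], [0.0, 2.0]])
print(newton_minimize(f_grad, f_hess, [1.0, 1.0]))  # ~ [-0.3517, 0]
```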

  4. Conjugate Gradient Method

The conjugate gradient method is an iterative technique for solving systems of linear equations with a symmetric positive definite matrix, which can also be applied to optimization problems. It combines aspects of the gradient descent method and direct methods for solving linear systems: each search direction is chosen conjugate to the previous ones, so in exact arithmetic the solution is reached in at most n steps.
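
A sketch of the classic algorithm for A x = b with A symmetric positive definite (equivalently, minimizing f(x) = ½xᵀAx − bᵀx); the test matrix is illustrative:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Conjugate gradient for A x = b, A symmetric positive definite."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x              # residual = negative gradient
    p = r.copy()               # first search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))  # ~ [0.0909, 0.6364], in at most 2 steps
```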

  5. Coordinate Descent Method

The coordinate descent method is an optimization algorithm that updates one variable at a time, holding the others fixed. It iteratively minimizes the function with respect to each variable.
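
A sketch using a cyclic sweep over the coordinates, delegating each one-dimensional minimization to scipy.optimize.minimize_scalar (the use of SciPy here is an assumption; the repository may implement its own 1-D search):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def coordinate_descent(f, x0, tol=1e-8, max_sweeps=100):
    """Minimize f by cycling through the coordinates, doing a 1-D
    minimization in each coordinate while the others stay fixed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_sweeps):
        x_old = x.copy()
        for i in range(len(x)):
            def phi(t, i=i):       # 1-D slice of f along coordinate i
                y = x.copy()
                y[i] = t
                return f(y)
            x[i] = minimize_scalar(phi).x
        if np.linalg.norm(x - x_old) < tol:  # sweep changed little: stop
            break
    return x

# Example: minimize f(x, y) = (x - 1)^2 + 2*(y + 3)^2 + x*y
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 3) ** 2 + x[0] * x[1]
print(coordinate_descent(f, [0.0, 0.0]))  # ~ [2.857, -3.714]
```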
