Commit

Merge pull request #136 from BethanyL/HW8
Hw8
jakevdp committed Dec 2, 2014
2 parents c9eb454 + f692ade commit a5c79de
Showing 1 changed file with 170 additions and 0 deletions.
170 changes: 170 additions & 0 deletions BethanyL/HW8.ipynb
@@ -0,0 +1,170 @@
{
"metadata": {
"name": "",
"signature": "sha256:dc1c3da6007690bc487895568b6b72e889aa80eafefe5373c2231b135127715f"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "code",
"collapsed": false,
"input": [
"\"\"\"\n",
"A script to compare different root-finding algorithms.\n",
"\n",
"This version of the script is buggy and does not execute. It is your task\n",
"to find an fix these bugs.\n",
"\n",
"The output of the script sould look like:\n",
"\n",
" Benching 1D root-finder optimizers from scipy.optimize:\n",
" brenth: 604678 total function calls\n",
" brentq: 594454 total function calls\n",
" ridder: 778394 total function calls\n",
" bisect: 2148380 total function calls\n",
"\"\"\"\n",
"from itertools import product\n",
"\n",
"import numpy as np\n",
"from scipy import optimize\n",
"\n",
"FUNCTIONS = (np.tan, # Dilating map\n",
" np.tanh, # Contracting map\n",
" lambda x: x**3 + 1e-4*x, # Almost null gradient at the root\n",
" lambda x: x+np.sin(2*x), # Non monotonous function\n",
" lambda x: 1.1*x+np.sin(4*x), # Fonction with several local maxima\n",
" )\n",
"\n",
"OPTIMIZERS = (optimize.brenth, optimize.brentq,\n",
" optimize.ridder, optimize.bisect)\n",
"\n",
"\n",
"\n",
"def apply_optimizer(optimizer, func, a, b):\n",
" \"\"\" Return the number of function calls given an root-finding optimizer, \n",
" a function and upper and lower bounds.\n",
" \"\"\"\n",
" \n",
" #whatis(optimizer(func, a, b, full_output=True)[1].function_calls)\n",
" return optimizer(func, a, b, full_output=True)[1].function_calls\n",
"\n",
"\n",
"def bench_optimizer(optimizer, param_grid):\n",
" \"\"\" Find roots for all the functions, and upper and lower bounds\n",
" given and return the total number of function calls.\n",
" \"\"\"\n",
"\n",
" x = 0\n",
" for func, a, b in param_grid:\n",
" x = x + apply_optimizer(optimizer, func, a, b)\n",
" return x\n",
"\n",
"\n",
"def compare_optimizers(optimizers):\n",
" \"\"\" Compare all the optimizers given on a grid of a few different\n",
" functions all admitting a signle root in zero and a upper and\n",
" lower bounds.\n",
" \"\"\"\n",
" random_a = -1.3 + np.random.random(size=100)\n",
" random_b = .3 + np.random.random(size=100)\n",
" #param_grid = product(FUNCTIONS, random_a, random_b)\n",
" print(\"Benching 1D root-finder optimizers from scipy.optimize:\")\n",
" for optimizer in OPTIMIZERS:\n",
" param_grid = product(FUNCTIONS, random_a, random_b)\n",
" ncalls = bench_optimizer(optimizer, param_grid)\n",
" print('{name}: {ncalls} total function calls'.format(\n",
" name=optimizer.__name__, ncalls=ncalls))\n",
"\n",
"\n",
"if __name__ == '__main__':\n",
" compare_optimizers(OPTIMIZERS)\n"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"Benching 1D root-finder optimizers from scipy.optimize:\n",
"brenth: 602899 total function calls"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"brentq: 591665 total function calls"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"ridder: 772996 total function calls"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"bisect: 2147620 total function calls"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n"
]
}
],
"prompt_number": 1
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Step 1: This code: <br>\n",
"&emsp;return sum(apply_optimizer(optimizer, func, a, b)<br>\n",
"&emsp;&emsp;for func, a, b in param_grid)<br>\n",
"is not a valid for loop, so I split it up like this: <br>\n",
"&emsp;x = 0<br>\n",
"&emsp;for func, a, b in param_grid:<br>\n",
"&emsp;&emsp;x = x + apply_optimizer(optimizer, func, a, b)<br>\n",
"&emsp;return x"
]
},
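{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check on the rewrite (a minimal sketch, not from the assignment; fake_cost is a made-up stand-in for apply_optimizer), the explicit loop accumulates exactly what sum() over the generator expression would:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Sketch: the explicit accumulation loop matches sum() over a\n",
"# generator expression. fake_cost is a hypothetical stand-in for\n",
"# apply_optimizer that charges one call per invocation.\n",
"def fake_cost(func, a, b):\n",
"    return 1\n",
"\n",
"demo_grid = [(abs, -1.0, 1.0), (abs, -2.0, 2.0)]\n",
"\n",
"x = 0\n",
"for func, a, b in demo_grid:\n",
"    x = x + fake_cost(func, a, b)\n",
"\n",
"assert x == sum(fake_cost(func, a, b) for func, a, b in demo_grid)\n",
"print(x)"
],
"language": "python",
"metadata": {},
"outputs": []
},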
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Step 2: I got a TypeError because x is an integer and apply_optimizer is returning a tuple, so I checked the apply_optimizer function. This led me to check the documentation of scipy/optimize (and even the code itself). I saw that it should be returning an int, not a tuple. I used pdb to see that \"optimizer(func, a, b, full_output=True)[1].function_calls\" does indeed return a tuple. Then I realized that the comma must make a tuple. Sure enough, removing the comma fixes the problem."
]
},
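{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the trailing-comma pitfall from Step 2 (not from the assignment): the comma wraps the return value in a one-element tuple, while scipy's full_output path hands back a plain int."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Sketch of the Step 2 bug: a trailing comma makes a one-element tuple.\n",
"import numpy as np\n",
"from scipy import optimize\n",
"\n",
"def buggy():\n",
"    return 42,   # the comma makes this the tuple (42,)\n",
"\n",
"def fixed():\n",
"    return 42    # this is the int 42\n",
"\n",
"print(type(buggy()), type(fixed()))\n",
"\n",
"# function_calls itself is an int, as the scipy docs say:\n",
"results = optimize.brentq(np.tanh, -1, 1, full_output=True)[1]\n",
"print(type(results.function_calls))"
],
"language": "python",
"metadata": {},
"outputs": []
},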
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Step 3: The new problem is that only brenth runs - the rest have 0 function calls. Checking the documentation did not help. However, if you comment out brenth, you see that the first optimizer will run, no matter what it is. I used pdb to step through and realized that the loop in bench_optimizer was not running. The I investigated param_grid and the documentation for product. I guessed that re-creating param_grid in the loop would fix the problem, which was correct."
]
},
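{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the one-shot iterator behaviour behind Step 3 (not from the assignment): itertools.product is exhausted after a single pass, so a second pass over the same object yields nothing."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Sketch of the Step 3 bug: product() is used up after one pass,\n",
"# which is why only the first optimizer did any work.\n",
"from itertools import product\n",
"\n",
"demo_grid = product([1, 2], ['a', 'b'])\n",
"print(list(demo_grid))  # first pass: all four pairs\n",
"print(list(demo_grid))  # second pass: empty list"
],
"language": "python",
"metadata": {},
"outputs": []
},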
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": []
}
],
"metadata": {}
}
]
}
