Merge pull request #148 from bbloss/master
Added HW8, Bloss
jakevdp committed Dec 2, 2014
2 parents e93b873 + 325a5fc commit ea19263
Showing 1 changed file with 195 additions and 0 deletions.
195 changes: 195 additions & 0 deletions bbloss/HW8.ipynb
@@ -0,0 +1,195 @@
{
"metadata": {
"name": "",
"signature": "sha256:d009cb31497e3dbc6ad8351e5954682e5b42495af0c4981885462a591649f7d3"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Homework \\#8, Bloss"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Homework\n",
"Below is a script taken from [scipy lectures](http:https://scipy-lectures.github.io/advanced/debugging/)\n",
"\n",
"It is meant to compare the performance of several root-finding algorithms within the ``scipy.optimize``\n",
"package, but it breaks. Use one or more of the above tools to figure out what's going on and to fix\n",
"the script.\n",
"\n",
"When you turn in this homework (via github pull request, of course), please **write a one to two paragraph summary**\n",
"of the process you used to debug this, including any dead ends (it may help to take notes as you go)."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"\n",
"\"\"\"\n",
"A script to compare different root-finding algorithms.\n",
"\n",
"This version of the script is buggy and does not execute. It is your task\n",
"to find an fix these bugs.\n",
"\n",
"The output of the script sould look like:\n",
"\n",
" Benching 1D root-finder optimizers from scipy.optimize:\n",
" brenth: 604678 total function calls\n",
" brentq: 594454 total function calls\n",
" ridder: 778394 total function calls\n",
" bisect: 2148380 total function calls\n",
"\"\"\"\n",
"from itertools import product\n",
"\n",
"import numpy as np\n",
"from scipy import optimize\n",
"\n",
"FUNCTIONS = [np.tan, # Dilating map\n",
" np.tanh, # Contracting map\n",
" lambda x: (x**3) + (1e-4*x), # Almost null gradient at the root\n",
" lambda x: x+np.sin(2*x), # Non monotonous function\n",
" lambda x: (1.1*x)+np.sin(4*x), # Fonction with several local maxima\n",
" ]\n",
"\n",
"OPTIMIZERS = [optimize.brenth, optimize.brentq,\n",
" optimize.ridder, optimize.bisect]\n",
"\n",
"\n",
"def apply_optimizer(optimizer, func, a, b):\n",
" \"\"\" Return the number of function calls given an root-finding optimizer, \n",
" a function and upper and lower bounds.\n",
" \"\"\"\n",
" #print(optimizer)\n",
" return int(optimizer(func, a, b, full_output=True)[1].function_calls),\n",
"\n",
"\n",
"def bench_optimizer(optimizer, param_grid):\n",
" \"\"\" Find roots for all the functions, and upper and lower bounds\n",
" given and return the total number of function calls.\n",
" \"\"\"\n",
" #print(optimizer)\n",
" #print(param_grid)\n",
" return sum([apply_optimizer(optimizer, func, a, b)[0] for (func, a, b) in param_grid])\n",
"\n",
"\n",
"def compare_optimizers(OPTIMIZERS):\n",
" \"\"\" Compare all the optimizers given on a grid of a few different\n",
" functions all admitting a signle root in zero and a upper and\n",
" lower bounds.\n",
" \"\"\"\n",
" random_a = -1.3 + np.random.random(size=100)\n",
" random_b = 0.3 + np.random.random(size=100)\n",
" param_grid = product(FUNCTIONS, random_a, random_b)\n",
" print(\"Benching 1D root-finder optimizers from scipy.optimize:\")\n",
" for optimizer in OPTIMIZERS:\n",
" param_grid = product(FUNCTIONS, random_a, random_b)\n",
" ncalls = bench_optimizer(optimizer, param_grid)\n",
" print('{name}: {ncalls} total function calls'.format(name=optimizer.__name__, ncalls=ncalls))\n",
"\n",
"\n",
"if __name__ == '__main__':\n",
" compare_optimizers(OPTIMIZERS)\n"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"Benching 1D root-finder optimizers from scipy.optimize:\n",
"brenth: 601187 total function calls"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"brentq: 591643 total function calls"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"ridder: 773496 total function calls"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n",
"bisect: 2146245 total function calls"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n"
]
}
],
"prompt_number": 30
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Summary\n",
"\n",
"My general approach was to use simple debugging on this problem. I don't believe I could have sped things up too significantly using heavier tools. First, I examined the code to understand its function. During this first once over, several errors and potential errors stood out (clear syntax errors and order of operations that could be clarified). I also searched for a few 'dirty programmer tricks' that I had run into in the past; for example, replacing a number '1' with a lower case 'l' in many fonts results in an error without being obvious. After correcting the obvious errors, I attempted a run. This uncovered that I had made an incorrect assumption about the .function_calls variable, thinking it a function. I also saw that I needed to move the sum() outside of the list comprehension. \n",
"\n",
"After re-running, I notice that I was not getting what I expected out of the list comprehension; I was getting tuples instead of ints. Later on, reviewing the code for the optimizers, I think I see where this comes from. At the time I simply included an index to pluck out the integer I wanted. Once those errors were cleared away I could next see that only a single optimizer was actually running. My first approach was to confirm that it was related to the ordering, not a specific optimizer. After looking over the code and not finding any hints, I placed a few print statements in the function calls. This confirmed that the other iterators were being called but not functioning. One of my print statements returned an iterator object. This was the clear pointer to the final problem. Realizing the iterator was not refreshed on each pass, I placed it inside the for loop, solving the final problem.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Notes\n",
"\n",
"#### First approach: MK-I Eyeballs:\n",
"* Observed random_b = missing leading zero in \"0.3 + np.random......\"\n",
"* Upon trying to edit also found that the above was enclosed as a markdown cell masquerading as executable code\n",
"* A giveaway for this was the lack on an \"In []: Box\"\n",
"* Also did a find for 'l' (lowercase 'L') to make sure there weren't any '1's masquerading (numeral 1). [an old dirty trick].\n",
"* Preemptively added parenthesis to enforce expected grouping\n",
"* Observed parentheses used where lists [] should be used, fixed\n",
"* Observed poor attempt at list comprehension in return statement from bench_optimizer. Corrected.\n",
"#### ==First Attempted Run==\n",
"* Errors: Int object not callable (I made an incorrect assumption on ....[1].function_calls and had thought it was a function). Fixed\n",
"* List comprehension correction failed in the line using the sum. Realized I need to move that outside the list comprehension.\n",
"* Attempted fix.\n",
"#### ==Next Attempted Run==\n",
"* Errors: unspported operand type(s) for +: 'int' and 'tuple' => Still a problem in the sum() return statement\n",
"* Confusion about why apply_optimizer is returning tuples instead of int\n",
"* Accepted that it was, added index to pluck out the first element of the tuple.\n",
"* Corrected case of \"OPTIMIZERS\" for readability\n",
"#### ==Next Attempted Run==\n",
"* Found that the first optimizer only was running. Changed order, confirmed it was related to place, not specific optimizer.\n",
"* Debugged with print statements (==Several runs==), realized the iterator for param_grid was outside of the loop and thus exhausted.\n",
"* Corrected.\n",
"#### ==Next Attempted Run==\n",
"* Correct Operation\n"
]
}
],
"metadata": {}
}
]
}
