dev version 190627_2300
lbborkowski committed Jun 28, 2019
1 parent 2ed5b44 commit 4267d6f
Showing 1 changed file with 11 additions and 7 deletions.
18 changes: 11 additions & 7 deletions WindTurbineDetector_dev2.ipynb
@@ -38,9 +38,9 @@
"This notebook provides the full pipeline to perform training and inference for a wind turbine object detection model using publicly available aerial images and the [TensorFlow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). It is designed to run in [Google Colab](https://colab.research.google.com/notebooks/welcome.ipynb), a Jupyter notebook environment running on a virtual machine (VM) that provides free access to a Tesla K80 GPU for up to 12 hours.\n",
"\n",
"\n",
- "The aerial image data set used in this notebook is obtained from the [National Agriculture Imagery Program (NAIP) database](https://www.fsa.usda.gov/programs-and-services/aerial-photography/imagery-programs/naip-imagery/) using [USGS EarthExplorer](https://earthexplorer.usgs.gov/). The particular NAIP images used to train, test, and validate this model are from three wind farms located in west-central Iowa containing turbines of varying capacity, style, and manufacturer. A sample NAIP image is presented below in the \"Sample NAIP image\" section. The original NAIP images are 5978 x 7648 so they had to be chopped into smaller individual images to avoid excessive memory use. In addition, the ratio of object size to image size is improved by this operation. An image size of 300 x 300 was chosen since the TensorFlow object detection SSD-based models rescale all input images to this size. \n",
+ "The aerial image data set used in this notebook is obtained from the [National Agriculture Imagery Program (NAIP) database](https://www.fsa.usda.gov/programs-and-services/aerial-photography/imagery-programs/naip-imagery/) using [USGS EarthExplorer](https://earthexplorer.usgs.gov/). The particular NAIP images used to train, test, and validate this model are from three wind farms located in west-central Iowa containing turbines of varying capacity, style, and manufacturer. A sample NAIP image is presented below in the \"Sample NAIP image\" section. The original NAIP images are 5978 x 7648 so they had to be chipped into smaller individual images to avoid excessive memory use. In addition, the ratio of object size to image size is improved by this operation. An image size of 300 x 300 was chosen since the TensorFlow object detection SSD-based models rescale all input images to this size. \n",
"\n",
- "A total of 488 images, all containing at least one full wind turbine, were collected and split into train (\~80%), test (\~16%), and validate (\~4%) sets. [LabelImg](https://github.com/tzutalin/labelImg) was then used to label all the images in the train and test sets. Samples of the chopped and annotated images are shown below in the \"Sample chopped and annotated NAIP images\" section. Annotating the images in LabelImg creates an XML file corresponding to each image. These XML files must be converted to CSV and then TFRecords. Sample code for this can be found [here](https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9) or [here](https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html) (among other places)."
+ "A total of 488 images, all containing at least one full wind turbine, were collected and split into train (\~80%), test (\~16%), and validate (\~4%) sets. [LabelImg](https://github.com/tzutalin/labelImg) was then used to label all the images in the train and test sets. Samples of the chipped and annotated images are shown below in the \"Sample chipped and annotated NAIP images\" section. Annotating the images in LabelImg creates an XML file corresponding to each image. These XML files must be converted to CSV and then TFRecords. Sample code for this can be found [here](https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9) or [here](https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html) (among other places)."
]
},
{
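The chipping step described in the cell above can be sketched with NumPy alone. This is a minimal illustration, not code from the notebook: the function name and the policy of discarding partial tiles at the right and bottom edges are assumptions.

```python
import numpy as np

def chip_array(img, chip=300):
    """Split an H x W x C image array into non-overlapping chip x chip tiles,
    scanning left-to-right, top-to-bottom and discarding any partial tiles
    at the right and bottom edges."""
    h, w = img.shape[:2]
    tiles = []
    for top in range(0, h - chip + 1, chip):
        for left in range(0, w - chip + 1, chip):
            tiles.append(img[top:top + chip, left:left + chip])
    return tiles
```

Under this edge policy, a 5978 x 7648 NAIP frame yields 19 x 25 = 475 full 300 x 300 chips.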
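The XML-to-CSV conversion mentioned above can be sketched with the standard library. LabelImg writes Pascal VOC-style XML, and the function below assumes that layout (tags `filename`, `size`, `object`, `bndbox`); it is an illustrative sketch, not the converter the notebook actually used.

```python
import xml.etree.ElementTree as ET

def voc_xml_to_rows(xml_text):
    """Return one (filename, width, height, class, xmin, ymin, xmax, ymax)
    tuple per annotated object in a Pascal VOC XML document."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    width = int(root.findtext("size/width"))
    height = int(root.findtext("size/height"))
    rows = []
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        rows.append((filename, width, height, obj.findtext("name"),
                     int(box.findtext("xmin")), int(box.findtext("ymin")),
                     int(box.findtext("xmax")), int(box.findtext("ymax"))))
    return rows
```

The resulting rows can be written out with the `csv` module and then packed into TFRecords.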
@@ -97,7 +97,7 @@
"colab_type": "text"
},
"source": [
- "### Sample chopped and annotated NAIP images"
+ "### Sample chipped and annotated NAIP images"
]
},
{
@@ -803,7 +803,7 @@
"print(TEST_IMAGE_PATHS)\n",
"\n",
"# Size, in inches, of the output images.\n",
- "IMAGE_SIZE = (8, 8)"
+ "IMAGE_SIZE = (6, 6)"
],
"execution_count": 0,
"outputs": []
@@ -827,7 +827,9 @@
"colab": {}
},
"source": [
+ "ii=0\n",
"for image_path in TEST_IMAGE_PATHS:\n",
+ " ii+=1\n",
" image = Image.open(image_path)\n",
" # the array based representation of the image will be used later in order to prepare the\n",
" # result image with boxes and labels on it.\n",
@@ -846,9 +848,11 @@
" instance_masks=output_dict.get('detection_masks'),\n",
" use_normalized_coordinates=True,\n",
" line_thickness=8)\n",
- " plt.figure(figsize=IMAGE_SIZE)\n",
+ " fig = plt.figure(figsize=IMAGE_SIZE)\n",
" plt.axis('off')\n",
- " plt.imshow(image_np)"
+ " #plt.savefig('%s_%02d_%02d.jpg' % (image_path.replace('.jpg',''), ii, jj),bbox_inches='tight')\n",
+ " plt.imshow(image_np)\n",
+ " #fig.savefig('valid_%02d.png' % (ii), bbox_inches='tight', pad_inches=0)"
],
"execution_count": 0,
"outputs": []
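The manual `ii` counter introduced in this hunk numbers the saved figures with zero-padded names (the commented-out `fig.savefig` uses the pattern `valid_%02d.png`). An equivalent, slightly more idiomatic pattern uses `enumerate`; the helper below is an illustrative sketch, not code from the notebook.

```python
def numbered_names(paths, prefix="valid"):
    """One zero-padded output name per input path (e.g. valid_01.png),
    replacing a manually incremented counter with enumerate."""
    return ["%s_%02d.png" % (prefix, ii) for ii, _ in enumerate(paths, start=1)]
```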
@@ -1048,7 +1052,7 @@
"ii=0\n",
"for BBs in BBsWTs:\n",
" centerBBs[ii][:]=[np.mean([BBs[1],BBs[3]]),np.mean([BBs[0],BBs[2]])]\n",
- " ii+=ii"
+ " ii+=1"
],
"execution_count": 0,
"outputs": []
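The center computation in this hunk relies on the TensorFlow Object Detection API box convention, `[ymin, xmin, ymax, xmax]` in normalized coordinates, and stores each center as `(x, y)`. A vectorized sketch of the same calculation (the function name is illustrative):

```python
import numpy as np

def box_centers(boxes):
    """Centers of [ymin, xmin, ymax, xmax] boxes, returned as (x, y)
    pairs to match the ordering used in the cell above."""
    boxes = np.asarray(boxes, dtype=float)
    x = (boxes[:, 1] + boxes[:, 3]) / 2.0
    y = (boxes[:, 0] + boxes[:, 2]) / 2.0
    return np.stack([x, y], axis=1)
```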