diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..cbf174c --- /dev/null +++ b/.gitignore @@ -0,0 +1,14 @@ +__pycache__ +__pycache__/* + +files_done.txt + +app_files/src/__pycache__ +app_files/src/__pycache__/* + +app_files/logs/*.log + +labeled/*.zip +assets/*.jpg +assets/*.jpeg +assets/*.JPG diff --git a/README.md b/README.md index 4e4324c..a400142 100644 --- a/README.md +++ b/README.md @@ -1,20 +1,67 @@ + + ![](doodler-logo.png) Check out the [Doodler website](https://dbuscombe-usgs.github.io/dash_doodler/) -> Daniel Buscombe, Marda Science -> Developed for the USGS Coastal Marine Geology program, as part of the Florence Supplemental project +## Changes in 07/30/21. v 1.2.5 + +This is a major update and may require a new clone. + +Changes include: +* revised file structure, to be modular/easier to read and parse + * new directory `app_files` contains `cache-directory`, `logs`, and `src` + * `logs` contains program logs - now reports RAM usage throughout program for troubleshooting. Requires new dependency `psutil` + * `src` contains the main program code, including all of the segmentation codes (in `image_segmentation.py` and `annotations_to_segmentations.py`) and most of the app codes (`app_funcs.py` and `plot_utils.py`) + * `cache-directory` - clear cache for the program independently of the browser by deleting files here. Requires new dependency `flask-caching` + * `assets` comes with no imagery, but sample imagery is, by default, downloaded automatically into that folder. You can disable the automatic downloading of the imagery by editing `environment/settings.py` where indicated + * `install` folder is now called `environment` + * removed the large gifs from the `assets/logos` folder (they now render in html from a github release) + * you still run `doodler.py`, but most of the app is now in `app.py`, and a lot more of the functions are in the `app_files/src/` functions. This allows for better readability and modularity, therefore portability into other frameworks, and more effective troubleshooting +* example results files are now downloadable [here](https://github.com/dbuscombe-usgs/dash_doodler/releases/download/example-results/results2021-06-21-11-05.zip) rather than shipped by default with the program. 
Run `python download_sample_results.py` from the `results` folder
+* overall the download is much, much smaller, and will break less with `git pull` because the `.gitignore` contains the common gotchas
+* removed superfluous buttons in the modebar
+* better code comments and docstrings, and more README details
+* added example Dockerfile
+* ports, IPs and other deployment variables can be changed or added to `environment/settings.py` that gets imported at startup
+* added [Developer's Notes](#developers) to this README, with more details about program setup and function
 
-> This is a "Human-In-The-Loop" machine learning tool for partially supervised image segmentation and is based on code previously contained in the "doodle_labeller" [repository](https://github.com/dbuscombe-usgs/doodle_labeller) which implemenets a similar algorithm in OpenCV
+## Overview
+> Daniel Buscombe, Marda Science / USGS Pacific Coastal and Marine Science Center
 
-> The Conditional Random Field (CRF) model used by this tool is described by [Buscombe and Ritchie (2018)](https://www.mdpi.com/2076-3263/8/7/244)
+> Developed for the USGS Coastal Marine Geology program, as part of the Florence Supplemental project
+
+This is a "Human-In-The-Loop" machine learning tool for partially supervised image segmentation and is based on code previously contained in the "doodle_labeller" [repository](https://github.com/dbuscombe-usgs/doodle_labeller) which implements a similar algorithm in OpenCV. The Conditional Random Field (CRF) model used by this tool is described by [Buscombe and Ritchie (2018)](https://www.mdpi.com/2076-3263/8/7/244)
 
-The video shows a basic usage of doodler. 1) Annotate the scene with a few examples of each class (colorful buttons). 2) Check 'compute and show segmentation' and wait for the result. The label image is written to the 'results' folder, and you can also download a version of it from your browser for quick viewing
+The video shows a basic usage of doodler. 1) Annotate the scene with a few examples of each class (colorful buttons). 2) Check 'compute and show segmentation' and wait for the result. The label image is written to the 'results' folder
 
-![Doodler](https://raw.githubusercontent.com/dbuscombe-usgs/dash_doodler/main/assets/logos/quick-satshoreline-x2c.gif)
+![Doodler](https://github.com/dbuscombe-usgs/dash_doodler/releases/download/gifs/quick-satshoreline-x2c.gif)
 
-![Doodler](https://raw.githubusercontent.com/dbuscombe-usgs/dash_doodler/main/assets/logos/quick-satshore2-x2c.gif)
+![Doodler](https://github.com/dbuscombe-usgs/dash_doodler/releases/download/gifs/quick-satshore2-x2c.gif)
 
 ## Contents
 
@@ -25,6 +72,7 @@ The video shows a basic usage of doodler.
1) Annotate the scene with a few examp * [Outputs](#outputs) * [Acknowledgments](#ack) * [Contribute](#contribute) +* [Developer's Notes](#developers) * [Progress](#progress) * [Roadmap](#roadmap) @@ -41,6 +89,8 @@ This is python software that is designed to be used from within a `conda` enviro ## Installation +Open a terminal + Clone/download this repository ``` @@ -50,20 +100,27 @@ git clone --depth 1 https://github.com/dbuscombe-usgs/dash_doodler.git Install the requirements ```bash -conda env create --file install/dashdoodler.yml +conda env create --file environment/dashdoodler-clean.yml conda activate dashdoodler ``` +*If* the above doesn't work, try this: +```bash +conda env create --file environment/dashdoodler.yml +conda activate dashdoodler +``` -*If* the above doesn't work, try this: +*If neither of the above* work, try this: ```bash conda create --name dashdoodler python=3.6 conda activate dashdoodler conda install -c conda-forge pydensecrf cairo -pip install -r install/requirements.txt +pip install -r environment/requirements.txt ``` +and good luck to you! + ## Use Move your images into the `assets` folder. For the moment, they must be jpegs with the `.jpg` (or `JPG` or `jpeg`) extension. Support for other image types forthcoming ... @@ -77,6 +134,8 @@ Open a browser and go to 127.0.0.1:8050. You may have to hit the refresh button. ### Example screenshots of use with example dataset +(note: these are screengrabs of an older version of the program, so the buttons and their names are now slightly different) + #### `doodler.py` ![Example 1](https://raw.githubusercontent.com/dbuscombe-usgs/dash_doodler/main/assets/logos/doodler_py.png) @@ -89,23 +148,18 @@ Open a browser and go to 127.0.0.1:8050. You may have to hit the refresh button. ### Videos More demonstration videos (older version of the program): -![Doodler example 2](https://raw.githubusercontent.com/dbuscombe-usgs/dash_doodler/main/assets/logos/quick-saturban-x2c.gif) +![Doodler example 2](https://github.com/dbuscombe-usgs/dash_doodler/releases/download/gifs/quick-saturban-x2c.gif) -![Doodler example 3](https://raw.githubusercontent.com/dbuscombe-usgs/dash_doodler/main/assets/logos/doodler-demo-2-9-21-short.gif) +![Doodler example 3](https://github.com/dbuscombe-usgs/dash_doodler/releases/download/gifs/doodler-demo-2-9-21-short.gif) -![Elwha example](https://raw.githubusercontent.com/dbuscombe-usgs/dash_doodler/main/assets/logos/doodler-demo-2-9-21-short-elwha.gif) +![Elwha example](https://github.com/dbuscombe-usgs/dash_doodler/releases/download/gifs/doodler-demo-2-9-21-short-elwha.gif) -![Coast Train example](https://raw.githubusercontent.com/dbuscombe-usgs/dash_doodler/main/assets/logos/doodler-demo-2-9-21-short-coast.gif) +![Coast Train example](https://github.com/dbuscombe-usgs/dash_doodler/releases/download/gifs/doodler-demo-2-9-21-short-coast.gif) -![Coast Train example 2](https://raw.githubusercontent.com/dbuscombe-usgs/dash_doodler/main/assets/logos/doodler-demo-2-9-21-short-coast2.gif) +![Coast Train example 2](https://github.com/dbuscombe-usgs/dash_doodler/releases/download/gifs/doodler-demo-2-9-21-short-coast2.gif) - + +## Docker workflows -To build your own docker based on miniconda `continuumio/miniconda3` +To build your own docker image based on miniconda `continuumio/miniconda3`, called `doodler_docker_image`: ``` -cp install/Dockerfile.miniconda ./Dockerfile -sudo docker build -t doodler_docker_image . +docker build -t doodler_docker_image . 
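+# note: you may need to prefix the docker commands with sudo, as in some of
+# the commands below, if your user is not in the 'docker' group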
 ```
-then when it has finished building
+Then, when it has finished building (it takes a while), check its size:
 ```
 sudo docker image ls doodler_docker_image
 ```
-It is large - 4.8 GB. Run it:
+It is large - 4.8 GB. Run it in a container called `www`:
 ```
 sudo docker run -p 8050:8050 -d -it --name www doodler_docker_image
 ```
-Build with pip instead:
+The terminal will show no output, but you can see the process running a few different ways.
+
+List running containers:
 ```
-cp install/Dockerfile.pip ./Dockerfile
-sudo docker build -t doodler_docker_image_pip .
+docker ps
 ```
-How large is that?
-
+The container name will be at the end of each line of `docker ps` output. You can then view that container's logs (images don't have logs; they're like classes):
 ```
-sudo docker image ls doodler_docker_image_pip
+docker logs [container_name]
-
 ```
+
 To stop and remove:
 ```
 sudo docker stop www
 sudo docker rm www
-```
--->
+```
+
+Don't ask me about Docker. That's all I know. Please contribute Docker workflows and suggestions!
 
 ## Acknowledgements
 
-Based on [this plotly example](https://github.com/plotly/dash-sample-apps/tree/master/apps/dash-image-segmentation) and the previous openCV based implementation [doodle_labeller](https://github.com/dbuscombe-usgs/doodle_labeller)
+Based on [this plotly example](https://github.com/plotly/dash-sample-apps/tree/master/apps/dash-image-segmentation) and the previous openCV based implementation [doodle_labeller](https://github.com/dbuscombe-usgs/doodle_labeller), which actually has origins in a USGS CDI-sponsored class I taught in summer of 2018, called [dl-tools](https://github.com/dbuscombe-usgs/dl_tools). So, it's been a 3+ year effort!
 
 ## Contributing
 Contributions are welcome, and they are greatly appreciated! Credit will always be given.
 
@@ -174,6 +232,7 @@ Please include:
  * Your operating system name and version.
  * Any details about your local setup that might be helpful in troubleshooting.
  * Detailed steps to reproduce the bug.
+ * The log file made by the program during the session, found in `app_files/logs`.
 
 #### Fix Bugs
 
@@ -220,6 +279,10 @@ Commit your changes and push your branch to GitHub:
 
 Submit a pull request through the GitHub website.
 
+## Developer's notes
+
+
+
 ## Progress report
 
 10/20/20:
 
@@ -376,6 +439,19 @@ https://dbuscombe-usgs.github.io/dash_doodler/
 * partially fixed bug with file select (interval just 200 milliseconds)
 * cleaned up and further tested all utils scripts
 
+07/30/21. v 1.2.5
+* code tidied up and commented, added docstrings
+* removed last traces of RF code implementation
+* added logging details, including RAM utilization throughout
+* CRF and feature extraction only happen in parallel now when RAM > 10GB and usage is <50%, i.e. there is at least 5GB of available RAM for parallel processing (see the sketch after this list)
+* %d-%m-%Y-%H-%M-%S changed to sortable %Y-%m-%d-%H-%M-%S format in log file
+* reorganized code into modular components
+* moved assets to a downloadable zipped file that downloads and unpacks automatically
+* removed samples
+* moved gifs to a release, so they could be linked in this README (they used a lot of space, making a large download)
+* two new dependencies, Flask-Caching and psutil
+* flask-caching is for caching - clear the cache for the program independently of the browser by deleting files in `app_files/cache-directory`
+* now has a `.gitignore` file to ignore cached files
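+
+For the curious, that gating is conceptually like the sketch below - a simplified illustration using `psutil` (already a dependency); the real check lives in `app_files/src/image_segmentation.py` and its exact thresholds and logic may differ:
+
+```python
+import psutil
+
+def ok_to_parallelize():
+    # sketch: proceed in parallel only if overall usage is below 50% and
+    # roughly 5GB of RAM is free for joblib's Parallel workers
+    vm = psutil.virtual_memory()   # .percent is a percentage, .available is in bytes
+    return (vm.percent < 50) and (vm.available > 5e9)
+```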
 
 ## Roadmap
 
@@ -387,7 +463,7 @@ https://dbuscombe-usgs.github.io/dash_doodler/
 
 * Delay running the model until all of the coefficients are adjusted...right now it jumps right into the calcs as soon as a slider is moved, but maybe you want to adjust two sliders first. Maybe change the compute segmentation to a button that changes color if the model is out of date with respect to the current settings. [here](https://github.com/dbuscombe-usgs/dash_doodler/issues/2)
 
-* pymongo (mongoDB) database backend - thanks Evan and Shah @UNCG-DAISY! See [here](https://api.mongodb.com/python/current/tools.html), [here](https://strapi.io/pricing)
+
 
 * on Ctrl+C, clear 'labeled' folder, etc
diff --git a/app.py b/app.py
new file mode 100644
index 0000000..052d15c
--- /dev/null
+++ b/app.py
@@ -0,0 +1,851 @@
+# Written by Dr Daniel Buscombe, Marda Science LLC
+# for the USGS Coastal Change Hazards Program
+#
+# MIT License
+#
+# Copyright (c) 2020-2021, Marda Science LLC
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in all
+# copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
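+
+# app.py builds the Dash/Flask application and defines the single main callback;
+# the segmentation itself (feature extraction, MLP classifier, CRF refinement)
+# happens in the modules under app_files/src, which are put on the path below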
+
+#========================================================
+## ``````````````````````````` local imports
+# allows loading of functions from the src directory
+import sys,os
+sys.path.insert(1, 'app_files'+os.sep+'src')
+from annotations_to_segmentations import *
+
+#========================================================
+## ``````````````````````````` imports
+##========================================================
+
+## dash/plotly/flask
+import plotly.express as px
+import dash
+from dash.dependencies import Input, Output, State
+import dash_html_components as html
+import dash_core_components as dcc
+from flask import Flask
+from flask_caching import Cache
+
+#others
+import base64, PIL.Image, json, shutil, time, logging, psutil
+from datetime import datetime
+
+#========================================================
+## defaults
+#========================================================
+
+DEFAULT_IMAGE_PATH = "assets"+os.sep+"logos"+os.sep+"dash-default.jpg"
+
+try:
+    from my_defaults import *
+    print('Hyperparameters imported from my_defaults.py')
+except:
+    from defaults import *
+    print('Default hyperparameters imported from src/defaults.py')
+
+#========================================================
+## logs
+#========================================================
+
+logging.basicConfig(filename=os.getcwd()+os.sep+'app_files'+os.sep+'logs'+
+                    os.sep+datetime.now().strftime("%Y-%m-%d-%H-%M")+'.log',
+                    level=logging.INFO)
+logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
+
+#========================================================
+## folders
+#========================================================
+
+UPLOAD_DIRECTORY = os.getcwd()+os.sep+"assets"
+LABELED_DIRECTORY = os.getcwd()+os.sep+"labeled"
+results_folder = 'results'+os.sep+'results'+datetime.now().strftime("%Y-%m-%d-%H-%M")
+
+try:
+    os.mkdir(results_folder)
+    logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
+    logging.info("Folder created: %s" % (results_folder))
+except:
+    pass
+
+logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
+logging.info("Results will be written to %s" % (results_folder))
+
+if not os.path.exists(UPLOAD_DIRECTORY):
+    os.makedirs(UPLOAD_DIRECTORY)
+    logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
+    logging.info('Made the directory '+UPLOAD_DIRECTORY)
+
+
+##========================================================
+## classes
+#========================================================
+
+# the number of different classes for labels
+DEFAULT_LABEL_CLASS = 0
+
+try:
+    with open('classes.txt') as f:
+        classes = f.readlines()
+except: #in case classes.txt does not exist
+    print("classes.txt not found or badly formatted. \
+    Exit the program and fix the classes.txt file; \
+    otherwise, the program will continue using default classes. 
") + classes = ['water', 'land'] + +class_label_names = [c.strip() for c in classes] + +NUM_LABEL_CLASSES = len(class_label_names) + +#======================================================== +## colormap +#======================================================== + +if NUM_LABEL_CLASSES<=10: + class_label_colormap = px.colors.qualitative.G10 +else: + class_label_colormap = px.colors.qualitative.Light24 + +# we can't have fewer colors than classes +assert NUM_LABEL_CLASSES <= len(class_label_colormap) + +class_labels = list(range(NUM_LABEL_CLASSES)) + +logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) +logging.info('loaded class labels:') +for f in class_label_names: + logging.info(f) + +#======================================================== +## image asset files +#======================================================== +files = get_asset_files() + +logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) +logging.info('loaded files:') +for f in files: + logging.info(f) + +##======================================================== +# app, server, and cache +#======================================================== + +# Normally, Dash creates its own Flask server internally. By creating our own, +# we can create a route for downloading files directly: +server = Flask(__name__) +app = dash.Dash(server=server) + +#app = dash.Dash(__name__) +#server = app.server +app.config.suppress_callback_exceptions=True +# app = dash.Dash(__name__) +# app = dash.Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP]) +# server = app.server +# app.config.suppress_callback_exceptions = True + +cache = Cache(app.server, config={ + 'CACHE_TYPE': 'filesystem', + 'CACHE_DIR': 'app_files'+os.sep+'cache-directory' +}) + +##======================================================== +## app layout +##======================================================== +app.layout = html.Div( + id="app-container", + children=[ + #======================================================== + ## tab 1 + #======================================================== + + html.Div( + id="banner", + children=[ + html.H1( + "Doodler: Interactive Image Segmentation", + id="title", + className="seven columns", + ), + + html.Img(id="logo", src=app.get_asset_url("logos"+os.sep+"dash-logo-new.png")), + # html.Div(html.Img(src=app.get_asset_url('logos/dash-logo-new.png'), style={'height':'10%', 'width':'10%'})), #id="logo", + + html.H2(""), + dcc.Upload( + id="upload-data", + children=html.Div( + [" "] #(Label all classes that are present, in all regions of the image those classes occur) + ), + style={ + "width": "100%", + "height": "30px", + "lineHeight": "70px", + "borderWidth": "1px", + "borderStyle": "none", + "borderRadius": "1px", + "textAlign": "center", + "margin": "10px", + }, + multiple=True, + ), + html.H2(""), + html.Ul(id="file-list"), + + ], #children + ), #div banner id + + dcc.Tabs([ + dcc.Tab(label='Imagery and Controls', children=[ + + html.Div( + id="main-content", + children=[ + + html.Div( + id="left-column", + children=[ + dcc.Loading( + id="segmentations-loading", + type="cube", + children=[ + # Graph + dcc.Graph( + id="graph", + figure=make_and_return_default_figure( + images=[DEFAULT_IMAGE_PATH], + stroke_color=convert_integer_class_to_color(class_label_colormap,DEFAULT_LABEL_CLASS), + pen_width=DEFAULT_PEN_WIDTH, + shapes=[], + ), + config={ + 'displayModeBar': 'hover', + "displaylogo": False, + "modeBarButtonsToRemove": [ + "toImage", + "hoverClosestCartesian", + "hoverCompareCartesian", + 
"toggleSpikelines", + ], + "modeBarButtonsToAdd": [ + "drawopenpath", + "eraseshape", + ] + }, + ), + ], + ), + + ], + className="ten columns app-background", + ), + + html.Div( + id="right-column", + children=[ + + + html.H6("Label class"), + # Label class chosen with buttons + html.Div( + id="label-class-buttons", + children=[ + html.Button( + #"%2d" % (n,), + "%s" % (class_label_names[n],), + id={"type": "label-class-button", "index": n}, + style={"background-color": convert_integer_class_to_color(class_label_colormap,c)}, + ) + for n, c in enumerate(class_labels) + ], + ), + + html.H6(id="pen-width-display"), + # Slider for specifying pen width + dcc.Slider( + id="pen-width", + min=0, + max=5, + step=1, + value=DEFAULT_PEN_WIDTH, + ), + + + # Indicate showing most recently computed segmentation + dcc.Checklist( + id="crf-show-segmentation", + options=[ + { + "label": "Compute/Show segmentation", + "value": "Show segmentation", + } + ], + value=[], + ), + + dcc.Markdown( + ">Post-processing settings" + ), + + html.H6(id="theta-display"), + # Slider for specifying pen width + dcc.Slider( + id="crf-theta-slider", + min=1, + max=100, + step=1, + value=DEFAULT_CRF_THETA, + ), + + html.H6(id="mu-display"), + # Slider for specifying pen width + dcc.Slider( + id="crf-mu-slider", + min=1, + max=100, + step=1, + value=DEFAULT_CRF_MU, + ), + + html.H6(id="crf-downsample-display"), + # Slider for specifying pen width + dcc.Slider( + id="crf-downsample-slider", + min=1, + max=6, + step=1, + value=DEFAULT_CRF_DOWNSAMPLE, + ), + + html.H6(id="crf-gtprob-display"), + # Slider for specifying pen width + dcc.Slider( + id="crf-gtprob-slider", + min=0.5, + max=0.95, + step=0.05, + value=DEFAULT_CRF_GTPROB, + ), + + dcc.Markdown( + ">Classifier settings" + ), + + html.H6(id="rf-downsample-display"), + # Slider for specifying pen width + dcc.Slider( + id="rf-downsample-slider", + min=1, + max=20, + step=1, + value=DEFAULT_RF_DOWNSAMPLE, + ), + + ], + className="three columns app-background", + ), + ], + className="ten columns", + ), #main content Div + + #======================================================== + ## tab 2 + #======================================================== + + ]), + dcc.Tab(label='File List and Instructions', children=[ + + html.H4(children="Doodler"), + dcc.Markdown( + "> A user-interactive tool for fast segmentation of imagery (designed for natural environments), using a Multilayer Perceptron classifier and Conditional Random Field (CRF) refinement. \ + Doodles are used to make a classifier model, which maps image features to unary potentials to create an initial image segmentation. The segmentation is then refined using a CRF model." 
+ ), + + dcc.Input(id='my-id', value='Enter-user-ID', type="text"), + html.Button('Submit', id='button'), + html.Div(id='my-div'), + + html.H3("Select Image"), + dcc.Dropdown( + id="select-image", + optionHeight=15, + style={'fontSize': 13}, + options = [ + {'label': image.split('assets/')[-1], 'value': image } \ + for image in files + ], + + value='assets/logos/dash-default.jpg', # + multi=False, + ), + html.Div([html.Div(id='live-update-text'), + dcc.Interval(id='interval-component', interval=500, n_intervals=0)]), + + + html.P(children="This image/Copy"), + dcc.Textarea(id="thisimage_output", cols=80), + html.Br(), + + dcc.Markdown( + """ + **Instructions:** + * Before you begin, make a new 'classes.txt' file that contains a list of the classes you'd like to label + * Optionally, you can copy the images you wish to label into the 'assets' folder (just jpg, JPG or jpeg extension, or mixtures of those, for now) + * Enter a user ID (initials or similar). This will get appended to your results to identify you. Results are also timestamped. You may enter a user ID at any time (or not at all) + * Select an image from the list (often you need to select the image twice: make sure the image selected matches the image name shown in the box) + * Make some brief annotations ('doodles') of every class present in the image, in every region of the image that class is present + * Check 'Show/compute segmentation'. The computation time depends on image size, and the number of classes and doodles. Larger image or more doodles/classes = greater time and memory required + * If you're not happy, uncheck 'Show/compute segmentation' and play with the parameters. However, it is often better to leave the parameters and correct mistakes by adding or removing doodles, or using a different pen width. + * Once you're happy, you can download the label image, but it is already saved in the 'results' folder. + * Before you move onto the next image from the list, uncheck 'Show/compute segmentation'. + * Repeat. Happy doodling! Press Ctrl+C to end the program. Results are in the 'results' folder, timestamped. Session logs are also timestamped and found in the 'logs' directory. + * As you go, the program only lists files that are yet to be labeled. It does this irrespective of your opinion of the segmentation, so you get 'one shot' before you select another image (i.e. you cant go back to redo) + * [Code on GitHub](https://github.com/dbuscombe-usgs/dash_doodler). + """ + ), + dcc.Markdown( + """ + **Tips:** 1) Works best for small imagery, typically much smaller than 3000 x 3000 px images. This prevents out-of-memory errors, and also helps you identify small features\ + 2) Less is usually more! It is often best to use small pen width and relatively few annotations. Don't be tempted to spend too long doodling; extra doodles can be strategically added to correct segmentations \ + 3) Make doodles of every class present in the image, and also every region of the image (i.e. avoid label clusters) \ + 4) If things get weird, hit the refresh button on your browser and it should reset the application. 
Don't worry, all your previous work is saved!\ + 5) Remember to uncheck 'Show/compute segmentation' before you change parameter values or change image\ + """ + ), + + ]),]), + + #======================================================== + ## components that are not displayed, used for storing data in localhost + #======================================================== + + html.Div( + id="no-display", + children=[ + dcc.Store(id="image-list-store", data=[]), + # Store for user created masks + # data is a list of dicts describing shapes + dcc.Store(id="masks", data={"shapes": []}), + # Store for storing segmentations from shapes + # the keys are hashes of shape lists and the data are pngdata + # representing the corresponding segmentation + # this is so we can download annotations and also not recompute + # needlessly old segmentations + dcc.Store(id="segmentation", data={}), + dcc.Store(id="classified-image-store", data=""), + ], + ), #nos-display div + + ], #children +) #app layout + + +# ##======================================================== +##======================================================== +## app callbacks +##======================================================== +@app.callback( + [ + Output("select-image","options"), + Output("graph", "figure"), + Output("image-list-store", "data"), + Output("masks", "data"), + Output('my-div', 'children'), + Output("segmentation", "data"), + Output('thisimage_output', 'value'), + Output("pen-width-display", "children"), + Output("theta-display", "children"), + Output("mu-display", "children"), + Output("crf-downsample-display", "children"), + Output("crf-gtprob-display", "children"), + Output("rf-downsample-display", "children"), + Output("classified-image-store", "data"), + ], + [ + Input("upload-data", "filename"), + Input("upload-data", "contents"), + Input("graph", "relayoutData"), + Input( + {"type": "label-class-button", "index": dash.dependencies.ALL}, + "n_clicks_timestamp", + ), + Input("crf-theta-slider", "value"), + Input('crf-mu-slider', "value"), + Input("pen-width", "value"), + Input("crf-show-segmentation", "value"), + Input("crf-downsample-slider", "value"), + Input("crf-gtprob-slider", "value"), + Input("rf-downsample-slider", "value"), + Input("select-image", "value"), + Input('interval-component', 'n_intervals'), + ], + [ + State("image-list-store", "data"), + State('my-id', 'value'), + State("masks", "data"), + State("segmentation", "data"), + State("classified-image-store", "data"), + ], +) + +# ##======================================================== +##======================================================== +## app callback function +##======================================================== +def update_output( + uploaded_filenames, + uploaded_file_contents, + graph_relayoutData, + any_label_class_button_value, + crf_theta_slider_value, + crf_mu_slider_value, + pen_width_value, + show_segmentation_value, + crf_downsample_value, + gt_prob, + rf_downsample_value, + select_image_value, + n_intervals, + image_list_data, + my_id_value, + masks_data, + segmentation_data, + segmentation_store_data, + ): + """ + This is where all the action happens, and is called any time a button is pressed + This function is automatically called, and the inputs and outputs match, in order, + the list of callback inputs and outputs above + + The callback context is first defined, which dictates what the function does + """ + + callback_context = [p["prop_id"] for p in dash.callback_context.triggered][0] + #print(callback_context) + + 
multichannel = True + intensity = True + edges = True + texture = True + + image_list_data = [] + all_image_value = '' + files = '' + options = [] + + if callback_context=='interval-component.n_intervals': + #this file must exist - it contains a list of images labeled in this session + filelist = 'files_done.txt' + files, labeled_files = uploaded_files(filelist,UPLOAD_DIRECTORY,LABELED_DIRECTORY) + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('File list written to %s' % (filelist)) + + files = [f.split('assets/')[-1] for f in files] + labeled_files = [f.split('labeled/')[-1] for f in labeled_files] + + files = list(set(files) - set(labeled_files)) + files = sorted(files) + + options = [{'label': image, 'value': image } for image in files] + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('Checked assets and labeled lists and revised list of images yet to label') + + if 'assets' not in select_image_value: + select_image_value = 'assets'+os.sep+select_image_value + + if callback_context == "graph.relayoutData": + try: + if "shapes" in graph_relayoutData.keys(): + masks_data["shapes"] = graph_relayoutData["shapes"] + else: + return dash.no_update + except: + return dash.no_update + + elif callback_context == "select-image.value": + masks_data={"shapes": []} + segmentation_data={} + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('New image selected') + + pen_width = pen_width_value + + # find label class value by finding button with the greatest n_clicks + if any_label_class_button_value is None: + label_class_value = DEFAULT_LABEL_CLASS + else: + label_class_value = max( + enumerate(any_label_class_button_value), + key=lambda t: 0 if t[1] is None else t[1], + )[0] + + fig = make_and_return_default_figure( + images = [select_image_value], + stroke_color=convert_integer_class_to_color(class_label_colormap,label_class_value), + pen_width=pen_width, + shapes=masks_data["shapes"], + ) + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('Main figure window updated with new image') + + if ("Show segmentation" in show_segmentation_value) and ( + len(masks_data["shapes"]) > 0): + # to store segmentation data in the store, we need to base64 encode the + # PIL.Image and hash the set of shapes to use this as the key + # to retrieve the segmentation data, we need to base64 decode to a PIL.Image + # because this will give the dimensions of the image + sh = shapes_to_key( + [ + masks_data["shapes"], + '', #segmentation_features_value, + '', #sigma_range_slider_value, + ] + ) + + segimgpng = None + + # start timer + if os.name=='posix': # true if linux/mac or cygwin on windows + start = time.time() + else: # windows + start = time.clock() + + # this is the function that computes and updates the segmentation whenever the checkbox is checked + segimgpng, seg, img, color_doodles, doodles = show_segmentation( + [select_image_value], masks_data["shapes"], callback_context, + crf_theta_slider_value, crf_mu_slider_value, results_folder, rf_downsample_value, crf_downsample_value, gt_prob, my_id_value, + multichannel, intensity, edges, texture,class_label_colormap + ) + + logging.info('... 
showing segmentation on screen') + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) + + if os.name=='posix': # true if linux/mac + elapsed = (time.time() - start)#/60 + else: # windows + elapsed = (time.clock() - start)#/60 + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('Processing took %s seconds' % (str(elapsed))) + + lstack = (np.arange(seg.max()) == seg[...,None]-1).astype(int) #one-hot encode the 2D label into 3D stack of IxJxN classes + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('One-hot encoded label stack created') + + + if type(select_image_value) is list: + if 'jpg' in select_image_value[0]: + colfile = select_image_value[0].replace('assets',results_folder).replace('.jpg','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') + if 'JPG' in select_image_value[0]: + colfile = select_image_value[0].replace('assets',results_folder).replace('.JPG','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') + if 'jpeg' in select_image_value[0]: + colfile = select_image_value[0].replace('assets',results_folder).replace('.jpeg','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') + + if np.ndim(img)==3: + imsave(colfile,label_to_colors(seg-1, img[:,:,0]==0, alpha=128, colormap=class_label_colormap, color_class_offset=0, do_alpha=False)) + else: + imsave(colfile,label_to_colors(seg-1, img==0, alpha=128, colormap=class_label_colormap, color_class_offset=0, do_alpha=False)) + + else: + if 'jpg' in select_image_value: + colfile = select_image_value.replace('assets',results_folder).replace('.jpg','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') + if 'JPG' in select_image_value: + colfile = select_image_value.replace('assets',results_folder).replace('.JPG','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') + if 'jpeg' in select_image_value: + colfile = select_image_value.replace('assets',results_folder).replace('.jpeg','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') + + if np.ndim(img)==3: + imsave(colfile,label_to_colors(seg-1, img[:,:,0]==0, alpha=128, colormap=class_label_colormap, color_class_offset=0, do_alpha=False)) + else: + imsave(colfile,label_to_colors(seg-1, img==0, alpha=128, colormap=class_label_colormap, color_class_offset=0, do_alpha=False)) + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('RGB label image saved to %s' % (colfile)) + + settings_dict = np.array([pen_width, crf_downsample_value, rf_downsample_value, crf_theta_slider_value, crf_mu_slider_value, gt_prob]) + + if type(select_image_value) is list: + if 'jpg' in select_image_value[0]: + numpyfile = select_image_value[0].replace('assets',results_folder).replace('.jpg','_'+my_id_value+'.npz') + if 'JPG' in select_image_value[0]: + numpyfile = select_image_value[0].replace('assets',results_folder).replace('.JPG','_'+my_id_value+'.npz') + if 'jpeg' in select_image_value[0]: + numpyfile = select_image_value[0].replace('assets',results_folder).replace('.jpeg','_'+my_id_value+'.npz') + + if os.path.exists(numpyfile): + saved_data = np.load(numpyfile) + savez_dict = dict() + for k in saved_data.keys(): + tmp = saved_data[k] + name = str(k) + savez_dict['0'+name] = tmp + del tmp + + savez_dict['image'] = img.astype(np.uint8) + savez_dict['label'] = lstack.astype(np.uint8) + savez_dict['color_doodles'] = color_doodles.astype(np.uint8) + savez_dict['doodles'] = doodles.astype(np.uint8) 
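+                    # (any arrays already in the .npz were re-keyed above with a '0'
+                    # prefix, so results from earlier sessions are kept, not overwritten)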
+ savez_dict['settings'] = settings_dict + savez_dict['classes'] = class_label_names + np.savez(numpyfile, **savez_dict ) + + else: + savez_dict = dict() + savez_dict['image'] = img.astype(np.uint8) + savez_dict['label'] = lstack.astype(np.uint8) + savez_dict['color_doodles'] = color_doodles.astype(np.uint8) + savez_dict['doodles'] = doodles.astype(np.uint8) + savez_dict['settings'] = settings_dict + savez_dict['classes'] = class_label_names + np.savez(numpyfile, **savez_dict ) #save settings too + + else: + if 'jpg' in select_image_value: + numpyfile = select_image_value.replace('assets',results_folder).replace('.jpg','_'+my_id_value+'.npz') + if 'JPG' in select_image_value: + numpyfile = select_image_value.replace('assets',results_folder).replace('.JPG','_'+my_id_value+'.npz') + if 'jpeg' in select_image_value: + numpyfile = select_image_value.replace('assets',results_folder).replace('.jpeg','_'+my_id_value+'.npz') + + if os.path.exists(numpyfile): + saved_data = np.load(numpyfile) + savez_dict = dict() + for k in saved_data.keys(): + tmp = saved_data[k] + name = str(k) + savez_dict['0'+name] = tmp + del tmp + + savez_dict['image'] = img.astype(np.uint8) + savez_dict['label'] = lstack.astype(np.uint8) + savez_dict['color_doodles'] = color_doodles.astype(np.uint8) + savez_dict['doodles'] = doodles.astype(np.uint8) + savez_dict['settings'] = settings_dict + savez_dict['classes'] = class_label_names + np.savez(numpyfile, **savez_dict )#save settings too + + else: + savez_dict = dict() + savez_dict['image'] = img.astype(np.uint8) + savez_dict['label'] = lstack.astype(np.uint8) + savez_dict['color_doodles'] = color_doodles.astype(np.uint8) + savez_dict['doodles'] = doodles.astype(np.uint8) + savez_dict['settings'] = settings_dict + savez_dict['classes'] = class_label_names + np.savez(numpyfile, **savez_dict )#save settings too + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) + + del img, seg, lstack, doodles, color_doodles + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('Numpy arrays saved to %s' % (numpyfile)) + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) + + segmentation_data = shapes_seg_pair_as_dict( + segmentation_data, sh, segimgpng + ) + try: + segmentation_store_data = pil2uri( + seg_pil( + select_image_value, segimgpng, do_alpha=True + ) #plot_utils. + ) + shutil.copyfile(select_image_value, select_image_value.replace('assets', 'labeled')) #move + except: + segmentation_store_data = pil2uri( + seg_pil( + PIL.Image.open(select_image_value), segimgpng, do_alpha=True + ) #plot_utils. + ) + shutil.copyfile(select_image_value, select_image_value.replace('assets', 'labeled')) #move + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('%s moved to labeled folder' % (select_image_value.replace('assets', 'labeled'))) + + + images_to_draw = [] + if segimgpng is not None: + images_to_draw = [segimgpng] + + fig = add_layout_images_to_fig(fig, images_to_draw) #plot_utils. 
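+    # the checkbox state is cleared locally and the image is recorded as labeled;
+    # the interval-driven file list will drop it from the selection menu, since it
+    # was copied to the 'labeled' folder above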
+ + show_segmentation_value = [] + image_list_data.append(select_image_value) + + try: + os.remove('my_defaults.py') + except: + pass + + #write defaults back out to file + with open('my_defaults.py', 'a') as the_file: + the_file.write('DEFAULT_PEN_WIDTH = {}\n'.format(pen_width)) + the_file.write('DEFAULT_CRF_DOWNSAMPLE = {}\n'.format(crf_downsample_value)) + the_file.write('DEFAULT_RF_DOWNSAMPLE = {}\n'.format(rf_downsample_value)) + the_file.write('DEFAULT_CRF_THETA = {}\n'.format(crf_theta_slider_value)) + the_file.write('DEFAULT_CRF_MU = {}\n'.format(crf_mu_slider_value)) + the_file.write('DEFAULT_CRF_GTPROB = {}\n'.format(gt_prob)) + print('my_defaults.py overwritten with parameter settings') + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('my_defaults.py overwritten with parameter settings') + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) + + if len(files) == 0: + return [ + options, + fig, + image_list_data, + masks_data, + segmentation_data, + 'User ID: "{}"'.format(my_id_value) , + select_image_value, + "Pen width (default: %d): %d" % (DEFAULT_PEN_WIDTH,pen_width), + "Blur factor (default: %d): %d" % (DEFAULT_CRF_THETA, crf_theta_slider_value), + "Model independence factor (default: %d): %d" % (DEFAULT_CRF_MU,crf_mu_slider_value), + "CRF downsample factor (default: %d): %d" % (DEFAULT_CRF_DOWNSAMPLE,crf_downsample_value), + "Probability of doodle (default: %f): %f" % (DEFAULT_CRF_GTPROB,gt_prob), + "Classifier downsample factor (default: %d): %d" % (DEFAULT_RF_DOWNSAMPLE,rf_downsample_value), + segmentation_store_data, + ] + else: + return [ + options, + fig, + image_list_data, + masks_data, + segmentation_data, + 'User ID: "{}"'.format(my_id_value) , + select_image_value, + "Pen width (default: %d): %d" % (DEFAULT_PEN_WIDTH,pen_width), + "Blur factor (default: %d): %d" % (DEFAULT_CRF_THETA, crf_theta_slider_value), + "Model independence factor (default: %d): %d" % (DEFAULT_CRF_MU,crf_mu_slider_value), + "CRF downsample factor (default: %d): %d" % (DEFAULT_CRF_DOWNSAMPLE,crf_downsample_value), + "Probability of doodle (default: %f): %f" % (DEFAULT_CRF_GTPROB,gt_prob), + "Classifier downsample factor (default: %d): %d" % (DEFAULT_RF_DOWNSAMPLE,rf_downsample_value), + segmentation_store_data, + ] diff --git a/app_files/cache-directory/2029240f6d1128be89ddc32729463129 b/app_files/cache-directory/2029240f6d1128be89ddc32729463129 new file mode 100644 index 0000000..8e5a560 Binary files /dev/null and b/app_files/cache-directory/2029240f6d1128be89ddc32729463129 differ diff --git a/logs/files-here.txt b/app_files/cache-directory/files-here.txt similarity index 100% rename from logs/files-here.txt rename to app_files/cache-directory/files-here.txt diff --git a/results/files-here.txt b/app_files/logs/files-here.txt similarity index 100% rename from results/files-here.txt rename to app_files/logs/files-here.txt diff --git a/src/annotations_to_segmentations.py b/app_files/src/annotations_to_segmentations.py similarity index 67% rename from src/annotations_to_segmentations.py rename to app_files/src/annotations_to_segmentations.py index 87c408c..984c1b3 100644 --- a/src/annotations_to_segmentations.py +++ b/app_files/src/annotations_to_segmentations.py @@ -1,10 +1,9 @@ # Written by Dr Daniel Buscombe, Marda Science LLC -# for "ML Mondays", a course supported by the USGS Community for Data Integration -# and the USGS Coastal Change Hazards Program +# for the USGS Coastal Change 
Hazards Program # # MIT License # -# Copyright (c) 2020, Marda Science LLC +# Copyright (c) 2020-2021, Marda Science LLC # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal @@ -25,21 +24,120 @@ # SOFTWARE. ##======================================================== -import PIL.Image -import numpy as np +#======================================================== +## ``````````````````````````` imports +##======================================================== -import skimage.util -import skimage.io -import skimage.color -import io, os +import numpy as np +import PIL.Image, skimage.util, skimage.io, skimage.color +import io, os, psutil, logging, base64, json from datetime import datetime from image_segmentation import segmentation import plotly.express as px from skimage.io import imsave, imread - from cairosvg import svg2png from datetime import datetime -import logging + +from app_funcs import * +from plot_utils import * + +##======================================================== +def show_segmentation(image_path, + mask_shapes, + callback_context, + crf_theta_slider_value, + crf_mu_slider_value, + results_folder, + rf_downsample_value, + crf_downsample_factor, + gt_prob, + my_id_value, + multichannel, + intensity, + edges, + texture, + class_label_colormap + ): + + """ adds an image showing segmentations to a figure's layout """ + + # add 1 because classifier takes 0 to mean no mask + shape_layers = [convert_color_class(class_label_colormap,shape["line"]["color"]) + 1 for shape in mask_shapes] + + label_to_colors_args = { + "colormap": class_label_colormap, + "color_class_offset": -1, + } + + sigma_min=1; sigma_max=16 + + segimg, seg, img, color_doodles, doodles = compute_segmentations( + mask_shapes, crf_theta_slider_value,crf_mu_slider_value, + results_folder, rf_downsample_value, + crf_downsample_factor, gt_prob, my_id_value, callback_context, + multichannel, intensity, edges, texture, sigma_min, sigma_max, + img_path=image_path, + shape_layers=shape_layers, + label_to_colors_args=label_to_colors_args, + ) + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('Showing segmentation on screen ...') + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) + + logging.info('... converting to PIL array ...') + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) + # get the classifier that we can later store in the Store + segimgpng = img_array_2_pil(segimg) #plot_utils. 
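+    # returned below: the PNG overlay for the figure, the integer label image, the
+    # input image, and the color/greyscale doodle annotations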
+ + return (segimgpng, seg, img, color_doodles, doodles ) + +##======================================================== +def img_array_2_pil(ia): + """ converst image byte array to PIL Image""" + ia = skimage.util.img_as_ubyte(ia) + img = PIL.Image.fromarray(ia) + return img + +##======================================================== +def convert_integer_class_to_color(class_label_colormap,n): + """ + class to color + """ + return class_label_colormap[n] + +##======================================================== +def convert_color_class(class_label_colormap,c): + """ + color to class + """ + return class_label_colormap.index(c) + +##======================================================== +def shapes_to_key(shapes): + """ + convert shapes to json + """ + return json.dumps(shapes) + +##======================================================== +def shapes_seg_pair_as_dict(d, key, seg, remove_old=True): + """ + Stores shapes and segmentation pair in dict d + seg is a PIL.Image object + if remove_old True, deletes all the old keys and values. + """ + bytes_to_encode = io.BytesIO() + seg.save(bytes_to_encode, format="png") + bytes_to_encode.seek(0) + + data = base64.b64encode(bytes_to_encode.read()).decode() + + if remove_old: + return {key: data} + d[key] = data + + return d ##======================================================== def shape_to_svg_code(shape, fig=None, width=None, height=None): @@ -169,7 +267,6 @@ def label_to_colors( colormap[0]. """ - colormap = [ tuple([fromhex(h[s : s + 2]) for s in range(0, len(h), 2)]) for h in [c.replace("#", "") for c in colormap] @@ -202,8 +299,6 @@ def compute_segmentations( gt_prob, my_id_value, callback_context, - rf_file, - data_file, multichannel, intensity, edges, @@ -213,8 +308,11 @@ def compute_segmentations( img_path="assets/logos/dash-default.jpg", shape_layers=None, label_to_colors_args={}, - SAVE_RF=False): - """ segments the image based on the user annotations""" + ): + """ + segments the image based on the user annotations + calls segmentation() from image_segmentation + """ # load original image img = img_to_ubyte_array(img_path) @@ -233,11 +331,16 @@ def compute_segmentations( else: color_annos = label_to_colors(mask, img==0, alpha=128, do_alpha=True, **label_to_colors_args) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) + logging.info('Calling segmentation function') - seg = segmentation(img, img_path, results_folder, rf_file, data_file, callback_context, + seg = segmentation(img, img_path, results_folder, callback_context, #rf_file, data_file, crf_theta_slider_value, crf_mu_slider_value, rf_downsample_value, #median_filter_value, crf_downsample_factor, gt_prob, mask, multichannel, intensity, edges, texture, - sigma_min, sigma_max, SAVE_RF) #n_estimators, + sigma_min, sigma_max)#, SAVE_RF) #n_estimators, + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('Segmentation computed') #print(np.unique(seg)) if np.ndim(img)==3: @@ -245,6 +348,10 @@ def compute_segmentations( else: color_seg = label_to_colors(seg, img==0, alpha=128, do_alpha=True, **label_to_colors_args) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('Color segmentation computed') + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) + # color_seg is a 3d tensor representing a colored image whereas seg is a # matrix whose entries represent the classes return (color_seg, seg, img, color_annos[:,:,:3], mask ) #colored 
image, label image, input image, color annotations, greyscale annotations diff --git a/app_files/src/app_funcs.py b/app_files/src/app_funcs.py new file mode 100644 index 0000000..8001b90 --- /dev/null +++ b/app_files/src/app_funcs.py @@ -0,0 +1,172 @@ +# Written by Dr Daniel Buscombe, Marda Science LLC +# for the USGS Coastal Change Hazards Program +# +# MIT License +# +# Copyright (c) 2020-2021, Marda Science LLC +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. + +from glob import glob +import dash_html_components as html +import io, os, psutil, logging, base64, PIL.Image +from plot_utils import dummy_fig, add_layout_images_to_fig + +##======================================================== +def get_asset_files(): + files = sorted(glob('assets/*.jpg')) + sorted(glob('assets/*.JPG')) + sorted(glob('assets/*.jpeg')) + + files = [f for f in files if 'dash' not in f] + return files + +##======================================================== +def parse_contents(contents, filename, date): + return html.Div([ + html.H5(filename), + html.H6(datetime.fromtimestamp(date)), + + # HTML images accept base64 encoded strings in the same format + # that is supplied by the upload + html.Img(src=contents), + html.Hr(), + html.Div('Raw Content'), + html.Pre(contents[0:200] + '...', style={ + 'whiteSpace': 'pre-wrap', + 'wordBreak': 'break-all' + }) + ]) + +##======================================================== +def look_up_seg(d, key): + """ Returns a PIL.Image object """ + data = d[key] + img_bytes = base64.b64decode(data) + img = PIL.Image.open(io.BytesIO(img_bytes)) + return img + +##======================================================== +def listToString(s): + # initialize an empty string + str1 = " " + # return string + return (str1.join(s)) + +##======================================================== +def uploaded_files(filelist,UPLOAD_DIRECTORY,LABELED_DIRECTORY): + """List the files in the upload directory.""" + files = [] + for filename in os.listdir(UPLOAD_DIRECTORY): + path = os.path.join(UPLOAD_DIRECTORY, filename) + if os.path.isfile(path): + if 'jpg' in filename: + files.append(filename) + if 'JPG' in filename: + files.append(filename) + if 'jpeg' in filename: + files.append(filename) + + labeled_files = [] + for filename in os.listdir(LABELED_DIRECTORY): + path = os.path.join(LABELED_DIRECTORY, filename) + if os.path.isfile(path): + if 'jpg' in filename: + labeled_files.append(filename) + if 'JPG' in filename: + labeled_files.append(filename) + if 'jpeg' in filename: + 
labeled_files.append(filename) + + with open(filelist, 'w') as filehandle: + for listitem in labeled_files: + filehandle.write('%s\n' % listitem) + # logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + # logging.info('File list written to %s' % (filelist)) + + return sorted(files), sorted(labeled_files) + + +##======================================================== +def make_and_return_default_figure( + images,#=[DEFAULT_IMAGE_PATH], + stroke_color,#=convert_integer_class_to_color(class_label_colormap,DEFAULT_LABEL_CLASS), + pen_width,#=DEFAULT_PEN_WIDTH, + shapes#=[], +): + """ + create and return the default Dash/plotly figure object + """ + fig = dummy_fig() #plot_utils. + + add_layout_images_to_fig(fig, images) #plot_utils. + + fig.update_layout( + { + "dragmode": "drawopenpath", + "shapes": shapes, + "newshape.line.color": stroke_color, + "newshape.line.width": pen_width, + "margin": dict(l=0, r=0, b=0, t=0, pad=4), + "height": 650 + } + ) + + return fig + +# +# ##============================================================ +# def save_file(name, content): +# """Decode and store a file uploaded with Plotly Dash.""" +# data = content.encode("utf8").split(b";base64,")[1] +# with open(os.path.join(UPLOAD_DIRECTORY, name), "wb") as fp: +# fp.write(base64.decodebytes(data)) +# +# +# def uploaded_files(): +# """List the files in the upload directory.""" +# files = [] +# for filename in os.listdir(UPLOAD_DIRECTORY): +# path = os.path.join(UPLOAD_DIRECTORY, filename) +# if os.path.isfile(path): +# if 'jpg' in filename: +# files.append(filename) +# if 'JPG' in filename: +# files.append(filename) +# if 'jpeg' in filename: +# files.append(filename) +# +# labeled_files = [] +# for filename in os.listdir(LABELED_DIRECTORY): +# path = os.path.join(LABELED_DIRECTORY, filename) +# if os.path.isfile(path): +# if 'jpg' in filename: +# labeled_files.append(filename) +# if 'JPG' in filename: +# labeled_files.append(filename) +# if 'jpeg' in filename: +# labeled_files.append(filename) +# +# filelist = 'files_done.txt' +# +# with open(filelist, 'w') as filehandle: +# for listitem in labeled_files: +# filehandle.write('%s\n' % listitem) +# logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) +# logging.info('File list written to %s' % (filelist)) +# +# return sorted(files), sorted(labeled_files) diff --git a/src/defaults.py b/app_files/src/defaults.py similarity index 78% rename from src/defaults.py rename to app_files/src/defaults.py index 9e38680..107f693 100644 --- a/src/defaults.py +++ b/app_files/src/defaults.py @@ -1,10 +1,9 @@ # Written by Dr Daniel Buscombe, Marda Science LLC -# for "ML Mondays", a course supported by the USGS Community for Data Integration -# and the USGS Coastal Change Hazards Program +# for the USGS Coastal Change Hazards Program # # MIT License # -# Copyright (c) 2020, Marda Science LLC +# Copyright (c) 2020-2021, Marda Science LLC # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal @@ -24,16 +23,9 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
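+
+# Note: these values are fallbacks. After each segmentation, the app overwrites
+# my_defaults.py (in the directory the program is run from) with the current
+# settings, and imports that file in preference to this one on the next run, e.g.:
+#     DEFAULT_PEN_WIDTH = 3
+#     DEFAULT_CRF_THETA = 40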
-DEFAULT_PEN_WIDTH = 2 - -DEFAULT_CRF_DOWNSAMPLE = 3 - -DEFAULT_RF_DOWNSAMPLE = 10 - -DEFAULT_CRF_THETA = 40 - -DEFAULT_CRF_MU = 100 - -DEFAULT_RF_NESTIMATORS = 3 - +DEFAULT_PEN_WIDTH = 3 +DEFAULT_CRF_DOWNSAMPLE = 1 +DEFAULT_RF_DOWNSAMPLE = 1 +DEFAULT_CRF_THETA = 1 +DEFAULT_CRF_MU = 1 DEFAULT_CRF_GTPROB = 0.9 diff --git a/src/image_segmentation.py b/app_files/src/image_segmentation.py similarity index 68% rename from src/image_segmentation.py rename to app_files/src/image_segmentation.py index 7c4233f..2439d39 100644 --- a/src/image_segmentation.py +++ b/app_files/src/image_segmentation.py @@ -1,10 +1,9 @@ # Written by Dr Daniel Buscombe, Marda Science LLC -# for "ML Mondays", a course supported by the USGS Community for Data Integration -# and the USGS Coastal Change Hazards Program +# for the USGS Coastal Change Hazards Program # # MIT License # -# Copyright (c) 2020, Marda Science LLC +# Copyright (c) 2020-2021, Marda Science LLC # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal @@ -24,30 +23,41 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. -import itertools +#======================================================== +## ``````````````````````````` imports +##======================================================== + +#numerical import numpy as np +np.seterr(divide='ignore', invalid='ignore') -from skimage import filters, feature, img_as_float32 +#classifier # from sklearn.ensemble import RandomForestClassifier from sklearn.neural_network import MLPClassifier from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler -from tempfile import TemporaryFile -import plotly.express as px -from skimage.io import imsave -from datetime import datetime +#spatial filters +from skimage.morphology import remove_small_holes, remove_small_objects +from scipy import ndimage +from scipy.signal import convolve2d +#crf import pydensecrf.densecrf as dcrf from pydensecrf.utils import create_pairwise_bilateral, unary_from_labels -from skimage.transform import resize + +#utility +from tempfile import TemporaryFile from joblib import dump, load, Parallel, delayed -import io, os, logging -from skimage.morphology import remove_small_holes, remove_small_objects -from scipy import ndimage -from scipy.signal import convolve2d +import io, os, logging, psutil, itertools +from skimage.io import imsave +from datetime import datetime +from skimage import filters, feature, img_as_float32 +from skimage.transform import resize + +#plotly +# import plotly.express as px -np.seterr(divide='ignore', invalid='ignore') ##======================================================== def fromhex(n): @@ -67,7 +77,11 @@ def rescale(dat, ##==================================== def standardize(img): - #standardization using adjusted standard deviation + ''' + standardize a 3 band image using adjusted standard deviation + (1-band images are standardized and returned as 3-band images) + ''' + # N = np.shape(img)[0] * np.shape(img)[1] s = np.maximum(np.std(img), 1.0/np.sqrt(N)) m = np.mean(img) @@ -82,7 +96,14 @@ def standardize(img): ##======================================================== def filter_one_hot(label, blobsize): - #filter the one-hot encoded binary masks + # + ''' + filter the one-hot encoded label images by + a) converting to a stack of binary one-hote encoded masks + b) removing small holes and islands + and + c) argmax the filtered 
label stack + ''' lstack = (np.arange(label.max()) == label[...,None]-1).astype(int) #one-hot encode for kk in range(lstack.shape[-1]): @@ -98,6 +119,13 @@ def filter_one_hot(label, blobsize): ##======================================================== def filter_one_hot_spatial(label, distance): #filter the one-hot encoded binary masks + ''' + filter the one-hot encoded label images by + a) converting to a stack of binary one-hot encoded masks + b) flagging pixels that are in class transition areas + c) argmax the filtered label stack + d) zeroing flagged pixels + ''' lstack = (np.arange(label.max()) == label[...,None]-1).astype(int) #one-hot encode tmp = np.zeros_like(label) @@ -115,15 +143,10 @@ def filter_one_hot_spatial(label, distance): return label # ##======================================================== -# def inpaint_zeros(label): -# valid_mask = label>0 -# coords = np.array(np.nonzero(valid_mask)).T -# values = label[valid_mask] -# it = interpolate.LinearNDInterpolator(coords, values, fill_value=0) -# out = it(list(np.ndindex(label.shape))).reshape(label.shape) -# return out - def inpaint_nans(im): + ''' + quick and dirty nan inpainting using kernel trick + ''' ipn_kernel = np.array([[1,1,1],[1,0,1],[1,1,1]]) # kernel for inpaint_nans nans = np.isnan(im) while np.sum(nans)>0: @@ -156,17 +179,15 @@ def crf_refine(label, OUTPUTS: label [ndarray]: label image 2D matrix of integers """ - #gx,gy = np.meshgrid(np.arange(img.shape[1]), np.arange(img.shape[0])) - #img = np.dstack((img,np.sqrt(gx**2 + gy**2))) #gx,gy)) Horig = label.shape[0] Worig = label.shape[1] l_unique = np.unique(label.flatten())#.tolist() scale = 1+(5 * (np.array(img.shape).max() / 3000)) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('CRF scale: %f' % (scale)) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('CRF downsample factor: %f' % (crf_downsample_factor)) logging.info('CRF theta parameter: %f' % (crf_theta_slider_value)) logging.info('CRF mu parameter: %f' % (crf_mu_slider_value)) @@ -176,8 +197,10 @@ def crf_refine(label, img = img[::crf_downsample_factor,::crf_downsample_factor, :] # do the same for the label image label = label[::crf_downsample_factor,::crf_downsample_factor] + # yes, I know this aliases, but considering the task, it is ok; the objective is to + # make fast inference and resize the output - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('Images downsampled by a factor os %f' % (crf_downsample_factor)) Hnew = label.shape[0] @@ -188,13 +211,10 @@ def crf_refine(label, if l_unique[0]==0: n = (orig_mx-orig_mn)#+1 - else: n = (orig_mx-orig_mn)+1 - label = (label - orig_mn)+1 - mn = np.min(np.array(label).flatten()) mx = np.max(np.array(label).flatten()) @@ -213,19 +233,18 @@ def crf_refine(label, normalization=dcrf.NORMALIZE_SYMMETRIC) feats = create_pairwise_bilateral( sdims=(crf_theta_slider_value, crf_theta_slider_value), - # schan=(2,2,2,2,2,2), #add these when implement 6 band schan=(scale,scale,scale), img=img, chdim=2) d.addPairwiseEnergy(feats, compat=crf_mu_slider_value, kernel=dcrf.DIAG_KERNEL,normalization=dcrf.NORMALIZE_SYMMETRIC) #260 - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('CRF feature extraction complete ... 
inference starting') Q = d.inference(10) result = np.argmax(Q, axis=0).reshape((H, W)).astype(np.uint8) +1 - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('CRF inference made') uniq = np.unique(result.flatten()) @@ -234,7 +253,7 @@ def crf_refine(label, result = rescale(result, orig_mn, orig_mx).astype(np.uint8) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('label resized and rescaled ... CRF post-processing complete') return result, n @@ -256,11 +275,11 @@ def features_sigma(img, gx = filters.gaussian(gx, sigma) gy = filters.gaussian(gy, sigma) - features.append(np.sqrt(gx**2 + gy**2)) #gy) #use polar radius of pixel locations as cartesian coordinates + features.append(np.sqrt(gx**2 + gy**2)) #use polar radius of pixel locations as cartesian coordinates del gx, gy - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('Location features extracted using sigma= %f' % (sigma)) img_blur = filters.gaussian(img, sigma) @@ -268,13 +287,13 @@ def features_sigma(img, if intensity: features.append(img_blur) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('Intensity features extracted using sigma= %f' % (sigma)) if edges: features.append(filters.sobel(img_blur)) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('Edge features extracted using sigma= %f' % (sigma)) if texture: @@ -290,10 +309,10 @@ def features_sigma(img, features.append(eigval_mat) del eigval_mat - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('Texture features extracted using sigma= %f' % (sigma)) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('Image features extracted using sigma= %f' % (sigma)) return features @@ -311,7 +330,7 @@ def extract_features_2d( """Features for a single channel image. ``img`` can be 2d or 3d. 
""" - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('Extracting features from channel %i' % (dim)) # computations are faster as float32 @@ -325,16 +344,28 @@ def extract_features_2d( endpoint=True, ) - #n_sigmas = len(sigmas) - # all_results = [ - # features_sigma(img, sigma, intensity=intensity, edges=edges, texture=texture) - # for sigma in sigmas - # ] + if (psutil.virtual_memory()[0]>10000000000) & (psutil.virtual_memory()[2]<50): #>10GB and <50% utilization + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('Extracting features in parallel') + logging.info('Total RAM: %i' % (psutil.virtual_memory()[0])) + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) - all_results = Parallel(n_jobs=-2, verbose=0)(delayed(features_sigma)(img, sigma, intensity=intensity, edges=edges, texture=texture) for sigma in sigmas) + all_results = Parallel(n_jobs=-2, verbose=0)(delayed(features_sigma)(img, sigma, intensity=intensity, edges=edges, texture=texture) for sigma in sigmas) + else: + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('Extracting features in series') + logging.info('Total RAM: %i' % (psutil.virtual_memory()[0])) + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('Features from channel %i in parallel for all scales' % (dim)) + n_sigmas = len(sigmas) + all_results = [ + features_sigma(img, sigma, intensity=intensity, edges=edges, texture=texture) + for sigma in sigmas + ] + + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('Features from channel %i for all scales' % (dim)) return list(itertools.chain.from_iterable(all_results)) @@ -373,9 +404,23 @@ def extract_features( sigma_min=sigma_min, sigma_max=sigma_max, ) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('Feature extraction complete') + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) + logging.info('Memory mapping features to temporary file') + + features = memmap_feats(features) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) + + return features #np.array(features) + +##======================================================== +def memmap_feats(features): + """ + Memory-map data to a temporary file + """ features = np.array(features) dtype = features.dtype feats_shape = features.shape @@ -386,16 +431,17 @@ def extract_features( fp.flush() del features del fp + logging.info('Features memory mapped features to temporary file: %s' % outfile) #read back in again without using any memory features = np.memmap(outfile, dtype=dtype, mode='r', shape=feats_shape) - - return features #np.array(features) - + return features ##======================================================== -def do_rf(img,rf_file,data_file,mask,multichannel,intensity,edges,texture,sigma_min,sigma_max, downsample_value, SAVE_RF): #n_estimators, - +def do_classify(img,mask,multichannel,intensity,edges,texture,sigma_min,sigma_max, downsample_value): + """ + Apply classifier to features to extract unary potentials for the CRF + """ if np.ndim(img)==3: features = extract_features( img, @@ -417,28 +463,17 @@ def do_rf(img,rf_file,data_file,mask,multichannel,intensity,edges,texture,sigma_ sigma_max=sigma_max, ) - 
n_estimators=3 - if mask is None: raise ValueError("If no classifier clf is passed, you must specify a mask.") training_data = features[:, mask > 0].T + + training_data = memmap_feats(training_data) + training_labels = mask[mask > 0].ravel() - # try: - # logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - # logging.info('Updating existing RF classifier') training_data = training_data[::downsample_value] training_labels = training_labels[::downsample_value] - if SAVE_RF: - print('loading model') - file_training_data, file_training_labels = load(data_file) - - training_data = np.concatenate((file_training_data, training_data)) - training_labels = np.concatenate((file_training_labels, training_labels)) - logging.info('Samples concatenated with those from file') - logging.info('Number of samples in training data: %i' % (training_data.shape[0])) - lim_samples = 100000 #200000 if training_data.shape[0]>lim_samples: @@ -450,57 +485,32 @@ def do_rf(img,rf_file,data_file,mask,multichannel,intensity,edges,texture,sigma_ logging.info('Number of samples in training data: %i' % (training_data.shape[0])) print(training_data.shape) - if SAVE_RF: - clf = load(rf_file) #load last model from file - # path = clf.cost_complexity_pruning_path(training_data, training_labels) + clf = make_pipeline( + StandardScaler(), + MLPClassifier( + solver='adam', alpha=1, random_state=1, max_iter=2000, + early_stopping=True, hidden_layer_sizes=[100, 60], + )) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('Initializing MLP model') - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('Loading model from %s' % (rf_file)) - logging.info('Number of trees: %i' % (clf.n_estimators)) - - else: - - # clf = make_pipeline( - # StandardScaler(), - # RandomForestClassifier( - # n_estimators=n_estimators, n_jobs=-1,class_weight="balanced_subsample", min_samples_split=5 - # )) - # clf = RandomForestClassifier(n_estimators=n_estimators, n_jobs=-1,class_weight="balanced_subsample", min_samples_split=5)#, ccp_alpha=0.02) - # logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - # logging.info('Initializing RF model') - # - # clf.n_estimators += n_estimators #add more trees for the new data - # clf.fit(training_data, training_labels) # fit with with new data - # logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - # logging.info('RF model fit to data') - - clf = make_pipeline( - StandardScaler(), - MLPClassifier( - solver='adam', alpha=1, random_state=1, max_iter=2000, - early_stopping=True, hidden_layer_sizes=[100, 60], - )) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('Initializing MLP model') - #print(clf.summary()) - - clf.fit(training_data, training_labels) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('MLP model fit to data') - - if SAVE_RF: - dump(clf, rf_file, compress=True) #save new file - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('Model saved to %s'% rf_file) - dump((training_data, training_labels), data_file, compress=True) #save new file - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('Data saved to %s'% data_file) + clf.fit(training_data, training_labels) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('MLP model fit to data') del training_data, training_labels + logging.info('Create and memory map model input data') + data = features[:, mask == 0].T + logging.info('percent RAM usage: %f' % 
(psutil.virtual_memory()[2]))
+
+    data = memmap_feats(data)
+    logging.info('Memory mapped model input data')
+    logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2]))
+
     labels = clf.predict(data)

-    logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
+    logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
     logging.info('Model used on data to estimate labels')

     if mask is None:
@@ -513,19 +523,17 @@
     result2 = result.copy()
     del result

-    logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
+    logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
     logging.info('RF feature extraction and model fitting complete')
+    logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2]))

     return result2

-
 ##========================================================
 def segmentation(
     img,
     img_path,
     results_folder,
-    rf_file,
-    data_file,
     callback_context,
     crf_theta_slider_value,
     crf_mu_slider_value,
@@ -539,51 +547,61 @@ def segmentation(
     texture,#=True,
     sigma_min,#=0.5,
     sigma_max,#=16,
-    SAVE_RF,#False
 ):
+    """
+    1) Calls do_classify to apply the classifier to features, extracting unary potentials for the CRF,
+    then
+    2) Calls the spatial filter,
+    then
+    3) Calls crf_refine to apply the CRF
+    """

     # #standardization using adjusted standard deviation
     img = standardize(img)

-    logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
+    logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
     logging.info('Image standardized')

-    logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
+    logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
     for ni in np.unique(mask[1:]):
         logging.info('examples provided of %i' % (ni))

     if len(np.unique(mask)[1:])==1:

-        logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
+        logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
         logging.info('Only one class annotation provided, skipping RF and CRF and coding all pixels %i' % (np.unique(mask)[1:]))
         result2 = np.ones(mask.shape[:2])*np.unique(mask)[1:]
         result2 = result2.astype(np.uint8)

     else:

-        result = do_rf(img,rf_file,data_file,mask,multichannel,intensity,edges,texture, sigma_min,sigma_max, rf_downsample_value,SAVE_RF) # n_estimators,
+        result = do_classify(img,mask,multichannel,intensity,edges,texture, sigma_min,sigma_max, rf_downsample_value) # SAVE_RF, rf_file, data_file and n_estimators args removed

         Worig = img.shape[0]
         result = filter_one_hot(result, 2*Worig)

-        logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
+        logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
         logging.info('One-hot labels filtered')

         if Worig>512:
             result = filter_one_hot_spatial(result, 2)

-            logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
-            logging.info('One-hot labels spatially filtered')
+            logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
+            logging.info('One-hot labels spatially filtered')
+        else:
+            logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
+            logging.info('One-hot labels not spatially filtered because width < 512 pixels')

         result = result.astype('float')
         result[result==0] = np.nan
         result = inpaint_nans(result).astype('uint8')

-        logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
+        logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
         logging.info('Spatially filtered values inpainted')

-        logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
+        logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
         logging.info('RF model applied with sigma range %f : %f' % (sigma_min,sigma_max))
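##========================================================
# Editor's note (not part of the patch): the inpainting steps in
# segmentation() rely on inpaint_nans() defined earlier in this file, which
# repeatedly replaces each NaN with the mean of its valid 8-neighbours
# (the [[1,1,1],[1,0,1],[1,1,1]] kernel) until no NaNs remain. A toy
# illustration, assuming image_segmentation is importable:
import numpy as np
from image_segmentation import inpaint_nans  # hypothetical import path

label = np.array([[1., 1., 2.],
                  [1., np.nan, 2.],
                  [1., 2., 2.]])
filled = inpaint_nans(label)
# the NaN is filled with the mean of its eight neighbours, 1.5 here
##========================================================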
+ logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) def tta_crf_int(img, result, k): k = int(k) @@ -596,29 +614,37 @@ def tta_crf_int(img, result, k): return result2, w,n - num_tta = 10 - try: + num_tta = 5#10 + + if (psutil.virtual_memory()[0]>10000000000) & (psutil.virtual_memory()[2]<50): #>10GB and <50% utilization + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('CRF parallel test-time augmentation') + logging.info('Total RAM: %i' % (psutil.virtual_memory()[0])) + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) w = Parallel(n_jobs=-2, verbose=0)(delayed(tta_crf_int)(img, result, k) for k in np.linspace(0,int(img.shape[0])/5,num_tta)) R,W,n = zip(*w) - except: - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('CRF parallel test-time augmentation failed... reverting to serial') + else: + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) + logging.info('CRF serial test-time augmentation') + logging.info('Total RAM: %i' % (psutil.virtual_memory()[0])) + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) R = []; W = []; n = [] for k in np.linspace(0,int(img.shape[0])/5,num_tta): r,w,nn = tta_crf_int(img, result, k) R.append(r); W.append(w); n.append(nn) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('CRF model applied with %i test-time augmentations' % ( num_tta)) result2 = np.round(np.average(np.dstack(R), axis=-1, weights = W)).astype('uint8') del R,W - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('Weighted average applied to test-time augmented outputs') - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('CRF model applied with theta=%f and mu=%f' % ( crf_theta_slider_value, crf_mu_slider_value)) + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) if ((n==1)): result2[result>0] = np.unique(result) @@ -626,7 +652,8 @@ def tta_crf_int(img, result, k): result2 = result2.astype('float') result2[result2==0] = np.nan result2 = inpaint_nans(result2).astype('uint8') - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) + logging.info(datetime.now().strftime("%Y-%m-%d-%H-%M-%S")) logging.info('Spatially filtered values inpainted') + logging.info('percent RAM usage: %f' % (psutil.virtual_memory()[2])) return result2 diff --git a/src/plot_utils.py b/app_files/src/plot_utils.py similarity index 91% rename from src/plot_utils.py rename to app_files/src/plot_utils.py index d45f283..f9b61c7 100644 --- a/src/plot_utils.py +++ b/app_files/src/plot_utils.py @@ -1,10 +1,9 @@ # Written by Dr Daniel Buscombe, Marda Science LLC -# for "ML Mondays", a course supported by the USGS Community for Data Integration -# and the USGS Coastal Change Hazards Program +# for the USGS Coastal Change Hazards Program # # MIT License # -# Copyright (c) 2020, Marda Science LLC +# Copyright (c) 2020-2021, Marda Science LLC # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal @@ -24,7 +23,9 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
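##========================================================
# Editor's note (not part of the patch): the RAM gate used twice in
# image_segmentation.py above (parallel vs. serial feature extraction, and
# parallel vs. serial CRF test-time augmentation) follows this pattern.
# psutil.virtual_memory() returns a named tuple whose field [0] is total RAM
# in bytes and whose field [2] is percent utilization, so the test reads
# "more than 10GB installed and less than half in use". A self-contained sketch:
import psutil
from joblib import Parallel, delayed

def work(k):
    return k * k  # stand-in for features_sigma / tta_crf_int

vm = psutil.virtual_memory()
if (vm.total > 10_000_000_000) and (vm.percent < 50):
    results = Parallel(n_jobs=-2, verbose=0)(delayed(work)(k) for k in range(8))
else:
    results = [work(k) for k in range(8)]
##========================================================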
- +#======================================================== +## ``````````````````````````` imports +##======================================================== import PIL.Image import plotly.graph_objects as go import skimage.util @@ -94,12 +95,6 @@ def add_layout_images_to_fig(fig, pass return fig -##======================================================== -def img_array_2_pil(ia): - """ converst image byte array to PIL Image""" - ia = skimage.util.img_as_ubyte(ia) - img = PIL.Image.fromarray(ia) - return img ##======================================================== def pil2uri(img): diff --git a/assets/D800_20160308_222129lr02-0.jpg b/assets/D800_20160308_222129lr02-0.jpg deleted file mode 100644 index 309562f..0000000 Binary files a/assets/D800_20160308_222129lr02-0.jpg and /dev/null differ diff --git a/assets/D800_20160308_222129lr02-2.jpg b/assets/D800_20160308_222129lr02-2.jpg deleted file mode 100644 index d8785da..0000000 Binary files a/assets/D800_20160308_222129lr02-2.jpg and /dev/null differ diff --git a/assets/D800_20160308_222129lr02-3.jpg b/assets/D800_20160308_222129lr02-3.jpg deleted file mode 100644 index db7ee48..0000000 Binary files a/assets/D800_20160308_222129lr02-3.jpg and /dev/null differ diff --git a/assets/D800_20160308_222129lr03-2.JPG b/assets/D800_20160308_222129lr03-2.JPG deleted file mode 100644 index 26e8156..0000000 Binary files a/assets/D800_20160308_222129lr03-2.JPG and /dev/null differ diff --git a/assets/D800_20160308_222129lr03-3.JPG b/assets/D800_20160308_222129lr03-3.JPG deleted file mode 100644 index 1900a46..0000000 Binary files a/assets/D800_20160308_222129lr03-3.JPG and /dev/null differ diff --git a/assets/D800_20160308_222135lr00-2.JPG b/assets/D800_20160308_222135lr00-2.JPG deleted file mode 100644 index 7f08d98..0000000 Binary files a/assets/D800_20160308_222135lr00-2.JPG and /dev/null differ diff --git a/assets/D800_20160308_222135lr02-0.jpeg b/assets/D800_20160308_222135lr02-0.jpeg deleted file mode 100644 index 46ade1c..0000000 Binary files a/assets/D800_20160308_222135lr02-0.jpeg and /dev/null differ diff --git a/assets/D800_20160308_222135lr02-2.jpeg b/assets/D800_20160308_222135lr02-2.jpeg deleted file mode 100644 index bd0d30e..0000000 Binary files a/assets/D800_20160308_222135lr02-2.jpeg and /dev/null differ diff --git a/assets/D800_20160308_222135lr02-3.jpeg b/assets/D800_20160308_222135lr02-3.jpeg deleted file mode 100644 index bbdad96..0000000 Binary files a/assets/D800_20160308_222135lr02-3.jpeg and /dev/null differ diff --git a/assets/D800_20160308_222135lr03-0.jpg b/assets/D800_20160308_222135lr03-0.jpg deleted file mode 100644 index 19fbe93..0000000 Binary files a/assets/D800_20160308_222135lr03-0.jpg and /dev/null differ diff --git a/assets/D800_20160308_222135lr03-1.jpg b/assets/D800_20160308_222135lr03-1.jpg deleted file mode 100644 index b68ec3a..0000000 Binary files a/assets/D800_20160308_222135lr03-1.jpg and /dev/null differ diff --git a/assets/D800_20160308_222135lr03-2.jpg b/assets/D800_20160308_222135lr03-2.jpg deleted file mode 100644 index 838928e..0000000 Binary files a/assets/D800_20160308_222135lr03-2.jpg and /dev/null differ diff --git a/assets/D800_20160308_222135lr03-3.jpg b/assets/D800_20160308_222135lr03-3.jpg deleted file mode 100644 index c94b10e..0000000 Binary files a/assets/D800_20160308_222135lr03-3.jpg and /dev/null differ diff --git a/assets/D800_20160308_222136lr02-2.jpeg b/assets/D800_20160308_222136lr02-2.jpeg deleted file mode 100644 index d650dd3..0000000 Binary files 
a/assets/D800_20160308_222136lr02-2.jpeg and /dev/null differ diff --git a/assets/D800_20160308_222136lr02-3.jpeg b/assets/D800_20160308_222136lr02-3.jpeg deleted file mode 100644 index 091d855..0000000 Binary files a/assets/D800_20160308_222136lr02-3.jpeg and /dev/null differ diff --git a/assets/logos/doodler-demo-2-9-21-short-coast.gif b/assets/logos/doodler-demo-2-9-21-short-coast.gif deleted file mode 100644 index 8fc083e..0000000 Binary files a/assets/logos/doodler-demo-2-9-21-short-coast.gif and /dev/null differ diff --git a/assets/logos/doodler-demo-2-9-21-short-coast2.gif b/assets/logos/doodler-demo-2-9-21-short-coast2.gif deleted file mode 100644 index 2c53a8f..0000000 Binary files a/assets/logos/doodler-demo-2-9-21-short-coast2.gif and /dev/null differ diff --git a/assets/logos/doodler-demo-2-9-21-short-elwha.gif b/assets/logos/doodler-demo-2-9-21-short-elwha.gif deleted file mode 100644 index 18838fe..0000000 Binary files a/assets/logos/doodler-demo-2-9-21-short-elwha.gif and /dev/null differ diff --git a/assets/logos/doodler-demo-2-9-21-short.gif b/assets/logos/doodler-demo-2-9-21-short.gif deleted file mode 100644 index 06892a4..0000000 Binary files a/assets/logos/doodler-demo-2-9-21-short.gif and /dev/null differ diff --git a/assets/logos/quick-satshore2-x2c.gif b/assets/logos/quick-satshore2-x2c.gif deleted file mode 100644 index f5dca39..0000000 Binary files a/assets/logos/quick-satshore2-x2c.gif and /dev/null differ diff --git a/assets/logos/quick-satshoreline-x2c.gif b/assets/logos/quick-satshoreline-x2c.gif deleted file mode 100644 index db5d82c..0000000 Binary files a/assets/logos/quick-satshoreline-x2c.gif and /dev/null differ diff --git a/assets/logos/quick-saturban-x2c.gif b/assets/logos/quick-saturban-x2c.gif deleted file mode 100644 index d5ce0de..0000000 Binary files a/assets/logos/quick-saturban-x2c.gif and /dev/null differ diff --git a/clear_doodler_cache.py b/clear_doodler_cache.py new file mode 100644 index 0000000..3a25f00 --- /dev/null +++ b/clear_doodler_cache.py @@ -0,0 +1,41 @@ +# Written by Dr Daniel Buscombe, Marda Science LLC +# for the USGS Coastal Change Hazards Program +# +# MIT License +# +# Copyright (c) 2020-2021, Marda Science LLC +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
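##========================================================
# Editor's note (not part of the patch): since the flask-caching approach
# below is flagged by the author as not working, a workaround consistent
# with the README (which says the cache can be cleared independently of the
# browser by deleting files in app_files/cache-directory) is simply:
import os
from glob import glob

cache_dir = 'app_files' + os.sep + 'cache-directory'
for f in glob(os.path.join(cache_dir, '*')):
    if os.path.isfile(f):
        os.remove(f)
##========================================================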
+
+# this doesn't work - I don't know why
+
+# from flask_caching import Cache
+# import os
+# from app import app, server
+#
+# cache = Cache(app.server, config={
+#     'CACHE_TYPE': 'filesystem',
+#     'CACHE_DIR': 'app_files'+os.sep+'cache-directory'
+# })
+#
+# def main():
+#     cache.clear()
+#
+# if __name__ == '__main__':
+#     main()
diff --git a/deploy/Dockerfile b/deploy/Dockerfile
new file mode 100644
index 0000000..fa234f8
--- /dev/null
+++ b/deploy/Dockerfile
@@ -0,0 +1,47 @@
+# Written by Dr Daniel Buscombe, Marda Science LLC
+# for the USGS Coastal Change Hazards Program
+#
+# MIT License
+#
+# Copyright (c) 2020-2021, Marda Science LLC
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in all
+# copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+
+FROM continuumio/miniconda3
+LABEL maintainer "Doodler, by Dr Daniel Buscombe, Marda Science/USGS "
+WORKDIR /
+# The code to run when container is started:
+COPY ./ ./
+
+COPY environment/dashdoodler-clean.yml .
+RUN conda env create -f dashdoodler-clean.yml
+
+# Make RUN commands use the new environment:
+SHELL ["conda", "run", "-n", "dashdoodler", "/bin/bash", "-c"]
+
+EXPOSE 8050/tcp
+EXPOSE 8050/udp
+EXPOSE 80
+EXPOSE 8080
+
+# set environment variables
+ENV PYTHONDONTWRITEBYTECODE 1
+ENV PYTHONUNBUFFERED 1
+
+ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "dashdoodler", "python", "doodler.py"]
diff --git a/deploy/Procfile b/deploy/Procfile
new file mode 100644
index 0000000..6b188b9
--- /dev/null
+++ b/deploy/Procfile
@@ -0,0 +1 @@
+web: gunicorn doodler:server
diff --git a/deploy/docker-compose.yml b/deploy/docker-compose.yml
new file mode 100644
index 0000000..d2abb84
--- /dev/null
+++ b/deploy/docker-compose.yml
@@ -0,0 +1,13 @@
+version: "3.7"
+
+services:
+  dash-doodler:
+    build:
+      context: .
+ image: dash-doodler:1.0.0 + container_name: dash-doodler + ports: + - "8050:8050" + environment: + - TARGET=LIVE + restart: unless-stopped diff --git a/deploy/gunicorn_config.py b/deploy/gunicorn_config.py new file mode 100644 index 0000000..0696e78 --- /dev/null +++ b/deploy/gunicorn_config.py @@ -0,0 +1,4 @@ +bind = "0.0.0.0:8050" +workers = 4 +threads = 4 +timeout = 12000 diff --git a/doodler.py b/doodler.py index b94015f..ea63213 100644 --- a/doodler.py +++ b/doodler.py @@ -3,7 +3,7 @@ # # MIT License # -# Copyright (c) 2020, Marda Science LLC +# Copyright (c) 2020-2021, Marda Science LLC # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal @@ -23,1069 +23,52 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. -# ##======================================================== - -# allows loading of functions from the src directory -import sys -sys.path.insert(1, 'src') - -##======================================================== -import plotly.express as px -import dash -from dash.dependencies import Input, Output, State -import dash_html_components as html -import dash_core_components as dcc - -from annotations_to_segmentations import * -from plot_utils import * - -import io, base64, PIL.Image, json, shutil, os, time -from glob import glob -from datetime import datetime -from urllib.parse import quote as urlquote -from flask import Flask, send_from_directory - -##======================================================== -import logging -logging.basicConfig(filename=os.getcwd()+os.sep+'logs/'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'.log', level=logging.INFO) #DEBUG) #encoding='utf-8', -logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p') - #======================================================== - -##======================================================== -DEFAULT_IMAGE_PATH = "assets/logos/dash-default.jpg" - -try: - from my_defaults import * - print('Hyperparameters imported from my_defaults.py') -except: - from defaults import * - print('Default hyperparameters imported from src/my_defaults.py') - -# the number of different classes for labels -DEFAULT_LABEL_CLASS = 0 - -SAVE_RF = False # use True for mode 2 (learn as you go) - -UPLOAD_DIRECTORY = os.getcwd()+os.sep+"assets" -LABELED_DIRECTORY = os.getcwd()+os.sep+"labeled" - -##======================================================== - -try: - with open('classes.txt') as f: - classes = f.readlines() -except: #in case classes.txt does not exist - print("classes.txt not found or badly formatted. Exit the program and fix the classes.txt file ... otherwie, will continue using default classes. 
") - classes = ['water', 'land'] - -class_label_names = [c.strip() for c in classes] - -NUM_LABEL_CLASSES = len(class_label_names) - -if NUM_LABEL_CLASSES<=10: - class_label_colormap = px.colors.qualitative.G10 -else: - class_label_colormap = px.colors.qualitative.Light24 - - -# we can't have fewer colors than classes -assert NUM_LABEL_CLASSES <= len(class_label_colormap) - -class_labels = list(range(NUM_LABEL_CLASSES)) - -logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) -logging.info('loaded class labels:') -for f in class_label_names: - logging.info(f) - -# rf_file = 'RandomForestClassifier_'+'_'.join(class_label_names)+'.pkl.z' #class_label_names -# data_file = 'data_'+'_'.join(class_label_names)+'.pkl.z' #class_label_names -# -# try: -# shutil.move(rf_file, rf_file.replace('.pkl.z','_'+datetime.now().strftime("%d-%m-%Y-%H-%M-%S")+'.pkl.z')) -# except: -# pass -# -# -# try: -# shutil.move(data_file, data_file.replace('.pkl.z','_'+datetime.now().strftime("%d-%m-%Y-%H-%M-%S")+'.pkl.z')) -# except: -# pass - -##======================================================== -def convert_integer_class_to_color(n): - return class_label_colormap[n] - -def convert_color_class(c): - return class_label_colormap.index(c) - - -##======================================================== - -results_folder = 'results/results'+datetime.now().strftime("%Y-%m-%d-%H-%M") - -try: - os.mkdir(results_folder) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info("Folder created: %s" % (results_folder)) -except: - pass - -logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) -logging.info("Results will be written to %s" % (results_folder)) - - -if not os.path.exists(UPLOAD_DIRECTORY): - os.makedirs(UPLOAD_DIRECTORY) - -files = sorted(glob('assets/*.jpg')) + sorted(glob('assets/*.JPG')) + sorted(glob('assets/*.jpeg')) - -files = [f for f in files if 'dash' not in f] - -logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) -logging.info('loaded files:') -for f in files: - logging.info(f) - -##======================================================== -def make_and_return_default_figure( - images=[DEFAULT_IMAGE_PATH], - stroke_color=convert_integer_class_to_color(DEFAULT_LABEL_CLASS), - pen_width=DEFAULT_PEN_WIDTH, - shapes=[], -): - - fig = dummy_fig() #plot_utils. - - add_layout_images_to_fig(fig, images) #plot_utils. - - fig.update_layout( - { - "dragmode": "drawopenpath", - "shapes": shapes, - "newshape.line.color": stroke_color, - "newshape.line.width": pen_width, - "margin": dict(l=0, r=0, b=0, t=0, pad=4), - "height": 650 - } - ) - - return fig - -##======================================================== -def shapes_to_key(shapes): - return json.dumps(shapes) - -##======================================================== -def shapes_seg_pair_as_dict(d, key, seg, remove_old=True): - """ - Stores shapes and segmentation pair in dict d - seg is a PIL.Image object - if remove_old True, deletes all the old keys and values. 
- """ - bytes_to_encode = io.BytesIO() - seg.save(bytes_to_encode, format="png") - bytes_to_encode.seek(0) - - data = base64.b64encode(bytes_to_encode.read()).decode() - - if remove_old: - return {key: data} - d[key] = data - - return d - -##=============================================================== - -UPLOAD_DIRECTORY = os.getcwd()+os.sep+"assets" - -if not os.path.exists(UPLOAD_DIRECTORY): - os.makedirs(UPLOAD_DIRECTORY) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('Made the directory '+UPLOAD_DIRECTORY) - - -##======================================================== -# Normally, Dash creates its own Flask server internally. By creating our own, -# we can create a route for downloading files directly: -# server = Flask(__name__) -# app = dash.Dash(server=server) - -# @server.route("/download/") -# def download(path): -# """Serve a file from the upload directory.""" -# return send_from_directory(UPLOAD_DIRECTORY, path, as_attachment=True) - -server = Flask(__name__) -app = dash.Dash(server=server) - -# app = dash.Dash(__name__) - -##======================================================== - -app.layout = html.Div( - id="app-container", - children=[ - html.Div( - id="banner", - children=[ - html.H1( - "Doodler: Interactive Image Segmentation", - id="title", - className="seven columns", - ), - # html.H2( - # "Label all classes that are present, in all regions of the image those classes occur", - # id="subtitle", - # className="seven columns", - # ), - - html.Img(id="logo", src=app.get_asset_url("logos/dash-logo-new.png")), - # html.Div(html.Img(src=app.get_asset_url('logos/dash-logo-new.png'), style={'height':'10%', 'width':'10%'})), #id="logo", - - html.H2(""), - dcc.Upload( - id="upload-data", - # children=html.Div( - # ["(Label all classes that are present, in all regions of the image those classes occur)"] - # ), - style={ - "width": "100%", - "height": "30px", - "lineHeight": "70px", - "borderWidth": "1px", - "borderStyle": "none", - "borderRadius": "1px", - "textAlign": "center", - "margin": "10px", - }, - multiple=True, - ), - html.H2(""), - html.Ul(id="file-list"), - - ], #children - ), #div banner id - - dcc.Tabs([ - dcc.Tab(label='Imagery and Controls', children=[ - - html.Div( - id="main-content", - children=[ - - html.Div( - id="left-column", - children=[ - dcc.Loading( - id="segmentations-loading", - type="cube", - children=[ - # Graph - dcc.Graph( - id="graph", - figure=make_and_return_default_figure(), - config={ - 'displayModeBar': 'hover', - "displaylogo": False, - # 'modeBarOrientation': 'h', - "modeBarButtonsToAdd": [ - # "drawrect", - "drawopenpath", - "eraseshape", - ] - }, - ), - ], - ), - - ], - className="ten columns app-background", - ), - - html.Div( - id="right-column", - children=[ - - - html.H6("Label class"), - # Label class chosen with buttons - html.Div( - id="label-class-buttons", - children=[ - html.Button( - #"%2d" % (n,), - "%s" % (class_label_names[n],), - id={"type": "label-class-button", "index": n}, - style={"background-color": convert_integer_class_to_color(c)}, - ) - for n, c in enumerate(class_labels) - ], - ), - - html.H6(id="pen-width-display"), - # Slider for specifying pen width - dcc.Slider( - id="pen-width", - min=0, - max=5, - step=1, - value=DEFAULT_PEN_WIDTH, - ), - - - # Indicate showing most recently computed segmentation - dcc.Checklist( - id="crf-show-segmentation", - options=[ - { - "label": "Compute/Show segmentation", - "value": "Show segmentation", - } - ], - value=[], - ), - - # html.Br(), - # 
html.P(['------------------------']), - dcc.Markdown( - ">Post-processing settings" - ), - - html.H6(id="theta-display"), - # Slider for specifying pen width - dcc.Slider( - id="crf-theta-slider", - min=1, - max=100, - step=1, - value=DEFAULT_CRF_THETA, - ), - - html.H6(id="mu-display"), - # Slider for specifying pen width - dcc.Slider( - id="crf-mu-slider", - min=1, - max=100, - step=1, - value=DEFAULT_CRF_MU, - ), - - html.H6(id="crf-downsample-display"), - # Slider for specifying pen width - dcc.Slider( - id="crf-downsample-slider", - min=1, - max=6, - step=1, - value=DEFAULT_CRF_DOWNSAMPLE, - ), - - html.H6(id="crf-gtprob-display"), - # Slider for specifying pen width - dcc.Slider( - id="crf-gtprob-slider", - min=0.5, - max=0.95, - step=0.05, - value=DEFAULT_CRF_GTPROB, - ), - - dcc.Markdown( - ">Classifier settings" - ), - - # html.H6(id="sigma-display"), - # dcc.RangeSlider( - # id="rf-sigma-range-slider", - # min=1, - # max=30, - # step=1, - # value=[SIGMA_MIN, SIGMA_MAX], #1, 16], - # ), - - html.H6(id="rf-downsample-display"), - # Slider for specifying pen width - dcc.Slider( - id="rf-downsample-slider", - min=1, - max=20, - step=1, - value=DEFAULT_RF_DOWNSAMPLE, - ), - - # html.H6(id="rf-nestimators-display"), - # # Slider for specifying pen width - # dcc.Slider( - # id="rf-nestimators-slider", - # min=1, - # max=5, - # step=1, - # value=DEFAULT_RF_NESTIMATORS, - # ), - - # dcc.Markdown( - # ">Note that all segmentations are saved automatically. This download button is for quick checks only e.g. when dense annotations obscure the segmentation view" - # ), - # - # html.A( - # id="download-image", - # download="classified-image-"+datetime.now().strftime("%d-%m-%Y-%H-%M")+".png", - # children=[ - # html.Button( - # "Download Label Image (optional)", - # id="download-image-button", - # ) - # ], - # ), - - ], - className="three columns app-background", - ), - ], - className="ten columns", - ), #main content Div - - ]), - dcc.Tab(label='File List and Instructions', children=[ - - html.H4(children="Doodler"), - dcc.Markdown( - "> A user-interactive tool for fast segmentation of imagery (designed for natural environments), using a Multilayer Perceptron classifier and Conditional Random Field (CRF) refinement. \ - Doodles are used to make a classifier model, which maps image features to unary potentials to create an initial image segmentation. The segmentation is then refined using a CRF model." - ), - - dcc.Input(id='my-id', value='Enter-user-ID', type="text"), - html.Button('Submit', id='button'), - html.Div(id='my-div'), - - html.H3("Select Image"), - dcc.Dropdown( - id="select-image", - optionHeight=15, - style={'fontSize': 13}, - options = [ - {'label': image.split('assets/')[-1], 'value': image } \ - for image in files - ], - - value='assets/logos/dash-default.jpg', # - multi=False, - ), - html.Div([html.Div(id='live-update-text'), - dcc.Interval(id='interval-component', interval=500, n_intervals=0)]), - - - html.P(children="This image/Copy"), - dcc.Textarea(id="thisimage_output", cols=80), - html.Br(), - - dcc.Markdown( - """ - **Instructions:** - * Before you begin, make a new 'classes.txt' file that contains a list of the classes you'd like to label - * Optionally, you can copy the images you wish to label into the 'assets' folder (just jpg, JPG or jpeg extension, or mixtures of those, for now) - * Enter a user ID (initials or similar). This will get appended to your results to identify you. Results are also timestamped. 
You may enter a user ID at any time (or not at all) - * Select an image from the list (often you need to select the image twice: make sure the image selected matches the image name shown in the box) - * Make some brief annotations ('doodles') of every class present in the image, in every region of the image that class is present - * Check 'Show/compute segmentation'. The computation time depends on image size, and the number of classes and doodles. Larger image or more doodles/classes = greater time and memory required - * If you're not happy, uncheck 'Show/compute segmentation' and play with the parameters. However, it is often better to leave the parameters and correct mistakes by adding or removing doodles, or using a different pen width. - * Once you're happy, you can download the label image, but it is already saved in the 'results' folder. - * Before you move onto the next image from the list, uncheck 'Show/compute segmentation'. - * Repeat. Happy doodling! Press Ctrl+C to end the program. Results are in the 'results' folder, timestamped. Session logs are also timestamped and found in the 'logs' directory. - * As you go, the program only lists files that are yet to be labeled. It does this irrespective of your opinion of the segmentation, so you get 'one shot' before you select another image (i.e. you cant go back to redo) - * [Code on GitHub](https://github.com/dbuscombe-usgs/dash_doodler). - """ - ), - dcc.Markdown( - """ - **Tips:** 1) Works best for small imagery, typically much smaller than 3000 x 3000 px images. This prevents out-of-memory errors, and also helps you identify small features\ - 2) Less is usually more! It is often best to use small pen width and relatively few annotations. Don't be tempted to spend too long doodling; extra doodles can be strategically added to correct segmentations \ - 3) Make doodles of every class present in the image, and also every region of the image (i.e. avoid label clusters) \ - 4) If things get weird, hit the refresh button on your browser and it should reset the application. 
Don't worry, all your previous work is saved!\ - 5) Remember to uncheck 'Show/compute segmentation' before you change parameter values or change image\ - """ - ), - - - ]),]), - - html.Div( - id="no-display", - children=[ - dcc.Store(id="image-list-store", data=[]), - # Store for user created masks - # data is a list of dicts describing shapes - dcc.Store(id="masks", data={"shapes": []}), - # Store for storing segmentations from shapes - # the keys are hashes of shape lists and the data are pngdata - # representing the corresponding segmentation - # this is so we can download annotations and also not recompute - # needlessly old segmentations - dcc.Store(id="segmentation", data={}), - dcc.Store(id="classified-image-store", data=""), - ], - ), #nos-display div - - ], #children -) #app layout - -##============================================================ -def save_file(name, content): - """Decode and store a file uploaded with Plotly Dash.""" - data = content.encode("utf8").split(b";base64,")[1] - with open(os.path.join(UPLOAD_DIRECTORY, name), "wb") as fp: - fp.write(base64.decodebytes(data)) - - -def uploaded_files(): - """List the files in the upload directory.""" - files = [] - for filename in os.listdir(UPLOAD_DIRECTORY): - path = os.path.join(UPLOAD_DIRECTORY, filename) - if os.path.isfile(path): - if 'jpg' in filename: - files.append(filename) - if 'JPG' in filename: - files.append(filename) - if 'jpeg' in filename: - files.append(filename) - - labeled_files = [] - for filename in os.listdir(LABELED_DIRECTORY): - path = os.path.join(LABELED_DIRECTORY, filename) - if os.path.isfile(path): - if 'jpg' in filename: - labeled_files.append(filename) - if 'JPG' in filename: - labeled_files.append(filename) - if 'jpeg' in filename: - labeled_files.append(filename) - - filelist = 'files_done.txt' - - with open(filelist, 'w') as filehandle: - for listitem in labeled_files: - filehandle.write('%s\n' % listitem) - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('File list written to %s' % (filelist)) - - return sorted(files), sorted(labeled_files) - - -def file_download_link(filename): - """Create a Plotly Dash 'A' element that downloads a file from the app.""" - location = "/download/{}".format(urlquote(filename)) - return html.A(filename, href=location) - - +## ``````````````````````````` imports ##======================================================== -def show_segmentation(image_path, - mask_shapes, - callback_context, - crf_theta_slider_value, - crf_mu_slider_value, - results_folder, - rf_downsample_value, - crf_downsample_factor, - gt_prob, - my_id_value, - rf_file, - data_file, - multichannel, - intensity, - edges, - texture, - # n_estimators, - SAVE_RF, - ): - - """ adds an image showing segmentations to a figure's layout """ - - # add 1 because classifier takes 0 to mean no mask - shape_layers = [convert_color_class(shape["line"]["color"]) + 1 for shape in mask_shapes] - - label_to_colors_args = { - "colormap": class_label_colormap, - "color_class_offset": -1, - } - - sigma_min=1; sigma_max=16 - - segimg, seg, img, color_doodles, doodles = compute_segmentations( - mask_shapes, crf_theta_slider_value,crf_mu_slider_value, - results_folder, rf_downsample_value, # median_filter_value, - crf_downsample_factor, gt_prob, my_id_value, callback_context, rf_file, data_file, - multichannel, intensity, edges, texture, 1, 16, #n_estimators, - img_path=image_path, - shape_layers=shape_layers, - label_to_colors_args=label_to_colors_args, - SAVE_RF=SAVE_RF, - ) - - # get 
the classifier that we can later store in the Store - segimgpng = img_array_2_pil(segimg) #plot_utils. - - return (segimgpng, seg, img, color_doodles, doodles ) - - -def parse_contents(contents, filename, date): - return html.Div([ - html.H5(filename), - html.H6(datetime.fromtimestamp(date)), - - # HTML images accept base64 encoded strings in the same format - # that is supplied by the upload - html.Img(src=contents), - html.Hr(), - html.Div('Raw Content'), - html.Pre(contents[0:200] + '...', style={ - 'whiteSpace': 'pre-wrap', - 'wordBreak': 'break-all' - }) - ]) - -def look_up_seg(d, key): - """ Returns a PIL.Image object """ - data = d[key] - img_bytes = base64.b64decode(data) - img = PIL.Image.open(io.BytesIO(img_bytes)) - return img - -def listToString(s): - # initialize an empty string - str1 = " " - # return string - return (str1.join(s)) - -# ##======================================================== - -@app.callback( - [ - Output("select-image","options"), - Output("graph", "figure"), - Output("image-list-store", "data"), - Output("masks", "data"), - Output('my-div', 'children'), - Output("segmentation", "data"), - Output('thisimage_output', 'value'), - Output("pen-width-display", "children"), - Output("theta-display", "children"), - Output("mu-display", "children"), - Output("crf-downsample-display", "children"), - Output("crf-gtprob-display", "children"), - Output("rf-downsample-display", "children"), - # Output("rf-nestimators-display", "children"), - Output("classified-image-store", "data"), - ], - [ - Input("upload-data", "filename"), - Input("upload-data", "contents"), - Input("graph", "relayoutData"), - Input( - {"type": "label-class-button", "index": dash.dependencies.ALL}, - "n_clicks_timestamp", - ), - Input("crf-theta-slider", "value"), - Input('crf-mu-slider', "value"), - Input("pen-width", "value"), - Input("crf-show-segmentation", "value"), - Input("crf-downsample-slider", "value"), - Input("crf-gtprob-slider", "value"), - Input("rf-downsample-slider", "value"), - # Input("rf-nestimators-slider", "value"), - Input("select-image", "value"), - Input('interval-component', 'n_intervals'), - ], - [ - State("image-list-store", "data"), - State('my-id', 'value'), - State("masks", "data"), - State("segmentation", "data"), - State("classified-image-store", "data"), - ], -) - -# ##======================================================== - -def update_output( - uploaded_filenames, - uploaded_file_contents, - graph_relayoutData, - any_label_class_button_value, - crf_theta_slider_value, - crf_mu_slider_value, - pen_width_value, - show_segmentation_value, - crf_downsample_value, - gt_prob, - # sigma_range_slider_value, - rf_downsample_value, - # n_estimators, - select_image_value, - n_intervals, - image_list_data, - my_id_value, - masks_data, - segmentation_data, - segmentation_store_data, - ): - """Save uploaded files and regenerate the file list.""" - - callback_context = [p["prop_id"] for p in dash.callback_context.triggered][0] - #print(callback_context) - - multichannel = True - intensity = True - edges = True - texture = True - - # if uploaded_filenames is not None and uploaded_file_contents is not None: - # for name, data in zip(uploaded_filenames, uploaded_file_contents): - # save_file(name, data) - # image_list_data = [] - # all_image_value = '' - # files = '' - # options = [] - # else: - image_list_data = [] - all_image_value = '' - files = '' - options = [] - - if callback_context=='interval-component.n_intervals': - files, labeled_files = uploaded_files() - - files = 
[f.split('assets/')[-1] for f in files] - labeled_files = [f.split('labeled/')[-1] for f in labeled_files] - - files = list(set(files) - set(labeled_files)) - files = sorted(files) - - options = [{'label': image, 'value': image } for image in files] - - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('Checked assets and labeled lists and revised list of images yet to label') - - if 'assets' not in select_image_value: - select_image_value = 'assets'+os.sep+select_image_value - - if callback_context == "graph.relayoutData": - try: - if "shapes" in graph_relayoutData.keys(): - masks_data["shapes"] = graph_relayoutData["shapes"] - else: - return dash.no_update - except: - return dash.no_update - - elif callback_context == "select-image.value": - masks_data={"shapes": []} - segmentation_data={} - - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('New image selected') - - pen_width = pen_width_value #int(round(2 ** (pen_width_value))) - - # find label class value by finding button with the greatest n_clicks - if any_label_class_button_value is None: - label_class_value = DEFAULT_LABEL_CLASS - else: - label_class_value = max( - enumerate(any_label_class_button_value), - key=lambda t: 0 if t[1] is None else t[1], - )[0] - - fig = make_and_return_default_figure( - images = [select_image_value], - stroke_color=convert_integer_class_to_color(label_class_value), - pen_width=pen_width, - shapes=masks_data["shapes"], - ) - - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('Main figure window updated with new image') - - if ("Show segmentation" in show_segmentation_value) and ( - len(masks_data["shapes"]) > 0): - # to store segmentation data in the store, we need to base64 encode the - # PIL.Image and hash the set of shapes to use this as the key - # to retrieve the segmentation data, we need to base64 decode to a PIL.Image - # because this will give the dimensions of the image - sh = shapes_to_key( - [ - masks_data["shapes"], - '', #segmentation_features_value, - '', #sigma_range_slider_value, - ] - ) +from app import app, server - rf_file = 'RandomForestClassifier_'+'_'.join(class_label_names)+'.pkl.z' #class_label_names - data_file = 'data_'+'_'.join(class_label_names)+'.pkl.z' #class_label_names - - # logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - # logging.info('Saving RF model to %s' % (rf_file)) - # - # logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - # logging.info('Saving data features to %s' % (rf_file)) - - segimgpng = None - if 'median' not in callback_context: - - # start timer - if os.name=='posix': # true if linux/mac or cygwin on windows - start = time.time() - else: # windows - start = time.clock() - - segimgpng, seg, img, color_doodles, doodles = show_segmentation( - [select_image_value], masks_data["shapes"], callback_context,#median_filter_value, - crf_theta_slider_value, crf_mu_slider_value, results_folder, rf_downsample_value, crf_downsample_value, gt_prob, my_id_value, rf_file, data_file, - multichannel, intensity, edges, texture,SAVE_RF, # n_estimators, sigma_range_slider_value[0], sigma_range_slider_value[1], - ) - - if os.name=='posix': # true if linux/mac - elapsed = (time.time() - start)#/60 - else: # windows - elapsed = (time.clock() - start)#/60 - #print("Processing took "+ str(elapsed) + " minutes") - - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('Processing took %s seconds' % (str(elapsed))) - - lstack = (np.arange(seg.max()) == 
seg[...,None]-1).astype(int) #one-hot encode - - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('One-hot encoded label stack created') - - #np.savez('test', img.astype(np.uint8), lstack.astype(np.uint8), color_doodles.astype(np.uint8), doodles.astype(np.uint8) ) - - if type(select_image_value) is list: - if 'jpg' in select_image_value[0]: - colfile = select_image_value[0].replace('assets',results_folder).replace('.jpg','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') - if 'JPG' in select_image_value[0]: - colfile = select_image_value[0].replace('assets',results_folder).replace('.JPG','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') - if 'jpeg' in select_image_value[0]: - colfile = select_image_value[0].replace('assets',results_folder).replace('.jpeg','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') - - if np.ndim(img)==3: - imsave(colfile,label_to_colors(seg-1, img[:,:,0]==0, alpha=128, colormap=class_label_colormap, color_class_offset=0, do_alpha=False)) - else: - imsave(colfile,label_to_colors(seg-1, img==0, alpha=128, colormap=class_label_colormap, color_class_offset=0, do_alpha=False)) - - else: - #colfile = select_image_value.replace('assets',results_folder).replace('.jpg','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') - if 'jpg' in select_image_value: - colfile = select_image_value.replace('assets',results_folder).replace('.jpg','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') - if 'JPG' in select_image_value: - colfile = select_image_value.replace('assets',results_folder).replace('.JPG','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') - if 'jpeg' in select_image_value: - colfile = select_image_value.replace('assets',results_folder).replace('.jpeg','_label'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'_'+my_id_value+'.png') - - if np.ndim(img)==3: - imsave(colfile,label_to_colors(seg-1, img[:,:,0]==0, alpha=128, colormap=class_label_colormap, color_class_offset=0, do_alpha=False)) - else: - imsave(colfile,label_to_colors(seg-1, img==0, alpha=128, colormap=class_label_colormap, color_class_offset=0, do_alpha=False)) - - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('RGB label image saved to %s' % (colfile)) - - settings_dict = np.array([pen_width, crf_downsample_value, rf_downsample_value, crf_theta_slider_value, crf_mu_slider_value, gt_prob]) - #median_filter_value,sigma_range_slider_value[0], sigma_range_slider_value[1], n_estimators - - if type(select_image_value) is list: - if 'jpg' in select_image_value[0]: - numpyfile = select_image_value[0].replace('assets',results_folder).replace('.jpg','_'+my_id_value+'.npz') #datetime.now().strftime("%Y-%m-%d-%H-%M")+ - if 'JPG' in select_image_value[0]: - numpyfile = select_image_value[0].replace('assets',results_folder).replace('.JPG','_'+my_id_value+'.npz') #datetime.now().strftime("%Y-%m-%d-%H-%M")+ - if 'jpeg' in select_image_value[0]: - numpyfile = select_image_value[0].replace('assets',results_folder).replace('.jpeg','_'+my_id_value+'.npz') #datetime.now().strftime("%Y-%m-%d-%H-%M")+ - - - if os.path.exists(numpyfile): - saved_data = np.load(numpyfile) - savez_dict = dict() - for k in saved_data.keys(): - tmp = saved_data[k] - name = str(k) - savez_dict['0'+name] = tmp - del tmp - - savez_dict['image'] = img.astype(np.uint8) - savez_dict['label'] = lstack.astype(np.uint8) - savez_dict['color_doodles'] = 
color_doodles.astype(np.uint8) - savez_dict['doodles'] = doodles.astype(np.uint8) - savez_dict['settings'] = settings_dict - savez_dict['classes'] = class_label_names - - np.savez(numpyfile, **savez_dict ) - - #np.savez(numpyfile, img.astype(np.uint8), lstack.astype(np.uint8), color_doodles.astype(np.uint8), doodles.astype(np.uint8), saved_img, saved_label, ) - else: - savez_dict = dict() - savez_dict['image'] = img.astype(np.uint8) - savez_dict['label'] = lstack.astype(np.uint8) - savez_dict['color_doodles'] = color_doodles.astype(np.uint8) - savez_dict['doodles'] = doodles.astype(np.uint8) - savez_dict['settings'] = settings_dict - savez_dict['classes'] = class_label_names - - np.savez(numpyfile, **savez_dict ) #save settings too - - else: - if 'jpg' in select_image_value: - numpyfile = select_image_value.replace('assets',results_folder).replace('.jpg','_'+my_id_value+'.npz') #datetime.now().strftime("%Y-%m-%d-%H-%M")+ - if 'JPG' in select_image_value: - numpyfile = select_image_value.replace('assets',results_folder).replace('.JPG','_'+my_id_value+'.npz') #datetime.now().strftime("%Y-%m-%d-%H-%M")+ - if 'jpeg' in select_image_value: - numpyfile = select_image_value.replace('assets',results_folder).replace('.jpeg','_'+my_id_value+'.npz') #datetime.now().strftime("%Y-%m-%d-%H-%M")+ - - if os.path.exists(numpyfile): - saved_data = np.load(numpyfile) - savez_dict = dict() - for k in saved_data.keys(): - tmp = saved_data[k] - name = str(k) - savez_dict['0'+name] = tmp - del tmp - - savez_dict['image'] = img.astype(np.uint8) - savez_dict['label'] = lstack.astype(np.uint8) - savez_dict['color_doodles'] = color_doodles.astype(np.uint8) - savez_dict['doodles'] = doodles.astype(np.uint8) - savez_dict['settings'] = settings_dict - savez_dict['classes'] = class_label_names - - np.savez(numpyfile, **savez_dict )#save settings too - - #np.savez(numpyfile, img.astype(np.uint8), lstack.astype(np.uint8), color_doodles.astype(np.uint8), doodles.astype(np.uint8), saved_img, saved_label, ) - else: - savez_dict = dict() - savez_dict['image'] = img.astype(np.uint8) - savez_dict['label'] = lstack.astype(np.uint8) - savez_dict['color_doodles'] = color_doodles.astype(np.uint8) - savez_dict['doodles'] = doodles.astype(np.uint8) - savez_dict['settings'] = settings_dict - savez_dict['classes'] = class_label_names - - np.savez(numpyfile, **savez_dict )#save settings too - - del img, seg, lstack, doodles, color_doodles - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('Numpy arrays saved to %s' % (numpyfile)) - - segmentation_data = shapes_seg_pair_as_dict( - segmentation_data, sh, segimgpng - ) - try: - segmentation_store_data = pil2uri( - seg_pil( - select_image_value, segimgpng, do_alpha=True - ) #plot_utils. - ) - shutil.copyfile(select_image_value, select_image_value.replace('assets', 'labeled')) #move - except: - segmentation_store_data = pil2uri( - seg_pil( - PIL.Image.open(select_image_value), segimgpng, do_alpha=True - ) #plot_utils. - ) - shutil.copyfile(select_image_value, select_image_value.replace('assets', 'labeled')) #move - - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('%s moved to labeled folder' % (select_image_value.replace('assets', 'labeled'))) - - - images_to_draw = [] - if segimgpng is not None: - images_to_draw = [segimgpng] - - fig = add_layout_images_to_fig(fig, images_to_draw) #plot_utils. 
- - show_segmentation_value = [] - - image_list_data.append(select_image_value) - - try: - os.remove('my_defaults.py') - except: - pass - - with open('my_defaults.py', 'a') as the_file: - the_file.write('DEFAULT_PEN_WIDTH = {}\n'.format(pen_width)) - the_file.write('DEFAULT_CRF_DOWNSAMPLE = {}\n'.format(crf_downsample_value)) - the_file.write('DEFAULT_RF_DOWNSAMPLE = {}\n'.format(rf_downsample_value)) - the_file.write('DEFAULT_CRF_THETA = {}\n'.format(crf_theta_slider_value)) - the_file.write('DEFAULT_CRF_MU = {}\n'.format(crf_mu_slider_value)) - the_file.write('DEFAULT_CRF_GTPROB = {}\n'.format(gt_prob)) - print('my_defaults.py overwritten with parameter settings') - - logging.info(datetime.now().strftime("%d-%m-%Y-%H-%M-%S")) - logging.info('my_defaults.py overwritten with parameter settings') - - - if len(files) == 0: - return [ - options, - fig, - image_list_data, - masks_data, - segmentation_data, - 'User ID: "{}"'.format(my_id_value) , - select_image_value, - "Pen width (default: %d): %d" % (DEFAULT_PEN_WIDTH,pen_width), - "Blur factor (default: %d): %d" % (DEFAULT_CRF_THETA, crf_theta_slider_value), #"Blurring parameter for CRF image feature extraction (default: %d): %d" - "Model independence factor (default: %d): %d" % (DEFAULT_CRF_MU,crf_mu_slider_value), #CRF color class difference tolerance parameter (default: %d) - "Downsample factor (default: %d): %d" % (DEFAULT_CRF_DOWNSAMPLE,crf_downsample_value), - "Probability of doodle (default: %f): %f" % (DEFAULT_CRF_GTPROB,gt_prob), - # "Blurring parameter for RF feature extraction: %d, %d" % (sigma_range_slider_value[0], sigma_range_slider_value[1]), - "RF downsample factor (default: %d): %d" % (DEFAULT_RF_DOWNSAMPLE,rf_downsample_value), - # "RF estimators per image (default: %d): %d" % (DEFAULT_RF_NESTIMATORS,n_estimators), - segmentation_store_data, - ] - else: - return [ - options, - fig, - image_list_data, - masks_data, - segmentation_data, - 'User ID: "{}"'.format(my_id_value) , - select_image_value, - "Pen width (default: %d): %d" % (DEFAULT_PEN_WIDTH,pen_width), - "Blur factor (default: %d): %d" % (DEFAULT_CRF_THETA, crf_theta_slider_value), - "Model independence factor (default: %d): %d" % (DEFAULT_CRF_MU,crf_mu_slider_value), - "Downsample factor (default: %d): %d" % (DEFAULT_CRF_DOWNSAMPLE,crf_downsample_value), - "Probability of doodle (default: %f): %f" % (DEFAULT_CRF_GTPROB,gt_prob), - # "Blurring parameter for RF feature extraction: %d, %d" % (sigma_range_slider_value[0], sigma_range_slider_value[1]), - "Downsample factor (default: %d): %d" % (DEFAULT_RF_DOWNSAMPLE,rf_downsample_value), - # "Estimators per image (default: %d): %d" % (DEFAULT_RF_NESTIMATORS,n_estimators), - segmentation_store_data, - ] - - -##======================================================== -# set the download url to the contents of the classified-image-store (so they can be -# downloaded from the browser's memory) -# app.clientside_callback( -# """ -# function(the_image_store_data) { -# return the_image_store_data; -# } -# """, -# Output("download-image", "href"), -# [Input("classified-image-store", "data")], -# ) - -##======================================================== +from environment.settings import APP_HOST, APP_PORT, APP_DEBUG, DEV_TOOLS_PROPS_CHECK, APP_DOWNLOAD_SAMPLE +import os, zipfile, requests +from datetime import datetime +from glob import glob if __name__ == "__main__": - print('Go to http://127.0.0.1:8050/ in your web browser to use Doodler') - #app.run_server() - app.run_server(host='0.0.0.0', port=8050, threaded=True) - 
-#=== + #======================================================== + ## download the sample if APP_DOWNLOAD_SAMPLE is True + #======================================================== + if APP_DOWNLOAD_SAMPLE: + url='https://github.com/dbuscombe-usgs/dash_doodler/releases/download/data/sample_images.zip' + filename = os.path.join(os.getcwd(), "sample_images.zip") + r = requests.get(url, allow_redirects=True) + with open(filename, 'wb') as f: f.write(r.content) + + with zipfile.ZipFile(filename, "r") as z_fp: + z_fp.extractall("./assets/") + os.remove(filename) + + # if labeled images exist in the labeled folder, zip them up with a timestamp, and remove the individual files + try: + filename = 'labeled'+os.sep+'labeled-'+datetime.now().strftime("%Y-%m-%d-%H-%M")+'.zip' + with zipfile.ZipFile(filename, "w") as z_fp: + for k in glob("./labeled/*.jpeg")+glob("./labeled/*.JPG")+glob("./labeled/*.jpg"): + z_fp.write(k) + + for k in glob("./labeled/*.jpeg")+glob("./labeled/*.JPG")+glob("./labeled/*.jpg"): + os.remove(k) + + except Exception: + pass + + #======================================================== + ## run the app in the browser at $APP_HOST, port $APP_PORT + ##======================================================== + print('Go to http://%s:%i/ in your web browser to use Doodler' % (APP_HOST,APP_PORT)) + app.run_server( + host=APP_HOST, + port=APP_PORT, + debug=APP_DEBUG, + dev_tools_props_check=DEV_TOOLS_PROPS_CHECK + ) diff --git a/install/dashdoodler-clean.yml b/environment/dashdoodler-clean.yml similarity index 83% rename from install/dashdoodler-clean.yml rename to environment/dashdoodler-clean.yml index b7eaf9b..556160d 100644 --- a/install/dashdoodler-clean.yml +++ b/environment/dashdoodler-clean.yml @@ -17,7 +17,11 @@ dependencies: - dash-table - Pillow - scikit-learn - - pandas - pydensecrf - cairo - tqdm + - psutil + - pip + - pip: + - Flask-Caching + - requests diff --git a/install/dashdoodler.yml b/environment/dashdoodler.yml similarity index 86% rename from install/dashdoodler.yml rename to environment/dashdoodler.yml index c03a479..995da73 100644 --- a/install/dashdoodler.yml +++ b/environment/dashdoodler.yml @@ -17,7 +17,11 @@ dependencies: - dash-table=4.7 - Pillow==7.1.2 - scikit-learn=0.23 - - pandas==1.0.3 - pydensecrf - cairo - tqdm + - psutil + - pip + - pip: + - Flask-Caching + - requests diff --git a/environment/requirements.txt b/environment/requirements.txt new file mode 100644 index 0000000..5a011b0 --- /dev/null +++ b/environment/requirements.txt @@ -0,0 +1,19 @@ +dash-core-components +plotly +plotly_express +CairoSVG +matplotlib +scipy +dash +numpy +scikit-image +dash-html-components +dash-table +Pillow +scikit-learn +pydensecrf +cairo +tqdm +psutil +Flask-Caching +requests diff --git a/environment/settings.py b/environment/settings.py new file mode 100644 index 0000000..3e4311a --- /dev/null +++ b/environment/settings.py @@ -0,0 +1,14 @@ +HOST="0.0.0.0" +PORT="8050" +DEBUG=False +DEV_TOOLS_PROPS_CHECK=False +DOWNLOAD_SAMPLE=True + +## uncomment the line below to prevent the program from downloading the sample imagery into the assets folder +#DOWNLOAD_SAMPLE=False + +APP_HOST = str(HOST) +APP_PORT = int(PORT) +APP_DEBUG = bool(DEBUG) +DEV_TOOLS_PROPS_CHECK = bool(DEV_TOOLS_PROPS_CHECK) +APP_DOWNLOAD_SAMPLE = bool(DOWNLOAD_SAMPLE) diff --git a/install/Dockerfile.miniconda b/install/Dockerfile.miniconda deleted file mode 100644 index 81898a5..0000000 --- a/install/Dockerfile.miniconda +++ /dev/null @@ -1,23 +0,0 @@ -# -FROM continuumio/miniconda3 -LABEL
maintainer "Doodler, by Dr Daniel Buscombe, Marda Science/USGS " -WORKDIR / -# The code to run when container is started: -COPY ./ ./ - -COPY install/dashdoodler.yml . -RUN conda env create -f dashdoodler.yml - -# Make RUN commands use the new environment: -SHELL ["conda", "run", "-n", "dashdoodler", "/bin/bash", "-c"] - -EXPOSE 8050/tcp -EXPOSE 8050/udp -EXPOSE 80 -EXPOSE 8080 - -# set environment variables -ENV PYTHONDONTWRITEBYTECODE 1 -ENV PYTHONUNBUFFERED 1 - -ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "dashdoodler", "python", "doodler.py"] diff --git a/install/requirements.txt b/install/requirements.txt deleted file mode 100644 index bf7e3e3..0000000 --- a/install/requirements.txt +++ /dev/null @@ -1,16 +0,0 @@ -scikit-learn -plotly_express -scipy -pillow -plotly -matplotlib -imageio -cython -pydensecrf -numpy -joblib -flask -pandas -scikit-image -dash -cairosvg diff --git a/results/download_sample_results.py b/results/download_sample_results.py new file mode 100644 index 0000000..70efbca --- /dev/null +++ b/results/download_sample_results.py @@ -0,0 +1,35 @@ +# Written by Dr Daniel Buscombe, Marda Science LLC +# for the USGS Coastal Change Hazards Program +# +# MIT License +# +# Copyright (c) 2020-2021, Marda Science LLC +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+ +import os, requests, zipfile + +url='https://github.com/dbuscombe-usgs/dash_doodler/releases/download/example-results/results2021-06-21-11-05.zip' +filename = os.path.join(os.getcwd(), "results2021-06-21-11-05.zip") +r = requests.get(url, allow_redirects=True) +with open(filename, 'wb') as f: f.write(r.content) + +with zipfile.ZipFile(filename, "r") as z_fp: + z_fp.extractall("./") +os.remove(filename) diff --git a/results/results2021-06-21-11-05/D800_20160308_222129lr02-0_db.npz b/results/results2021-06-21-11-05/D800_20160308_222129lr02-0_db.npz deleted file mode 100644 index 6b64256..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222129lr02-0_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222129lr02-0_label2021-06-21-11-18_db.png b/results/results2021-06-21-11-05/D800_20160308_222129lr02-0_label2021-06-21-11-18_db.png deleted file mode 100644 index 4407d73..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222129lr02-0_label2021-06-21-11-18_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222129lr02-2_db.npz b/results/results2021-06-21-11-05/D800_20160308_222129lr02-2_db.npz deleted file mode 100644 index c4049fa..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222129lr02-2_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222129lr02-2_label2021-06-21-11-12_db.png b/results/results2021-06-21-11-05/D800_20160308_222129lr02-2_label2021-06-21-11-12_db.png deleted file mode 100644 index 83c858d..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222129lr02-2_label2021-06-21-11-12_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222129lr02-3_db.npz b/results/results2021-06-21-11-05/D800_20160308_222129lr02-3_db.npz deleted file mode 100644 index 0021ff3..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222129lr02-3_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222129lr02-3_label2021-06-21-11-06_db.png b/results/results2021-06-21-11-05/D800_20160308_222129lr02-3_label2021-06-21-11-06_db.png deleted file mode 100644 index ddf0998..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222129lr02-3_label2021-06-21-11-06_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222129lr02-3_label2021-06-21-11-07_db.png b/results/results2021-06-21-11-05/D800_20160308_222129lr02-3_label2021-06-21-11-07_db.png deleted file mode 100644 index 9e82fc0..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222129lr02-3_label2021-06-21-11-07_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222129lr03-2_db.npz b/results/results2021-06-21-11-05/D800_20160308_222129lr03-2_db.npz deleted file mode 100644 index d214e09..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222129lr03-2_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222129lr03-2_label2021-06-21-11-12_db.png b/results/results2021-06-21-11-05/D800_20160308_222129lr03-2_label2021-06-21-11-12_db.png deleted file mode 100644 index f9939b1..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222129lr03-2_label2021-06-21-11-12_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222129lr03-3_db.npz b/results/results2021-06-21-11-05/D800_20160308_222129lr03-3_db.npz deleted
file mode 100644 index cfb110c..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222129lr03-3_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222129lr03-3_label2021-06-21-11-10_db.png b/results/results2021-06-21-11-05/D800_20160308_222129lr03-3_label2021-06-21-11-10_db.png deleted file mode 100644 index edee66a..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222129lr03-3_label2021-06-21-11-10_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr00-2_db.npz b/results/results2021-06-21-11-05/D800_20160308_222135lr00-2_db.npz deleted file mode 100644 index f817af3..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr00-2_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr00-2_label2021-06-21-11-09_db.png b/results/results2021-06-21-11-05/D800_20160308_222135lr00-2_label2021-06-21-11-09_db.png deleted file mode 100644 index 4407d73..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr00-2_label2021-06-21-11-09_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr02-0_db.npz b/results/results2021-06-21-11-05/D800_20160308_222135lr02-0_db.npz deleted file mode 100644 index 4da92ef..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr02-0_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr02-0_label2021-06-21-11-07_db.png b/results/results2021-06-21-11-05/D800_20160308_222135lr02-0_label2021-06-21-11-07_db.png deleted file mode 100644 index 4407d73..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr02-0_label2021-06-21-11-07_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr02-2_db.npz b/results/results2021-06-21-11-05/D800_20160308_222135lr02-2_db.npz deleted file mode 100644 index 5d01c11..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr02-2_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr02-2_label2021-06-21-11-08_db.png b/results/results2021-06-21-11-05/D800_20160308_222135lr02-2_label2021-06-21-11-08_db.png deleted file mode 100644 index 2a90f91..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr02-2_label2021-06-21-11-08_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr02-3_db.npz b/results/results2021-06-21-11-05/D800_20160308_222135lr02-3_db.npz deleted file mode 100644 index 01b186f..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr02-3_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr02-3_label2021-06-21-11-09_db.png b/results/results2021-06-21-11-05/D800_20160308_222135lr02-3_label2021-06-21-11-09_db.png deleted file mode 100644 index 6312360..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr02-3_label2021-06-21-11-09_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr03-0_db.npz b/results/results2021-06-21-11-05/D800_20160308_222135lr03-0_db.npz deleted file mode 100644 index fe7e0dc..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr03-0_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr03-0_label2021-06-21-11-08_db.png 
b/results/results2021-06-21-11-05/D800_20160308_222135lr03-0_label2021-06-21-11-08_db.png deleted file mode 100644 index 4407d73..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr03-0_label2021-06-21-11-08_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr03-1_db.npz b/results/results2021-06-21-11-05/D800_20160308_222135lr03-1_db.npz deleted file mode 100644 index c17d5ab..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr03-1_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr03-1_label2021-06-21-11-11_db.png b/results/results2021-06-21-11-05/D800_20160308_222135lr03-1_label2021-06-21-11-11_db.png deleted file mode 100644 index 4407d73..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr03-1_label2021-06-21-11-11_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr03-2_db.npz b/results/results2021-06-21-11-05/D800_20160308_222135lr03-2_db.npz deleted file mode 100644 index c22c0a9..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr03-2_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr03-2_label2021-06-21-11-10_db.png b/results/results2021-06-21-11-05/D800_20160308_222135lr03-2_label2021-06-21-11-10_db.png deleted file mode 100644 index 4dcd44e..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr03-2_label2021-06-21-11-10_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr03-3_db.npz b/results/results2021-06-21-11-05/D800_20160308_222135lr03-3_db.npz deleted file mode 100644 index 0f3b65e..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr03-3_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222135lr03-3_label2021-06-21-11-11_db.png b/results/results2021-06-21-11-05/D800_20160308_222135lr03-3_label2021-06-21-11-11_db.png deleted file mode 100644 index 151c1f8..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222135lr03-3_label2021-06-21-11-11_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222136lr02-2_db.npz b/results/results2021-06-21-11-05/D800_20160308_222136lr02-2_db.npz deleted file mode 100644 index 2bb71ca..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222136lr02-2_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222136lr02-2_label2021-06-21-11-18_db.png b/results/results2021-06-21-11-05/D800_20160308_222136lr02-2_label2021-06-21-11-18_db.png deleted file mode 100644 index a540e6c..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222136lr02-2_label2021-06-21-11-18_db.png and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222136lr02-3_db.npz b/results/results2021-06-21-11-05/D800_20160308_222136lr02-3_db.npz deleted file mode 100644 index 72d9e21..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222136lr02-3_db.npz and /dev/null differ diff --git a/results/results2021-06-21-11-05/D800_20160308_222136lr02-3_label2021-06-21-11-18_db.png b/results/results2021-06-21-11-05/D800_20160308_222136lr02-3_label2021-06-21-11-18_db.png deleted file mode 100644 index 9aaa540..0000000 Binary files a/results/results2021-06-21-11-05/D800_20160308_222136lr02-3_label2021-06-21-11-18_db.png and /dev/null 
differ diff --git a/sample/D800_20160308_221830-2.JPG b/sample/D800_20160308_221830-2.JPG deleted file mode 100644 index a0e57a1..0000000 Binary files a/sample/D800_20160308_221830-2.JPG and /dev/null differ diff --git a/sample/D800_20160308_222135-2.JPG b/sample/D800_20160308_222135-2.JPG deleted file mode 100644 index dbcd6c9..0000000 Binary files a/sample/D800_20160308_222135-2.JPG and /dev/null differ diff --git a/sample/D800_20160308_222135-3.JPG b/sample/D800_20160308_222135-3.JPG deleted file mode 100644 index b63e1db..0000000 Binary files a/sample/D800_20160308_222135-3.JPG and /dev/null differ diff --git a/sample/D800_20160308_222138-3.JPG b/sample/D800_20160308_222138-3.JPG deleted file mode 100644 index 0ae8f4d..0000000 Binary files a/sample/D800_20160308_222138-3.JPG and /dev/null differ diff --git a/sample/D800_20160308_222140-3.JPG b/sample/D800_20160308_222140-3.JPG deleted file mode 100644 index 18da654..0000000 Binary files a/sample/D800_20160308_222140-3.JPG and /dev/null differ diff --git a/sample/D800_20160308_222143-2.JPG b/sample/D800_20160308_222143-2.JPG deleted file mode 100644 index 61eabb3..0000000 Binary files a/sample/D800_20160308_222143-2.JPG and /dev/null differ diff --git a/sample/D800_20160308_222143-3.JPG b/sample/D800_20160308_222143-3.JPG deleted file mode 100644 index c211317..0000000 Binary files a/sample/D800_20160308_222143-3.JPG and /dev/null differ diff --git a/sample/D800_20160308_222208-3.JPG b/sample/D800_20160308_222208-3.JPG deleted file mode 100644 index 5b5b677..0000000 Binary files a/sample/D800_20160308_222208-3.JPG and /dev/null differ diff --git a/sample/D800_20160308_222216-0.JPG b/sample/D800_20160308_222216-0.JPG deleted file mode 100644 index 5a46360..0000000 Binary files a/sample/D800_20160308_222216-0.JPG and /dev/null differ diff --git a/sample/D800_20160308_222216-2.JPG b/sample/D800_20160308_222216-2.JPG deleted file mode 100644 index dd8b81c..0000000 Binary files a/sample/D800_20160308_222216-2.JPG and /dev/null differ diff --git a/sample/D800_20160308_222216-3.JPG b/sample/D800_20160308_222216-3.JPG deleted file mode 100644 index b90930e..0000000 Binary files a/sample/D800_20160308_222216-3.JPG and /dev/null differ diff --git a/sample/D800_20160308_222220-2.JPG b/sample/D800_20160308_222220-2.JPG deleted file mode 100644 index e171f89..0000000 Binary files a/sample/D800_20160308_222220-2.JPG and /dev/null differ diff --git a/sample/D800_20160308_222226-2.JPG b/sample/D800_20160308_222226-2.JPG deleted file mode 100644 index e031351..0000000 Binary files a/sample/D800_20160308_222226-2.JPG and /dev/null differ diff --git a/sample/D800_20160308_222234-2.JPG b/sample/D800_20160308_222234-2.JPG deleted file mode 100644 index 624e5f1..0000000 Binary files a/sample/D800_20160308_222234-2.JPG and /dev/null differ diff --git a/sample/D800_20160308_222234-3.JPG b/sample/D800_20160308_222234-3.JPG deleted file mode 100644 index 4d74bf5..0000000 Binary files a/sample/D800_20160308_222234-3.JPG and /dev/null differ diff --git a/src/__pycache__/defaults.cpython-36.pyc b/src/__pycache__/defaults.cpython-36.pyc deleted file mode 100644 index d9efe3f..0000000 Binary files a/src/__pycache__/defaults.cpython-36.pyc and /dev/null differ diff --git a/src/__pycache__/image_segmentation.cpython-36.pyc b/src/__pycache__/image_segmentation.cpython-36.pyc deleted file mode 100644 index eb6533e..0000000 Binary files a/src/__pycache__/image_segmentation.cpython-36.pyc and /dev/null differ
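Both the new startup block in `doodler.py` and the new `results/download_sample_results.py` repeat the same release-asset pattern: fetch a zip with `requests`, extract it with `zipfile`, then delete the archive. A minimal sketch of that pattern as a single reusable helper is below; the function name `fetch_and_extract`, its arguments, and the `raise_for_status()` check are illustrative additions, not code from this repository.

```python
# Sketch of the download-and-extract pattern used by both doodler.py
# (sample imagery) and results/download_sample_results.py (example results).
# The helper name and defaults are illustrative, not part of the repo.
import os
import zipfile
import requests

def fetch_and_extract(url, extract_to="./"):
    """Download a zip release asset, extract it, then remove the archive."""
    filename = os.path.join(os.getcwd(), os.path.basename(url))
    r = requests.get(url, allow_redirects=True)
    r.raise_for_status()  # fail loudly on a bad download instead of writing an HTML error page
    with open(filename, "wb") as f:
        f.write(r.content)
    with zipfile.ZipFile(filename, "r") as z_fp:
        z_fp.extractall(extract_to)
    os.remove(filename)

# e.g. the sample imagery fetched at startup when APP_DOWNLOAD_SAMPLE is True:
# fetch_and_extract('https://github.com/dbuscombe-usgs/dash_doodler/releases/download/data/sample_images.zip', "./assets/")
```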