Added ability to pass TF model as a config parameter #63
@blakeblackshear here is what this PR suggests:
New config section:
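The actual config keys are not shown here, but a hypothetical sketch of such a section might look like the following (the key names and paths are illustrative only, not the PR's actual schema):

```yaml
# Hypothetical example -- key names and paths are illustrative,
# not the actual schema introduced by this PR
objects:
  model: /models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
  labels: /models/coco_labels.txt
```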
To enable this, I also switched from DetectWithInputTensor to DetectWithImage, which saves a few lines of code for figuring out the model's input tensor shape. This works for models that take image input; if we later want to support video, sound, or other model types, DetectWithImage won't be sufficient.
I did not notice any performance impact from feeding frames to the Coral as PIL Images via DetectWithImage instead of as numpy arrays via DetectWithInputTensor.
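As a rough sketch of that switch (the engine and frame source are placeholders; the DetectWithImage keyword arguments follow the old edgetpu DetectionEngine API and should be checked against the installed version):

```python
import numpy as np
from PIL import Image


def detect_frame(engine, frame_bgr):
    """Run detection on one video frame.

    `frame_bgr` is a numpy array in BGR channel order (as produced by
    OpenCV capture). DetectWithImage expects a PIL Image, so we reverse
    the channels to RGB and convert before calling the engine.
    """
    image = Image.fromarray(frame_bgr[:, :, ::-1])
    # Keyword arguments per the (old) edgetpu DetectionEngine API;
    # verify against the library version in use.
    return engine.DetectWithImage(
        image, threshold=0.5, top_k=3,
        keep_aspect_ratio=True, relative_coord=False,
    )
```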
Looking forward to feedback.
A potential next step would be to enable chaining of models: if model A detects a label in a region (e.g. person), crop the detected box from the original image and feed it to the next model (face detection), then pass that result to a third model to recognize the face. This would take a good amount of effort to implement and warrants a community discussion first; I'd be happy to contribute.
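The chaining idea above could be sketched roughly as follows; all three model callables are hypothetical placeholders, and boxes are assumed to be pixel coordinates relative to the image each model was given:

```python
import numpy as np


def chain(frame, person_detector, face_detector, face_recognizer):
    """Sketch of model chaining (all model interfaces are placeholders).

    Each person box detected in the full frame is cropped and handed to
    a face detector; each face box is cropped from the person region and
    handed to a recognizer. Returns the list of recognized names.
    """
    names = []
    for (x1, y1, x2, y2) in person_detector(frame):
        person = frame[y1:y2, x1:x2]          # crop detected person box
        for (fx1, fy1, fx2, fy2) in face_detector(person):
            face = person[fy1:fy2, fx1:fx2]   # crop face within person
            names.append(face_recognizer(face))
    return names
```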