v0.9.10

@paulguerrie paulguerrie released this 13 Feb 18:41
· 1189 commits to main since this release

πŸš€ Added

inference Benchmarking πŸƒβ€β™‚οΈ

A new command has been added to the inference-cli for benchmarking performance. You can now test inference in different environments with different configurations and measure how it performs. Watch us test the speed and scalability of hosted inference on the Roboflow platform 🤯

(Video: scaling_of_hosted_roboflow_platform.mov)

Run your own benchmark with a simple command:

inference benchmark python-package-speed -m coco/3

See the docs for more details.
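Under the hood, a speed benchmark of this kind boils down to timing repeated calls to a model and aggregating latency and throughput statistics. As a rough illustration (this is not the inference-cli implementation, just a minimal sketch with a hypothetical stand-in for the model call), the measurement can look like:

```python
import statistics
import time

def benchmark(predict, warmup=10, iterations=100):
    """Time repeated calls to `predict` and report latency/throughput stats."""
    for _ in range(warmup):  # warm-up calls are excluded from the stats
        predict()
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        predict()
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "avg_latency_ms": statistics.mean(latencies) * 1000,
        "p99_latency_ms": latencies[int(0.99 * len(latencies))] * 1000,
        "throughput_rps": iterations / sum(latencies),
    }

# Hypothetical stand-in for a real model call; replace with your inference code.
stats = benchmark(lambda: time.sleep(0.005))
print(stats)
```

The real CLI adds model loading, configurable request rates, and formatted reporting on top of this basic loop.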

🌱 Changed

  • Improved serialisation logic for requests and responses, which helps the Roboflow platform improve model monitoring

πŸ”¨ Fixed

  • Bug #260, which caused inference API instability in multi-worker setups and when swapping a large number of models - the API container should no longer raise spurious HTTP 5xx errors due to model management
  • Faulty logic for getting request_id, which caused errors in the parallel-http container

πŸ† Contributors

@paulguerrie (Paul Guerrie), @SolomonLake (Solomon Lake), @robiscoding (Rob Miller), @PawelPeczek-Roboflow (Paweł Pęczek)

Full Changelog: v0.9.9...v0.9.10