
PaddleSharp 🌟

💗 .NET wrapper for the PaddleInference C API, including PaddleOCR 📖, PaddleDetection 🎯 and RotationDetection 🔄. It supports Windows (x64) 💻, NVIDIA CUDA 10.2+ GPUs 🎮 and Linux (Ubuntu 22.04 x64) 🐧, and currently contains the following main components:

  • PaddleOCR 📖 supports 14 OCR language models downloaded on demand, rotated text angle detection, 180-degree text detection, and table recognition 📊.
  • PaddleDetection 🎯 supports the PPYolo and PicoDet detection models 🏹.
  • RotationDetection 🔄 uses Baidu's official text_image_orientation_infer model to detect a text image's rotation angle (0, 90, 180, 270).

NuGet Packages/Docker Images 📦

Release notes 📝

Please check out this page 📄.

Infrastructure packages 🏗️

NuGet Package 💼 | Description 📚
Sdcb.PaddleInference | Paddle Inference C API .NET binding ⚙️
Sdcb.PaddleInference.runtime.win64.openblas | Paddle Inference native windows-x64-openblas binding 🔗
Sdcb.PaddleInference.runtime.win64.mkl | Paddle Inference native windows-x64-mkldnn binding 🔗

Note: on Linux you do not need a native binding NuGet package like on Windows (Sdcb.PaddleInference.runtime.win64.mkl); instead, you can/should base your development on one of the following Docker images 🐳:

Docker Image 🐳 | Description 📚
sdflysha/dotnet6-paddle | PaddleInference 2.4.2, OpenCV 4.7.0, based on the official Ubuntu 22.04 .NET 6 Runtime image 🌐
sdflysha/dotnet6sdk-paddle | PaddleInference 2.4.2, OpenCV 4.7.0, based on the official Ubuntu 22.04 .NET 6 SDK image 🌐

Paddle Inference GPU package 🎮

Since the GPU packages are too large (>1.5GB), I cannot publish them to nuget.org, and GitHub has a 250MB upload limit; there are some related issues tracking this.

However, you can build your own GPU NuGet package using 01-build-native.linq 🛠️.

Here are the GPU packages that I compiled (not official Baidu builds) 🛠️:

NuGet Package 💼 | Description 📚
cuda117_cudnn84_tr84_sm86 | win64/CUDA 11.7/cuDNN 8.4/TensorRT 8.4/sm86 binding 🔗
cuda102_cudnn76_sm61_75 | win64/CUDA 10.2/cuDNN 7.6/sm61+sm75 binding 🔗
cuda116_cudnn84_sm86_onnx | win64/CUDA 11.6/cuDNN 8.4/sm86/onnx binding 🔗

PaddleOCR packages 📖

NuGet Package 💼 | Description 📚
Sdcb.PaddleOCR | PaddleOCR library (based on Sdcb.PaddleInference) ⚙️
Sdcb.PaddleOCR.Models.Online | Online PaddleOCR models, downloaded on first use 🌐
Sdcb.PaddleOCR.Models.LocalV3 | Full local v3 models, including multiple languages (~130MB) 🗺️
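
Below is a hedged sketch of how the online model package can be used; the exact property name (OnlineFullModels.EnglishV3 here) is an illustrative assumption, pick whichever language model you need:

using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.Online;

// Downloads and caches the model on first use (EnglishV3 is only an example choice)
FullOcrModel model = await OnlineFullModels.EnglishV3.DownloadAsync();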

Rotation Detection packages (part of PaddleCls) 🔄

NuGet Package 💼 | Description 📚
Sdcb.RotationDetector | RotationDetector library (based on Sdcb.PaddleInference) ⚙️
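
For reference, a minimal sketch of the rotation detector, assuming the embedded default model and OpenCvSharp for image loading (the image path is a placeholder):

using OpenCvSharp;
using Sdcb.RotationDetector;

using PaddleRotationDetector detector = new(RotationDetectionModel.EmbeddedDefault);
using Mat src = Cv2.ImRead("text-photo.jpg"); // placeholder path
RotationDegree degree = detector.Run(src);    // 0 / 90 / 180 / 270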

PaddleDetection packages 🎯

NuGet Package 💼 | Description 📚
Sdcb.PaddleDetection | PaddleDetection library (based on Sdcb.PaddleInference) ⚙️

Usage 📚
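
Detailed usage is documented in the project wiki. As a quick orientation, here is a minimal OCR sketch using Sdcb.PaddleOCR with the LocalV3 models and OpenCvSharp; the model choice (LocalFullModels.ChineseV3) and the image path are illustrative assumptions:

using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.LocalV3;

// Pick a bundled local v3 model (ChineseV3 is only an example choice)
FullOcrModel model = LocalFullModels.ChineseV3;

using PaddleOcrAll all = new(model, PaddleDevice.Mkldnn())
{
   AllowRotateDetection = true,     // detect rotated text
   Enable180Classification = false, // skip the 180-degree classifier for speed
};

using Mat src = Cv2.ImRead("sample.jpg"); // placeholder path
PaddleOcrResult result = all.Run(src);
Console.WriteLine(result.Text);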

FAQ ❓

Why does my code run fine on my Windows machine but throw a DllNotFoundException on another machine? 💻

  1. Please ensure the latest Visual C++ Redistributable is installed on Windows (typically it is installed automatically if you have Visual Studio installed) 🛠️. Otherwise, it will fail with the following error (Windows only):

    DllNotFoundException: Unable to load DLL 'paddle_inference_c' or one of its dependencies (0x8007007E)
    

    If it's Unable to load DLL OpenCvSharpExtern.dll or one of its dependencies, then most likely the Media Foundation feature is not installed on the machine (e.g. Windows Server 2012 R2).

  2. Many old CPUs do not support AVX instructions; please ensure your CPU supports AVX, or download the x64-noavx-openblas DLLs and disable Mkldnn with PaddleDevice.Openblas() (see the sketch after this list) 🚀

  3. If you're using Win7-x64 and your CPU does support AVX2, then you might also need to extract the following 3 DLLs into the C:\Windows\System32 folder to make it run: 💾

    • api-ms-win-core-libraryloader-l1-2-0.dll
    • api-ms-win-core-processtopology-obsolete-l1-1-0.dll
    • API-MS-Win-Eventing-Provider-L1-1-0.dll

    You can download these 3 DLLs here: win7-x64-onnxruntime-missing-dlls.zip ⬇️
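
As mentioned in item 2 above, here is a hedged sketch of falling back to OpenBLAS when AVX is unavailable (model stands for whatever FullOcrModel you already load):

// Use OpenBLAS instead of MKL-DNN on CPUs without AVX support
using PaddleOcrAll all = new(model, PaddleDevice.Openblas())
{
   AllowRotateDetection = true,
   Enable180Classification = false,
};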

How to enable GPU? 🎮

Enabling GPU support can significantly improve throughput and lower CPU usage. 🚀

Steps to use the GPU:

  1. (for Windows) Install the Sdcb.PaddleInference.runtime.win64.cuda* package instead of Sdcb.PaddleInference.runtime.win64.mkl; do not install both. 📦
  2. Install CUDA from NVIDIA, and add its location to the PATH (Windows) or LD_LIBRARY_PATH (Linux) environment variable 🔧
  3. Install cuDNN from NVIDIA, and add its location to the PATH (Windows) or LD_LIBRARY_PATH (Linux) environment variable 🛠️
  4. Install TensorRT from NVIDIA, and add its location to the PATH (Windows) or LD_LIBRARY_PATH (Linux) environment variable ⚙️

You can refer to this blog post (in Chinese) for GPU usage on Windows: 关于PaddleSharp GPU使用 常见问题记录 (Notes on common issues when using PaddleSharp with GPU) 📝

If you're using Linux, you need to compile your own OpenCvSharp4 environment following the Docker build scripts, and complete the CUDA/cuDNN/TensorRT configuration tasks yourself. 🐧

After these steps are completed, you can try specifying PaddleDevice.Gpu() in the paddle device configuration parameter, then enjoy the performance boost! 🎉
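
For example, a hedged sketch of switching Sdcb.PaddleOCR to the GPU (model stands for whatever FullOcrModel you already use on the CPU):

// Same code as the CPU path; only the device configuration changes
using PaddleOcrAll all = new(model, PaddleDevice.Gpu())
{
   AllowRotateDetection = true,
   Enable180Classification = true,
};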

TensorRT 🚄

To use TensorRT, just specify PaddleDevice.Gpu().And(PaddleDevice.TensorRt("shape-info.txt")) instead of PaddleDevice.Gpu() to make it work. 💡

Please be aware that this shape info text file (*.txt) is bound to your model. Different models have different shape info, so if you're using a complex model like Sdcb.PaddleOCR, you should use different shape files for different models, like this:

using PaddleOcrAll all = new(model,
   PaddleDevice.Gpu().And(PaddleDevice.TensorRt("det.txt")),
   PaddleDevice.Gpu().And(PaddleDevice.TensorRt("cls.txt")),
   PaddleDevice.Gpu().And(PaddleDevice.TensorRt("rec.txt")))
{
   Enable180Classification = true,
   AllowRotateDetection = true,
};

In this case:

  • DetectionModel will use det.txt 🔍
  • 180DegreeClassificationModel will use cls.txt 🔃
  • RecognitionModel will use rec.txt 🔡

NOTE 📝:

The first run with TensorRT will generate a shape info *.txt file in this folder: %AppData%\Sdcb.PaddleInference\TensorRtCache. It takes around 100 seconds to finish TensorRT cache generation; after that, it should be faster than plain GPU mode. 🚀

In this case, if something strange happens (for example, you mistakenly create the same shape-info.txt file for different models), you can delete this folder to generate TensorRT cache again: %AppData%\Sdcb.PaddleInference\TensorRtCache. 🗑️

Thanks & Sponsors 🙏

Contact 📞

QQ group of C#/.NET computer vision technical communication (C#/.NET计算机视觉技术交流群): 579060605
