BlindAI

Website Blog LinkedIn

⚠️ Warning: BlindAI is unfortunately not actively maintained at the moment, so you should not use it to process sensitive data. If you have a use case involving confidential data and are interested in using BlindAI, please contact us to discuss potential support and collaboration.

BlindAI is an AI privacy solution, allowing users to query popular AI models or serve their own models whilst ensuring that users' data remains private every step of the way.

Explore the docs »

Try Demo · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Getting Help
  5. License
  6. Contact

🔒 About The Project

BlindAI is an open-source solution to query and deploy AI models while guaranteeing data privacy. The querying of models is done via our easy-to-use Python library.

Data sent by users to the AI model is kept confidential at all times by hardware-enforced Trusted Execution Environments. We explain how they keep data and models safe in detail here.

There are two main scenarios for BlindAI:

  • BlindAI API: Using BlindAI to query popular AI models hosted by Mithril Security.
  • BlindAI Core: Using BlindAI's underlying technology to host your own BlindAI server instance to securely deploy your own models.

You can find out more about BlindAI API and BlindAI Core here.
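Both scenarios are served by the same Python client library. As a rough, hedged orientation before the examples below (the PyPI package name blindai and the api / core entry points are assumptions inferred from those examples, so check the documentation for the exact names):

# Install the client library (package name assumed):
#   pip install blindai

import blindai

# BlindAI API entry point (models hosted by Mithril Security):
#   blindai.api.Audio.transcribe(...)
# BlindAI Core entry point (your own BlindAI server instance):
#   client = blindai.core.connect(...)   # name and parameters assumed, see the docs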

Built With

Rust Python Intel-SGX Tract

(back to top)

🚀 Getting Started

We strongly recommend getting started with our Quick tour to discover BlindAI with the open-source model Whisper.

But here’s a taste of what using BlindAI could look like 🍒

BlindAI API

import blindai

transcript = blindai.api.Audio.transcribe(
    file="patient_104678.wav"
)
print(transcript)

The patient is a 55-year old male with known coronary artery disease.

BlindAI Core

AI company's side: uploading and deleting models

An AI company wants to provide their model as an easy-to-use service. They upload it to the server, which assigns it a model ID.

# AI company uploads their ONNX model; the server returns a model ID
response = client_1.upload_model(model="./COVID-Net-CXR-2.onnx")
MODEL_ID = response.model_id
print(MODEL_ID)

8afcdab8-209e-4b93-9403-f3ea2dc0c3ae
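The client_1 object used above is a connection to the BlindAI server, established before the upload. Below is a minimal sketch of that step, assuming a blindai.core.connect(addr=...) helper as suggested by the documentation; the exact function name and parameters may differ.

import blindai

# Connect to the BlindAI server; the SGX enclave is attested during the
# handshake so data is only sent to a verified Trusted Execution Environment.
# The `addr` value and the connect() signature are assumptions.
client_1 = blindai.core.connect(addr="127.0.0.1")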

Once the collaboration with clients is over, the AI company can delete their model from the server.

# AI company deletes model after use
client_1.delete_model(MODEL_ID)

Client's side: running a model on confidential data

The client wants to feed their confidential data to the model while protecting it from third-party access. They connect to the server and run the model on a confidential chest X-ray image.

pos_ret = client_2.run_model(MODEL_ID, positive)
print("Probability of Covid for positive image is", pos_ret.output[0].as_flat()[0][1])

Probability of Covid for positive image is 0.890598714351654
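The positive variable passed to run_model above is the model input: the confidential chest X-ray image preprocessed into the tensor layout the ONNX model expects. A hedged sketch of that preprocessing, assuming a 480x480 RGB input scaled to [0, 1] (the real COVID-Net-CXR-2 pipeline and the file name used here may differ):

import numpy as np
from PIL import Image

# Load the confidential X-ray and resize it to the assumed model input size.
img = Image.open("positive_cxr.png").convert("RGB").resize((480, 480))

# Scale pixels to [0, 1] and add a batch dimension: shape (1, 480, 480, 3).
positive = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)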

For more examples, please refer to the Documentation

(back to top)

🙋 Getting help

📜 License

Distributed under the Apache License, version 2.0. See LICENSE.md for more information.

📇 Contact

Mithril Security - @MithrilSecurity - [email protected]

Project Link: https://github.com/mithril-security/blindai

(back to top)