
Vision & Mission


Industry-standard tools for artificial intelligence have been designed around several assumptions: data is centralized in a single compute cluster, the cluster lives in a secure cloud, and the resulting models are owned by a central authority. We envision a world free of these restrictions, one in which AI tools treat privacy, security, and multi-owner governance as first-class citizens.

With OpenMined, an AI model can be governed by multiple owners and trained securely on an unseen, distributed dataset.

The mission of the OpenMined community is to create an accessible ecosystem of tools for private, secure, multi-owner governed AI. We do this by extending popular libraries like TensorFlow and PyTorch with advanced techniques in cryptography and private machine learning.

Private

Privacy is at the core of OpenMined. We build tools that allow data owners to keep their data private throughout the model training process, using two methods of privacy preservation: federated learning and differential privacy.

Federated Learning

Instead of bringing all of the data to one place for training, federated learning brings the model to the data. This allows each data owner to maintain the only copy of their information.
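Below is a minimal sketch of this idea in plain PyTorch, using simulated owners and federated averaging; the names and data here are illustrative, not the OpenMined API.

    import torch
    import torch.nn as nn

    # Two simulated data owners; in real federated learning each would
    # hold their data on their own device.
    owners = {
        "alice": (torch.randn(32, 10), torch.randn(32, 1)),
        "bob": (torch.randn(32, 10), torch.randn(32, 1)),
    }

    global_model = nn.Linear(10, 1)

    for _ in range(5):  # federated rounds
        local_states = []
        for name, (x, y) in owners.items():
            # The model travels to the data: copy it, train it locally.
            local = nn.Linear(10, 1)
            local.load_state_dict(global_model.state_dict())
            opt = torch.optim.SGD(local.parameters(), lr=0.1)
            for _ in range(3):  # local training steps
                opt.zero_grad()
                nn.functional.mse_loss(local(x), y).backward()
                opt.step()
            local_states.append(local.state_dict())
        # Federated averaging: combine the locally trained weights.
        avg = {k: torch.stack([s[k] for s in local_states]).mean(dim=0)
               for k in local_states[0]}
        global_model.load_state_dict(avg)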

Differential Privacy

Differential privacy is a set of techniques for preventing a model from accidentally memorizing secrets present in its training data.
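To make the idea concrete, here is a toy sketch of the Laplace mechanism on a count query; it illustrates the principle only and is not OpenMined's API.

    import numpy as np

    rng = np.random.default_rng()

    def noisy_count(data, predicate, epsilon=0.5):
        # A count query has sensitivity 1: adding or removing one person
        # changes the answer by at most 1, so Laplace noise with scale
        # 1/epsilon yields epsilon-differential privacy.
        true_count = sum(1 for x in data if predicate(x))
        return true_count + rng.laplace(scale=1.0 / epsilon)

    ages = [34, 45, 29, 62, 51]
    print(noisy_count(ages, lambda a: a > 40))  # a noisy answer, never the raw count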

Secure

OpenMined is building tools that allow models to be trained within insecure, distributed environments such as end-user devices. We aim to support two methods of secure computation: multi-party computation and homomorphic encryption.

Multi-party Computation

When a model has multiple owners, multi-party computation lets those owners share control of the model without any of them seeing its contents, so that no single owner can use or train it unilaterally.
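A toy sketch of the core mechanism, additive secret sharing, is shown below; it is illustrative only, not OpenMined's protocol. A value is split into random shares that are individually meaningless, yet the parties can still compute on them.

    import random

    Q = 2**61 - 1  # a large prime modulus; any single share reveals nothing

    def share(secret, n_parties=3):
        # Split a value into n additive shares that sum to it mod Q.
        shares = [random.randrange(Q) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % Q)
        return shares

    def reconstruct(shares):
        return sum(shares) % Q

    a_shares, b_shares = share(5), share(7)
    # Each party adds the shares it holds, locally; no party ever sees 5 or 7.
    c_shares = [(a + b) % Q for a, b in zip(a_shares, b_shares)]
    print(reconstruct(c_shares))  # 12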

Homomorphic Encryption

When a model has a single owner, homomorphic encryption allows that owner to encrypt the model so that untrusted third parties can train or use it without being able to steal it.
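A minimal sketch of the principle using the python-paillier library (an additively homomorphic scheme chosen here for illustration; OpenMined's own stack may differ):

    from phe import paillier  # pip install phe

    public_key, private_key = paillier.generate_paillier_keypair()

    # The owner encrypts a model weight; an untrusted party can still
    # compute on the ciphertext without ever seeing the plaintext.
    enc_weight = public_key.encrypt(0.5)
    enc_result = enc_weight * 3 + 1.2  # ciphertext * scalar, ciphertext + scalar

    # Only the key holder can decrypt the result.
    print(private_key.decrypt(enc_result))  # 2.7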

Governance

The OpenMined ecosystem supports various systems of shared ownership, so model owners can design control structures according to their own preferences. We provide two systems of governance: consensus and threshold governance.

Consensus Governance

The default governance structure requires that all data or model owners agree before training or inference can occur.

Threshold Governance

An alternative governance structure requires only a minimum threshold of data or model owners to agree before training or inference can occur.
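Both policies reduce to a single approval check, sketched below with a hypothetical helper (not an OpenMined API): consensus is the special case where the threshold equals the number of owners.

    def can_execute(approvals, owners, threshold=None):
        # Consensus governance: every owner must approve (threshold=None).
        # Threshold governance: at least `threshold` owners must approve.
        needed = len(owners) if threshold is None else threshold
        return len(set(approvals) & set(owners)) >= needed

    owners = {"alice", "bob", "carol"}
    print(can_execute({"alice", "bob"}, owners))               # False: consensus needs all 3
    print(can_execute({"alice", "bob"}, owners, threshold=2))  # True: 2-of-3 suffices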

How it Works

Create a model

A data scientist creates a model in a framework such as PyTorch, TensorFlow, or Keras, defines a bounty they are willing to pay to have it trained, and requests a specific kind of private training data (e.g., personal health information, social media posts, smart-home metadata).
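As a sketch, this first step might look like the following; the model is ordinary PyTorch, while the bounty and data-request fields are hypothetical, shown only to make the workflow concrete.

    import torch.nn as nn

    # An ordinary PyTorch model the data scientist wants trained.
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))

    # Hypothetical request metadata that would accompany the model.
    request = {
        "bounty": 100.0,  # payment offered for successful training
        "data_requested": "smart-home metadata",
    }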

Contribute

[Interactive diagram: data mines join the OpenMined grid, and a model passes through five steps: Create, Distribute, Train, Reward, Deliver.]

Milestones

Each milestone below is tracked as Done, In Progress, or Planned.

Private Grid in Python

Privacy-preserved federated learning and secure prediction in PyTorch

As OpenMined's main offering, the private grid lets data scientists incorporate federated learning and secure prediction into their existing deep learning infrastructure. Training can then happen in a private cloud while minimizing the risk of leaking intellectual property or private training data.

  • Virtual workers learning environment for using OpenMined without requiring access to an external network (see the sketch after this list)
  • Socket workers for training and prediction over a cluster of machines connected using socket connections
  • Federated learning for arbitrary PyTorch models
  • Multi-party computation training and prediction for arbitrary PyTorch models
  • Secure aggregation for federated learning using secure multi-party computation
  • Differential privacy for arbitrary PyTorch models using PATE
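A minimal sketch of the virtual-worker workflow is shown below; the API names follow early PySyft releases and may differ between versions.

    import torch
    import syft as sy  # pip install syft

    hook = sy.TorchHook(torch)              # extend torch tensors with pointer ops
    bob = sy.VirtualWorker(hook, id="bob")  # a simulated remote data owner

    x = torch.tensor([1.0, 2.0, 3.0]).send(bob)  # the data lives on bob's worker
    y = x + x                                    # computed remotely on bob
    print(y.get())                               # tensor([2., 4., 6.]) retrieved locally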

Browser Training and Secure Prediction

Privacy-preserved federated learning and secure prediction of models in the browser

Federated learning can be done in the browser with a JavaScript port of the Syft project. This allows data scientists to create a private federated learning compute grid that receives gradients from any client- or server-side JS environment. Not only does the model remain private from the user; the user's data remains entirely private as well.

Public OpenMined Grid

Public or private environments for federated learning

Technically identical to the private grid, the public grid is an open marketplace where individuals can offer their data for training and where models can be offered for secure prediction in web applications.
