Announcing PlaidML: Open Source Deep Learning for Every Platform

Oct 20, 2017 | By: Choong Ng


We’re pleased to announce the next step towards deep learning for every device and platform. Today Vertex.AI is releasing PlaidML, our open source portable deep learning engine. Our mission is to make deep learning accessible to every person on every device, and we’re building PlaidML to help make that a reality. We’re starting by supporting the most popular hardware and software already in the hands of developers, researchers, and students. The initial version of PlaidML runs on most existing PC hardware with OpenCL-capable GPUs from NVIDIA, AMD, or Intel. Additionally, we’re including support for running the widely popular Keras framework on top of PlaidML, allowing existing code and tutorials to run unchanged.
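For example, pointing existing Keras code at PlaidML is designed to be a one-line change. Here is a minimal sketch, assuming the plaidml and plaidml-keras packages from the install steps later in this post; the model itself is just a placeholder:

# A minimal sketch: run unchanged Keras code on PlaidML.
# Assumes the plaidml and plaidml-keras packages are installed (see below).
import plaidml.keras
plaidml.keras.install_backend()  # must run before importing keras

# From here on, ordinary Keras code executes on PlaidML's OpenCL backend.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(10, activation='softmax', input_shape=(784,)))
model.compile(optimizer='sgd', loss='categorical_crossentropy')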

[Diagram: PlaidML block diagram showing initial support]

Our company uses PlaidML at the core of our deep learning vision systems for embedded devices, and to date we’ve focused on support for image processing neural networks like ResNet-50, Xception, and MobileNet. By sharing this technology we see potential to greatly improve the accessibility of deep learning. This release is just one early step. Currently PlaidML supports Keras, OpenCL, and Linux. In the future, we’ll be adding support for macOS and Windows. We’ll also be adding compatibility with frameworks such as TensorFlow, PyTorch, and Deeplearning4j. For vision workloads we’ve shown results on desktop hardware competitive with hand-tuned but vendor-locked engines like cuDNN; we’ll continue that work while also adding broader task support, such as recurrent nets for video, speech, and text processing.

An update on performance

Throughput is a key factor both for computationally intensive development workflows and for running the most sophisticated models in production. We wrote about this in a previous post comparing PlaidML inference throughput to TensorFlow on cuDNN. Since that post, the TensorFlow team has made major performance improvements, greatly increasing the unbatched Xception throughput we quoted. After updating to Keras 2.0.8, cuDNN 6, and TensorFlow 1.3, TensorFlow is within about 4% of PlaidML’s throughput:

[Chart: Unbatched Xception inference rate, PlaidML vs. TensorFlow, 1.04 to 1.00 (longer bars are better)]

It’s a great improvement, and we continue to use TensorFlow as our benchmark in other areas where PlaidML is less mature. Practically speaking, PlaidML’s throughput for image inference is suitable for real-world workloads today. The chart below shows PlaidML throughput for a variety of image networks and GPU models; the units are the ratio of throughput to TensorFlow on an NVIDIA Tesla K80 (longer bars are faster):

[Chart: Unbatched inference throughput across platforms]

As we continue to make improvements and add networks to our benchmarking suite we’ll share the results here.

Getting started with PlaidML

Part of making it as easy as possible to get started with deep learning is making the tools easy to install. The quickest way to get started with PlaidML is to install a binary release. For more detailed notes about system requirements and which features are currently implemented, see the README on GitHub. Briefly, the system requirements are:

- A Linux system with Python and pip
- An OpenCL-capable GPU from NVIDIA, AMD, or Intel

To get PlaidML installed and run a quick benchmark, all you need to do is:

sudo pip install plaidml plaidml-keras   # the PlaidML engine and its Keras backend
git clone https://github.com/plaidml/plaidbench
cd plaidbench
pip install -r requirements.txt          # plaidbench's own dependencies
python plaidbench.py mobilenet           # run the MobileNet benchmark

By default, plaidbench will benchmark 1024 inferences at batch size 1 using Keras on PlaidML and print a result similar to the following:

Using PlaidML backend.
INFO:plaidml:Opening device "tesla_p100-pcie-16gb.0": "Tesla P100-PCIE-16GB"
…
Example finished, elapsed: 11.72719311714 (compile), 6.80260443687 (execution)
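For a concrete picture of what’s being measured, here is a rough Python sketch of an unbatched benchmark along these lines. It’s an illustration, not plaidbench’s actual implementation, and it assumes the plaidml-keras shim installed above:

# Rough sketch of an unbatched MobileNet inference benchmark.
# Illustrative only; plaidbench's real implementation differs.
import time
import numpy as np

import plaidml.keras
plaidml.keras.install_backend()  # route Keras onto PlaidML

from keras.applications.mobilenet import MobileNet

model = MobileNet(weights=None)  # random weights; we only measure speed
sample = np.random.rand(1, 224, 224, 3).astype(np.float32)

start = time.time()
model.predict(sample)            # the first call compiles the kernels
compile_time = time.time() - start

start = time.time()
for _ in range(1024):            # 1024 inferences at batch size 1
    model.predict(sample)
print('elapsed: %f (compile), %f (execution)' % (compile_time, time.time() - start))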

In this case the result is 6.8 seconds on a Tesla P100 on Google Cloud Platform. To test the same workload running on Keras’s TensorFlow back end, you’ll first need to install tensorflow or tensorflow-gpu, cuDNN, and other dependencies separately. Then run plaidbench with the --no-plaid option:

python plaidbench.py mobilenet --no-plaid

The output should look like the following:

Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0)
…
Example finished, elapsed: 9.71920609474 (compile), 7.94898986816 (execution)

PlaidML takes longer on the first run because of kernel compilation, but its execution time tends to beat TensorFlow + cuDNN even on the latest NVIDIA hardware; in this case it needs about 14% less time (6.80 versus 7.95 seconds for the 1024 inferences).

Closing thoughts

There is a lot more to do. In addition to the compatibility and performance work already mentioned, we’ll be adding documentation covering how to build the source and make modifications. To support researchers developing new architectures, we’ll be documenting the Tile language we use to add new ops (or layer types) in a device-portable way. Finally, by opening PlaidML’s source to contributions, we’re inviting outside collaborators to bring deep learning to new use cases and platforms. We’d love your involvement, from letting us know about your experiences, to sharing benchmark data, to contributing code. Let us know how we can help you get started.

You can get in touch with us through the PlaidML project on GitHub.

© 2017 Vertex.AI