Vertex.AI brings the full power of deep learning to every device, big or small. We add visual intelligence to edge devices with our Vision framework and deliver advanced deep learning capabilities to any device with our fully supported Vantage platform.
Do you have a uniquely challenging machine vision problem or deployment scenario? Our experts are here to help.
Apply state-of-the-art deep neural networks to detect, classify, track, and identify objects and other visual elements in real time. Building on Vantage, Vision is compatible with nearly any device and operating system. Batteries included, PhD optional.
Everything you need to deploy deep learning to fit your unique requirements. Vantage provides a fully supported software platform that unlocks the deep learning capability of chips from low-power mobile processors to full-size GPUs, all while maintaining compatibility with popular open source tools.
PlaidML is the fastest and most complete tensor compiler available today. We’ve posted other articles that explain our unique approach. Here we present a performance and feature comparison of PlaidML to the other publicly available tensor compilers (TVM and Tensor Comprehensions).
Automatic Kernel Generation in PlaidML — May 19, 2018
A detailed look at the unique ways that PlaidML automatically performs several key aspects of generating efficiently parallelized kernels, including streamlining cache performance and minimizing edge-case conditionals.
Fully Automatic Differentiation for Tensor Expressions — May 17, 2018
Deep learning advances routinely require the construction of new neural network operations. Adding such operations has proven to be a labor-intensive step that introduces delays on the order of months. PlaidML enables researchers to add ops in hours through sophisticated code generation algorithms built on the Tile language. This post explains a portion of this process, unique to PlaidML, that automatically generates gradient kernels, and compares it to related work such as Tensor Comprehensions and NNVM/TVM.
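To make this concrete, here is a minimal sketch of how a matrix multiplication might be written in the Tile language, adapted from the contraction syntax shown in PlaidML's documentation (treat it as illustrative rather than canonical):

```
function (A[M, L], B[L, N]) -> (C) {
    C[i, j : M, N] = +(A[i, k] * B[k, j]);
}
```

The `+(...)` contraction sums over every index that does not appear on the left-hand side (here, `k`). Because the definition is a declarative tensor expression rather than an imperative kernel, PlaidML can derive both the parallelized forward kernel and the corresponding gradient kernels automatically, which is what reduces adding a new op from months to hours.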
Our overarching goal is to bring intelligence to more devices. Today, deep learning research is delivering powerful new accuracy in areas like image understanding, speech recognition, language translation, and more. At the same time, getting the necessary computations running on most chips requires rare expertise and substantial software development effort. We are addressing that problem at two layers: first by making it possible with Vantage to run deep neural nets on a wide variety of chips, and second by building on top of that Vision, our ready-to-use toolbox of pre-packaged visual intelligence capabilities. We back each with hands-on support and training to ensure your challenge is solved.