Vertex.AI brings the full power of deep learning to every device, big or small. We add visual intelligence capabilities to edge devices with our Vision framework and deliver advanced deep learning capabilities to any device with our fully supported Vantage platform.
Do you have a uniquely challenging machine vision problem or deployment scenario? Our experts are here to help.
Apply state-of-the-art deep neural networks to detect, classify, track, and identify objects and other visual elements in real time. Built on Vantage, Vision is compatible with nearly any device and operating system. Batteries included, PhD optional.
Everything you need to deploy deep learning, tailored to your unique requirements. Vantage provides a fully supported software platform that unlocks the deep learning capability of chips from low-power mobile processors to full-size GPUs, all while maintaining compatibility with popular open source tools.
Practical Embedded Object Detection with PlaidML — Jan 23, 2018
PlaidML allows GPU accelerated applications to be deployed on almost any hardware. We introduce microplaid – an open source set of tools for developing accelerated object detection applications on embedded devices. We provide a parts list and outline using microplaid to build a mobile object detector based on the UP Squared board.
Deep Learning for Everyone: PlaidML for Windows — Nov 22, 2017
When we first announced PlaidML we promised to bring deep learning to every platform. With today’s release of preliminary Windows support we’re moving much closer to that goal – PlaidML now supports all the common desktop and server platforms.
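For readers who want to try the release described above, PlaidML plugs into Keras as a drop-in backend. A minimal sketch, assuming the `plaidml-keras` package and Keras's documented `KERAS_BACKEND` environment variable (the shell steps in the comments are part of PlaidML's documented setup flow):

```python
import os

# One-time setup from a shell (not run here):
#   pip install plaidml-keras
#   plaidml-setup          # interactively choose a GPU or CPU device
#
# Point Keras at the PlaidML backend. This must happen before
# `import keras`, since Keras reads the variable at import time.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

# Any subsequent `import keras` in this process will now dispatch
# tensor operations through PlaidML instead of TensorFlow.
print(os.environ["KERAS_BACKEND"])
```

The same environment variable works on Windows, Linux, and macOS, which is what makes the cross-platform claim practical from existing Keras code.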
Tile: A New Language for Machine Learning — Nov 10, 2017
With the release of the PlaidML machine learning framework, Vertex.AI is helping make accelerated machine learning on every platform a reality. Historically, the key obstacle to acceleration on a wide range of platforms has been software support, which is constrained by the laborious implementation of libraries of hand-crafted software “kernels” for each processor. PlaidML takes a different approach, using a tensor manipulation language we’ve developed called Tile to automatically generate the kernels, making it many times easier to add support for GPUs and new types of processors. Our benchmarks show that this approach is competitive with existing frameworks on NVIDIA GPUs, while also extending compatibility to other common GPUs such as those from AMD and Intel.
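Tile describes an operation as an index expression over tensors, and the kernels are generated from that single description. As a rough illustration only (not PlaidML's actual code generator), the comment below paraphrases how a matrix multiply looks in Tile, followed by a plain-Python reference loop computing the same contraction:

```python
# In Tile, matrix multiply is written (approximately) as one contraction:
#
#   function (A[M, K], B[K, N]) -> (C) {
#       C[i, j : M, N] = +(A[i, k] * B[k, j]);
#   }
#
# The "+(...)" sums over every index that appears on the right but not
# the left (here, k). The loop below is a direct reference version.

def matmul(a, b):
    """Contraction C[i][j] = sum_k A[i][k] * B[k][j] over nested lists."""
    m, k = len(a), len(a[0])
    k2, n = len(b), len(b[0])
    assert k == k2, "inner dimensions must match"
    c = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for kk in range(k):
                c[i][j] += a[i][kk] * b[kk][j]
    return c

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Because the Tile form captures the full iteration space declaratively, a compiler can emit a tuned kernel for each target device from the one definition, rather than requiring a hand-written kernel per processor.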
Our overarching goal is to bring intelligence to more devices. Today, deep learning research is achieving powerful new levels of accuracy in areas like image understanding, speech recognition, language translation, and more. At the same time, getting the necessary computations running on most chips requires rare expertise and substantial software development effort. We are addressing that problem at two layers: first, Vantage makes it possible to run deep neural nets on a wide variety of chips; second, Vision builds on top of that a ready-to-use toolbox of pre-packaged visual intelligence capabilities. We back both with hands-on support and training to ensure your challenge is solved.