At Baidu Create in Beijing, Intel Vice President Gadi Singer shared a series of collaborations with Baidu on artificial intelligence (AI), including powering Xeye, Baidu's new AI retail camera, with Intel Movidius vision processing units (VPUs); highlighting Baidu's plans to offer workload acceleration as a service using Intel FPGAs; and optimizing PaddlePaddle, Baidu's deep learning framework, for Intel Xeon Scalable processors.
“From enabling in-device intelligence, to providing data center scale on Intel Xeon Scalable processors, to accelerating workloads with Intel FPGAs, to making it simpler for PaddlePaddle developers to code across platforms, Baidu is taking advantage of Intel’s products and expertise to bring its latest AI advancements to life,” said Gadi Singer, vice president and architecture general manager of the Artificial Intelligence Products Group at Intel.
Baidu’s Xeye camera uses Intel Movidius Myriad 2 VPUs to deliver low-power, high-performance visual intelligence for retailers. By coupling Intel’s VPU-based vision processing with Baidu’s machine learning algorithms, the camera can analyze objects and gestures and detect people, enabling personalized shopping experiences in retail settings.
Baidu is developing a heterogeneous computing platform based on Intel’s latest field-programmable gate array (FPGA) technology. Intel FPGAs will accelerate performance and energy efficiency, add flexibility to data center workloads, and enable workload acceleration as a service on Baidu Cloud.
With PaddlePaddle now optimized for Intel Xeon Scalable processors, developers and data scientists can use the same hardware that powers the world’s data centers and clouds to advance their AI algorithms. PaddlePaddle is optimized for Intel technology at several levels, including compute, memory, architecture and communication.
Intel and Baidu are also exploring integration of PaddlePaddle and nGraph, a framework-neutral deep neural network (DNN) model compiler that can target a variety of devices. With nGraph, data scientists can write a model once and rely on the compiler to adapt it to train and run efficiently on different hardware platforms.
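The "write once, run on many backends" idea behind a framework-neutral compiler can be illustrated with a toy sketch in plain Python. This is not the actual nGraph API; all names here are hypothetical, and the point is only the separation between a device-independent model graph and per-device kernel tables:

```python
# Toy illustration of framework-neutral compilation: the same
# model graph is executed by different "backends" without the
# author rewriting the model. Hypothetical names, not nGraph.

class Node:
    def __init__(self, op, *inputs, value=None):
        self.op, self.inputs, self.value = op, inputs, value

def const(v):  return Node("const", value=v)
def add(a, b): return Node("add", a, b)
def mul(a, b): return Node("mul", a, b)

# Each backend maps graph ops to its own kernel implementations.
# A real accelerator backend would dispatch to device kernels.
BACKENDS = {
    "cpu":       {"add": lambda x, y: x + y, "mul": lambda x, y: x * y},
    "sim_accel": {"add": lambda x, y: x + y, "mul": lambda x, y: x * y},
}

def run(node, backend):
    """Recursively evaluate the graph using one backend's kernels."""
    if node.op == "const":
        return node.value
    kernels = BACKENDS[backend]
    args = [run(i, backend) for i in node.inputs]
    return kernels[node.op](*args)

# The model graph is defined once...
model = add(mul(const(2), const(3)), const(4))
# ...and can then be run on any registered backend.
print(run(model, "cpu"))        # 10
print(run(model, "sim_accel"))  # 10
```

In a real compiler such as nGraph the graph would be optimized and lowered to device-specific code rather than interpreted, but the contract is the same: the model definition stays device-independent while backends supply the hardware-specific execution.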