Google launches TensorFlow-based vision recognition kit for RPi Zero W
Google’s $45 “AIY Vision Kit” for the Raspberry Pi Zero W performs TensorFlow-based vision recognition using a “VisionBonnet” board with a Movidius chip.
Google’s AIY Vision Kit for on-device neural network acceleration follows an earlier AIY Projects voice/AI kit for the Raspberry Pi that shipped to MagPi subscribers back in May. Like the voice kit and the older Google Cardboard VR viewer, the new AIY Vision Kit has a cardboard enclosure. The kit differs from the Cloud Vision API, which was demoed in 2015 with a Raspberry Pi-based GoPiGo robot, in that it runs entirely on local processing power rather than requiring a cloud connection. The AIY Vision Kit is available now for pre-order at $45, with shipments due in early December.
AIY Vision Kit, fully assembled (left) and Raspberry Pi Zero W
The kit’s key processing element, aside from the 1GHz ARM11-based Broadcom BCM2835 SoC found on the required Raspberry Pi Zero W SBC, is Google’s new VisionBonnet RPi accessory board. The VisionBonnet pHAT board uses a Movidius MA2450, a version of the Movidius Myriad 2 vision processing unit (VPU). On the VisionBonnet, the processor runs Google’s open source TensorFlow machine intelligence library for neural networking. The chip enables visual perception processing at up to 30 frames per second.
The AIY Vision Kit requires a user-supplied RPi Zero W, a Raspberry Pi Camera v2, and a 16GB micro SD card for downloading the Linux-based image. The kit includes the VisionBonnet, an RGB arcade-style button, a piezo speaker, a macro/wide lens kit, and the cardboard enclosure. You also get flex cables, standoffs, a tripod mounting nut, and connecting components.
AIY Vision Kit components (left) and VisionBonnet accessory board
Three neural network models are available. There’s a general-purpose model that can recognize 1,000 common objects, a facial detection model that can also score facial expression on a “joy scale” that ranges from “sad” to “laughing,” and a model that can identify whether an image contains a dog, cat, or human. The 1,000-object model derives from Google’s open source MobileNets, a family of TensorFlow-based computer vision models designed for the restricted resources of mobile and embedded devices.
MobileNet models offer low latency and low power consumption, and are parameterized to meet the resource constraints of different use cases. The models can be built for classification, detection, embeddings, and segmentation, says Google. Earlier this month, Google released a developer preview of a mobile-friendly TensorFlow Lite library for Android and iOS that is compatible with MobileNets and the Android Neural Networks API.
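To give a feel for what a MobileNet classifier does, here is a minimal sketch using TensorFlow’s Keras API. It is illustrative only: the Vision Kit runs its models on the VisionBonnet’s Myriad 2 rather than through this desktop-style code path, and the image path is a placeholder.

```python
# Minimal MobileNet image-classification sketch using TensorFlow's Keras API.
# Illustrative only -- not the AIY Vision Kit's on-device pipeline.
import numpy as np
from tensorflow.keras.applications.mobilenet import (
    MobileNet, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Load a MobileNet pre-trained on the 1,000-class ImageNet dataset.
model = MobileNet(weights="imagenet")

# "photo.jpg" is a placeholder path for any input image.
img = image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Run inference and print the top three predicted object labels.
preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2%}")
```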
AIY Vision Kit assembly views
In addition to the three models, the AIY Vision Kit provides basic TensorFlow code and a compiler, so users can develop their own models. Python developers can also write new software to customize the RGB button colors, piezo element sounds, and the four GPIO pins on the VisionBonnet, which can drive additional lights, buttons, or servos. Potential applications include recognizing food items, opening a dog door based on visual input, sending a text when your car leaves the driveway, or playing particular music when facial recognition spots a person entering the camera’s view.
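The article doesn’t show the kit’s own Python API, but the wiring pattern it describes can be sketched with the generic gpiozero library for the Raspberry Pi. The pin numbers and the detects_dog() stub below are hypothetical placeholders, not part of the AIY software.

```python
# Hypothetical sketch: react to a vision-model result by driving GPIO.
# Uses the generic gpiozero library; pin numbers and the detects_dog()
# stub are illustrative assumptions, not the AIY Vision Kit's actual API.
from time import sleep
from gpiozero import LED, Buzzer

status_led = LED(17)   # e.g. an extra light on one of the spare GPIO pins
buzzer = Buzzer(27)    # e.g. a piezo element wired to GPIO 27

def detects_dog() -> bool:
    """Placeholder for a call into the kit's dog/cat/human model."""
    return False

while True:
    if detects_dog():
        status_led.on()                              # light up when a dog is in frame
        buzzer.beep(on_time=0.2, off_time=0.2, n=3)  # and beep three times
    else:
        status_led.off()
    sleep(1)
```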
Myriad 2 VPU block diagram (left) and reference board
The Movidius Myriad 2 processor delivers teraflop-level performance within a nominal 1 Watt power envelope. The chip appeared on early Project Tango reference platforms, and is built into the Ubuntu-driven Fathom neural processing USB stick that Movidius debuted in May 2016, before the company was acquired by Intel. According to Movidius, the Myriad 2 is available “in millions of devices on the market today.”
Further information
The AIY Vision Kit is available for pre-order from Micro Center at $44.99, with shipments due in early December. More information may be found in the AIY Vision Kit announcement, Google Blog notice, and Micro Center shopping page.
via: http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
Author: Eric Brown  Translator: (translator ID)  Proofreader: (proofreader ID)