At DeepScale, we are primarily focused on serving the automotive industry. We are a product-oriented company, and we will release more details of our product direction throughout 2017. Meanwhile, our product development efforts have required us to break new ground on core research problems in machine learning and deep learning. Where possible, we like to give back to the research community by releasing our discoveries in the form of technical reports and open-source software. In this blog post, we summarize what we have released so far. Several of these projects were done in collaboration with researchers at UC Berkeley, some of whom have since joined the DeepScale team.

SqueezeNet

SqueezeNet is a deep neural network (DNN) model that we developed with the goal of making the model as small as possible while preserving reasonable accuracy on a computer vision dataset. We do not propose that SqueezeNet should be applied directly to automated driving problems. Rather, we released SqueezeNet to illustrate a DNN architectural style that is conducive to developing small DNN models.
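
The core building block of SqueezeNet is the "Fire" module: a 1x1 "squeeze" layer that reduces the channel count, followed by parallel 1x1 and 3x3 "expand" layers. Below is a minimal PyTorch sketch of this module; the layer sizes shown are illustrative, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Sketch of SqueezeNet's Fire module (layer sizes are illustrative)."""

    def __init__(self, in_channels, squeeze, expand1x1, expand3x3):
        super().__init__()
        # Squeeze: 1x1 convolutions cut the channel count before the
        # more expensive 3x3 filters see the data.
        self.squeeze = nn.Conv2d(in_channels, squeeze, kernel_size=1)
        # Expand: a mix of cheap 1x1 filters and spatially-aware 3x3 filters.
        self.expand1x1 = nn.Conv2d(squeeze, expand1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze, expand3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenate the two expand branches along the channel dimension.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example: 96 input channels squeezed to 16, expanded back out to 64 + 64.
fire = Fire(96, squeeze=16, expand1x1=64, expand3x3=64)
out = fire(torch.randn(1, 96, 55, 55))  # -> shape (1, 128, 55, 55)
```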

Paper: https://arxiv.org/abs/1602.07360

Code: https://github.com/DeepScale/SqueezeNet

SqueezeDet

While SqueezeNet is designed for full-image classification, SqueezeDet performs object detection: localizing objects in an image and classifying them. As of December 2016, SqueezeDet is simultaneously the fastest, smallest, and most accurate (in terms of mean average precision) model on the KITTI object detection benchmark. We do not propose that SqueezeDet should be applied directly to automated driving problems. Rather, SqueezeDet exposes one of the DNN architectural styles that we are evaluating for object detection problems.
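
SqueezeDet's detection head is itself convolutional: a single conv layer maps each cell of the backbone's feature map to a set of anchor predictions, each carrying class scores, a confidence score, and four bounding-box deltas. The sketch below illustrates that idea in PyTorch; the channel counts, anchor count, and feature-map size are illustrative, not SqueezeDet's exact configuration.

```python
import torch
import torch.nn as nn

num_classes = 3   # e.g. car, pedestrian, cyclist on KITTI
num_anchors = 9   # anchor shapes evaluated at each feature-map cell

# Each anchor predicts: num_classes class scores + 1 confidence + 4 box deltas.
outputs_per_anchor = num_classes + 1 + 4
det_head = nn.Conv2d(in_channels=512,
                     out_channels=num_anchors * outputs_per_anchor,
                     kernel_size=3, padding=1)

features = torch.randn(1, 512, 24, 78)   # feature map from the backbone
preds = det_head(features)               # shape (1, 9 * 8, 24, 78)

# Reshape so each anchor's predictions form a separate slice.
b, _, h, w = preds.shape
preds = preds.view(b, num_anchors, outputs_per_anchor, h, w)
```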

Paper: https://arxiv.org/abs/1612.01051

Code: https://github.com/BichenWuUCB/squeezeDet

BeaverDam

Labeled data is a key resource for training visual perception models, and we collect large quantities of it. To label this data, we developed the BeaverDam tool. In the open-source version of BeaverDam, we provide a web interface that allows workers to complete tasks such as drawing rectangles around objects in videos.
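
The output of such a tool is a set of per-frame bounding-box labels. As a purely hypothetical illustration (the field names below are invented, not BeaverDam's actual schema), one annotation record might look like this:

```python
# Hypothetical annotation record; BeaverDam's real output format may differ.
annotation = {
    "video": "drive_0042.mp4",
    "frame": 317,                                    # frame index in the video
    "label": "car",
    "bbox": {"x": 412, "y": 185, "w": 96, "h": 54},  # pixels, top-left origin
    "worker_id": "w-1029",                           # who drew the rectangle
}
```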

Paper: https://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-193.html

Code: https://github.com/antingshen/BeaverDam

FireCaffe

On a single GPU or a single-socket CPU system, DNNs can take weeks or months to train on publicly available datasets. Scaling DNN training over a cluster of servers enables faster time-to-solution, and it also enables training DNNs on larger volumes of data in a fixed amount of time. We developed the FireCaffe training system to address this problem. Using FireCaffe, we accelerated the training of the GoogLeNet model from 3 weeks to 10 hours. While we have not released the FireCaffe implementation, it appears that our FireCaffe paper has helped to popularize the approach of synchronous data parallelism for distributing DNN training over a cluster of servers. Today at DeepScale, we use distributed DNN training to rapidly and productively experiment with new DNN architectures on large volumes of training data.
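
In synchronous data parallelism, each worker computes gradients on its own shard of a batch, the gradients are summed across workers with an all-reduce, and every worker then applies the identical weight update. Here is a minimal sketch of one such training step, shown with PyTorch's torch.distributed rather than FireCaffe's Caffe-based implementation; it assumes a process group has already been initialized.

```python
import torch
import torch.distributed as dist

def synchronous_step(model, loss_fn, inputs, targets, optimizer):
    """One step of synchronous data-parallel SGD (sketch).

    Assumes dist.init_process_group() has been called and that `inputs`
    and `targets` are this worker's shard of the global batch.
    """
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()  # local gradients for this worker's shard

    # Sum gradients across all workers, then average. After this loop,
    # every worker holds the same gradient, so every worker makes the
    # same update and the model replicas stay in sync.
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size

    optimizer.step()
    return loss
```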

Paper (published at CVPR 2016): https://arxiv.org/abs/1511.00175

Stay tuned for more contributions to open-source software in 2017.