MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems
Published in NIPS 2015 Workshop on Machine Learning Systems, 2015
Recommended citation: T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang. "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems." In NIPS 2015 Workshop on Machine Learning Systems, 2015.
Abstract
MXNet is a multi-language machine learning (ML) library that eases the development of ML algorithms, especially deep neural networks. Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation, and it offers automatic differentiation to derive gradients. MXNet is computationally and memory efficient and runs on a variety of heterogeneous systems, ranging from mobile devices to distributed GPU clusters.
This paper describes both the API design and the system implementation of MXNet, and explains how the embedding of symbolic expressions and tensor operations is handled in a unified fashion. Our preliminary experiments show promising results on large-scale deep neural network applications using multiple GPU machines.
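As a rough illustration of the mixed programming model summarized in the abstract, the sketch below uses MXNet's Python API (circa the 1.x releases): eager NDArray operations for imperative computation, and a symbolic graph that is bound and differentiated automatically. The network shape, layer names, and sizes are illustrative placeholders, not taken from the paper.

```python
import mxnet as mx

# Imperative side: NDArray operations execute eagerly, NumPy-style.
a = mx.nd.ones((2, 3))
b = a * 2 + 1                      # element-wise ops run immediately
print(b.asnumpy())

# Declarative side: describe a symbolic graph first, then bind and run it.
data = mx.sym.Variable('data')
fc1  = mx.sym.FullyConnected(data=data, num_hidden=128, name='fc1')
act1 = mx.sym.Activation(data=fc1, act_type='relu', name='relu1')
fc2  = mx.sym.FullyConnected(data=act1, num_hidden=10, name='fc2')
net  = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

# Bind the graph to concrete shapes and a device; calling backward()
# uses automatic differentiation to produce gradients.
exe = net.simple_bind(ctx=mx.cpu(), data=(32, 784), softmax_label=(32,))
exe.forward(is_train=True,
            data=mx.nd.zeros((32, 784)),
            softmax_label=mx.nd.zeros((32,)))
exe.backward()
grads = exe.grad_arrays            # gradients w.r.t. inputs and parameters
```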