@atry 2017-04-10T10:31:28.000000Z

Monadic Deep Learning


Abstract

Computational graph approach

Most deep learning frameworks(1, 2) treat neural networks as computational graphs. A node in the graph represents a mathematical operation, a function, or even a control-flow operation.
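
For illustration, here is a minimal Scala sketch of that idea; the `Node` ADT and `Graph.eval` interpreter are hypothetical simplifications, not any framework's actual API:

```scala
// A computational graph is data built ahead of time,
// then evaluated by a separate interpreter.
sealed trait Node
final case class Const(value: Double) extends Node
final case class Add(left: Node, right: Node) extends Node
final case class Mul(left: Node, right: Node) extends Node

object Graph {
  // Evaluate the graph only after it has been fully constructed.
  def eval(node: Node): Double = node match {
    case Const(v)  => v
    case Add(l, r) => eval(l) + eval(r)
    case Mul(l, r) => eval(l) * eval(r)
  }

  def main(args: Array[String]): Unit = {
    // The whole computation is declared as a graph first...
    val graph = Add(Mul(Const(2.0), Const(3.0)), Const(1.0))
    // ...and executed later, so host-language control flow
    // cannot appear inside the graph itself.
    println(eval(graph)) // 7.0
  }
}
```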

However, neural networks built with those frameworks interoperate poorly with their hosting language. For example, a neural network written in Python cannot use any native Python functions, data structures, or control flow while the network is running.

Automatic differentiation approach

Recent studies reveal that neural networks and programs are isomorphic(3). Some other libraries treat neural networks as ordinary programs with the ability of automatic differentiation(4, 5), so ordinary control-flow operations are allowed in neural networks.
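
For illustration, here is a minimal Scala sketch of that style, using forward-mode automatic differentiation with dual numbers; the `Dual` type and the function `f` are hypothetical, not any of the cited libraries' APIs. Because the differentiable program is an ordinary function, ordinary control flow such as `if` works inside it:

```scala
// A dual number carries a value together with its derivative,
// and arithmetic propagates both.
final case class Dual(value: Double, derivative: Double) {
  def +(that: Dual): Dual =
    Dual(value + that.value, derivative + that.derivative)
  def *(that: Dual): Dual =
    Dual(value * that.value, derivative * that.value + value * that.derivative)
}

object AutoDiff {
  // An ordinary Scala function with ordinary control flow:
  // f(x) = x * x when x > 0, otherwise f(x) = x + x
  def f(x: Dual): Dual = if (x.value > 0) x * x else x + x

  def main(args: Array[String]): Unit = {
    val x = Dual(3.0, 1.0) // seed derivative dx/dx = 1
    println(f(x)) // Dual(9.0,6.0): f(3) = 9, f'(3) = 6
  }
}
```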

Unfortunately, those libraries perform poorly. They can neither perform multiple calculations in parallel nor enqueue multiple commands to CUDA streams. Because the programs are written directly by users, those libraries have no room to optimize the computation process.

Our approach

In DeepLearning.scala, we introduce a new approach that treats neural networks as monads. Users create neural networks in almost the same way as ordinary programs, and all Scala language features are available in neural networks. At the same time, the DeepLearning.scala runtime is still able to schedule computation onto GPUs and CPUs in parallel.
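
The following is a minimal sketch of the monadic idea; the `Layer` type and the `dense` helper are hypothetical simplifications, not DeepLearning.scala's actual API. Because a network is just a monadic value, it composes with ordinary Scala `for` comprehensions, and an interpreter of the monad is free to decide how and where the underlying computations execute:

```scala
// A network is a monadic value; the runtime interprets the
// flatMap chain instead of running user code eagerly.
final case class Layer[A](forward: () => A) {
  def map[B](f: A => B): Layer[B] = Layer(() => f(forward()))
  def flatMap[B](f: A => Layer[B]): Layer[B] =
    Layer(() => f(forward()).forward())
}

object MonadicNetwork {
  def dense(input: Vector[Double], weights: Vector[Double]): Layer[Double] =
    Layer(() => input.zip(weights).map { case (x, w) => x * w }.sum)

  def main(args: Array[String]): Unit = {
    val input = Vector(1.0, 2.0, 3.0)
    // Ordinary Scala features, such as `for` comprehensions,
    // work when composing the network.
    val network: Layer[Double] = for {
      hidden <- dense(input, Vector(0.1, 0.2, 0.3))
      output <- dense(Vector(hidden, hidden), Vector(0.5, 0.5))
    } yield output
    println(network.forward()) // 1.4
  }
}
```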

In addition, our monads manage resources automatically, without depending on garbage collection. As a result, unlike other Lua or JVM frameworks, our framework never leaks memory.
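
The following is a minimal sketch of how a monad can manage resources deterministically, without garbage collection; the `Managed` type and `buffer` helper are hypothetical, not our framework's actual implementation. Each acquired resource carries its release action, and `flatMap` guarantees that releases run as soon as the computation finishes:

```scala
// A resource-managing monad: acquire returns a value paired with
// the action that releases it.
final case class Managed[A](acquire: () => (A, () => Unit)) {
  def flatMap[B](f: A => Managed[B]): Managed[B] = Managed { () =>
    val (a, releaseA) = acquire()
    try {
      val (b, releaseB) = f(a).acquire()
      (b, () => { releaseB(); releaseA() })
    } catch {
      case e: Throwable => releaseA(); throw e
    }
  }
  def map[B](f: A => B): Managed[B] = Managed { () =>
    val (a, release) = acquire()
    (f(a), release)
  }
  // Run the computation, then release every resource deterministically.
  def use[B](f: A => B): B = {
    val (a, release) = acquire()
    try f(a) finally release()
  }
}

object ManagedExample {
  def buffer(name: String): Managed[Array[Double]] = Managed { () =>
    println(s"allocate $name")
    (new Array[Double](1024), () => println(s"release $name"))
  }

  def main(args: Array[String]): Unit = {
    val program = for {
      a <- buffer("a")
      b <- buffer("b")
    } yield a.length + b.length
    println(program.use(identity)) // releases b, then a, without GC
  }
}
```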
