Authors
Alexander Zlokapa, Hartmut Neven, Seth Lloyd
Publication date
2021/7/19
Journal
arXiv preprint arXiv:2107.09200
Description
Given the success of deep learning in classical machine learning, quantum algorithms for traditional neural network architectures may provide one of the most promising settings for quantum machine learning. Considering a fully-connected feedforward neural network, we show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems. We propose a quantum algorithm to approximately train a wide and deep neural network up to O(1/n) error for a training set of size n by performing sparse matrix inversion in O(log n) time. To achieve an end-to-end exponential speedup over gradient descent, the data distribution must permit efficient state preparation and readout. We numerically demonstrate that the MNIST image dataset satisfies such conditions; moreover, the quantum algorithm matches the accuracy of the fully-connected network. Beyond the proven architecture, we provide empirical evidence for training of a convolutional neural network with pooling.
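The abstract's core idea, replacing iterative gradient descent with a single linear solve, has a simple classical analogue: in the wide-network limit, training reduces to kernel regression, so the learned predictor comes from inverting one (regularized) kernel matrix. The sketch below illustrates that analogue only; it uses a stand-in RBF kernel rather than the paper's neural-tangent-kernel construction, and all function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Illustrative sketch: "training by matrix inversion" in the classical,
# wide-network (kernel regression) picture. The RBF kernel is a stand-in
# for the network-induced kernel; this is not the paper's quantum algorithm.

def rbf_kernel(A, B, gamma=1.0):
    # K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_by_inversion(X, y, reg=1e-6):
    # Instead of gradient descent, solve (K + reg*I) alpha = y once;
    # the small regularizer keeps the linear system well-posed.
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def predict(X_train, alpha, X_test):
    # Predictions are a kernel-weighted combination of training targets.
    return rbf_kernel(X_test, X_train) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sin(X[:, 0]) + np.cos(X[:, 1])

alpha = fit_by_inversion(X, y)
pred = predict(X, alpha, X)
print(float(np.abs(pred - y).max()))  # near-zero training error
```

The quantum speedup claimed in the paper comes from performing the analogous (sparse) matrix inversion with a quantum linear-systems solver, which is where the O(log n) scaling arises; the classical solve above costs O(n^3).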
Total citations
2021: 3, 2022: 3, 2023: 6, 2024: 3