
Abstract
Deep learning has been transforming our ability to execute advanced inference tasks using computers. We introduce a physical mechanism to perform machine learning by demonstrating an all-optical Diffractive Deep Neural Network (D2NN) architecture that can implement various functions following the deep learning-based design of passive diffractive layers that work collectively. We created 3D-printed D2NNs that implement classification of images of handwritten digits and fashion products, as well as the function of an imaging lens, at terahertz wavelengths. Our all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection, and object classification; it will also enable new camera designs and optical components that perform unique tasks using D2NNs.
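To make the layered, all-optical design concrete, the following is a minimal illustrative sketch (not the authors' code) of how a D2NN can be modeled and trained in software: each passive diffractive layer is represented by a trainable phase-only mask, free-space propagation between layers is simulated with the scalar angular-spectrum method, and the intensity at the output plane serves as the network's response. The grid size, pixel pitch, wavelength, and layer spacing below are placeholder assumptions, not values taken from the paper.

```python
# Minimal D2NN-style sketch (illustrative only): trainable phase-only
# diffractive layers separated by free-space propagation, simulated with the
# scalar angular-spectrum method. All numerical values are assumptions.
import torch
import torch.nn as nn

N = 200               # simulation grid (pixels per side), assumed
DX = 400e-6           # pixel pitch [m], assumed
WAVELENGTH = 750e-6   # ~0.4 THz illumination, assumed
Z = 30e-3             # layer-to-layer spacing [m], assumed


def angular_spectrum_kernel(n, dx, wavelength, z):
    """Transfer function for free-space propagation over a distance z."""
    fx = torch.fft.fftfreq(n, d=dx)
    fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2.0 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    return torch.exp(1j * kz * z) * (arg > 0)  # discard evanescent components


class D2NN(nn.Module):
    def __init__(self, n_layers=5):
        super().__init__()
        # each passive diffractive layer is a learnable phase mask
        self.phases = nn.ParameterList(
            [nn.Parameter(2.0 * torch.pi * torch.rand(N, N)) for _ in range(n_layers)]
        )
        self.register_buffer("H", angular_spectrum_kernel(N, DX, WAVELENGTH, Z))

    def propagate(self, field):
        return torch.fft.ifft2(torch.fft.fft2(field) * self.H)

    def forward(self, field):          # field: complex input, shape (B, N, N)
        for phase in self.phases:
            field = self.propagate(field) * torch.exp(1j * phase)
        field = self.propagate(field)  # propagate to the detector plane
        return field.abs() ** 2        # intensity measured at the output plane
```

In such a sketch, the phase values would be optimized by error backpropagation against a classification loss on the detected output intensities; the trained phase maps would then correspond to the physical phase/thickness profiles of the 3D-printed passive layers.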
Fig. 4 Fashion product classifier D2NN.
(A) As an example, the output image of the 3D-printed D2NN for a sandal input (Fashion MNIST class #5) is shown; the red dotted squares mark the trained detector regions for each fashion product. Additional experimental examples are shown in fig. S10. (B) The confusion matrix and the energy distribution percentages for our experimental results, obtained with 50 different 3D-printed fashion products (i.e., 5 per class) selected from among the images for which numerical testing was successful. (C) Same as (B), except that it summarizes our numerical testing results for 10,000 different fashion products (~1,000 per class), yielding a classification accuracy of 81.13% with a 5-layer design. Increasing the number of diffractive layers to 10 improved the classification accuracy to 86.60% (fig. S5).
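As a hedged illustration of the readout described in this caption, the sketch below (continuing the assumptions of the earlier model, not the authors' implementation) integrates the output-plane optical energy over ten detector regions, reports the per-class energy distribution as percentages, assigns the class whose detector collects the maximum energy, and accumulates a confusion matrix over a test set. The detector-region coordinates are hypothetical placeholders.

```python
# Hedged sketch of the detector-plane readout: class scores are the optical
# energies collected by ten detector regions, the prediction is the region
# with maximum energy, and a confusion matrix is accumulated over the test
# set. Region coordinates are hypothetical, not taken from the paper.
import torch


def detector_regions(n=200, size=20):
    """Hypothetical layout: ten square detector regions on a 2 x 5 grid."""
    regions = []
    for row in range(2):
        for col in range(5):
            y0 = n // 4 + row * (n // 2) - size // 2
            x0 = n // 10 + col * (n // 5) - size // 2
            regions.append((y0, y0 + size, x0, x0 + size))
    return regions


def classify(intensity, regions):
    """intensity: (B, N, N) output-plane intensity from the forward model."""
    energies = torch.stack(
        [intensity[:, y0:y1, x0:x1].sum(dim=(1, 2)) for y0, y1, x0, x1 in regions],
        dim=1,
    )                                                  # (B, 10) detected energy
    percent = 100.0 * energies / energies.sum(dim=1, keepdim=True)
    return energies.argmax(dim=1), percent             # predicted class, energy %


def confusion_matrix(pred, target, n_classes=10):
    cm = torch.zeros(n_classes, n_classes, dtype=torch.long)
    for t, p in zip(target.tolist(), pred.tolist()):
        cm[t, p] += 1                                  # rows: true, cols: predicted
    return cm
```

Under this readout, the accuracy quoted in (C) corresponds to the fraction of test images whose maximum-energy detector coincides with the true class, i.e., the trace of the confusion matrix divided by the number of test images.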