Several recent works have empirically observed that Convolutional Neural Networks (CNNs) are (approximately) invertible. To understand this approximate-invertibility phenomenon and how to leverage it more effectively, we develop a theoretical explanation: a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We establish an exact connection between a particular model of model-based compressive sensing (and its recovery algorithms) and CNNs with random weights. We show empirically that several learned networks are consistent with our mathematical analysis, and then demonstrate that even with this simple theoretical framework we can obtain reasonable reconstruction results on real images. We also discuss the gaps between our model assumptions and CNNs trained for classification in practical scenarios.
|Title of host publication||26th International Joint Conference on Artificial Intelligence, IJCAI 2017|
|Publisher||International Joint Conferences on Artificial Intelligence|
|Number of pages||8|
|Publication status||Published - 2017|
|Event||26th International Joint Conference on Artificial Intelligence, IJCAI 2017 - Melbourne, Australia|
Duration: 2017 Aug 19 → 2017 Aug 25
|Name||IJCAI International Joint Conference on Artificial Intelligence|
|Other||26th International Joint Conference on Artificial Intelligence, IJCAI 2017|
|Period||17/8/19 → 17/8/25|
Bibliographical note
Funding Information:
This work was supported in part by ONR N00014-16-1-2928, NSF CAREER IIS-1453651, and a Sloan Research Fellowship. We would like to thank Michael Wakin for helpful discussions about concentration of measure for structured random matrices.
All Science Journal Classification (ASJC) codes
- Artificial Intelligence