Abstract
Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection between a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis and then demonstrate that, with such a simple theoretical framework, we can obtain reasonable reconstruction results on real images. We also discuss gaps between our model assumptions and CNNs trained for classification in practical scenarios.
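To make the compressive-sensing analogy in the abstract concrete, here is a minimal toy sketch (not the paper's construction or experiments): a single random-weight linear layer is treated as a compressive measurement operator, and a sparse input is recovered from the layer's output by plain iterative hard thresholding. The dimensions, the i.i.d. Gaussian weight model, the omission of ReLU/pooling, and the use of plain IHT rather than a model-based recovery algorithm are all simplifying assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper):
# signal length n, number of measurements m, sparsity k.
n, m, k = 256, 160, 8

# Random "layer" weights, modeled as an i.i.d. Gaussian matrix.
W = rng.normal(size=(m, n)) / np.sqrt(m)

# A k-sparse ground-truth signal.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

# "Forward pass" of the random linear layer = compressive measurement.
y = W @ x_true

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

# Iterative hard thresholding (IHT): hard-thresholded gradient descent
# on ||y - W x||^2. The step size 1/||W||_2^2 keeps the residual from growing.
step = 1.0 / np.linalg.norm(W, 2) ** 2
x = np.zeros(n)
for _ in range(300):
    x = hard_threshold(x + step * W.T @ (y - W @ x), k)

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In this toy setting the relative error is essentially zero, which is the sense in which a random-weight layer can be "inverted" on sparse inputs; the paper's analysis extends this idea to the structured, multi-layer CNN setting.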
Original language | English |
---|---|
Title of host publication | 26th International Joint Conference on Artificial Intelligence, IJCAI 2017 |
Editors | Carles Sierra |
Publisher | International Joint Conferences on Artificial Intelligence |
Pages | 1703-1710 |
Number of pages | 8 |
ISBN (Electronic) | 9780999241103 |
DOIs | |
Publication status | Published - 2017 |
Event | 26th International Joint Conference on Artificial Intelligence, IJCAI 2017 - Melbourne, Australia |
Duration | 19 Aug 2017 → 25 Aug 2017 |
Publication series
Name | IJCAI International Joint Conference on Artificial Intelligence |
---|---|
Volume | 0 |
ISSN (Print) | 1045-0823 |
Other
Other | 26th International Joint Conference on Artificial Intelligence, IJCAI 2017 |
---|---|
Country/Territory | Australia |
City | Melbourne |
Period | 19 Aug 2017 → 25 Aug 2017 |
Bibliographical note
Funding Information: This work was supported in part by ONR N00014-16-1-2928, NSF CAREER IIS-1453651, and a Sloan Research Fellowship. We would like to thank Michael Wakin for helpful discussions about concentration of measure for structured random matrices.
All Science Journal Classification (ASJC) codes
- Artificial Intelligence