Abstract
High inference times of machine learning-based axon tracing algorithms pose a significant challenge to the practical analysis and interpretation of large-scale brain imagery. This paper explores a distributed data pipeline that employs a SLURM-based job array to run multiple machine learning predictions simultaneously. Image volumes are split into N (1-16) equal chunks, each handled by a separate compute node, and the per-chunk outputs are stitched back together into a single 3D prediction. Preliminary results comparing the inference speed of 1-node versus 16-node job arrays demonstrated a 90.95% decrease in compute time for a 32 GB input volume and an 88.41% decrease for a 4 GB input volume. The general pipeline may serve as a baseline for future, improved implementations on larger input volumes that can be tuned to various application domains.
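The chunk-and-stitch scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes splitting along a single axis with NumPy, and the function names (`split_volume`, `stitch_predictions`) and the identity "prediction" are placeholders for the actual model inference. In the real pipeline, each chunk would be dispatched to its own SLURM array task (e.g. indexed by `SLURM_ARRAY_TASK_ID`) rather than processed serially.

```python
import numpy as np

def split_volume(volume, n_chunks, axis=0):
    """Split a 3D image volume into n roughly equal chunks along one axis."""
    return np.array_split(volume, n_chunks, axis=axis)

def stitch_predictions(chunks, axis=0):
    """Concatenate per-chunk predictions back into a single 3D volume."""
    return np.concatenate(chunks, axis=axis)

# Toy stand-in for a volume; the paper's inputs are 4-32 GB.
volume = np.random.rand(64, 32, 32)

# Each chunk would normally go to a separate compute node; here we map a
# placeholder identity "prediction" over the chunks serially.
chunks = split_volume(volume, n_chunks=16)
predictions = [chunk for chunk in chunks]  # model inference would go here
stitched = stitch_predictions(predictions)

assert stitched.shape == volume.shape
```

Because `np.array_split` tolerates axis lengths that are not divisible by `n_chunks`, the same round trip works for arbitrary volume sizes.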
| Original language | English |
| --- | --- |
| Title of host publication | 2022 IEEE High Performance Extreme Computing Conference, HPEC 2022 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| ISBN (Electronic) | 9781665497862 |
| DOIs | |
| Publication status | Published - 2022 |
| Event | 2022 IEEE High Performance Extreme Computing Conference, HPEC 2022 - Virtual, Online, United States. Duration: 2022 Sept 19 → 2022 Sept 23 |
Publication series
| Name | 2022 IEEE High Performance Extreme Computing Conference, HPEC 2022 |
| --- | --- |
Conference
| Conference | 2022 IEEE High Performance Extreme Computing Conference, HPEC 2022 |
| --- | --- |
| Country/Territory | United States |
| City | Virtual, Online |
| Period | 2022 Sept 19 → 2022 Sept 23 |
Bibliographical note
Funding Information: The authors would like to acknowledge Adam Michaleas and the MIT Lincoln Laboratory Supercomputing Center (LLSC) for their support of high performance computing tasks.
Publisher Copyright:
© 2022 IEEE.
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Computer Science Applications
- Hardware and Architecture
- Software
- Computational Mathematics
- Numerical Analysis