Domain decomposition based parallel computing for multi-scale coronary blood flow simulations

Minh Tuan Nguyen, Byoung Jin Jeon, Hyuk Jae Chang, Sang Wook Lee

Research output: Contribution to journal › Article

Abstract

In the present study, we demonstrate a parallel P2P1 finite element scheme with a four-step fractional splitting approach for multiscale coronary flow simulation. Three-dimensional (3D) computational fluid dynamics (CFD) for patient-specific coronary artery flow and zero-dimensional (0D) lumped-parameter network (LPN) models of the distal coronary beds were fully coupled, and an MPI parallel algorithm based on domain decomposition was applied. A parallel conjugate gradient (CG)-LPN subroutine for the 3D-0D coupled system with a monolithic scheme was derived; it provides a correct pressure solution that the conventional CG solver may fail to obtain, particularly when the subdomain division intersects a 3D-0D coupling outlet. The overall computing time of the parallel CG-LPN solver does not differ noticeably from that of the conventional CG solver, despite the extra MPI calls for data transfer at subdomain interfaces. For the BiCGSTAB solver, the block ILU(0) preconditioner showed favorable performance on a high-density mesh compared with the simple Jacobi preconditioner. MPI communication is the major bottleneck that saturates parallel performance at high core counts, but a computing time of less than 10 min per cardiac cycle on a medium-density mesh was attained for a patient-specific coronary flow simulation using 60 CPU cores in parallel, which is in an acceptable range for clinical practice. Further accuracy tests with a large set of patients are needed before the proposed technique can be widely used to support interventional decisions for coronary stenotic lesions in routine clinical practice.
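To make the role of the parallel CG-LPN solver concrete, the following is a minimal serial sketch of the conjugate gradient iteration, with comments marking where domain decomposition forces global communication. All names here are illustrative, not taken from the authors' code; the treatment of the 3D-0D coupling terms is an assumption based on the abstract's description.

```python
# Serial conjugate gradient (CG) sketch. In the parallel CG-LPN variant
# described above, each dot product becomes a local partial sum followed
# by an MPI_Allreduce across subdomains, and the matrix-vector product
# adds the 0D LPN contribution at the 3D-0D coupling outlet nodes
# (hypothetical placement; the paper derives the exact coupled form).

def dot(u, v):
    # Parallel version: local sum, then MPI_Allreduce(..., op=MPI.SUM).
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    # Parallel version: apply the local subdomain block, exchange
    # interface values with neighbour ranks, add LPN outlet terms.
    return [dot(row, x) for row in A]

def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small symmetric positive-definite test system: 4x + y = 1, x + 3y = 2.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)
# x ≈ [1/11, 7/11]
```

The point of the abstract's CG-LPN derivation is that when a subdomain boundary cuts through a coupling outlet, the naive per-rank reductions above no longer see a consistent global operator, so the coupled terms must be folded into the monolithic system before the reductions are performed.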

Original language: English
Article number: 104254
Journal: Computers and Fluids
Volume: 191
DOI: 10.1016/j.compfluid.2019.104254
Publication status: Published - 15 Sep 2019

Fingerprint

  • Lumped parameter networks
  • Flow simulation
  • Parallel processing systems
  • Blood
  • Decomposition
  • Subroutines
  • Data transfer
  • Parallel algorithms
  • Program processors
  • Computational fluid dynamics

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Engineering (all)

Cite this

@article{04f72c7e8fb147b7ba1d6a8e8f8cdfeb,
title = "Domain decomposition based parallel computing for multi-scale coronary blood flow simulations",
author = "Nguyen, {Minh Tuan} and Jeon, {Byoung Jin} and Chang, {Hyuk Jae} and Lee, {Sang Wook}",
year = "2019",
month = "9",
day = "15",
doi = "10.1016/j.compfluid.2019.104254",
language = "English",
volume = "191",
journal = "Computers and Fluids",
issn = "0045-7930",
publisher = "Elsevier Limited",

}

