Blockwise Recursive Moore-Penrose Inverse for Network Learning

Huiping Zhuang, Zhiping Lin, Kar Ann Toh

Research output: Contribution to journal › Article › peer-review


Training neural networks with the Moore-Penrose (MP) inverse has recently gained attention owing to its noniterative training nature. However, a significant drawback of learning based on the MP inverse is that its memory consumption grows with the size of the dataset. In this article, based on a partitioning of the MP inverse, we propose a blockwise recursive MP inverse formulation (BRMP) for network learning that has a low memory footprint while preserving training effectiveness. BRMP is exactly equivalent to its batchwise counterpart, since neither approximation nor assumption is made in its derivation. Further exploration of this recursive method leads to a switching structure among three different scenarios, which also reveals that the well-known recursive least squares (RLS) method is a special case of the proposed technique. We then apply BRMP to the training of radial basis function networks as well as multilayer perceptrons. The experimental validation covers both regression and classification tasks.
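The abstract's exact BRMP formulas are not reproduced here, but the equivalence it claims can be illustrated for the simplest of the three scenarios (full column rank, where the MP inverse reduces to the RLS special case). The following NumPy sketch updates the least-squares weights block by block via the Woodbury identity and checks that the result matches the batch MP-inverse solution; the variable names and the four-block split are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))   # design matrix, full column rank assumed
Y = rng.standard_normal((100, 2))   # targets

# Batch solution via the Moore-Penrose inverse
W_batch = np.linalg.pinv(X) @ Y

# Blockwise recursive solution (block RLS; covers only the
# full-rank scenario of the paper's three-way switching structure)
P = None  # running inverse of the Gram matrix (X^T X)^{-1}
W = None
for Xk, Yk in zip(np.array_split(X, 4), np.array_split(Y, 4)):
    if P is None:
        # Initialize from the first block
        P = np.linalg.inv(Xk.T @ Xk)
        W = P @ Xk.T @ Yk
    else:
        # Woodbury identity: rank-b update of P for the new block Xk
        G = np.linalg.inv(np.eye(len(Xk)) + Xk @ P @ Xk.T)
        P = P - P @ Xk.T @ G @ Xk @ P
        # Correct the weights using the new block's residual
        W = W + P @ Xk.T @ (Yk - Xk @ W)

# Recursive and batch solutions agree to numerical precision
assert np.allclose(W, W_batch)
```

Only the d-by-d matrix `P` and the current block are held in memory, which is the low-memory property the abstract refers to; the rank-deficient scenarios require the paper's additional update rules and are not covered by this sketch.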

Original language: English
Pages (from-to): 3237-3250
Number of pages: 14
Journal: IEEE Transactions on Systems, Man, and Cybernetics: Systems
Issue number: 5
Publication status: Published - 2022 May 1

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Human-Computer Interaction
  • Computer Science Applications
  • Electrical and Electronic Engineering

