Adaptive error-constrained method for LMS algorithms and applications

Research output: Contribution to journal › Article

8 Citations (Scopus)

Abstract

An adaptive error-constrained least mean square (AECLMS) algorithm is derived using adaptive error-constrained optimization techniques. This is accomplished by modifying the cost function of the LMS algorithm using augmented Lagrangian multipliers. Theoretical analyses of the proposed method are presented in detail. The method shows improved performance in terms of convergence speed and misadjustment. The proposed adaptive error-constrained method can easily be applied to and combined with other LMS-type stochastic algorithms. Therefore, we also apply the method to the constant modulus criterion for blind equalization and to the backpropagation algorithm for multilayer perceptrons. Simulation results show that the proposed method can accelerate convergence by a factor of 2 to 20, depending on the complexity of the problem.
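For context, the abstract's starting point is the standard LMS stochastic-gradient update, whose cost function the AECLMS method modifies. The sketch below shows only that plain-LMS baseline applied to a hypothetical system-identification task (the filter length, step size, and channel taps are illustrative, not taken from the paper):

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """Plain LMS adaptive filter: w <- w + mu * e * x_n."""
    w = np.zeros(n_taps)
    errors = []
    for n in range(n_taps - 1, len(x)):
        x_n = x[n - n_taps + 1 : n + 1][::-1]  # tap-input vector [x[n], ..., x[n-M+1]]
        e = d[n] - w @ x_n                     # instantaneous output error
        w = w + mu * e * x_n                   # stochastic-gradient update
        errors.append(e)
    return w, np.array(errors)

# Identify a hypothetical unknown 4-tap FIR channel from noisy observations.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.4, 0.3, -0.2])           # "unknown" system to identify
x = rng.standard_normal(5000)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = lms(x, d, n_taps=4, mu=0.05)            # w converges toward h
```

The paper's contribution replaces this update's cost function with an augmented-Lagrangian, error-constrained one; the resulting AECLMS recursion itself is given in the article.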

Original language: English
Pages (from-to): 1875-1897
Number of pages: 23
Journal: Signal Processing
Volume: 85
Issue number: 10
DOI: 10.1016/j.sigpro.2005.03.017
Publication status: Published - 1 Oct 2005


All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering

Cite this

@article{1c384c284e96444e9515d58cd1898a7d,
title = "Adaptive error-constrained method for LMS algorithms and applications",
author = "Sooyong Choi and Lee, {Te Won} and Daesik Hong",
year = "2005",
month = "10",
day = "1",
doi = "10.1016/j.sigpro.2005.03.017",
language = "English",
volume = "85",
pages = "1875--1897",
journal = "Signal Processing",
issn = "0165-1684",
publisher = "Elsevier",
number = "10",
}

Adaptive error-constrained method for LMS algorithms and applications. / Choi, Sooyong; Lee, Te Won; Hong, Daesik.

In: Signal Processing, Vol. 85, No. 10, 01.10.2005, p. 1875-1897.

