Data partition method for parallel self-organizing map

Ming Hsuan Yang, Narendra Ahuja

Research output: Contribution to conference › Paper

8 Citations (Scopus)

Abstract

We propose a method to partition training vectors into clusters for a parallel implementation of the Self-Organizing Map (SOM) algorithm. The proposed algorithm assigns each cluster to a processor such that, when weights are updated, the neighborhood of a winning node in one cluster does not overlap the neighborhoods of the winning nodes in other clusters. This reduces the overhead of synchronizing (i.e., maintaining coherency of) the weight matrices across processors, since multiple vectors can find their winning nodes and update the weights in parallel. Our experimental results show an average speedup of 3.15 for a parallel implementation simulated on four processors.
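
To make the partitioning idea concrete, here is a minimal, illustrative sketch in Python; it is not the authors' implementation. Training vectors are grouped according to where their best-matching unit (BMU) falls on the map, and a vector joins a partition only if its BMU neighborhood does not conflict with more than one partition's footprint, so the partitions end up touching disjoint regions of the weight matrix and can update it concurrently. The function names, the greedy "least-loaded / single-overlap / defer" assignment policy, and the parameters (radius, n_parts, lr) are assumptions made purely for illustration.

    import numpy as np

    def bmu(weights, x):
        # Grid coordinates (row, col) of the best-matching unit for vector x.
        d = np.linalg.norm(weights - x, axis=2)
        return np.unravel_index(np.argmin(d), d.shape)

    def neighborhood(center, radius, grid_shape):
        # Map nodes within Chebyshev distance `radius` of `center`.
        r, c = center
        rows, cols = grid_shape
        return {(i, j)
                for i in range(max(0, r - radius), min(rows, r + radius + 1))
                for j in range(max(0, c - radius), min(cols, c + radius + 1))}

    def partition(weights, data, radius, n_parts):
        # Greedily build partitions whose neighborhood footprints are pairwise
        # disjoint, so each partition can update its own region of the weight
        # matrix (e.g., on its own processor) without synchronization.
        footprints = [set() for _ in range(n_parts)]
        parts = [[] for _ in range(n_parts)]
        leftover = []                        # vectors spanning several partitions
        for x in data:
            center = bmu(weights, x)
            nb = neighborhood(center, radius, weights.shape[:2])
            overlapping = [p for p in range(n_parts) if footprints[p] & nb]
            if not overlapping:
                p = min(range(n_parts), key=lambda q: len(parts[q]))  # least loaded
            elif len(overlapping) == 1:
                p = overlapping[0]           # must join the partition it conflicts with
            else:
                leftover.append(x)           # defer: would need cross-partition sync
                continue
            parts[p].append((x, center))
            footprints[p] |= nb
        return parts, leftover

    def update_partition(weights, assigned, radius, lr):
        # SOM weight update for one partition.  Because the partitions' footprints
        # are pairwise disjoint by construction, this can run concurrently with the
        # other partitions without synchronizing the weight matrix.
        for x, center in assigned:
            for i, j in neighborhood(center, radius, weights.shape[:2]):
                weights[i, j] += lr * (x - weights[i, j])

    # Toy usage: a 10x10 map of 3-D weight vectors, 200 random training vectors.
    rng = np.random.default_rng(0)
    weights = rng.random((10, 10, 3))
    data = rng.random((200, 3))
    parts, leftover = partition(weights, data, radius=1, n_parts=4)
    for assigned in parts:                   # each iteration could run on its own processor
        update_partition(weights, assigned, radius=1, lr=0.1)
    # `leftover` vectors would be processed sequentially or in a later pass.
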

Original language: English
Pages: 1929-1933
Number of pages: 5
Publication status: Published - 1999 Dec 1
Event: International Joint Conference on Neural Networks (IJCNN'99) - Washington, DC, USA
Duration: 1999 Jul 10 - 1999 Jul 16


Fingerprint

  • Self-organizing maps
  • Synchronization

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence

Cite this

Yang, M. H., & Ahuja, N. (1999). Data partition method for parallel self-organizing map (pp. 1929-1933). Paper presented at the International Joint Conference on Neural Networks (IJCNN'99), Washington, DC, USA.