Analyzing weight distribution of feedforward neural networks and efficient weight initialization

Jinwook Go, Byungjoon Baek, Chulhee Lee

Research output: Contribution to journal › Article

Abstract

In this paper, we investigate and analyze the weight distribution of feedforward two-layer neural networks in order to understand and improve the time-consuming training process of neural networks. Generally, it takes a long time to train neural networks, and when a new problem is presented, a network has to be trained again without any benefit from previous training. To address this problem, we view the training process as finding a solution weight point in a weight space and analyze the distribution of solution weight points in that space. We then propose a weight initialization method that uses information on the distribution of the solution weight points. Experimental results show that the proposed weight initialization method provides better performance, in terms of convergence speed, than the conventional method that uses a random generator.
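The core idea can be sketched as follows. Instead of drawing initial weights uniformly at random, one samples them from the empirical distribution of "solution weight points" collected from previously trained networks. The synthetic solution data, the per-weight Gaussian model, and the function names below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "solution weight points": final weight vectors from 20
# networks previously trained on related problems (synthetic stand-ins here).
solution_weights = rng.normal(loc=0.5, scale=0.1, size=(20, 8))

def conventional_init(n_weights, rng):
    """Conventional initialization: small uniform random weights."""
    return rng.uniform(-0.5, 0.5, size=n_weights)

def distribution_based_init(solutions, rng):
    """Sample each initial weight from a per-weight Gaussian fitted to
    the empirical distribution of previously found solution weights."""
    mu = solutions.mean(axis=0)
    sigma = solutions.std(axis=0)
    return rng.normal(mu, sigma)

w_rand = conventional_init(8, rng)
w_dist = distribution_based_init(solution_weights, rng)

# The distribution-based start lies closer to the region of weight space
# where solutions were previously found, which is the intuition behind
# the reported faster convergence.
target = solution_weights.mean(axis=0)
print(np.linalg.norm(w_rand - target), np.linalg.norm(w_dist - target))
```

The modeling choice here (an independent Gaussian per weight) is only one way to summarize the solution-weight distribution; the paper's analysis of the weight space determines what distribution is actually appropriate.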

Original language: English
Pages (from-to): 840-849
Number of pages: 10
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3138
Publication status: Published - 2004 Dec 1

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science(all)

Cite this

@article{71ad629f6b3e41faacd0ad2b8e947c8f,
title = "Analyzing weight distribution of feedforward neural networks and efficient weight initialization",
abstract = "In this paper, we investigate and analyze the weight distribution of feedforward two-layer neural networks in order to understand and improve the time-consuming training process of neural networks. Generally, it takes a long time to train neural networks, and when a new problem is presented, a network has to be trained again without any benefit from previous training. To address this problem, we view the training process as finding a solution weight point in a weight space and analyze the distribution of solution weight points in that space. We then propose a weight initialization method that uses information on the distribution of the solution weight points. Experimental results show that the proposed weight initialization method provides better performance, in terms of convergence speed, than the conventional method that uses a random generator.",
author = "Jinwook Go and Byungjoon Baek and Chulhee Lee",
year = "2004",
month = "12",
day = "1",
language = "English",
volume = "3138",
pages = "840--849",
journal = "Lecture Notes in Computer Science",
issn = "0302-9743",
publisher = "Springer Verlag",

}

TY - JOUR

T1 - Analyzing weight distribution of feedforward neural networks and efficient weight initialization

AU - Go, Jinwook

AU - Baek, Byungjoon

AU - Lee, Chulhee

PY - 2004/12/1

Y1 - 2004/12/1

N2 - In this paper, we investigate and analyze the weight distribution of feedforward two-layer neural networks in order to understand and improve the time-consuming training process of neural networks. Generally, it takes a long time to train neural networks, and when a new problem is presented, a network has to be trained again without any benefit from previous training. To address this problem, we view the training process as finding a solution weight point in a weight space and analyze the distribution of solution weight points in that space. We then propose a weight initialization method that uses information on the distribution of the solution weight points. Experimental results show that the proposed weight initialization method provides better performance, in terms of convergence speed, than the conventional method that uses a random generator.

AB - In this paper, we investigate and analyze the weight distribution of feedforward two-layer neural networks in order to understand and improve the time-consuming training process of neural networks. Generally, it takes a long time to train neural networks, and when a new problem is presented, a network has to be trained again without any benefit from previous training. To address this problem, we view the training process as finding a solution weight point in a weight space and analyze the distribution of solution weight points in that space. We then propose a weight initialization method that uses information on the distribution of the solution weight points. Experimental results show that the proposed weight initialization method provides better performance, in terms of convergence speed, than the conventional method that uses a random generator.

UR - http://www.scopus.com/inward/record.url?scp=35048841654&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=35048841654&partnerID=8YFLogxK

M3 - Article

AN - SCOPUS:35048841654

VL - 3138

SP - 840

EP - 849

JO - Lecture Notes in Computer Science

JF - Lecture Notes in Computer Science

SN - 0302-9743

ER -