Selection of Network Type and Architecture



Forecasting is an essential task in dealing with uncertainty. Most decisions taken by businesses, governments, and individuals are based on forecasts. Forecasting, in general, involves estimating or predicting future events (values) of a time-based sequence of historical data (a time series). The oldest and simplest forecasting method is human judgment, which is often subject to error. These errors may be within acceptable limits when the data to be predicted is well understood. When the forecasting problem is relatively more complex and less well understood, methods that require little or no human judgment become more appropriate. A time series forecasting problem becomes increasingly complex in an environment with continuously shifting conditions. In such conditions, a method that can adapt to the changing environment is required.


A study of the NN literature reveals that researchers and analysts often choose the type of NN they use in forecasting applications arbitrarily. While the FNN has historically been the dominant choice, recent trends indicate that RNNs are increasingly used because of their ability to model dynamic systems implicitly. The recurrent/delayed connections in RNNs, however, increase the number of weights that must be optimized during training of the NN.
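The cost of recurrent connections can be made concrete with a quick weight count. The sketch below is illustrative only (the layer sizes are hypothetical, not taken from the thesis); it compares a one-hidden-layer FNN with an Elman-style simple recurrent network, whose context layer adds a hidden-to-hidden weight matrix:

```python
def fnn_weights(n_in, n_hidden, n_out):
    # Fully connected feedforward network with one hidden layer,
    # counting one bias weight per hidden and per output neuron.
    return (n_in + 1) * n_hidden + (n_hidden + 1) * n_out

def elman_weights(n_in, n_hidden, n_out):
    # An Elman SRN feeds the previous hidden activations back through
    # a context layer, adding n_hidden * n_hidden recurrent weights.
    return fnn_weights(n_in, n_hidden, n_out) + n_hidden * n_hidden

# Hypothetical sizes: 5 lagged inputs, 10 hidden units, 1 forecast output.
print(fnn_weights(5, 10, 1))    # 71
print(elman_weights(5, 10, 1))  # 171
```

Even at this small scale the recurrent connections more than double the number of weights the training algorithm must optimize.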


The primary objectives of this thesis can be summarized as follows:
- To formulate the training of a NN forecaster as a dynamic optimization problem.
- To investigate the applicability and performance of a dynamic PSO as a training algorithm for FNN forecasters in non-stationary environments.
- To investigate whether recurrent/delayed connections are necessary for a NN time series forecaster when a dynamic PSO is used as the training algorithm.


The main contributions of this thesis are:
- The formulation of training a NN forecaster for non-stationary time series as a dynamic optimization problem.
- The first analysis of the applicability of a dynamic PSO algorithm to the training of NN forecasters under different kinds of dynamic environments.
- The implementation of the RPROP algorithm, and its addition to CIlib [102].
- An empirical analysis comparing the performance of a cooperative quantum PSO to the RPROP and standard PSO algorithms in training FNN and simple recurrent NN forecasters under different dynamic environmental scenarios.
- An empirical analysis showing that recurrent/delayed connections are not necessary for NN forecasters if a dynamic PSO is used as the training method.
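To make the training formulation concrete: each particle encodes a complete NN weight vector, and the swarm minimizes the forecaster's training error. The sketch below is a minimal standard gbest PSO, not the cooperative quantum PSO studied in the thesis; the `w`, `c1`, `c2` values are the commonly used constriction-derived defaults, and the sphere function stands in for an actual NN error surface:

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=20, iters=100,
                 w=0.729, c1=1.494, c2=1.494, seed=0):
    """Minimal gbest PSO; each particle is a candidate weight vector."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pbest_f = np.array([loss(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive + social components.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy check on a sphere function, a stand-in for NN training error.
best, fbest = pso_minimize(lambda z: float(np.sum(z * z)), dim=3)
```

A dynamic PSO variant would additionally detect changes in the error surface (e.g. when new observations of a non-stationary series arrive) and restore swarm diversity in response, which is the subject of Chapter 5.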

1 Introduction
1.1 Motivation
1.2 Objectives
1.3 Contributions
1.4 Thesis Outline
2 Time Series Forecasting
2.1 Time Series
2.1.1 Characteristics of Time Series
2.1.2 Time Series Analysis
2.1.3 Stochastic Processes in Time Series
2.2 Time Series Forecasting Methods
2.2.1 Statistical Methods
2.2.2 k-Nearest Neighbours Technique
2.2.3 Support Vector Machines
2.2.4 Artificial Neural Networks
2.3 Summary
3 Artificial Neural Networks
3.1 Neural Network Forecasters
3.1.1 Feedforward Neural Networks
3.1.2 Time Delay Neural Networks
3.1.3 Simple Recurrent Neural Networks
3.2 Neural Network Training
3.2.1 Backpropagation Algorithm
3.2.2 Resilient Propagation Algorithm
3.2.3 Particle Swarm Optimization
3.3 Issues in Modeling Neural Network Forecasters
3.3.1 Data Collection and Preparation
3.3.2 Selection of Network Type and Architecture
3.3.3 Weight Initialization
3.4 Summary
4 Particle Swarm Optimization
4.1 Basic PSO Algorithm
4.2 Control Parameters of Particle Swarm Optimization Algorithms
4.3 Applications of Particle Swarm Optimization
4.4 Summary
5 Particle Swarm Optimization in Dynamic Environments
5.1 Dynamic Optimization Problems
5.1.1 Characteristics of Dynamic Environments
5.1.2 Types of Dynamic Environments
5.2 Standard PSO in Dynamic Environments
5.3 Change Detection and Response
5.3.1 Change Detection
5.3.2 Response to Change
5.4 Particle Swarm Optimizers for Dynamic Environments
5.4.1 Reinitialising Particle Swarm Optimization
5.4.2 Charged Particle Swarm Optimization
5.4.3 Quantum Particle Swarm Optimization
5.4.4 Multiswarm Particle Swarm Optimization
5.4.5 Cooperative Particle Swarm Optimization
5.5 Summary
6 Training Feedforward Neural Network Forecasters using a Dynamic Particle Swarm Optimizer
6.1 Experimental Procedure
6.1.1 Datasets
6.1.2 Dataset Preparation
6.1.3 Simulating Dynamic Environments
6.1.4 Parameter Selection
6.1.5 Performance Measure
6.2 Results
6.2.1 SAM Time Series
6.2.2 HIT Time Series
6.2.3 DMT Time Series
6.2.4 Mackey-Glass Time Series
6.2.5 Lorenz Time Series
6.2.6 IAP Time Series
6.2.7 S&P 500 Time Series
6.2.8 AWS Time Series
6.2.9 USD Time Series
6.2.10 LM Time Series
6.3 Summary
7 Recurrent Connections: Are They Necessary?
7.1 Methodology
7.1.1 Datasets
7.1.2 Parameter Selection
7.2 Results
7.2.1 SAM Time Series
7.2.2 HIT Time Series
7.2.3 DMT Time Series
7.2.4 MG Time Series
7.2.5 Lorenz Time Series
7.2.6 IAP Time Series
7.2.7 S&P Time Series
7.2.8 AWS Time Series
7.2.9 USD Time Series
7.2.10 LM Time Series
7.3 Summary
8 Conclusions
8.1 Summary of Conclusions
8.2 Future Work

Time Series Forecasting using Dynamic Particle Swarm Optimizer Trained Neural Networks
