where $H$ is called the hidden layer output matrix of the neural network, and the $i$th column of $H$ is the output of the $i$th hidden node with respect to the inputs $x_1, x_2, \ldots, x_N$.
The aim is to address the above issues by putting forward an extreme learning machine for SLFNs. A training set $\{(x_i, t_i) \mid x_i \in \mathbb{R}^n,\ t_i \in \mathbb{R}^m\}_{i=1}^{N}$, the activation function $g(x)$, and the hidden node number $\tilde{N}$ are provided.
Step 1: Randomly allocate input weights $w_i$ and biases $b_i$, $i = 1, 2, \ldots, \tilde{N}$.
Step 2: Calculate the hidden layer output matrix $H$.
Step 3: Calculate the output weight $\beta$:

$\hat{\beta} = H^{+} T$ (10)

where $T = [t_1, \ldots, t_N]^{T}$ and $H^{+}$ is the Moore-Penrose (MP) generalized inverse of $H$.
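For concreteness, these three steps can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation: the sigmoid choice for $g$, the function names, and the random initialization scheme are all assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """X: (N, n) inputs, T: (N, m) targets. Returns (W, b, beta)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Step 1: randomly allocate input weights w_i and biases b_i.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Step 2: hidden layer output matrix H (here g is a sigmoid).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Step 3: output weights, Eq. (10): beta = H^+ T via the
    # Moore-Penrose pseudoinverse.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```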
Online sequential ELM (OS-ELM)
ELM is a relatively simple and effective algorithm that learns quickly and generalizes well. However, meteorological data are difficult to collect and the data set is large, which may degrade the performance of the $ET_0$ model. Thus, the online sequential extreme learning machine (OS-ELM) of Liang (2006) was drawn upon in previous research.
The output weight matrix $\hat{\beta} = H^{+} T$ is a least-squares solution of (7). Meanwhile, the case where $\operatorname{rank}(H) = \tilde{N}$, the number of hidden nodes (Ao, Xiao, & Mao, 2009), is considered. In this case, the $H^{+}$ of (10) is given as:
$H^{+} = \left( H^{T} H \right)^{-1} H^{T}$ (11)
If $H^{T} H$ tends to become singular, it can be made nonsingular by increasing the number of training data or choosing a smaller network size.
Substituting (11) into (10) gives:
$\hat{\beta} = \left( H^{T} H \right)^{-1} H^{T} T$ (12)
Equation (12) is called the least-squares solution to $H\beta = T$. Sequential implementation of the least-squares solution of (12) gives the OS-ELM. However, OS-ELM has some deficiencies, notably that solving the MP generalized inverse of $H$ may cost a large amount of time in the training process. The general method of singular value decomposition is used to compute $H^{+}$, but its computational complexity is $O(4N\tilde{N}^{2} + 8\tilde{N}^{3})$ (Brown, 2009).
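As an illustration of this point, the following sketch (with arbitrary stand-in data; the sizes and names are assumptions) checks that the normal-equations form of (12) matches the SVD-based pseudoinverse when $H$ has full column rank, while avoiding the explicit pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_hidden, m = 200, 20, 1
H = rng.standard_normal((N, n_hidden))   # stand-in hidden layer output
T = rng.standard_normal((N, m))          # stand-in targets

# SVD-based pseudoinverse, the O(4*N*Ñ^2 + 8*Ñ^3) route cited above:
beta_svd = np.linalg.pinv(H) @ T

# Normal-equations form of Eq. (12); valid when rank(H) = Ñ:
beta_ls = np.linalg.solve(H.T @ H, H.T @ T)

print(np.allclose(beta_svd, beta_ls))    # True when H^T H is nonsingular
```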
Improved algorithm of OS-ELM (IOS-ELM)
This paper proposes an improved OS-ELM called IOS-ELM. The new model was developed by addressing the singularity of the matrix. First, the equation $H\beta = T$ is replaced by $H^{T}H\beta = H^{T}T$, which has at least one optimal solution; this reduces the computational complexity of solving the inverse and thus shortens the training time. Second, the regularization factor $I/\lambda$ is added when calculating the output weights. Last, a subsequent online learning stage is added. In theory, this algorithm can provide good generalization performance at an extremely fast learning speed.
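A minimal sketch of the second modification, assuming the regularization enters as $I/\lambda$ added to $H^{T}H$ as in the $P_1$, $P_2$ definitions below (the value of `lam` is an arbitrary placeholder, not taken from the paper):

```python
import numpy as np

def regularized_beta(H, T, lam=1e3):
    """Output weights with the regularization term I/lambda added to
    H^T H before solving the normal equations (lam is an assumed value)."""
    n_hidden = H.shape[1]
    A = np.eye(n_hidden) / lam + H.T @ H   # I/lambda + H^T H
    return np.linalg.solve(A, H.T @ T)
```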
Step (1): Allocate random input weights $w_i$ and biases $b_i$, initialize the network, and calculate the initial hidden layer output matrix $H_0$.
Step (2): Set $r = \operatorname{rank}(H_0)$. If $r = N_0$, calculate the initial weight matrix $\beta_0 = H_0^{T} P_1 T_0$; if $r = \tilde{N}$, calculate the initial weight $\beta_0 = P_2 H_0^{T} T_0$,
where $P_1 = \left( \dfrac{I}{\lambda} + H_0 H_0^{T} \right)^{-1}$ and $P_2 = \left( \dfrac{I}{\lambda} + H_0^{T} H_0 \right)^{-1}$.
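The two full-rank branches of Step (2) might be sketched as follows. The operand order here is chosen for dimensional consistency with $P_1$ ($N_0 \times N_0$) and $P_2$ ($\tilde{N} \times \tilde{N}$), and `lam` is again an illustrative placeholder:

```python
import numpy as np

def ios_elm_init(H0, T0, lam=1e3):
    """Initial output weights beta_0 per Step (2).
    H0: (N0, n_hidden) initial hidden layer output, T0: (N0, m) targets."""
    N0, n_hidden = H0.shape
    r = np.linalg.matrix_rank(H0)
    if r == N0:
        # Full row rank: beta_0 = H0^T P1 T0, P1 = (I/lam + H0 H0^T)^-1
        P1 = np.linalg.inv(np.eye(N0) / lam + H0 @ H0.T)
        return H0.T @ P1 @ T0
    if r == n_hidden:
        # Full column rank: beta_0 = P2 H0^T T0, P2 = (I/lam + H0^T H0)^-1
        P2 = np.linalg.inv(np.eye(n_hidden) / lam + H0.T @ H0)
        return P2 @ H0.T @ T0
    raise NotImplementedError("deficient-rank case is handled separately")
```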
If $r \neq N_0$ and $r \neq \tilde{N}$, two optimization models are solved, from which the optimal solution $B^{*}$ and the initial weight $\beta_0$ can be obtained.