sets. In our case, the lowest error is visibly reached by the RBF networks, all of which attain a value below 0.16. At first sight, therefore, the result should take the form of one of the RBF networks.
Nonetheless, to determine whether this or another neural network is useful in practice, i.e. whether its results are reliably interpretable in economic terms and whether they show acceptable accuracy, a confusion matrix has to be constructed. In fact, it is a confusion matrix composed of several partial matrices: a 10x4 matrix (10 neural structures, 4 possible results) computed separately for each of the three data sets (training, testing, and validation). We need to find a network able to predict all of the assumed results, i.e. that the enterprise will not go bankrupt, that it will go bankrupt in the given year, that it will go bankrupt in two years, or that it will go bankrupt in the future. Moreover, it is important that the neural structure is not mistaken in its predictions.
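To make the evaluation concrete, the following minimal sketch builds one such 4x4 partial confusion matrix in Python with NumPy. The class coding and the toy data are illustrative assumptions, not the article's 214 enterprises.

```python
import numpy as np

# Assumed coding of the four outcomes discussed in the text:
# 0 = no bankruptcy, 1 = bankruptcy in the given year,
# 2 = bankruptcy in two years, 3 = bankruptcy in the future.
N_CLASSES = 4

def confusion_matrix(y_true, y_pred, n_classes=N_CLASSES):
    """Count how often each true class (rows) is predicted as each
    class (columns); a reliable network concentrates counts on the
    diagonal."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

# Illustrative labels only.
y_true = np.array([0, 0, 1, 2, 3, 3, 1, 0])
y_pred = np.array([0, 1, 1, 2, 3, 2, 1, 0])
print(confusion_matrix(y_true, y_pred))
```

One such matrix per neural structure and per data set yields the 10x4 overview described above.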
Relatively interesting results are presented by neural networks No. 3, 4, and 5 (i.e. MLP 15:15-54-66-4:1, Linear 84:86-4:1, and Linear 90:98-4:1). Network No. 3 is a multilayer perceptron network with two hidden layers. It works with 15 input variables, which are processed by 54 neurons in the first hidden layer and 66 neurons in the second hidden layer. The output layer is represented by four neurons (i.e. the four possible results), of which only one option is selected. Given that we use 15 input variables and the network at the same time contains 15 neurons in the input layer, the network uses only continuous quantities as input variables. The network model is shown in Figure No. 1.
Figure 1: MLP 15:15-54-66-4:1 neural network model
Source: Author
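To illustrate this topology, a minimal forward pass of a 15-54-66-4 perceptron is sketched below in NumPy. The logistic hidden activations, the softmax output, and the random placeholder weights are assumptions for readability; the trained parameters themselves are in the linked XML files.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes taken from the article's MLP 15:15-54-66-4:1.
sizes = [15, 54, 66, 4]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass: logistic hidden layers, softmax over the four
    output neurons (the assumed transfer functions)."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = 1.0 / (1.0 + np.exp(-(x @ W + b)))   # sigmoid activation
    logits = x @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = rng.normal(size=15)                # one enterprise, 15 continuous ratios
probs = forward(x)
print(probs, "-> predicted result:", probs.argmax())  # only one option is selected
```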
The obtained linear networks work with both continuous and discrete quantities. The first one, Linear 84:86-4:1, assumes 84 input variables. The network model is shown in Figure No. 2.
Figure 2: Linear 84:86-4:1 neural network model
Source: Author
The second linear neural network, Linear 90:98-4:1, works with 90 input quantities. The neural network model is shown in Figure No. 3.
Figure 3: Linear 90:98-4:1 neural network model
Source: Author
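For contrast with the perceptron, a linear network of this kind can be sketched as a single weighted layer. Here the 86 input neurons (reading the 84:86 notation as 84 variables encoded into 86 input neurons is an assumption) feed the four output neurons directly; the weights are placeholders, not the trained values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear 84:86-4:1: 84 input variables represented by 86 input
# neurons, mapped onto 4 output neurons with no hidden layer.
n_neurons, n_outputs = 86, 4
W = rng.normal(0, 0.1, (n_neurons, n_outputs))
b = np.zeros(n_outputs)

def predict(x):
    """One weighted sum per outcome; the highest-scoring output
    neuron is the predicted result."""
    return (x @ W + b).argmax()

x = rng.normal(size=n_neurons)         # one encoded enterprise record
print("predicted result:", predict(x))
```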
Figures No. 1-3 interpret the network structures best. Each figure clearly shows which type of input variable is involved (categorical or continuous) and the function of each neuron (signal amplification or weakening). It is also clear in what manner the signal is further modified, although the detailed modification remains unclear (the input variable in the hidden-neuron functions and in the output-layer neurons). Finally, even the output of the neural function is visible. The description of the individual model components, decomposed into weights, is available in XML form at http://www.vstecb.cz/data/1487593732162SANN_PMML_Code_rozcleneni-souboru-214-podniku-do-5-skupin.rar (the length of each file significantly exceeds the size of this contribution itself, which is why they are not included in the contribution's appendix).
The implemented sensitivity analysis evaluates the significance of individual input variables for the retained neural networks.
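As a rough analogue of that analysis, the sketch below estimates variable significance by shuffling one input column at a time and measuring how much the classification error grows. The permutation approach and the toy linear scorer are assumptions; Statistica's own sensitivity analysis uses its error-ratio measure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a trained classifier over 10 variables and
# 4 outcome classes; in practice one of the retained networks
# would be plugged in here.
W = rng.normal(size=(10, 4))
predict = lambda X: (X @ W).argmax(axis=1)

X = rng.normal(size=(200, 10))
y = predict(X)   # baseline: the model is error-free on this sample

def sensitivity(X, y):
    """Shuffle each input column in turn; the resulting error is a
    proxy for how much the network relies on that variable."""
    errs = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])          # destroy variable j's information
        errs.append(np.mean(predict(Xp) != y))
    return np.array(errs)

print(np.argsort(sensitivity(X, y))[::-1])  # variables, most significant first
```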
The scope of this contribution, however, does not allow for interpreting the complete analysis. Nevertheless, we are still able to identify the variables most significant for determining the prediction model. They are the following: