Artificial Neural Networks made easy with the FANN library

28 Aug 2013
Neural networks are typically associated with specialised applications, developed only by select groups of experts. This misconception has had a highly negative effect on their popularity. Hopefully, the FANN library will help fill this gap.
Downloads
fann-1_2_0.zip
fann_win32_dll-1_2_0.zip
vs_net2003.zip
From the FANN reference documentation (doc/html):

struct fann

Name

struct fann -- Describes a neural network.

Description

This structure is subject to change at any time. If you need to use the values contained herein, please see the Options functions. If these functions do not fulfill your needs, please open a feature request on the SourceForge project page (http://www.sourceforge.net/projects/fann).
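In practice this means that application code should read and adjust network parameters through the public fann_get_... / fann_set_... functions rather than touching the fields below. A minimal sketch, assuming the FANN 1.2-style C API in which fann_create takes the connection rate, the learning rate, the number of layers and then the size of each layer:

/* Query and adjust parameters through the option functions, not the struct. */
#include <stdio.h>
#include "fann.h"

int main(void)
{
    /* 2 inputs, one hidden layer of 3 neurons, 1 output;
       fully connected (rate 1.0), learning rate 0.7 */
    struct fann *ann = fann_create(1.0f, 0.7f, 3, 2, 3, 1);
    if (ann == NULL)
        return 1;

    printf("inputs:        %u\n", fann_get_num_input(ann));
    printf("outputs:       %u\n", fann_get_num_output(ann));
    printf("learning rate: %f\n", fann_get_learning_rate(ann));

    fann_set_learning_rate(ann, 0.5f);  /* updates the learning_rate field */

    fann_destroy(ann);
    return 0;
}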
Properties

unsigned int errno_f
    The type of error that last occurred.

FILE * error_log
    Where to log error messages.

char * errstr
    A string representation of the last error.

float learning_rate
    The learning rate of the network.

float connection_rate
    The connection rate of the network. Between 0 and 1, 1 meaning fully connected.

unsigned int shortcut_connections
    Is 1 if shortcut connections are used in the ANN, otherwise 0. Shortcut connections are connections that skip layers: a fully connected ANN with shortcut connections is an ANN where neurons have connections to all neurons in all later layers. ANNs with shortcut connections are created by fann_create_shortcut.

struct fann_layer * first_layer
    Pointer to the first layer (input layer) in an array of all the layers, including the input and output layer.

struct fann_layer * last_layer
    Pointer to the layer past the last layer in an array of all the layers, including the input and output layer.

unsigned int total_neurons
    Total number of neurons. Very useful, because the actual neurons are allocated in one long array.
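For illustration, the layers therefore occupy one contiguous array delimited by first_layer (inclusive) and last_layer (exclusive), so the layer count is simply the pointer difference. A sketch of that layout only; as noted above, portable code should prefer the option functions:

/* Illustration of the documented layout, not a recommended access pattern. */
struct fann_layer *layer;
unsigned int num_layers = (unsigned int)(ann->last_layer - ann->first_layer);

for (layer = ann->first_layer; layer != ann->last_layer; layer++) {
    /* visits each layer in order: input layer first, output layer last */
}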
unsigned int num_input
    Number of input neurons (not counting the bias neuron).

unsigned int num_output
    Number of output neurons (not counting the bias neuron).

fann_type * train_errors
    Used to contain the error deltas used during training. Allocated during the first training session, which means that if we do not train, it is never allocated.

unsigned int activation_function_output
    Used to choose which activation function to use in the output layer.

unsigned int activation_function_hidden
    Used to choose which activation function to use in the hidden layers.

unsigned int activation_steepness_hidden
    Parameters for the activation function in the hidden layers.

unsigned int activation_steepness_output
    Parameters for the activation function in the output layer.

unsigned int training_algorithm
    Training algorithm used when calling fann_train_on_... and fann_train_epoch.
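The training algorithm is normally chosen with the corresponding option function rather than by writing this field directly. A hedged sketch, assuming the fann_set_training_algorithm, fann_read_train_from_file and fann_train_epoch functions and the FANN_TRAIN_RPROP constant documented in this reference:

/* Select RPROP and train epoch by epoch until the MSE falls below a target. */
struct fann_train_data *data = fann_read_train_from_file("xor.data");

fann_set_training_algorithm(ann, FANN_TRAIN_RPROP);

while (fann_train_epoch(ann, data) > 0.001f) {
    /* fann_train_epoch runs one pass over the data and returns the MSE */
}

fann_destroy_train(data);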
unsigned int decimal_point
    Fixed point only. The decimal point, used for shifting the fixed point in fixed-point integer operations.

unsigned int multiplier
    Fixed point only. The multiplier, used for multiplying the fixed point in fixed-point integer operations. Only used in special cases, since the decimal_point is much faster.
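The two fields are related by multiplier = 2^decimal_point, so a real value is stored by scaling it with the multiplier (or, faster, by shifting with decimal_point). A small illustrative sketch of that arithmetic, not library code:

/* Fixed-point representation as described above (illustration only). */
unsigned int decimal_point = 8;
unsigned int multiplier    = 1u << decimal_point;   /* 256 */

int   fixed = (int)(0.75f * multiplier);            /* 0.75 -> 192 */
float real  = (float)fixed / multiplier;            /* 192  -> 0.75 */

/* Multiplying two fixed-point values needs a shift back down: */
int a = (int)(0.5f  * multiplier);                  /* 128 */
int b = (int)(0.25f * multiplier);                  /*  64 */
int product = (a * b) >> decimal_point;             /* 8192 >> 8 = 32, i.e. 0.125 */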
fann_type * activation_results_hidden
    An array of six members used by some activation functions to hold results for the hidden layer(s).

fann_type * activation_values_hidden
    An array of six members used by some activation functions to hold values for the hidden layer(s).

fann_type * activation_results_output
    An array of six members used by some activation functions to hold results for the output layer.

fann_type * activation_values_output
    An array of six members used by some activation functions to hold values for the output layer.

unsigned int total_connections
    Total number of connections. Very useful, because the actual connections are allocated in one long array.

fann_type * output
    Used to store the outputs in.
unsigned int num_MSE
    The number of data used to calculate the mean square error.

float MSE_value
    The total error value. The real mean square error is MSE_value/num_MSE.
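In other words, MSE_value accumulates the squared error while num_MSE counts the patterns, and the reported mean square error is their ratio. A sketch, assuming the fann_reset_MSE, fann_test_data and fann_get_MSE functions from this reference:

/* fann_get_MSE() effectively returns MSE_value / num_MSE. */
fann_reset_MSE(ann);                /* zero MSE_value and num_MSE           */
fann_test_data(ann, test_data);     /* run the test set, accumulating error */
printf("test MSE: %f\n", fann_get_MSE(ann));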
unsigned int train_error_function
    When this is used, training is usually faster: it makes the error used for calculating the slopes larger when the difference between desired and actual output is larger.
float quickprop_decay
    Decay is used to keep the weights from growing too large.

float quickprop_mu
    Mu is a factor used to increase and decrease the step size.

float rprop_increase_factor
    Tells how much the step size should increase during learning.

float rprop_decrease_factor
    Tells how much the step size should decrease during learning.

float rprop_delta_min
    The minimum step size.

float rprop_delta_max
    The maximum step size.

fann_type * train_slopes
    Used to contain the slope errors used during batch training. Allocated during the first training session, which means that if we do not train, it is never allocated.

fann_type * prev_steps
    The previous step taken by the quickprop/rprop procedures. Not allocated if not used.

fann_type * prev_train_slopes
    The slope values used by the quickprop/rprop procedures. Not allocated if not used.
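The quickprop and RPROP parameters above also have corresponding option functions in later FANN releases (names of the form fann_set_rprop_increase_factor, fann_set_quickprop_decay, and so on); their availability depends on the version, so treat the following as a hedged sketch rather than a guaranteed API, and check the Options section of your release:

/* Hedged sketch: tune the step-size/decay parameters through setters of the
   form fann_set_<parameter>; verify the names against your FANN version.
   The values shown are the library's usual defaults. */
fann_set_rprop_increase_factor(ann, 1.2f);
fann_set_rprop_decrease_factor(ann, 0.5f);
fann_set_rprop_delta_min(ann, 0.0f);
fann_set_rprop_delta_max(ann, 50.0f);

fann_set_quickprop_decay(ann, -0.0001f);
fann_set_quickprop_mu(ann, 1.75f);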


License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


About the Author

Software Developer's Journal
Publisher
Poland
Software Developer's Journal (formerly Software 2.0) is a magazine for professional programmers and developers, publishing news from the software world and practical articles presenting ready-made programming solutions.
