This is the neural-classifier Reference Manual, version 0.1, generated automatically by Declt version 3.0 "Montgomery Scott" on Tue Dec 22 14:29:58 2020 GMT+0.
• Introduction | What neural-classifier is all about |
• Systems | The systems documentation |
• Files | The files documentation |
• Packages | The packages documentation |
• Definitions | The symbols documentation |
• Indexes | Concepts, functions, variables and data types |
Introduction
neural-classifier is a neural network library based on the first chapters of this book. It is divided into two systems: neural-classifier, a general API for neural networks, and neural-classifier/mnist, which contains helper functions for working with the MNIST/EMNIST datasets. For API documentation visit this page.
To try the library on the MNIST data (a combined REPL example follows these steps):

1. Download the MNIST dataset. You need the files t10k-images-idx3-ubyte, t10k-labels-idx1-ubyte, train-images-idx3-ubyte and train-labels-idx1-ubyte.
2. Load the neural-classifier/mnist system: (ql:quickload :neural-classifier/mnist).
3. Set neural-classifier-mnist:*mnist-dataset-path* to the directory with your MNIST data and execute (neural-classifier-mnist:load-mnist-database) (this will take about 10-15 seconds).
4. Evaluate (defparameter *nn* (neural-classifier-mnist:make-mnist-classifier 35)), where 35 is the number of hidden neurons.
5. Call (neural-classifier-mnist:train-epochs *nn* 10) to train the network for 10 epochs. This function returns data about the network's accuracy for each epoch.
6. To classify your own images, convert each image to a 784x1 matrix of type magicl:matrix/single-float and pass it to the neural-classifier:calculate function.
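Steps 2-5 put together look like the following REPL sketch. The dataset path is only a placeholder; replace it with the directory holding your four MNIST files (a pathname object is assumed to be acceptable here).

;; Load the MNIST helper system (pulls in neural-classifier as well).
(ql:quickload :neural-classifier/mnist)

;; Point the library at the directory containing the MNIST files.
;; This path is only an example.
(setf neural-classifier-mnist:*mnist-dataset-path* #p"/path/to/mnist/")

;; Read the dataset into memory; takes about 10-15 seconds.
(neural-classifier-mnist:load-mnist-database)

;; Build a classifier with 35 hidden neurons and train it for 10 epochs,
;; getting back accuracy data for each epoch.
(defparameter *nn* (neural-classifier-mnist:make-mnist-classifier 35))
(neural-classifier-mnist:train-epochs *nn* 10)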
The variable neural-classifier:*learn-rate* controls how fast the gradient descent algorithm works (i.e. how fast your network learns). neural-classifier:*decay-rate* is related to regularization and should be about 5/N, where N is the number of training samples; zero means no regularization.
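For instance, with a training set of 50000 samples (an assumed size, not something fixed by the library) the 5/N rule gives a decay rate of 0.0001; the learn rate below is likewise only an illustrative value, not a recommendation.

;; Illustrative settings only; tune these for your own data.
(setf neural-classifier:*learn-rate* 0.1)            ; larger learns faster, too large hurts
(setf neural-classifier:*decay-rate* (/ 5.0 50000))  ; about 5/N for N = 50000 training samples
;; Setting *decay-rate* to 0 disables regularization.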
The library depends on the blas and lapack foreign libraries, on magicl for matrix operations and on nibbles for loading MNIST data. magicl and nibbles can be downloaded with quicklisp.
If the accuracy returned by train-epochs is good but the network fails to recognize digits drawn by your own hand, try the EMNIST database instead of MNIST. Copy the four emnist-digits-* files to your MNIST directory, preserving the names of the destination files. Images in the EMNIST set are transposed (x and y coordinates swapped), so do the same with your own images.
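The sketch below shows one way to feed a hand-drawn digit to the network: flatten a 28x28 grayscale image into the 784x1 magicl:matrix/single-float column that calculate expects, optionally swapping x and y for EMNIST-style input. The image->column helper, the *pixels* array and its format are invented for this example; only the column shape, the element type and the call to neural-classifier:calculate come from the text above.

;; IMAGE->COLUMN is a hypothetical helper, not part of the library.
;; PIXELS is assumed to be a 28x28 Lisp array of single-floats.
(defun image->column (pixels &key transpose)
  (let (elements)
    (dotimes (i 28)
      (dotimes (j 28)
        ;; Swap the coordinates when TRANSPOSE is true (EMNIST convention).
        (push (if transpose (aref pixels j i) (aref pixels i j)) elements)))
    (magicl:from-list (nreverse elements) '(784 1) :type 'single-float)))

;; A blank 28x28 image as a stand-in for real pixel data.
(defparameter *pixels*
  (make-array '(28 28) :element-type 'single-float :initial-element 0.0))

;; Ask the trained network *NN* (from the quick-start steps) for a label.
(neural-classifier:calculate *nn* (image->column *pixels* :transpose t))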
Systems
The main system appears first, followed by any subsystem dependency.
• The neural-classifier system |
The neural-classifier system
Author: Vasily Postnicov <shamaz.mazum at gmail dot com>
License: 2-clause BSD
Description: Classification of samples based on neural network.
Version: 0.1
Source: neural-classifier.asd (file)
Files
Files are sorted by type and then listed depth-first from the systems' component trees.
• Lisp files |
neural-classifier.asd
Location: /home/quickref/quicklisp/dists/quicklisp/software/neural-classifier-20201220-git/neural-classifier.asd
Parent: neural-classifier (system)

neural-classifier/package.lisp
Parent: neural-classifier (system)
Location: package.lisp

neural-classifier/magicl-blas.lisp
Dependency: package.lisp (file)
Parent: neural-classifier (system)
Location: magicl-blas.lisp

neural-classifier/definitions.lisp
Dependency: magicl-blas.lisp (file)
Parent: neural-classifier (system)
Location: definitions.lisp

neural-classifier/utility.lisp
Dependency: definitions.lisp (file)
Parent: neural-classifier (system)
Location: utility.lisp
Exported definitions: idx-abs-max (function)

neural-classifier/neural-network.lisp
Dependency: utility.lisp (file)
Parent: neural-classifier (system)
Location: neural-network.lisp
Packages
Packages are listed by definition order.
• The neural-classifier package |
The neural-classifier package
Source: package.lisp (file)
Use list: common-lisp
Definitions
Definitions are sorted by export status, category, package, and then by lexicographic order.
• Exported definitions |
• Internal definitions |
Exported definitions
• Exported special variables |
• Exported functions |
• Exported generic functions |
• Exported classes |
Exported special variables
*decay-rate* (special variable)
Regularization parameter λ/N, where N is the number of objects in the training set and λ must be about 1-10. If not sure, start with zero (which is the default).
definitions.lisp (file)
*learn-rate* (special variable)
Speed of the gradient descent algorithm. Bigger values result in faster learning, but a value that is too big harms learning.
definitions.lisp (file)
Number of samples to be used in the stochastic gradient descent algorithm.
definitions.lisp (file)
Exported functions
calculate (function)
Calculate the output of neural-network for object. The input transformation function is applied to object, and the output transformation function is applied to the output column of the network.
neural-network.lisp (file)
idx-abs-max (function)
Returns the index of the first element with the maximal absolute value by calling the isamax() function from BLAS. Works only for rows or columns.
utility.lisp (file)
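For example, applied to a 10x1 output column it picks the row with the strongest activation, which for a one-hot digit encoding is the digit itself. The column below is made up, and zero-based indexing of the result is assumed.

;; A made-up 10x1 output column whose largest entry sits in row 2.
(neural-classifier:idx-abs-max
 (magicl:from-list '(0.01 0.05 0.93 0.02 0.0 0.04 0.0 0.03 0.0 0.0)
                   '(10 1) :type 'single-float))
;; => 2 (assuming zero-based indices)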
make-neural-network (function)
Create a new neural network.

layout is a list of positive integers which describes the number of neurons in each layer (starting from the input layer).

activation-funcs is a list whose elements are each one of :sigmoid, :tanh, :rlu or :softmax. The length of this list must be equal to the length of layout minus one. The last element cannot be :rlu, and :softmax can only be the last element.

input-trans is a function which is applied to an object passed to calculate to transform it into an input column (that is, a matrix with the type magicl:matrix/single-float and the shape Nx1, where N is the first number in layout). For example, if we are recognizing digits from the MNIST set, this function can take the number of an image in the set and return a 784x1 matrix.

output-trans is a function which is applied to the output of the calculate function (a matrix with the type magicl:matrix/single-float and the shape Mx1, where M is the last number in layout) to return some object with a user-defined meaning (called a label). Again, if we are recognizing digits, this function transforms a 10x1 matrix into a number from 0 to 9.

input-trans% is just like input-trans, but is used while training. It can include additional transformations to extend your training set (e.g. it can add some noise to the resulting vector, rotate a picture by a small random angle, etc.).

label-trans is a function which is applied to a label to get a column (a matrix with the type magicl:matrix/single-float and the shape Mx1, where M is the last number in layout) which is the optimal output from the network for this object. With digit recognition, this function may take a digit n and return a 10x1 matrix of all zeros with the exception of the n-th element, which would be 1f0.

The default value for all transformation functions is identity.
neural-network.lisp (file)
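As a concrete illustration of these arguments, here is a hedged sketch of a tiny classifier for two-dimensional points with two classes. The layout, the lambdas and the keyword-argument calling convention are assumptions made for the example; only the argument names and the constraints described above come from the docstring.

;; A 2-4-2 network classifying points given as conses (x . y) into labels 0 or 1.
;; Passing these options as keyword arguments is an assumption.
(defparameter *toy-net*
  (let ((point->column
          (lambda (point)
            ;; Turn (x . y) into a 2x1 input column.
            (magicl:from-list (list (float (car point) 1f0)
                                    (float (cdr point) 1f0))
                              '(2 1) :type 'single-float))))
    (neural-classifier:make-neural-network
     '(2 4 2)                             ; input, hidden and output layer sizes
     :activation-funcs '(:tanh :softmax)  ; one entry per layer after the input
     :input-trans  point->column          ; used by CALCULATE
     :input-trans% point->column          ; used during training (no augmentation here)
     :output-trans #'neural-classifier:idx-abs-max ; 2x1 column -> label 0 or 1
     :label-trans  (lambda (label)        ; label -> ideal 2x1 output column
                     (magicl:from-list (if (zerop label) '(1.0 0.0) '(0.0 1.0))
                                       '(2 1) :type 'single-float)))))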
Calculate the accuracy of neural-network (that is, the ratio of correctly guessed samples to all samples) using testing data from generator. Each item returned by generator must be a cons pair containing an object which is passed to the network and its label. test is a function used to compare the expected label with the actual one.
neural-network.lisp (file)
Perform training of neural-network on every object returned by generator. Each item returned by generator must be a cons pair containing an object which is passed to the neural network and its label. The input-trans% and label-trans functions passed to make-neural-network are applied to the car and cdr of each cons pair, respectively.
neural-network.lisp (file)
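Both functions above consume a generator yielding cons pairs of the form (object . label). Since the functions themselves are not named above, the sketch below only illustrates that data convention, reusing the toy network from the previous example; how the pairs are wrapped into a generator is left out.

;; Four labelled samples for *TOY-NET*: points with a negative x coordinate
;; get label 0, points with a positive x coordinate get label 1.
(defparameter *toy-samples*
  '(((-1 . 2) . 0)
    ((-3 . 1) . 0)
    (( 2 . 2) . 1)
    (( 4 . -1) . 1)))
;; For integer labels like these, EQL is a natural choice for the TEST
;; argument of the accuracy function described above.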
Exported generic functions
neural-network-input-trans (generic function)
Function which translates an input object to a vector.
definitions.lisp (file)
neural-network-input-trans% (generic function)
Function which translates an input object to a vector (used for training).
definitions.lisp (file)
neural-network-label-trans (generic function)
Function which translates a label to a vector.
definitions.lisp (file)
neural-network-layout (generic function)
Number of neurons in each layer of the network.
definitions.lisp (file)
neural-network-output-trans (generic function)
Function which translates an output vector to a label.
definitions.lisp (file)
Exported classes
neural-network (class)
Class for neural networks.
definitions.lisp (file)
Direct superclasses: standard-object (class)
Direct slots:

Slot: layout
Number of neurons in each layer of the network.
Type: list
Initarg: :layout
Initform: (error "specify number of neurons in each layer")
Readers: neural-network-layout (generic function)

Slot: activation-funcs
List of activation functions.
Type: list
Initarg: :activation-funcs
Readers: neural-network-activation-funcs (generic function)
Writers: (setf neural-network-activation-funcs) (generic function)

Slot: weights
Weight matrices for each layer.
Type: list
Readers: neural-network-weights (generic function)
Writers: (setf neural-network-weights) (generic function)

Slot: biases
Bias vectors for each layer.
Type: list
Readers: neural-network-biases (generic function)
Writers: (setf neural-network-biases) (generic function)

Slot: input-trans
Function which translates an input object to a vector.
Type: function
Initarg: :input-trans
Initform: (function identity)
Readers: neural-network-input-trans (generic function)
Writers: (setf neural-network-input-trans) (generic function)

Slot: output-trans
Function which translates an output vector to a label.
Type: function
Initarg: :output-trans
Initform: (function identity)
Readers: neural-network-output-trans (generic function)
Writers: (setf neural-network-output-trans) (generic function)

Slot: input-trans%
Function which translates an input object to a vector (used for training).
Type: function
Initarg: :input-trans%
Initform: (function identity)
Readers: neural-network-input-trans% (generic function)
Writers: (setf neural-network-input-trans%) (generic function)

Slot: label-trans
Function which translates a label to a vector.
Type: function
Initarg: :label-trans
Initform: (function identity)
Readers: neural-network-label-trans (generic function)
Writers: (setf neural-network-label-trans) (generic function)
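A short, hedged usage sketch of the exported readers, applied to the MNIST classifier *nn* from the introduction; the commented layout is what one would expect from a 784-input, 35-hidden-neuron, 10-output classifier, not a guaranteed value.

;; Inspect the network through its exported readers.
(neural-classifier:neural-network-layout *nn*)
;; expected to be a list like (784 35 10)

;; Apply the network's own output transformation to an arbitrary 10x1 column
;; (MAGICL:RAND is used only to produce such a column).
(funcall (neural-classifier:neural-network-output-trans *nn*)
         (magicl:rand '(10 1) :type 'single-float))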
Internal definitions
• Internal functions |
• Internal generic functions |
• Internal types |
Internal functions
Calculate partial derivative of the cost function by z for all layers
neural-network.lisp (file)
Calculate gradient of the cost function
neural-network.lisp (file)
Calculate gradient of the cost function based on multiple input samples
neural-network.lisp (file)
Calculate argument and value of activation function for all layers
neural-network.lisp (file)
Internal generic functions
neural-network-activation-funcs (generic function)
List of activation functions.
definitions.lisp (file)
neural-network-biases (generic function)
Bias vectors for each layer.
definitions.lisp (file)
neural-network-weights (generic function)
Weight matrices for each layer.
definitions.lisp (file)
Internal types
definitions.lisp (file)
definitions.lisp (file)
definitions.lisp (file)
Indexes
• Concept index |
• Function index |
• Variable index |
• Data type index |