This is the neural-classifier Reference Manual, version 0.2, generated automatically by Declt version 4.0 beta 2 "William Riker" on Thu Aug 15 06:00:58 2024 GMT+0.
neural-classifier/neural-classifier.asd
neural-classifier/package.lisp
neural-classifier/magicl-blas.lisp
neural-classifier/definitions.lisp
neural-classifier/utility.lisp
neural-classifier/activation.lisp
neural-classifier/optimizers.lisp
neural-classifier/neural-network.lisp
The main system appears first, followed by any subsystem dependency.
neural-classifier
Classification of samples based on a neural network.
Vasily Postnicov
2-clause BSD
0.2
alexandria
(system).
magicl/ext-blas
(system).
magicl/ext-lapack
(system).
snakes
(system).
package.lisp
(file).
magicl-blas.lisp
(file).
definitions.lisp
(file).
utility.lisp
(file).
activation.lisp
(file).
optimizers.lisp
(file).
neural-network.lisp
(file).
Files are sorted by type and then listed depth-first from the systems' component trees.
neural-classifier/neural-classifier.asd
neural-classifier/package.lisp
neural-classifier/magicl-blas.lisp
neural-classifier/definitions.lisp
neural-classifier/utility.lisp
neural-classifier/activation.lisp
neural-classifier/optimizers.lisp
neural-classifier/neural-network.lisp
neural-classifier/neural-classifier.asd
neural-classifier
(system).
neural-classifier/magicl-blas.lisp
package.lisp
(file).
neural-classifier
(system).
neural-classifier/definitions.lisp
magicl-blas.lisp
(file).
neural-classifier
(system).
*decay-rate*
(special variable).
*learn-rate*
(special variable).
*minibatch-size*
(special variable).
*momentum-coeff*
(special variable).
neural-network
(class).
neural-network-input-trans
(reader method).
(setf neural-network-input-trans)
(writer method).
neural-network-input-trans%
(reader method).
(setf neural-network-input-trans%)
(writer method).
neural-network-label-trans
(reader method).
(setf neural-network-label-trans)
(writer method).
neural-network-layout
(reader method).
neural-network-output-trans
(reader method).
(setf neural-network-output-trans)
(writer method).
neural-network-activation-funcs
(reader method).
(setf neural-network-activation-funcs)
(writer method).
neural-network-biases
(reader method).
(setf neural-network-biases)
(writer method).
neural-network-weights
(reader method).
(setf neural-network-weights)
(writer method).
neural-classifier/utility.lisp
definitions.lisp
(file).
neural-classifier
(system).
idx-abs-max
(function).
nrandom-generator
(function).
sasum
(function).
standard-random
(function).
neural-classifier/activation.lisp
utility.lisp
(file).
neural-classifier
(system).
activation
(class).
hidden-layer-activation
(class).
identity%
(class).
leaky-relu
(class).
output-layer-activation
(class).
sigmoid
(class).
softmax
(class).
tanh%
(class).
activate
(generic function).
activate'
(generic function).
leaky-relu-coeff
(reader method).
σ
(function).
neural-classifier/optimizers.lisp
activation.lisp
(file).
neural-classifier
(system).
adagrad-optimizer
(class).
initialize-instance
(method).
make-optimizer
(function).
momentum-optimizer
(class).
nesterov-optimizer
(class).
rmsprop-optimizer
(class).
sgd-optimizer
(class).
learn
(generic function).
memoizing-optimizer
(class).
momentum-coeff
(reader method).
momentum-coeff
(reader method).
(setf momentum-coeff)
(writer method).
(setf momentum-coeff)
(writer method).
optimizer
(class).
optimizer-biases
(reader method).
(setf optimizer-biases)
(writer method).
optimizer-initial-value
(reader method).
(setf optimizer-initial-value)
(writer method).
optimizer-weights
(reader method).
(setf optimizer-weights)
(writer method).
neural-classifier/neural-network.lisp
optimizers.lisp
(file).
neural-classifier
(system).
calculate
(function).
initialize-instance
(method).
make-neural-network
(function).
rate
(function).
train-epoch
(function).
calculate-delta
(function).
calculate-gradient
(function).
calculate-gradient-minibatch
(function).
calculate-z-and-out
(function).
Packages are listed by definition order.
neural-classifier
alexandria
.
common-lisp
.
*decay-rate*
(special variable).
*learn-rate*
(special variable).
*minibatch-size*
(special variable).
*momentum-coeff*
(special variable).
activation
(class).
adagrad-optimizer
(class).
calculate
(function).
hidden-layer-activation
(class).
identity%
(class).
idx-abs-max
(function).
leaky-relu
(class).
make-neural-network
(function).
make-optimizer
(function).
momentum-optimizer
(class).
nesterov-optimizer
(class).
neural-network
(class).
neural-network-input-trans
(generic reader).
(setf neural-network-input-trans)
(generic writer).
neural-network-input-trans%
(generic reader).
(setf neural-network-input-trans%)
(generic writer).
neural-network-label-trans
(generic reader).
(setf neural-network-label-trans)
(generic writer).
neural-network-layout
(generic reader).
neural-network-output-trans
(generic reader).
(setf neural-network-output-trans)
(generic writer).
output-layer-activation
(class).
rate
(function).
rmsprop-optimizer
(class).
sgd-optimizer
(class).
sigmoid
(class).
softmax
(class).
tanh%
(class).
train-epoch
(function).
activate
(generic function).
activate'
(generic function).
calculate-delta
(function).
calculate-gradient
(function).
calculate-gradient-minibatch
(function).
calculate-z-and-out
(function).
leaky-relu-coeff
(generic reader).
learn
(generic function).
memoizing-optimizer
(class).
momentum-coeff
(generic reader).
(setf momentum-coeff)
(generic writer).
neural-network-activation-funcs
(generic reader).
(setf neural-network-activation-funcs)
(generic writer).
neural-network-biases
(generic reader).
(setf neural-network-biases)
(generic writer).
neural-network-weights
(generic reader).
(setf neural-network-weights)
(generic writer).
nrandom-generator
(function).
optimizer
(class).
optimizer-biases
(generic reader).
(setf optimizer-biases)
(generic writer).
optimizer-initial-value
(generic reader).
(setf optimizer-initial-value)
(generic writer).
optimizer-weights
(generic reader).
(setf optimizer-weights)
(generic writer).
sasum
(function).
standard-random
(function).
σ
(function).
Definitions are sorted by export status, category, package, and then by lexicographic order.
Regularization parameter @c(λ/N), where @c(N) is the number of objects in the training set and @c(λ) should typically be in the range 1-10. If you are not sure, start with zero (which is the default).
Learning speed for gradient descent algorithms. Larger values result in faster learning, but values that are too large make training unstable. The default value is good for SGD, SGD with momentum and NAG optimizers. For Adagrad and RMSprop try 0.001f0.
Number of samples to be used for one update of network parameters.
Hyperparameter for SGD optimizers which use momentum. Zero means just usual SGD. RMSprop also uses this parameter in accumulation of squared partial derivatives of network parameters. Good values are 0.8-0.9.
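Since all four hyperparameters are special variables, they can be rebound dynamically around a training run. A minimal sketch (the body is elided; only the variable names come from this manual):

```lisp
;; Rebind the training hyperparameters only for the duration of this
;; LET body; their global values are left untouched.
(let ((neural-classifier:*learn-rate*     0.001f0) ; e.g. for Adagrad/RMSprop
      (neural-classifier:*minibatch-size* 20)
      (neural-classifier:*momentum-coeff* 0.9f0)
      (neural-classifier:*decay-rate*     0.0f0))  ; no regularization
  ;; ... create the network and call train-epoch here ...
  )
```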
Calculate output from the network @c(neural-network) for the object @c(object). The input transformation function (specified by @c(:input-trans) when creating a network) is applied to @c(object), and the output transformation function (specified by @c(:output-trans)) is applied to the Nx1 output matrix from the network.
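A hypothetical call, assuming @c(*network*) and @c(image) are defined by your own code:

```lisp
;; CALCULATE applies :input-trans to the object, feeds the result
;; through the network, and applies :output-trans to the Nx1 output.
;; *NETWORK* and IMAGE are assumed to come from user code.
(let ((label (neural-classifier:calculate *network* image)))
  (format t "Predicted label: ~a~%" label))
```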
Return the index of the first element with the maximal absolute value by calling the isamax() function from BLAS. Works only for row or column matrices.
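For illustration, a sketch using MAGICL to build a column (note the index convention is an assumption: BLAS's isamax is one-based, but Lisp wrappers commonly convert to zero-based):

```lisp
;; Build a 3x1 single-float column and find the index of the element
;; with the largest absolute value (-5.0). Whether the wrapper returns
;; a zero-based or one-based index is an assumption not stated in this
;; manual.
(neural-classifier:idx-abs-max
 (magicl:from-list '(1.0 -5.0 3.0) '(3 1) :type 'single-float))
```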
Create a new neural network.
@begin(list)
@item(@c(layout) is a list of positive integers which describes the
amount of neurons in each layer (starting from input layer).)
@item(@c(activation-funcs) is a list whose elements are
objects of type @c(activation). The length of this list must be
equal to the length of @c(layout) minus one, because the input
layer does not have an activation function. The last element
must be of type @c(output-layer-activation) and all elements
but the last must be of type @c(hidden-layer-activation).)
@item(@c(input-trans) is a function which is applied to an object
passed to @c(calculate) to transform it into an input column
(that is a matrix with the type @c(magicl:matrix/single-float)
and the shape @c(Nx1), where @c(N) is the first number in the
@c(layout)). For example, if we are recognizing digits from the
MNIST set, this function can take the index of an image in the
set and return a @c(784x1) matrix.)
@item(@c(output-trans) is a function which is applied to the output of
@c(calculate) function (that is a matrix with the type
@c(magicl:matrix/single-float) and the shape @c(Mx1), where @c(M) is the
last number in the @c(layout)) to return some object with
user-defined meaning (called a label). Again, if we are
recognizing digits, this function transforms a @c(10x1) matrix
into a number from 0 to 9.)
@item(@c(input-trans%) is just like @c(input-trans), but is used while
training. It can include additional transformations to extend
your training set (e.g. it can add some noise to input data,
rotate an input picture by a small random angle, etc.).)
@item(@c(label-trans) is a function which is applied to a label to get
a column (that is a matrix with the type
@c(magicl:matrix/single-float) and the shape @c(Mx1), where
@c(M) is the last number in the @c(layout)) which is the optimal
output from the network for this object. With digit
recognition, this function may take a digit @c(n) and return a
@c(10x1) matrix of all zeros except for the @c(n)-th element,
which would be @c(1f0).)
@end(list)
The default value for all transformation functions is @c(identity).
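Putting the pieces together, a digit classifier in the MNIST style might be built like this. This is a sketch: @c(get-image) and @c(digit->column) are hypothetical helpers you would write for your dataset, and the choice of @c(sigmoid) as a hidden-layer activation and @c(softmax) as an output-layer activation is an assumption about the class hierarchy, which this manual does not spell out:

```lisp
;; Sketch of a 784-30-10 digit classifier. GET-IMAGE and DIGIT->COLUMN
;; are hypothetical user-written helpers, not part of the library.
(defvar *network*
  (neural-classifier:make-neural-network
   '(784 30 10)                         ; layout: input, hidden, output
   :activation-funcs
   (list (make-instance 'neural-classifier:sigmoid)  ; hidden layer (assumed)
         (make-instance 'neural-classifier:softmax)) ; output layer (assumed)
   :input-trans  #'get-image            ; image index -> 784x1 column
   :output-trans #'neural-classifier:idx-abs-max ; 10x1 column -> digit 0-9
   :label-trans  #'digit->column))      ; digit -> one-hot 10x1 column
```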
Make optimizer of type @c(type) for a network @c(network).
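Based on this signature, a hypothetical call (passing the optimizer type as a class-name symbol is an assumption):

```lisp
;; TYPE is assumed to be the optimizer class name; *NETWORK* is an
;; already-constructed network.
(defvar *optimizer*
  (neural-classifier:make-optimizer
   'neural-classifier:momentum-optimizer *network*))
```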
Calculate the accuracy of @c(neural-network) (the ratio of correctly guessed samples to all samples) using testing data from the generator @c(generator). Each item returned by @c(generator) must be a cons pair in the form @c((data-object . label)), as with the @c(train-epoch) function. @c(test) is a function used to compare the expected label with the label returned by the network.
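A hypothetical call, assuming @c(test-generator) comes from user code and that @c(test) is accepted as a keyword argument:

```lisp
;; Accuracy on held-out data. TEST-GENERATOR is assumed to yield
;; (data-object . label) conses; EQL is a plausible :test for integer
;; digit labels. The :test keyword convention is an assumption.
(neural-classifier:rate *network* test-generator :test #'eql)
```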
Perform training of @c(neural-network) on every object returned
by the generator @c(generator). Each item returned by @c(generator)
must be in the form @c((data-object . label)) cons
pair. The @c(input-trans%) and @c(label-trans) functions passed to
@c(make-neural-network) are applied to @c(car) and @c(cdr) of each
pair respectively.
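A hypothetical training loop built on the functions above. @c(make-training-generator) and @c(make-test-generator) are user-written helpers (e.g. built with snakes, which this system depends on); how an optimizer created with @c(make-optimizer) is supplied to @c(train-epoch) is not stated in this manual, so it is omitted here:

```lisp
;; Train for ten epochs, reporting test accuracy after each.
;; MAKE-TRAINING-GENERATOR and MAKE-TEST-GENERATOR are hypothetical
;; helpers yielding (data-object . label) conses.
(loop repeat 10 do
  (neural-classifier:train-epoch *network* (make-training-generator))
  (format t "Accuracy: ~f~%"
          (neural-classifier:rate *network* (make-test-generator))))
```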
neural-network-input-trans and (setf neural-network-input-trans), methods specialized on neural-network: Function which translates an input object to a vector.
neural-network-input-trans% and (setf neural-network-input-trans%), methods specialized on neural-network: Function which translates an input object to a vector (used for training).
neural-network-label-trans and (setf neural-network-label-trans), methods specialized on neural-network: Function which translates a label to a vector.
neural-network-layout, reader method specialized on neural-network: Number of neurons in each layer of the network.
neural-network-output-trans and (setf neural-network-output-trans), methods specialized on neural-network: Function which translates an output vector to a label.
initialize-instance, method specialized on neural-network, with lambda list tail &rest initargs.
initialize-instance, method specialized on memoizing-optimizer, with lambda list tail &rest initargs &key &allow-other-keys.
Generic class for activation functions. Not to be instantiated.
Adagrad optimizer
Initarg | Value |
---|---|
:initial-value | 1.0e-8 |
Generic class for activation functions associated with hidden layers. Not to be instantiated.
Identity activation function (returns its input unchanged).
Leaky ReLU activation function. It returns its
argument when it is greater than zero or the argument multiplied by
@c(coeff) otherwise.
Slot coeff: initform 0.0, initarg :coeff. This slot is read-only.
SGD optimizer with momentum
Slot coeff: type single-float, initform neural-classifier:*momentum-coeff*, initarg :coeff.
Nesterov accelerated SGD, improvement of SGD with momentum
Class for neural networks
Direct methods: initialize-instance; neural-network-activation-funcs and (setf neural-network-activation-funcs); neural-network-biases and (setf neural-network-biases); neural-network-input-trans and (setf neural-network-input-trans); neural-network-input-trans% and (setf neural-network-input-trans%); neural-network-label-trans and (setf neural-network-label-trans); neural-network-layout; neural-network-output-trans and (setf neural-network-output-trans); neural-network-weights and (setf neural-network-weights).
Slot layout: type list, initform (error "specify number of neurons in each layer"), initarg :layout. This slot is read-only. Number of neurons in each layer of the network.
Slot activation-funcs: type list, initarg :activation-funcs. List of activation functions.
Slot weights: type list. Weight matrices for each layer.
Slot biases: type list. Bias vectors for each layer.
Slot input-trans: type function, initform (function identity), initarg :input-trans. Function which translates an input object to a vector.
Slot output-trans: type function, initform (function identity), initarg :output-trans. Function which translates an output vector to a label.
Slot input-trans%: type function, initform (function identity), initarg :input-trans%. Function which translates an input object to a vector (used for training).
Slot label-trans: type function, initform (function identity), initarg :label-trans. Function which translates a label to a vector.
Generic class for activation functions associated with an output layer. Not to be instantiated.
RMSprop optimizer
Initarg | Value |
---|---|
:initial-value | 1.0e-8 |
Slot coeff: type single-float, initform neural-classifier:*momentum-coeff*, initarg :coeff.
The simplest SGD optimizer
Sigmoid activation function.
Softmax activation function.
Hyperbolic tangent activation function.
Calculate the partial derivative of the cost function with respect to z for all layers.
Calculate gradient of the cost function
Calculate gradient of the cost function based on multiple input samples
Calculate argument and value of activation function for all layers
Return a function which generates random values from a distribution N(μ, σ).
Return a random value sampled from a distribution N(0, 1).
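These two helpers are internal, but their use can be sketched (the double-colon access and the μ=0, σ=1 defaults for @c(nrandom-generator) are assumptions):

```lisp
;; One sample from N(0, 1).
(neural-classifier::standard-random)

;; A closure producing N(μ, σ) samples; called here with no arguments,
;; assuming defaults of μ=0 and σ=1.
(let ((gen (neural-classifier::nrandom-generator)))
  (funcall gen))
```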
Sigmoid activation function.
Apply activation function ACTIVATION to a
VECTOR. VECTOR is an output vector from a layer of a neural network.
activate, method specialized on leaky-relu.
activate', generic function: Apply the derivative of activation function ACTIVATION to a VECTOR. VECTOR is an output vector from a layer of a neural network.
activate', method specialized on leaky-relu.
leaky-relu-coeff, reader method specialized on leaky-relu: automatically generated reader method.
learn, generic function: Update network parameters using SAMPLES for training. Methods specialize the first argument on rmsprop-optimizer, adagrad-optimizer, nesterov-optimizer, momentum-optimizer and sgd-optimizer, each with lambda list tail (neural-network samples).
momentum-coeff, reader methods specialized on rmsprop-optimizer and momentum-optimizer: automatically generated reader methods.
(setf momentum-coeff), writer methods specialized on rmsprop-optimizer and momentum-optimizer: automatically generated writer methods.
neural-network-activation-funcs and (setf neural-network-activation-funcs), methods specialized on neural-network: List of activation functions.
neural-network-biases and (setf neural-network-biases), methods specialized on neural-network: Bias vectors for each layer.
neural-network-weights and (setf neural-network-weights), methods specialized on neural-network: Weight matrices for each layer.
optimizer-biases and (setf optimizer-biases), methods specialized on memoizing-optimizer: automatically generated reader and writer methods.
optimizer-initial-value and (setf optimizer-initial-value), methods specialized on memoizing-optimizer: automatically generated reader and writer methods.
optimizer-weights and (setf optimizer-weights), methods specialized on memoizing-optimizer: automatically generated reader and writer methods.
Optimizer which memoizes some old state related to weights and biases. Not to be instantiated.
Generic optimizer class. Not to be instantiated.