This is the neural-classifier Reference Manual, version 0.2, generated automatically by Declt version 4.0 beta 2 "William Riker" on Sun Dec 08 19:03:29 2024 GMT+0.
neural-classifier/neural-classifier.asd
neural-classifier/package.lisp
neural-classifier/magicl-blas.lisp
neural-classifier/definitions.lisp
neural-classifier/utility.lisp
neural-classifier/activation.lisp
neural-classifier/optimizers.lisp
neural-classifier/neural-network.lisp
The main system appears first, followed by any subsystem dependency.
neural-classifier
Classification of samples based on neural networks.
Vasily Postnicov
2-clause BSD
0.2
alexandria
(system).
serapeum
(system).
magicl/ext-blas
(system).
magicl/ext-lapack
(system).
snakes
(system).
package.lisp
(file).
magicl-blas.lisp
(file).
definitions.lisp
(file).
utility.lisp
(file).
activation.lisp
(file).
optimizers.lisp
(file).
neural-network.lisp
(file).
Files are sorted by type and then listed depth-first from the systems' component trees.
neural-classifier/neural-classifier.asd
neural-classifier/package.lisp
neural-classifier/magicl-blas.lisp
neural-classifier/definitions.lisp
neural-classifier/utility.lisp
neural-classifier/activation.lisp
neural-classifier/optimizers.lisp
neural-classifier/neural-network.lisp
neural-classifier/neural-classifier.asd
neural-classifier
(system).
neural-classifier/magicl-blas.lisp
package.lisp
(file).
neural-classifier
(system).
neural-classifier/definitions.lisp
magicl-blas.lisp
(file).
neural-classifier
(system).
neural-network
(class).
neural-network-input-trans
(reader method).
(setf neural-network-input-trans)
(writer method).
neural-network-input-trans%
(reader method).
(setf neural-network-input-trans%)
(writer method).
neural-network-label-trans
(reader method).
(setf neural-network-label-trans)
(writer method).
neural-network-layout
(reader method).
neural-network-output-trans
(reader method).
(setf neural-network-output-trans)
(writer method).
declare-optimizations
(function).
neural-network-activation-funcs
(reader method).
(setf neural-network-activation-funcs)
(writer method).
neural-network-biases
(reader method).
(setf neural-network-biases)
(writer method).
neural-network-weights
(reader method).
(setf neural-network-weights)
(writer method).
neural-classifier/utility.lisp
definitions.lisp
(file).
neural-classifier
(system).
idx-abs-max
(function).
nrandom-generator
(function).
sasum
(function).
standard-random
(function).
neural-classifier/activation.lisp
utility.lisp
(file).
neural-classifier
(system).
%identity
(class).
%tanh
(class).
activation
(class).
hidden-layer-activation
(class).
leaky-relu
(class).
output-layer-activation
(class).
sigmoid
(class).
softmax
(class).
activate
(generic function).
activate'
(generic function).
leaky-relu-coeff
(reader method).
σ
(function).
neural-classifier/optimizers.lisp
activation.lisp
(file).
neural-classifier
(system).
adagrad-optimizer
(class).
adam-optimizer
(class).
initialize-instance
(method).
initialize-instance
(method).
make-load-form
(method).
momentum-optimizer
(class).
nesterov-optimizer
(class).
print-object
(method).
rmsprop-optimizer
(class).
sgd-optimizer
(class).
%constructor=
(method).
constructor-values/generic
(method).
copy-memo
(function).
learn
(generic function).
make-memo
(function).
memo
(function).
memo
(structure).
memo-biases
(reader).
memo-weights
(reader).
momentum-memo-optimizer
(class).
optimizer
(class).
optimizer-corrected-momentum-coeff
(reader method).
(setf optimizer-corrected-momentum-coeff)
(writer method).
optimizer-corrected-rate-coeff
(reader method).
(setf optimizer-corrected-rate-coeff)
(writer method).
optimizer-decay-rate
(reader method).
optimizer-learning-rate
(reader method).
optimizer-minibatch-size
(reader method).
optimizer-momentum-coeff
(reader method).
optimizer-momentum-memo
(reader method).
(setf optimizer-momentum-memo)
(writer method).
optimizer-rate-coeff
(reader method).
optimizer-rate-memo
(reader method).
(setf optimizer-rate-memo)
(writer method).
rate-memo-optimizer
(class).
read-only-struct-slot-names
(method).
neural-classifier/neural-network.lisp
optimizers.lisp
(file).
neural-classifier
(system).
calculate
(function).
initialize-instance
(method).
make-neural-network
(function).
rate
(function).
train-epoch
(function).
calculate-delta
(function).
calculate-gradient
(function).
calculate-gradient-minibatch
(function).
calculate-z-and-out
(function).
Packages are listed by definition order.
neural-classifier
Use list: alexandria, common-lisp.
%identity
(class).
%tanh
(class).
activation
(class).
adagrad-optimizer
(class).
adam-optimizer
(class).
calculate
(function).
hidden-layer-activation
(class).
idx-abs-max
(function).
leaky-relu
(class).
make-neural-network
(function).
momentum-optimizer
(class).
nesterov-optimizer
(class).
neural-network
(class).
neural-network-input-trans
(generic reader).
(setf neural-network-input-trans)
(generic writer).
neural-network-input-trans%
(generic reader).
(setf neural-network-input-trans%)
(generic writer).
neural-network-label-trans
(generic reader).
(setf neural-network-label-trans)
(generic writer).
neural-network-layout
(generic reader).
neural-network-output-trans
(generic reader).
(setf neural-network-output-trans)
(generic writer).
output-layer-activation
(class).
rate
(function).
rmsprop-optimizer
(class).
sgd-optimizer
(class).
sigmoid
(class).
softmax
(class).
train-epoch
(function).
activate
(generic function).
activate'
(generic function).
calculate-delta
(function).
calculate-gradient
(function).
calculate-gradient-minibatch
(function).
calculate-z-and-out
(function).
copy-memo
(function).
declare-optimizations
(function).
leaky-relu-coeff
(generic reader).
learn
(generic function).
make-memo
(function).
memo
(function).
memo
(structure).
memo-biases
(reader).
memo-weights
(reader).
momentum-memo-optimizer
(class).
neural-network-activation-funcs
(generic reader).
(setf neural-network-activation-funcs)
(generic writer).
neural-network-biases
(generic reader).
(setf neural-network-biases)
(generic writer).
neural-network-weights
(generic reader).
(setf neural-network-weights)
(generic writer).
nrandom-generator
(function).
optimizer
(class).
optimizer-corrected-momentum-coeff
(generic reader).
(setf optimizer-corrected-momentum-coeff)
(generic writer).
optimizer-corrected-rate-coeff
(generic reader).
(setf optimizer-corrected-rate-coeff)
(generic writer).
optimizer-decay-rate
(generic reader).
optimizer-learning-rate
(generic reader).
optimizer-minibatch-size
(generic reader).
optimizer-momentum-coeff
(generic reader).
optimizer-momentum-memo
(generic reader).
(setf optimizer-momentum-memo)
(generic writer).
optimizer-rate-coeff
(generic reader).
optimizer-rate-memo
(generic reader).
(setf optimizer-rate-memo)
(generic writer).
rate-memo-optimizer
(class).
sasum
(function).
standard-random
(function).
σ
(function).
Definitions are sorted by export status, category, package, and then by lexicographic order.
Calculate output from the network @c(neural-network) for the object @c(object). The input transformation function (specified by @c(:input-trans) when creating a network) is applied to @c(object), and the output transformation function (specified by @c(:output-trans)) is applied to the Nx1 output matrix from the network.
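A minimal usage sketch (assuming @c(*digits-network*) is a network whose @c(:output-trans) maps the 10x1 output column to a digit, and @c(image) is an object accepted by its @c(:input-trans); both names are hypothetical):
(calculate *digits-network* image) ; => e.g. 7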
Returns the index of the first element with the maximal absolute value by calling the isamax() function from BLAS. Works only for rows or columns.
Create a new neural network.
@begin(list)
@item(@c(layout) is a list of positive integers which describes a
number of neurons in each layer (starting from input layer).)
@item(@c(activation-funcs) is a list, all the elements of which are
objects of type @c(activation). The length of this list must be
equal to the length of @c(layout) minus one, because the input
layer does not have an activation function. The last element
must be of type @c(output-layer-activation) and all elements
but the last must be of type @c(hidden-layer-activation).)
@item(@c(input-trans) is a function which is applied to an object
passed to @c(calculate) to transform it into an input column
(that is a matrix with the type @c(magicl:matrix/single-float)
and the shape @c(Nx1), where @c(N) is the first number in the
@c(layout)). For example, if we are recognizing digits from the
MNIST set, this function can take the number of an image in the
set and return a @c(784x1) matrix.)
@item(@c(output-trans) is a function which is applied to the output of
@c(calculate) function (that is a matrix with the type
@c(magicl:matrix/single-float) and the shape Mx1, where M is the
last number in the @c(layout)) to return some object with
user-defined meaning (called a label). Again, if we are
recognizing digits, this function transforms a @c(10x1) matrix
into a number from 0 to 9.)
@item(@c(input-trans%) is just like @c(input-trans), but is used while
training. It can include additional transformations to extend
your training set (e.g. it can add some noise to input data,
rotate an input picture by a small random angle, etc.).)
@item(@c(label-trans) is a function which is applied to a label to get
a column (that is a matrix with the type
@c(magicl:matrix/single-float) and the shape @c(Mx1), where
@c(M) is the last number in the @c(layout)) which is the optimal
output from the network for this object. With digits
recognition, this function may take a digit @c(n) and return a
@c(10x1) matrix of all zeros, the @c(n)-th element of which
would be @c(1f0).)
@end(list)
The default value for all transformation functions is @c(identity).
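A hedged construction sketch for an MNIST-style digits network. It assumes @c(make-neural-network) takes the layout as its first argument and the remaining options as keyword arguments (check the actual lambda list); @c(image->column), @c(column->digit) and @c(digit->column) are hypothetical user functions:
(make-neural-network '(784 64 10)
                     :activation-funcs (list (make-instance 'leaky-relu :coeff 0.01)
                                             (make-instance 'softmax))
                     :input-trans  #'image->column   ; object -> 784x1 column
                     :output-trans #'column->digit   ; 10x1 column -> 0..9
                     :label-trans  #'digit->column)  ; 0..9 -> 10x1 column
The activation function list has one element per non-input layer: a @c(hidden-layer-activation) (here @c(leaky-relu)) for the hidden layer and an @c(output-layer-activation) (here @c(softmax)) for the output layer.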
Calculate the accuracy of the @c(neural-network) (the ratio of correctly guessed samples to all samples) using testing data from the generator @c(generator). Each item returned by @c(generator) must be a cons pair in the form @c((data-object . label)), as with the @c(train-epoch) function. @c(test) is a function used to compare the expected label with the label returned by the network.
Perform training of @c(neural-network) on every object returned
by the generator @c(generator). Each item returned by @c(generator)
must be a cons pair in the form @c((data-object . label)). The
@c(input-trans%) and @c(label-trans) functions passed to
@c(make-neural-network) are applied to the @c(car) and @c(cdr) of each
pair respectively.
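A hedged training-loop sketch. It assumes @c(snakes:list->generator) is used to turn a list of @c((data-object . label)) pairs into a generator, and that @c(train-epoch) and @c(rate) are called with the network and a generator as shown (check the actual lambda lists; selecting an optimizer is not shown here); @c(*network*), @c(*training-pairs*) and @c(*test-pairs*) are hypothetical user values:
(loop repeat 10 do
  (train-epoch *network* (snakes:list->generator *training-pairs*))
  (format t "Accuracy: ~f~%"
          (rate *network* (snakes:list->generator *test-pairs*))))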
Reader method: neural-network-input-trans, specialized on neural-network.
Writer method: (setf neural-network-input-trans), specialized on neural-network.
Function which translates an input object to a vector
Reader method: neural-network-input-trans%, specialized on neural-network.
Writer method: (setf neural-network-input-trans%), specialized on neural-network.
Function which translates an input object to a vector (used for training)
Reader method: neural-network-label-trans, specialized on neural-network.
Writer method: (setf neural-network-label-trans), specialized on neural-network.
Function which translates a label to a vector
Reader method: neural-network-layout, specialized on neural-network.
Number of neurons in each layer of the network
Reader method: neural-network-output-trans, specialized on neural-network.
Writer method: (setf neural-network-output-trans), specialized on neural-network.
Function which translates an output vector to a label.
Method: initialize-instance, specialized on neural-network (&rest initargs).
Method: initialize-instance, specialized on momentum-memo-optimizer (&rest initargs &key &allow-other-keys).
Method: initialize-instance, specialized on rate-memo-optimizer (&rest initargs &key &allow-other-keys).
Identity activation function (just returns its input).
Hyperbolic tangent activation function. Has output
in the range \([-1, 1]\), so it’s a rescaled sigmoid. Neural
networks which use tanh in place of sigmoid are believed to be more
trainable.
Generic class for activation functions. Not to be instantiated.
Adagrad optimizer: an optimizer with decaying
learning rate. A parameter \(w\) of a neural network is updated as
follows:
\(s_{n+1} = s_n + (\nabla f(w_n))^2\)
\(w_{n+1} = w_n - \frac{\eta}{\sqrt{s_{n+1} + \epsilon}} \nabla f(w_n)\)
Initarg | Value |
---|---|
:η | 0.01 |
ADAM optimizer: an optimizer with adaptive learning
rate and momentum. A parameter \(w\) of a neural network is updated
as follows:
\(m_{n+1} = \beta_1 m_n + (1 - \beta_1) \nabla f(w_n)\)
\(s_{n+1} = \beta_2 s_n + (1 - \beta_2) (\nabla f(w_n))^2\)
\(\hat{m} = m_{n+1} / (1 - \beta_1^n) \)
\(\hat{s} = s_{n+1} / (1 - \beta_2^n) \)
\(w_{n+1} = w_n - \frac{\eta}{\sqrt{\hat{s} + \epsilon}} \hat{m}\)
Initarg | Value |
---|---|
:η | 0.001 |
:β1 | 0.9 |
:β2 | 0.999 |
Corrected \(\beta_1\) parameter
single-float
1.0
Corrected \(\beta_2\) parameter
single-float
1.0
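A hedged sketch of creating optimizer objects with the initargs listed above (assuming optimizers are instantiated with @c(make-instance) and handed to the training code; @c(:minibatch-size) comes from the base @c(optimizer) class):
(make-instance 'adam-optimizer :η 0.001 :β1 0.9 :β2 0.999 :minibatch-size 40)
(make-instance 'sgd-optimizer :η 0.01)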
Generic class for activation functions associated with hidden layers. Not to be instantiated.
Leaky ReLU activation function. It returns its
argument when it is greater than zero, and the argument multiplied by
@c(coeff) otherwise. Usually this is the activation function of choice
for hidden layers.
Coefficient of leaky ReLU. A value of 0 means just an ordinary ReLU.
single-float
0.0
:coeff
This slot is read-only.
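A minimal scalar sketch of the leaky ReLU rule described above (for illustration only; the library applies activations elementwise to MAGICL matrices):
(defun leaky-relu-scalar (x &optional (coeff 0.0))
  (if (> x 0) x (* coeff x)))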
Stochastic gradient descent optimizer with
momentum. A parameter \(w\) of a neural network is updated with
respect to an accumulated momentum \(m\):
\(m_{n+1} = \beta_1 m_{n} + \eta \nabla f(w_n)\)
\(w_{n+1} = w_n - m_{n+1}\)
Initarg | Value |
---|---|
:η | 0.01 |
:β1 | 0.9 |
Nesterov optimizer: a stochastic gradient descent
with momentum and 'look-ahead'. A parameter \(w\) of a neural
network is updated with respect to an accumulated momentum \(m\):
\(m_{n+1} = \beta_1 m_{n} + \eta \nabla f(w_n - \beta_1 m_n)\)
\(w_{n+1} = w_n - m_{n+1}\)
Initarg | Value |
---|---|
:η | 0.01 |
:β1 | 0.9 |
Class for neural networks
Direct methods: initialize-instance, (setf neural-network-activation-funcs), neural-network-activation-funcs, (setf neural-network-biases), neural-network-biases, (setf neural-network-input-trans), neural-network-input-trans, (setf neural-network-input-trans%), neural-network-input-trans%, (setf neural-network-label-trans), neural-network-label-trans, neural-network-layout, (setf neural-network-output-trans), neural-network-output-trans, (setf neural-network-weights), neural-network-weights.
Number of neurons in each layer of the network
list
(error "specify number of neurons in each layer")
:layout
This slot is read-only.
List of activation functions.
list
:activation-funcs
Weight matrices for each layer
list
Bias vectors for each layer
list
Function which translates an input object to a vector
function
(function identity)
:input-trans
Function which translates an output vector to a label.
function
(function identity)
:output-trans
Function which translates an input object to a vector (used for training)
function
(function identity)
:input-trans%
Function which translates a label to a vector
function
(function identity)
:label-trans
Generic class for activation functions associated with an output layer. Not to be instantiated.
RMSprop optimizer: an optimizer with adaptive
learning rate. A parameter \(w\) of a neural network is updated as
follows:
\(s_{n+1} = \beta_2 s_n + (1 - \beta_2) (\nabla f(w_n))^2\)
\(w_{n+1} = w_n - \frac{\eta}{\sqrt{s_{n+1} + \epsilon}} \nabla f(w_n)\)
Initarg | Value |
---|---|
:η | 0.001 |
:β2 | 0.99 |
A basic stochastic gradient optimizer. A parameter
\(w\) of a neural network is updated as \(w_{n+1} = w_n - \eta
\nabla f(w_n)\).
Initarg | Value |
---|---|
:η | 0.01 |
Sigmoid activation function:
\(f(x) = \frac{1}{1 + \exp(-x)}\)
Has output in the range \([0, 1]\), so it’s most suited for describing the 'intensity' of some property.
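A minimal scalar sketch of the sigmoid formula (illustration only):
(defun sigmoid-scalar (x)
  (/ 1.0 (+ 1.0 (exp (- x)))))
;; (sigmoid-scalar 0.0) => 0.5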
Softmax activation function: \(f(x_i) =
\frac{\exp(x_i)}{\sum_i \exp(x_i)}\).
Its output range is \([0, 1]\) and the sum of all elements in the
output vector is 1.
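A minimal sketch of softmax on a plain list (illustration only; the library's implementation operates on MAGICL matrices):
(defun softmax-list (xs)
  (let* ((exps (mapcar #'exp xs))
         (sum  (reduce #'+ exps)))
    (mapcar (lambda (e) (/ e sum)) exps)))
;; (softmax-list '(1.0 2.0 3.0)) => approximately (0.090 0.245 0.665)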
Calculate the partial derivative of the cost function with respect to z for all layers
Calculate the gradient of the cost function
Calculate the gradient of the cost function based on multiple input samples
Calculate the argument and value of the activation function for all layers
Copy an instance of MEMO, optionally overriding some or all of its slots.
Return a function which generates random values from a distribution N(μ, σ).
Return a random value sampled from a distribution N(0, 1).
Sigmoid activation function
Apply activation function ACTIVATION to a
VECTOR. VECTOR is an output vector from a layer of a neural network.
Method: activate, specialized on leaky-relu.
Apply derivative of activation function ACTIVATION to a VECTOR. VECTOR is an output vector from a layer of a neural network.
Method: activate', specialized on leaky-relu.
Reader method: leaky-relu-coeff, specialized on leaky-relu.
Coefficient of leaky ReLU. A value of 0 means just an ordinary ReLU.
Update network parameters using SAMPLES for training.
Methods of learn are specialized on adam-optimizer, rmsprop-optimizer, adagrad-optimizer, nesterov-optimizer, momentum-optimizer and sgd-optimizer; each method takes the optimizer, the neural network and the training samples.
Reader method: neural-network-activation-funcs, specialized on neural-network.
Writer method: (setf neural-network-activation-funcs), specialized on neural-network.
List of activation functions.
Reader method: neural-network-biases, specialized on neural-network.
Writer method: (setf neural-network-biases), specialized on neural-network.
Bias vectors for each layer.
Reader method: neural-network-weights, specialized on neural-network.
Writer method: (setf neural-network-weights), specialized on neural-network.
Weight matrices for each layer.
Reader method: optimizer-corrected-momentum-coeff, specialized on adam-optimizer.
Writer method: (setf optimizer-corrected-momentum-coeff), specialized on adam-optimizer.
Corrected \(\beta_1\) parameter.
Reader method: optimizer-corrected-rate-coeff, specialized on adam-optimizer.
Writer method: (setf optimizer-corrected-rate-coeff), specialized on adam-optimizer.
Corrected \(\beta_2\) parameter.
Reader method: optimizer-momentum-coeff, specialized on momentum-memo-optimizer.
Coefficient responsible for momentum decay.
Reader method: optimizer-momentum-memo, specialized on momentum-memo-optimizer.
Automatically generated reader method.
Writer method: (setf optimizer-momentum-memo), specialized on momentum-memo-optimizer.
Automatically generated writer method.
Reader method: optimizer-rate-coeff, specialized on rate-memo-optimizer.
Coefficient responsible for the increase in the learning rate.
Reader method: optimizer-rate-memo, specialized on rate-memo-optimizer.
Automatically generated reader method.
Writer method: (setf optimizer-rate-memo), specialized on rate-memo-optimizer.
Automatically generated writer method.
Structure memo. Direct superclasses: %read-only-struct.
Slot weights: type list; initform (alexandria:required-argument (quote neural-classifier::weights)); this slot is read-only.
Slot biases: type list; initform (alexandria:required-argument (quote neural-classifier::biases)); this slot is read-only.
Optimizer based on momentum. Not to be instantiated.
Generic optimizer class. Not to be instantiated.
Parameter which controls learning
speed of the neural network. Must be a small positive value.
single-float
:η
This slot is read-only.
Number of samples in a
minibatch. An integer in the range 10-100 is good for this
parameter.
alexandria:positive-fixnum
40
:minibatch-size
This slot is read-only.
A parameter used for L²
regularization. 0.0 is no regularization. Good values are 1-10 divided
by the dataset size.
single-float
0.0
:decay-rate
This slot is read-only.
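For example, with a training set of 50000 samples, the suggested range corresponds to decay rates between 1/50000 = 2e-5 and 10/50000 = 2e-4.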
Optimizer based on adaptive learning rate. Not to be instantiated.