The neural-classifier Reference Manual

This is the neural-classifier Reference Manual, version 0.2, generated automatically by Declt version 4.0 beta 2 "William Riker" on Sun Dec 15 07:08:47 2024 GMT+0.

Table of Contents


1 Introduction


2 Systems

The main system appears first, followed by any subsystem dependency.


2.1 neural-classifier

Classification of samples based on neural networks.

Author

Vasily Postnicov

License

2-clause BSD

Version

0.2

Dependencies
  • alexandria (system).
  • serapeum (system).
  • magicl/ext-blas (system).
  • magicl/ext-lapack (system).
  • snakes (system).
Source

neural-classifier.asd.

Child Components

3 Files

Files are sorted by type and then listed depth-first from the systems' component trees.


3.1 Lisp


3.1.1 neural-classifier/neural-classifier.asd

Source

neural-classifier.asd.

Parent Component

neural-classifier (system).

ASDF Systems

neural-classifier.


3.1.2 neural-classifier/package.lisp

Source

neural-classifier.asd.

Parent Component

neural-classifier (system).

Packages

neural-classifier.


3.1.3 neural-classifier/magicl-blas.lisp

Dependency

package.lisp (file).

Source

neural-classifier.asd.

Parent Component

neural-classifier (system).


3.1.4 neural-classifier/definitions.lisp

Dependency

magicl-blas.lisp (file).

Source

neural-classifier.asd.

Parent Component

neural-classifier (system).

Public Interface
Internals

3.1.5 neural-classifier/utility.lisp

Dependency

definitions.lisp (file).

Source

neural-classifier.asd.

Parent Component

neural-classifier (system).

Public Interface

idx-abs-max (function).

Internals

3.1.6 neural-classifier/activation.lisp

Dependency

utility.lisp (file).

Source

neural-classifier.asd.

Parent Component

neural-classifier (system).

Public Interface
Internals

3.1.7 neural-classifier/optimizers.lisp

Dependency

activation.lisp (file).

Source

neural-classifier.asd.

Parent Component

neural-classifier (system).

Public Interface
Internals

3.1.8 neural-classifier/neural-network.lisp

Dependency

optimizers.lisp (file).

Source

neural-classifier.asd.

Parent Component

neural-classifier (system).

Public Interface
Internals

4 Packages

Packages are listed by definition order.


4.1 neural-classifier

Source

package.lisp.

Use List
  • alexandria.
  • common-lisp.
Public Interface
Internals

5 Definitions

Definitions are sorted by export status, category, package, and then by lexicographic order.


5.1 Public Interface


5.1.1 Ordinary functions

Function: calculate (neural-network object)

Calculate the output of the network @c(neural-network) for the object @c(object). The input transformation function (specified by @c(:input-trans) when creating a network) is applied to @c(object), and the output transformation function (specified by @c(:output-trans)) is applied to the Nx1 output matrix of the network.

Package

neural-classifier.

Source

neural-network.lisp.
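As a hedged usage sketch (assuming @c(*net*) is a network created with @c(make-neural-network) and @c(some-object) is whatever its @c(:input-trans) function understands; both names are hypothetical):

```lisp
;; Hypothetical sketch: classify one object.  With MNIST-style
;; transformation functions, this would return a digit 0-9.
(neural-classifier:calculate *net* some-object)
```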

Function: idx-abs-max (matrix)

Returns the index of the first element with the maximal absolute value by calling the isamax() function from BLAS. Works only on rows or columns.

Package

neural-classifier.

Source

utility.lisp.
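A hedged example, using MAGICL's @c(from-list) constructor to build a 3x1 column; the element with the largest absolute value here is the -0.9 entry:

```lisp
;; Find the position of the entry with the largest absolute value in a
;; column.  Useful e.g. as part of an :output-trans function that decodes
;; a one-hot-style output column into a class index.
(neural-classifier:idx-abs-max
 (magicl:from-list '(0.1 -0.9 0.3) '(3 1) :type 'single-float))
```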

Function: make-neural-network (layout &key input-trans output-trans input-trans% label-trans activation-funcs)

Create a new neural network.
@begin(list)
@item(@c(layout) is a list of positive integers which describes the number of neurons in each layer (starting from the input layer).)
@item(@c(activation-funcs) is a list whose elements are objects of type @c(activation). The length of this list must be equal to the length of @c(layout) minus one, because the input layer does not have an activation function. The last element must be of type @c(output-layer-activation) and all elements but the last must be of type @c(hidden-layer-activation).)
@item(@c(input-trans) is a function which is applied to an object passed to @c(calculate) to transform it into an input column (that is, a matrix of type @c(magicl:matrix/single-float) with shape @c(Nx1), where @c(N) is the first number in the @c(layout)). For example, if we are recognizing digits from the MNIST set, this function can take the number of an image in the set and return a @c(784x1) matrix.)
@item(@c(output-trans) is a function which is applied to the output of the @c(calculate) function (that is, a matrix of type @c(magicl:matrix/single-float) with shape @c(Mx1), where @c(M) is the last number in the @c(layout)) to return some object with user-defined meaning (called a label). Again, if we are recognizing digits, this function transforms a @c(10x1) matrix to a number from 0 to 9.)
@item(@c(input-trans%) is just like @c(input-trans), but is used while training. It can include additional transformations to extend your training set (e.g. it can add some noise to the input data, rotate an input picture by a small random angle, etc.).)
@item(@c(label-trans) is a function which is applied to a label to get a column (that is, a matrix of type @c(magicl:matrix/single-float) with shape @c(Mx1), where @c(M) is the last number in the @c(layout)) which is the optimal output of the network for this object. With digit recognition, this function may take a digit @c(n) and return a @c(10x1) matrix of all zeros with the exception of the @c(n)-th element, which would be @c(1f0).)
@end(list)
The default value for all transformation functions is @c(identity).

Package

neural-classifier.

Source

neural-network.lisp.
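The description above can be sketched as follows; the MNIST-style helper functions are hypothetical and not part of neural-classifier:

```lisp
;; Hypothetical sketch: a 784-30-10 network for digit recognition.
;; IMAGE->COLUMN, DIGIT->COLUMN and COLUMN->DIGIT are assumed helpers.
(defparameter *net*
  (neural-classifier:make-neural-network
   '(784 30 10)
   :input-trans  #'image->column    ; object -> 784x1 column
   :label-trans  #'digit->column    ; digit 0-9 -> 10x1 one-hot column
   :output-trans #'column->digit))  ; 10x1 column -> digit 0-9
```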

Function: rate (neural-network generator &key test)

Calculate the accuracy of the @c(neural-network) (the ratio of correctly guessed samples to all samples) using testing data from the generator @c(generator). Each item returned by @c(generator) must be a cons pair of the form @c((data-object . label)), as with the @c(train-epoch) function. @c(test) is a function used to compare the expected label with the label returned by the network.

Package

neural-classifier.

Source

neural-network.lisp.
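A hedged usage sketch; @c(*net*) and @c(make-test-generator) are hypothetical names, the latter assumed to return a generator yielding @c((data-object . label)) conses:

```lisp
;; Estimate accuracy on a test set, comparing labels with EQL.
(neural-classifier:rate *net* (make-test-generator) :test #'eql)
```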

Function: train-epoch (neural-network generator &key optimizer)

Perform training of @c(neural-network) on every object returned by the generator @c(generator). Each item returned by @c(generator) must be a cons pair of the form @c((data-object . label)). The @c(input-trans%) and @c(label-trans) functions passed to @c(make-neural-network) are applied to the @c(car) and @c(cdr) of each pair respectively.

Package

neural-classifier.

Source

neural-network.lisp.
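A hedged training-loop sketch; @c(*net*), @c(make-training-generator) and @c(make-test-generator) are hypothetical helpers, the generators assumed to yield @c((data-object . label)) conses:

```lisp
;; Train for several epochs with the default optimizer, reporting
;; accuracy after each epoch.
(loop repeat 10 do
  (neural-classifier:train-epoch *net* (make-training-generator))
  (format t "Accuracy: ~f~%"
          (neural-classifier:rate *net* (make-test-generator))))
```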


5.1.2 Generic functions

Generic Reader: neural-network-input-trans (object)
Generic Writer: (setf neural-network-input-trans) (object)
Package

neural-classifier.

Methods
Reader Method: neural-network-input-trans ((neural-network neural-network))
Writer Method: (setf neural-network-input-trans) ((neural-network neural-network))

Function which translates an input object to a vector

Source

definitions.lisp.

Target Slot

input-trans.

Generic Reader: neural-network-input-trans% (object)
Generic Writer: (setf neural-network-input-trans%) (object)
Package

neural-classifier.

Methods
Reader Method: neural-network-input-trans% ((neural-network neural-network))
Writer Method: (setf neural-network-input-trans%) ((neural-network neural-network))

Function which translates an input object to a vector (used for training)

Source

definitions.lisp.

Target Slot

input-trans%.

Generic Reader: neural-network-label-trans (object)
Generic Writer: (setf neural-network-label-trans) (object)
Package

neural-classifier.

Methods
Reader Method: neural-network-label-trans ((neural-network neural-network))
Writer Method: (setf neural-network-label-trans) ((neural-network neural-network))

Function which translates a label to a vector

Source

definitions.lisp.

Target Slot

label-trans.

Generic Reader: neural-network-layout (object)
Package

neural-classifier.

Methods
Reader Method: neural-network-layout ((neural-network neural-network))

Number of neurons in each layer of the network

Source

definitions.lisp.

Target Slot

layout.

Generic Reader: neural-network-output-trans (object)
Generic Writer: (setf neural-network-output-trans) (object)
Package

neural-classifier.

Methods
Reader Method: neural-network-output-trans ((neural-network neural-network))
Writer Method: (setf neural-network-output-trans) ((neural-network neural-network))

Function which translates an output vector to a label.

Source

definitions.lisp.

Target Slot

output-trans.


5.1.3 Standalone methods

Method: initialize-instance :after ((neural-network neural-network) &rest initargs)
Source

neural-network.lisp.

Method: initialize-instance :after ((optimizer momentum-memo-optimizer) &rest initargs &key &allow-other-keys)
Source

optimizers.lisp.

Method: initialize-instance :after ((optimizer rate-memo-optimizer) &rest initargs &key &allow-other-keys)
Source

optimizers.lisp.

Method: make-load-form ((self memo) &optional env)
Source

optimizers.lisp.

Method: print-object ((object memo) stream)
Source

optimizers.lisp.


5.1.4 Classes

Class: %identity

Identity activation function (just returns its input).

Package

neural-classifier.

Source

activation.lisp.

Direct superclasses

output-layer-activation.

Direct methods

activate.

Class: %tanh

Hyperbolic tangent activation function. Has output
in the range \([-1, 1]\), so it is a rescaled sigmoid. Neural networks which use tanh in place of the sigmoid are believed to be more trainable.

Package

neural-classifier.

Source

activation.lisp.

Direct superclasses
Direct methods
Class: activation

Generic class for activation functions. Not to be instantiated.

Package

neural-classifier.

Source

activation.lisp.

Direct subclasses
Class: adagrad-optimizer

Adagrad optimizer: an optimizer with decaying
learning rate. A parameter \(w\) of a neural network is updated as follows:

\(s_{n+1} = s_n + (\nabla f(w_n))^2\)

\(w_{n+1} = w_n - \frac{\eta}{\sqrt{s_{n+1} + \epsilon}} \nabla f(w_n)\)

Package

neural-classifier.

Source

optimizers.lisp.

Direct superclasses

rate-memo-optimizer.

Direct methods

learn.

Direct Default Initargs
Initarg  Value
:η  0.01
Class: adam-optimizer

ADAM optimizer: an optimizer with adaptive learning
rate and momentum. A parameter \(w\) of a neural network is updated as follows:

\(m_{n+1} = \beta_1 m_n + (1 - \beta_1) \nabla f(w_n)\)

\(s_{n+1} = \beta_2 s_n + (1 - \beta_2) (\nabla f(w_n))^2\)

\(\hat{m} = m_{n+1} / (1 - \beta_1^n) \)

\(\hat{s} = s_{n+1} / (1 - \beta_2^n) \)

\(w_{n+1} = w_n - \frac{\eta}{\sqrt{\hat{s} + \epsilon}} \hat{m}\)

Package

neural-classifier.

Source

optimizers.lisp.

Direct superclasses
Direct methods
Direct Default Initargs
Initarg  Value
:η  0.001
:β1  0.9
:β2  0.999
Direct slots
Slot: corrected-momentum-coeff

Corrected \(\beta_1\) parameter

Type

single-float

Initform

1.0

Readers

optimizer-corrected-momentum-coeff.

Writers

(setf optimizer-corrected-momentum-coeff).

Slot: corrected-rate-coeff

Corrected \(\beta_2\) parameter

Type

single-float

Initform

1.0

Readers

optimizer-corrected-rate-coeff.

Writers

(setf optimizer-corrected-rate-coeff).

Class: hidden-layer-activation

Generic class for activation functions associated with hidden layers. Not to be instantiated.

Package

neural-classifier.

Source

activation.lisp.

Direct superclasses

activation.

Direct subclasses
Class: leaky-relu

Leaky ReLU activation function. It returns its
argument when it is greater than zero, or the argument multiplied by @c(coeff) otherwise. Usually this is the activation function of choice for hidden layers.

Package

neural-classifier.

Source

activation.lisp.

Direct superclasses

hidden-layer-activation.

Direct methods
Direct slots
Slot: coeff

Coefficient of leaky ReLU. A value of 0 means just an ordinary ReLU.

Type

single-float

Initform

0.0

Initargs

:coeff

Readers

leaky-relu-coeff.

Writers

This slot is read-only.
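The formula described above, restated as a scalar Lisp sketch (the library itself applies the activation to MAGICL matrices via the @c(activate) generic function):

```lisp
;; Leaky ReLU on a single number: identity for positive inputs,
;; COEFF * x otherwise (COEFF = 0 gives ordinary ReLU).
(defun leaky-relu-sketch (x coeff)
  (if (> x 0.0) x (* coeff x)))
```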

Class: momentum-optimizer

Stochastic gradient descent optimizer with
momentum. A parameter \(w\) of a neural network is updated with respect to an accumulated momentum \(m\):

\(m_{n+1} = \beta_1 m_{n} + \eta \nabla f(w_n)\)

\(w_{n+1} = w_n - m_{n+1}\)

Package

neural-classifier.

Source

optimizers.lisp.

Direct superclasses

momentum-memo-optimizer.

Direct methods

learn.

Direct Default Initargs
Initarg  Value
:η  0.01
:β1  0.9
Class: nesterov-optimizer

Nesterov optimizer: stochastic gradient descent
with momentum and 'look-ahead'. A parameter \(w\) of a neural network is updated with respect to an accumulated momentum \(m\):

\(m_{n+1} = \beta_1 m_{n} + \eta \nabla f(w_n - \beta_1 m_n)\)

\(w_{n+1} = w_n - m_{n+1}\)

Package

neural-classifier.

Source

optimizers.lisp.

Direct superclasses

momentum-memo-optimizer.

Direct methods

learn.

Direct Default Initargs
Initarg  Value
:η  0.01
:β1  0.9
Class: neural-network

Class for neural networks

Package

neural-classifier.

Source

definitions.lisp.

Direct methods
Direct slots
Slot: layout

Number of neurons in each layer of the network

Type

list

Initform

(error "specify number of neurons in each layer")

Initargs

:layout

Readers

neural-network-layout.

Writers

This slot is read-only.

Slot: activation-funcs

List of activation functions.

Type

list

Initargs

:activation-funcs

Readers

neural-network-activation-funcs.

Writers

(setf neural-network-activation-funcs).

Slot: weights

Weight matrices for each layer

Type

list

Readers

neural-network-weights.

Writers

(setf neural-network-weights).

Slot: biases

Bias vectors for each layer

Type

list

Readers

neural-network-biases.

Writers

(setf neural-network-biases).

Slot: input-trans

Function which translates an input object to a vector

Type

function

Initform

(function identity)

Initargs

:input-trans

Readers

neural-network-input-trans.

Writers

(setf neural-network-input-trans).

Slot: output-trans

Function which translates an output vector to a label.

Type

function

Initform

(function identity)

Initargs

:output-trans

Readers

neural-network-output-trans.

Writers

(setf neural-network-output-trans).

Slot: input-trans%

Function which translates an input object to a vector (used for training)

Type

function

Initform

(function identity)

Initargs

:input-trans%

Readers

neural-network-input-trans%.

Writers

(setf neural-network-input-trans%).

Slot: label-trans

Function which translates a label to a vector

Type

function

Initform

(function identity)

Initargs

:label-trans

Readers

neural-network-label-trans.

Writers

(setf neural-network-label-trans).

Class: output-layer-activation

Generic class for activation functions associated with an output layer. Not to be instantiated.

Package

neural-classifier.

Source

activation.lisp.

Direct superclasses

activation.

Direct subclasses
Class: rmsprop-optimizer

RMSprop optimizer: an optimizer with adaptive
learning rate. A parameter \(w\) of a neural network is updated as follows:

\(s_{n+1} = \beta_2 s_n + (1 - \beta_2) (\nabla f(w_n))^2\)

\(w_{n+1} = w_n - \frac{\eta}{\sqrt{s_{n+1} + \epsilon}} \nabla f(w_n)\)

Package

neural-classifier.

Source

optimizers.lisp.

Direct superclasses

rate-memo-optimizer.

Direct methods

learn.

Direct Default Initargs
Initarg  Value
:η  0.001
:β2  0.99
Class: sgd-optimizer

A basic stochastic gradient descent optimizer. A parameter
\(w\) of a neural network is updated as \(w_{n+1} = w_n - \eta \nabla f(w_n)\).

Package

neural-classifier.

Source

optimizers.lisp.

Direct superclasses

optimizer.

Direct methods

learn.

Direct Default Initargs
Initarg  Value
:η  0.01
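The update rule above, as a scalar Lisp sketch (the library applies it to whole MAGICL weight and bias matrices inside @c(learn)):

```lisp
;; One SGD step on a single parameter: w <- w - eta * grad.
(defun sgd-step-sketch (w gradient learning-rate)
  (- w (* learning-rate gradient)))
```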
Class: sigmoid

Sigmoid activation function:
\(f(x) = \frac{1}{1 + \exp(-x)}\)

Has output in the range \([0, 1]\), so it is best suited for describing the 'intensity' of some property.

Package

neural-classifier.

Source

activation.lisp.

Direct superclasses
Direct methods
Class: softmax

Softmax activation function: \(f(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\).
Its output range is \([0, 1]\) and the sum of all elements in the output vector is 1.

Package

neural-classifier.

Source

activation.lisp.

Direct superclasses

output-layer-activation.

Direct methods

activate.
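The formula above, as a sketch over a plain list (the library's @c(activate) method works on MAGICL column matrices instead):

```lisp
;; Softmax: exponentiate each element, then normalize so the
;; results sum to 1.
(defun softmax-sketch (xs)
  (let* ((exps (mapcar #'exp xs))
         (sum  (reduce #'+ exps)))
    (mapcar (lambda (e) (/ e sum)) exps)))
```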


5.2 Internals


5.2.1 Ordinary functions

Function: calculate-delta (neural-network z network-output expected-output)

Calculate the partial derivative of the cost function with respect to z for all layers

Package

neural-classifier.

Source

neural-network.lisp.

Function: calculate-gradient (neural-network sample)

Calculate gradient of the cost function

Package

neural-classifier.

Source

neural-network.lisp.

Function: calculate-gradient-minibatch (neural-network samples decay-rate)

Calculate gradient of the cost function based on multiple input samples

Package

neural-classifier.

Source

neural-network.lisp.

Function: calculate-z-and-out (neural-network input)

Calculate argument and value of activation function for all layers

Package

neural-classifier.

Source

neural-network.lisp.

Function: copy-memo (memo1 &key weights biases)

Copy an instance of MEMO, optionally overriding some or all of its slots.

Package

neural-classifier.

Source

optimizers.lisp.

Function: declare-optimizations ()
Package

neural-classifier.

Source

definitions.lisp.

Function: make-memo (network)
Package

neural-classifier.

Source

optimizers.lisp.

Function: memo (weights biases)
Package

neural-classifier.

Source

optimizers.lisp.

Reader: memo-biases (instance)
Package

neural-classifier.

Source

optimizers.lisp.

Target Slot

biases.

Reader: memo-weights (instance)
Package

neural-classifier.

Source

optimizers.lisp.

Target Slot

weights.

Function: nrandom-generator (&key μ σ)

Return a function which generates random values from the distribution N(μ, σ).

Package

neural-classifier.

Source

utility.lisp.
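A usage sketch (note that this is an internal function, hence the double colon):

```lisp
;; Build a generator for N(0, 0.1) and draw one sample from it.
(funcall (neural-classifier::nrandom-generator :μ 0.0 :σ 0.1))
```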

Function: sasum (matrix)
Package

neural-classifier.

Source

utility.lisp.

Function: standard-random ()

Return a random value sampled from a distribution N(0, 1).

Package

neural-classifier.

Source

utility.lisp.

Function: σ (z)

Sigmoid activation function

Package

neural-classifier.

Source

activation.lisp.


5.2.2 Generic functions

Generic Function: activate (vector activation)

Apply activation function ACTIVATION to a
VECTOR. VECTOR is an output vector from a layer of a neural network.

Package

neural-classifier.

Source

activation.lisp.

Methods
Method: activate (vector (activation %identity))
Method: activate (vector (activation softmax))
Method: activate (vector (activation leaky-relu))
Method: activate (vector (activation %tanh))
Method: activate (vector (activation sigmoid))
Generic Function: activate' (vector type)

Apply derivative of activation function ACTIVATION
to a VECTOR. VECTOR is an output vector from a layer of a neural network.

Package

neural-classifier.

Source

activation.lisp.

Methods
Method: activate' (vector (activation leaky-relu))
Method: activate' (vector (activation %tanh))
Method: activate' (vector (activation sigmoid))
Generic Reader: leaky-relu-coeff (object)
Package

neural-classifier.

Methods
Reader Method: leaky-relu-coeff ((leaky-relu leaky-relu))

Coefficient of leaky ReLU. A value of 0 means just an ordinary ReLU.

Source

activation.lisp.

Target Slot

coeff.

Generic Function: learn (optimizer neural-network samples)

Update network parameters using SAMPLES for training.

Package

neural-classifier.

Source

optimizers.lisp.

Methods
Method: learn ((optimizer adam-optimizer) neural-network samples)
Method: learn ((optimizer rmsprop-optimizer) neural-network samples)
Method: learn ((optimizer adagrad-optimizer) neural-network samples)
Method: learn ((optimizer nesterov-optimizer) neural-network samples)
Method: learn ((optimizer momentum-optimizer) neural-network samples)
Method: learn ((optimizer sgd-optimizer) neural-network samples)
Generic Reader: neural-network-activation-funcs (object)
Generic Writer: (setf neural-network-activation-funcs) (object)
Package

neural-classifier.

Methods
Reader Method: neural-network-activation-funcs ((neural-network neural-network))
Writer Method: (setf neural-network-activation-funcs) ((neural-network neural-network))

List of activation functions.

Source

definitions.lisp.

Target Slot

activation-funcs.

Generic Reader: neural-network-biases (object)
Generic Writer: (setf neural-network-biases) (object)
Package

neural-classifier.

Methods
Reader Method: neural-network-biases ((neural-network neural-network))
Writer Method: (setf neural-network-biases) ((neural-network neural-network))

Bias vectors for each layer

Source

definitions.lisp.

Target Slot

biases.

Generic Reader: neural-network-weights (object)
Generic Writer: (setf neural-network-weights) (object)
Package

neural-classifier.

Methods
Reader Method: neural-network-weights ((neural-network neural-network))
Writer Method: (setf neural-network-weights) ((neural-network neural-network))

Weight matrices for each layer

Source

definitions.lisp.

Target Slot

weights.

Generic Reader: optimizer-corrected-momentum-coeff (object)
Generic Writer: (setf optimizer-corrected-momentum-coeff) (object)
Package

neural-classifier.

Methods
Reader Method: optimizer-corrected-momentum-coeff ((adam-optimizer adam-optimizer))
Writer Method: (setf optimizer-corrected-momentum-coeff) ((adam-optimizer adam-optimizer))

Corrected \(\beta_1\) parameter

Source

optimizers.lisp.

Target Slot

corrected-momentum-coeff.

Generic Reader: optimizer-corrected-rate-coeff (object)
Generic Writer: (setf optimizer-corrected-rate-coeff) (object)
Package

neural-classifier.

Methods
Reader Method: optimizer-corrected-rate-coeff ((adam-optimizer adam-optimizer))
Writer Method: (setf optimizer-corrected-rate-coeff) ((adam-optimizer adam-optimizer))

Corrected \(\beta_2\) parameter

Source

optimizers.lisp.

Target Slot

corrected-rate-coeff.

Generic Reader: optimizer-decay-rate (object)
Package

neural-classifier.

Methods
Reader Method: optimizer-decay-rate ((optimizer optimizer))

A parameter used for L²
regularization. 0.0 is no regularization. Good values are 1-10 divided by the dataset size.

Source

optimizers.lisp.

Target Slot

decay-rate.

Generic Reader: optimizer-learning-rate (object)
Package

neural-classifier.

Methods
Reader Method: optimizer-learning-rate ((optimizer optimizer))

Parameter which controls learning
speed of the neural network. Must be a small positive value.

Source

optimizers.lisp.

Target Slot

learning-rate.

Generic Reader: optimizer-minibatch-size (object)
Package

neural-classifier.

Methods
Reader Method: optimizer-minibatch-size ((optimizer optimizer))

Number of samples in a
minibatch. An integer in the range 10-100 is good for this parameter.

Source

optimizers.lisp.

Target Slot

minibatch-size.

Generic Reader: optimizer-momentum-coeff (object)
Package

neural-classifier.

Methods
Reader Method: optimizer-momentum-coeff ((momentum-memo-optimizer momentum-memo-optimizer))

Coefficient responsible for momentum decay

Source

optimizers.lisp.

Target Slot

momentum-coeff.

Generic Reader: optimizer-momentum-memo (object)
Package

neural-classifier.

Methods
Reader Method: optimizer-momentum-memo ((momentum-memo-optimizer momentum-memo-optimizer))

automatically generated reader method

Source

optimizers.lisp.

Target Slot

momentum-memo.

Generic Writer: (setf optimizer-momentum-memo) (object)
Package

neural-classifier.

Methods
Writer Method: (setf optimizer-momentum-memo) ((momentum-memo-optimizer momentum-memo-optimizer))

automatically generated writer method

Source

optimizers.lisp.

Target Slot

momentum-memo.

Generic Reader: optimizer-rate-coeff (object)
Package

neural-classifier.

Methods
Reader Method: optimizer-rate-coeff ((rate-memo-optimizer rate-memo-optimizer))

Coefficient responsible for the increase in learning rate

Source

optimizers.lisp.

Target Slot

rate-coeff.

Generic Reader: optimizer-rate-memo (object)
Package

neural-classifier.

Methods
Reader Method: optimizer-rate-memo ((rate-memo-optimizer rate-memo-optimizer))

automatically generated reader method

Source

optimizers.lisp.

Target Slot

rate-memo.

Generic Writer: (setf optimizer-rate-memo) (object)
Package

neural-classifier.

Methods
Writer Method: (setf optimizer-rate-memo) ((rate-memo-optimizer rate-memo-optimizer))

automatically generated writer method

Source

optimizers.lisp.

Target Slot

rate-memo.


5.2.3 Standalone methods

Method: %constructor= ((o1 memo) (o2 memo))
Package

serapeum.

Source

optimizers.lisp.

Method: constructor-values/generic ((x memo))
Package

serapeum.

Source

optimizers.lisp.

Method: read-only-struct-slot-names append ((self memo))
Package

serapeum.

Source

optimizers.lisp.


5.2.4 Structures

Structure: memo
Package

neural-classifier.

Source

optimizers.lisp.

Direct superclasses

%read-only-struct.

Direct methods
Direct slots
Slot: weights
Type

list

Initform

(alexandria:required-argument (quote neural-classifier::weights))

Readers

memo-weights.

Writers

This slot is read-only.

Slot: biases
Type

list

Initform

(alexandria:required-argument (quote neural-classifier::biases))

Readers

memo-biases.

Writers

This slot is read-only.


5.2.5 Classes

Class: momentum-memo-optimizer

Optimizer based on momentum. Not to be instantiated.

Package

neural-classifier.

Source

optimizers.lisp.

Direct superclasses

optimizer.

Direct subclasses
Direct methods
Direct slots
Slot: momentum-memo
Type

neural-classifier::memo

Readers

optimizer-momentum-memo.

Writers

(setf optimizer-momentum-memo).

Slot: momentum-coeff

Coefficient responsible for momentum decay

Type

single-float

Initargs

:β1

Readers

optimizer-momentum-coeff.

Writers

This slot is read-only.

Class: optimizer

Generic optimizer class. Not to be instantiated.

Package

neural-classifier.

Source

optimizers.lisp.

Direct subclasses
Direct methods
Direct slots
Slot: learning-rate

Parameter which controls learning
speed of the neural network. Must be a small positive value.

Type

single-float

Initargs

Readers

optimizer-learning-rate.

Writers

This slot is read-only.

Slot: minibatch-size

Number of samples in a
minibatch. An integer in the range 10-100 is good for this parameter.

Type

alexandria:positive-fixnum

Initform

40

Initargs

:minibatch-size

Readers

optimizer-minibatch-size.

Writers

This slot is read-only.

Slot: decay-rate

A parameter used for L²
regularization. 0.0 is no regularization. Good values are 1-10 divided by the dataset size.

Type

single-float

Initform

0.0

Initargs

:decay-rate

Readers

optimizer-decay-rate.

Writers

This slot is read-only.

Class: rate-memo-optimizer

Optimizer based on adaptive learning rate. Not to be instantiated.

Package

neural-classifier.

Source

optimizers.lisp.

Direct superclasses

optimizer.

Direct subclasses
Direct methods
Direct slots
Slot: rate-memo
Type

neural-classifier::memo

Readers

optimizer-rate-memo.

Writers

(setf optimizer-rate-memo).

Slot: rate-coeff

Coefficient responsible for the increase in learning rate

Type

single-float

Initargs

:β2

Readers

optimizer-rate-coeff.

Writers

This slot is read-only.


Appendix A Indexes


A.1 Concepts


A.2 Functions

Index Entry  Section

%
%constructor=: Private standalone methods

(
(setf neural-network-activation-funcs): Private generic functions
(setf neural-network-activation-funcs): Private generic functions
(setf neural-network-biases): Private generic functions
(setf neural-network-biases): Private generic functions
(setf neural-network-input-trans%): Public generic functions
(setf neural-network-input-trans%): Public generic functions
(setf neural-network-input-trans): Public generic functions
(setf neural-network-input-trans): Public generic functions
(setf neural-network-label-trans): Public generic functions
(setf neural-network-label-trans): Public generic functions
(setf neural-network-output-trans): Public generic functions
(setf neural-network-output-trans): Public generic functions
(setf neural-network-weights): Private generic functions
(setf neural-network-weights): Private generic functions
(setf optimizer-corrected-momentum-coeff): Private generic functions
(setf optimizer-corrected-momentum-coeff): Private generic functions
(setf optimizer-corrected-rate-coeff): Private generic functions
(setf optimizer-corrected-rate-coeff): Private generic functions
(setf optimizer-momentum-memo): Private generic functions
(setf optimizer-momentum-memo): Private generic functions
(setf optimizer-rate-memo): Private generic functions
(setf optimizer-rate-memo): Private generic functions

A
activate: Private generic functions
activate: Private generic functions
activate: Private generic functions
activate: Private generic functions
activate: Private generic functions
activate: Private generic functions
activate': Private generic functions
activate': Private generic functions
activate': Private generic functions
activate': Private generic functions

C
calculate: Public ordinary functions
calculate-delta: Private ordinary functions
calculate-gradient: Private ordinary functions
calculate-gradient-minibatch: Private ordinary functions
calculate-z-and-out: Private ordinary functions
constructor-values/generic: Private standalone methods
copy-memo: Private ordinary functions

D
declare-optimizations: Private ordinary functions

F
Function, calculate: Public ordinary functions
Function, calculate-delta: Private ordinary functions
Function, calculate-gradient: Private ordinary functions
Function, calculate-gradient-minibatch: Private ordinary functions
Function, calculate-z-and-out: Private ordinary functions
Function, copy-memo: Private ordinary functions
Function, declare-optimizations: Private ordinary functions
Function, idx-abs-max: Public ordinary functions
Function, make-memo: Private ordinary functions
Function, make-neural-network: Public ordinary functions
Function, memo: Private ordinary functions
Function, memo-biases: Private ordinary functions
Function, memo-weights: Private ordinary functions
Function, nrandom-generator: Private ordinary functions
Function, rate: Public ordinary functions
Function, sasum: Private ordinary functions
Function, standard-random: Private ordinary functions
Function, train-epoch: Public ordinary functions
Function, σ: Private ordinary functions

G
Generic Function, (setf neural-network-activation-funcs): Private generic functions
Generic Function, (setf neural-network-biases): Private generic functions
Generic Function, (setf neural-network-input-trans%): Public generic functions
Generic Function, (setf neural-network-input-trans): Public generic functions
Generic Function, (setf neural-network-label-trans): Public generic functions
Generic Function, (setf neural-network-output-trans): Public generic functions
Generic Function, (setf neural-network-weights): Private generic functions
Generic Function, (setf optimizer-corrected-momentum-coeff): Private generic functions
Generic Function, (setf optimizer-corrected-rate-coeff): Private generic functions
Generic Function, (setf optimizer-momentum-memo): Private generic functions
Generic Function, (setf optimizer-rate-memo): Private generic functions
Generic Function, activate: Private generic functions
Generic Function, activate': Private generic functions
Generic Function, leaky-relu-coeff: Private generic functions
Generic Function, learn: Private generic functions
Generic Function, neural-network-activation-funcs: Private generic functions
Generic Function, neural-network-biases: Private generic functions
Generic Function, neural-network-input-trans: Public generic functions
Generic Function, neural-network-input-trans%: Public generic functions
Generic Function, neural-network-label-trans: Public generic functions
Generic Function, neural-network-layout: Public generic functions
Generic Function, neural-network-output-trans: Public generic functions
Generic Function, neural-network-weights: Private generic functions
Generic Function, optimizer-corrected-momentum-coeff: Private generic functions
Generic Function, optimizer-corrected-rate-coeff: Private generic functions
Generic Function, optimizer-decay-rate: Private generic functions
Generic Function, optimizer-learning-rate: Private generic functions
Generic Function, optimizer-minibatch-size: Private generic functions
Generic Function, optimizer-momentum-coeff: Private generic functions
Generic Function, optimizer-momentum-memo: Private generic functions
Generic Function, optimizer-rate-coeff: Private generic functions
Generic Function, optimizer-rate-memo: Private generic functions

I
idx-abs-max: Public ordinary functions
initialize-instance: Public standalone methods

L
leaky-relu-coeff: Private generic functions
learn: Private generic functions

M
make-load-form: Public standalone methods
make-memo: Private ordinary functions
make-neural-network: Public ordinary functions
memo: Private ordinary functions
memo-biases: Private ordinary functions
memo-weights: Private ordinary functions
Method, %constructor=: Private standalone methods
Method, (setf neural-network-activation-funcs): Private generic functions
Method, (setf neural-network-biases): Private generic functions
Method, (setf neural-network-input-trans%): Public generic functions
Method, (setf neural-network-input-trans): Public generic functions
Method, (setf neural-network-label-trans): Public generic functions
Method, (setf neural-network-output-trans): Public generic functions
Method, (setf neural-network-weights): Private generic functions
Method, (setf optimizer-corrected-momentum-coeff): Private generic functions
Method, (setf optimizer-corrected-rate-coeff): Private generic functions
Method, (setf optimizer-momentum-memo): Private generic functions
Method, (setf optimizer-rate-memo): Private generic functions
Method, activate: Private generic functions
Method, activate': Private generic functions
Method, constructor-values/generic: Private standalone methods
Method, initialize-instance: Public standalone methods
Method, leaky-relu-coeff: Private generic functions
Method, learn: Private generic functions
Method, make-load-form: Public standalone methods
Method, neural-network-activation-funcs: Private generic functions
Method, neural-network-biases: Private generic functions
Method, neural-network-input-trans: Public generic functions
Method, neural-network-input-trans%: Public generic functions
Method, neural-network-label-trans: Public generic functions
Method, neural-network-layout: Public generic functions
Method, neural-network-output-trans: Public generic functions
Method, neural-network-weights: Private generic functions
Method, optimizer-corrected-momentum-coeff: Private generic functions
Method, optimizer-corrected-rate-coeff: Private generic functions
Method, optimizer-decay-rate: Private generic functions
Method, optimizer-learning-rate: Private generic functions
Method, optimizer-minibatch-size: Private generic functions
Method, optimizer-momentum-coeff: Private generic functions
Method, optimizer-momentum-memo: Private generic functions
Method, optimizer-rate-coeff: Private generic functions
Method, optimizer-rate-memo: Private generic functions
Method, print-object: Public standalone methods
Method, read-only-struct-slot-names: Private standalone methods

N
neural-network-activation-funcs: Private generic functions
neural-network-biases: Private generic functions
neural-network-input-trans: Public generic functions
neural-network-input-trans%: Public generic functions
neural-network-label-trans: Public generic functions
neural-network-layout: Public generic functions
neural-network-output-trans: Public generic functions
neural-network-weights: Private generic functions
nrandom-generator: Private ordinary functions

O
optimizer-corrected-momentum-coeff: Private generic functions
optimizer-corrected-rate-coeff: Private generic functions
optimizer-decay-rate: Private generic functions
optimizer-learning-rate: Private generic functions
optimizer-minibatch-size: Private generic functions
optimizer-momentum-coeff: Private generic functions
optimizer-momentum-memo: Private generic functions
optimizer-rate-coeff: Private generic functions
optimizer-rate-memo: Private generic functions

P
print-object: Public standalone methods

R
rate: Public ordinary functions
read-only-struct-slot-names: Private standalone methods

S
sasum: Private ordinary functions
standard-random: Private ordinary functions

T
train-epoch: Public ordinary functions

Σ
σ: Private ordinary functions


A.3 Variables

Jump to:   A   B   C   D   I   L   M   O   R   S   W  
Index Entry  Section

A
activation-funcs: Public classes

B
biases: Public classes
biases: Private structures

C
coeff: Public classes
corrected-momentum-coeff: Public classes
corrected-rate-coeff: Public classes

D
decay-rate: Private classes

I
input-trans: Public classes
input-trans%: Public classes

L
label-trans: Public classes
layout: Public classes
learning-rate: Private classes

M
minibatch-size: Private classes
momentum-coeff: Private classes
momentum-memo: Private classes

O
output-trans: Public classes

R
rate-coeff: Private classes
rate-memo: Private classes

S
Slot, activation-funcs: Public classes
Slot, biases: Public classes
Slot, biases: Private structures
Slot, coeff: Public classes
Slot, corrected-momentum-coeff: Public classes
Slot, corrected-rate-coeff: Public classes
Slot, decay-rate: Private classes
Slot, input-trans: Public classes
Slot, input-trans%: Public classes
Slot, label-trans: Public classes
Slot, layout: Public classes
Slot, learning-rate: Private classes
Slot, minibatch-size: Private classes
Slot, momentum-coeff: Private classes
Slot, momentum-memo: Private classes
Slot, output-trans: Public classes
Slot, rate-coeff: Private classes
Slot, rate-memo: Private classes
Slot, weights: Public classes
Slot, weights: Private structures

W
weights: Public classes
weights: Private structures


A.4 Data types

Jump to:   %  
A   C   D   F   H   L   M   N   O   P   R   S   U  
Index Entry  Section

%
%identity: Public classes
%tanh: Public classes

A
activation: Public classes
activation.lisp: The neural-classifier/activation.lisp file
adagrad-optimizer: Public classes
adam-optimizer: Public classes

C
Class, %identity: Public classes
Class, %tanh: Public classes
Class, activation: Public classes
Class, adagrad-optimizer: Public classes
Class, adam-optimizer: Public classes
Class, hidden-layer-activation: Public classes
Class, leaky-relu: Public classes
Class, momentum-memo-optimizer: Private classes
Class, momentum-optimizer: Public classes
Class, nesterov-optimizer: Public classes
Class, neural-network: Public classes
Class, optimizer: Private classes
Class, output-layer-activation: Public classes
Class, rate-memo-optimizer: Private classes
Class, rmsprop-optimizer: Public classes
Class, sgd-optimizer: Public classes
Class, sigmoid: Public classes
Class, softmax: Public classes

D
definitions.lisp: The neural-classifier/definitions.lisp file

F
File, activation.lisp: The neural-classifier/activation.lisp file
File, definitions.lisp: The neural-classifier/definitions.lisp file
File, magicl-blas.lisp: The neural-classifier/magicl-blas.lisp file
File, neural-classifier.asd: The neural-classifier/neural-classifier.asd file
File, neural-network.lisp: The neural-classifier/neural-network.lisp file
File, optimizers.lisp: The neural-classifier/optimizers.lisp file
File, package.lisp: The neural-classifier/package.lisp file
File, utility.lisp: The neural-classifier/utility.lisp file

H
hidden-layer-activation: Public classes

L
leaky-relu: Public classes

M
magicl-blas.lisp: The neural-classifier/magicl-blas.lisp file
memo: Private structures
momentum-memo-optimizer: Private classes
momentum-optimizer: Public classes

N
nesterov-optimizer: Public classes
neural-classifier: The neural-classifier system
neural-classifier: The neural-classifier package
neural-classifier.asd: The neural-classifier/neural-classifier.asd file
neural-network: Public classes
neural-network.lisp: The neural-classifier/neural-network.lisp file

O
optimizer: Private classes
optimizers.lisp: The neural-classifier/optimizers.lisp file
output-layer-activation: Public classes

P
Package, neural-classifier: The neural-classifier package
package.lisp: The neural-classifier/package.lisp file

R
rate-memo-optimizer: Private classes
rmsprop-optimizer: Public classes

S
sgd-optimizer: Public classes
sigmoid: Public classes
softmax: Public classes
Structure, memo: Private structures
System, neural-classifier: The neural-classifier system

U
utility.lisp: The neural-classifier/utility.lisp file