The mgl Reference Manual

This is the mgl Reference Manual, version 0.1.0, generated automatically by Declt version 4.0 beta 2 "William Riker" on Mon Feb 26 17:19:47 2024 GMT+0.

Table of Contents


1 Introduction


2 Systems

The main system appears first, followed by any subsystem dependency.


2.1 mgl

MGL is a machine learning library for backpropagation
neural networks, Boltzmann machines, Gaussian processes, and more.

Author

Gábor Melis

Contact

Home Page

http://melisgl.github.io/mgl

Source Control

(GIT https://github.com/melisgl/mgl.git)

Bug Tracker

https://github.com/melisgl/mgl/issues

License

MIT, see COPYING.

Version

0.1.0

Dependencies
  • alexandria (system).
  • closer-mop (system).
  • array-operations (system).
  • lla (system).
  • cl-reexport (system).
  • mgl-gnuplot (system).
  • mgl-mat (system).
  • mgl-pax (system).
  • num-utils (system).
  • named-readtables (system).
  • pythonic-string-reader (system).
  • swank (system).
Source

mgl.asd.

Child Component

src (module).


2.2 mgl-gnuplot

Author

Gabor Melis

License

MIT

Dependencies
  • external-program (system).
  • alexandria (system).
Source

mgl-gnuplot.asd.

Child Component

src (module).


3 Modules

Modules are listed depth-first from the system components tree.


3.1 mgl/src

Source

mgl.asd.

Parent Component

mgl (system).

Child Components

3.2 mgl-gnuplot/src

Source

mgl-gnuplot.asd.

Parent Component

mgl-gnuplot (system).

Child Components

4 Files

Files are sorted by type and then listed depth-first from the systems' component trees.


4.1 Lisp


4.1.1 mgl/mgl.asd

Source

mgl.asd.

Parent Component

mgl (system).

ASDF Systems

mgl.


4.1.2 mgl-gnuplot/mgl-gnuplot.asd

Source

mgl-gnuplot.asd.

Parent Component

mgl-gnuplot (system).

ASDF Systems

mgl-gnuplot.


4.1.3 mgl/src/package.lisp

Source

mgl.asd.

Parent Component

src (module).

Packages

4.1.4 mgl/src/common.lisp

Dependency

package.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

@mgl-common (special variable).


4.1.5 mgl/src/resample.lisp

Dependency

common.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.6 mgl/src/util.lisp

Dependency

resample.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.7 mgl/src/log.lisp

Dependency

util.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.8 mgl/src/dataset.lisp

Dependency

log.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.9 mgl/src/copy.lisp

Dependency

dataset.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.10 mgl/src/core.lisp

Dependency

copy.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.11 mgl/src/feature.lisp

Dependency

core.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.12 mgl/src/monitor.lisp

Dependency

feature.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.13 mgl/src/counter.lisp

Dependency

monitor.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.14 mgl/src/measure.lisp

Dependency

counter.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Internals

@mgl-measurer (special variable).


4.1.15 mgl/src/classification.lisp

Dependency

measure.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.16 mgl/src/optimize.lisp

Dependency

classification.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.17 mgl/src/gradient-descent.lisp

Dependency

optimize.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.18 mgl/src/conjugate-gradient.lisp

Dependency

gradient-descent.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.19 mgl/src/differentiable-function.lisp

Dependency

conjugate-gradient.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.20 mgl/src/boltzmann-machine.lisp

Dependency

differentiable-function.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.21 mgl/src/deep-belief-network.lisp

Dependency

boltzmann-machine.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.22 mgl/src/backprop.lisp

Dependency

deep-belief-network.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.23 mgl/src/lumps.lisp

Dependency

backprop.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.24 mgl/src/unroll.lisp

Dependency

lumps.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.25 mgl/src/gaussian-process.lisp

Dependency

unroll.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.26 mgl/src/nlp.lisp

Dependency

gaussian-process.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Public Interface
Internals

4.1.27 mgl/src/mgl.lisp

Dependency

nlp.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Internals

4.1.28 mgl/src/doc.lisp

Dependency

mgl.lisp (file).

Source

mgl.asd.

Parent Component

src (module).

Internals

4.1.29 mgl-gnuplot/src/package.lisp

Source

mgl-gnuplot.asd.

Parent Component

src (module).

Packages

mgl-gnuplot.


4.1.30 mgl-gnuplot/src/gnuplot.lisp

Dependency

package.lisp (file).

Source

mgl-gnuplot.asd.

Parent Component

src (module).

Public Interface
Internals

5 Packages

Packages are listed by definition order.


5.1 mgl-gd

See MGL-GD::@MGL-GD.

Source

package.lisp.

Use List
Used By List
Public Interface
Internals

5.2 mgl-resample

See MGL-RESAMPLE::@MGL-RESAMPLE.

Source

package.lisp.

Use List
  • common-lisp.
  • mgl-pax.
Used By List

mgl.

Public Interface
Internals

5.3 mgl-dataset

See MGL-DATASET::@MGL-DATASET.

Source

package.lisp.

Use List
Used By List
Public Interface
Internals

5.4 mgl-opt

See MGL-OPT::@MGL-OPT.

Source

package.lisp.

Use List
Used By List
Public Interface
Internals

5.5 mgl-nlp

See MGL-NLP::@MGL-NLP.

Source

package.lisp.

Use List
Used By List

mgl.

Public Interface
Internals

5.6 mgl-bp

See MGL-BP::@MGL-BP.

Source

package.lisp.

Use List
Used By List
Public Interface
Internals

5.7 mgl-cg

See MGL-CG::@MGL-CG.

Source

package.lisp.

Use List
Used By List
Public Interface
Internals

5.8 mgl-unroll

Translating Boltzmann machines to backprop networks, a.k.a. ‘unrolling’.

Source

package.lisp.

Use List
Used By List

mgl.

Public Interface
Internals

5.9 mgl-gnuplot

Minimalistic, interactive or batch mode gnuplot interface that supports multiplots and inline data.

Source

package.lisp.

Use List

common-lisp.

Public Interface
Internals

5.10 mgl-log

See MGL-LOG::@MGL-LOG.

Source

package.lisp.

Use List
Used By List
Public Interface
Internals

5.11 mgl-diffun

See MGL-DIFFUN::@MGL-DIFFUN.

Source

package.lisp.

Use List
Used By List

mgl.

Public Interface
Internals

5.12 mgl-util

Simple utilities, types.

Source

package.lisp.

Use List
Used By List
Public Interface
Internals

5.13 mgl-gp

Gaussian processes with support for training with backpropagation.

Source

package.lisp.

Use List
Used By List

mgl.

Public Interface
Internals

5.14 mgl-common

The only purpose of this package is to avoid conflicts between other packages.

Source

package.lisp.

Use List
  • common-lisp.
  • mgl-pax.
Used By List
Public Interface
Internals

@mgl-common (special variable).


5.15 mgl-bm

Fully General Boltzmann Machines, Restricted Boltzmann Machines and their stacks called Deep Belief Networks (DBN).

Source

package.lisp.

Nickname

mgl-rbm

Use List
Used By List
Public Interface
Internals

5.16 mgl

See MGL::@MGL-MANUAL. This package reexports
everything from other packages defined here plus MGL-MAT.

Source

package.lisp.

Use List
Internals

5.17 mgl-core

See MGL-CORE::@MGL-MODEL, MGL-CORE::@MGL-MONITOR, MGL-CORE::@MGL-CLASSIFICATION.

Source

package.lisp.

Use List
Used By List
Public Interface
Internals

6 Definitions

Definitions are sorted by export status, category, package, and then by lexicographic order.


6.1 Public Interface


6.1.1 Constants

Constant: flt-ctype
Package

mgl-util.

Source

util.lisp.

Constant: least-negative-flt
Package

mgl-util.

Source

util.lisp.

Constant: least-positive-flt
Package

mgl-util.

Source

util.lisp.

Constant: most-negative-flt
Package

mgl-util.

Source

util.lisp.

Constant: most-positive-flt
Package

mgl-util.

Source

util.lisp.


6.1.2 Special variables

Special Variable: *command-stream*

The default stream to which commands and inline data are written by WRITE-COMMAND.

Package

mgl-gnuplot.

Source

gnuplot.lisp.

Special Variable: *cuda-window-start-time*

The default for CUDA-WINDOW-START-TIME.

Package

mgl-bp.

Source

backprop.lisp.

Special Variable: *default-ext*

Extrapolate maximum EXT times the current step-size.

Package

mgl-cg.

Source

conjugate-gradient.lisp.

Special Variable: *default-int*

Don’t reevaluate within INT of the limit of the current bracket.

Package

mgl-cg.

Source

conjugate-gradient.lisp.

Special Variable: *default-max-n-evaluations*
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Special Variable: *default-max-n-evaluations-per-line-search*
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Special Variable: *default-max-n-line-searches*
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Special Variable: *default-ratio*

Maximum allowed slope ratio.

Package

mgl-cg.

Source

conjugate-gradient.lisp.

Special Variable: *default-rho*

RHO is the minimum allowed fraction of the expected improvement (computed from the slope at the initial point of the line search). The constants must satisfy 0 < RHO < SIG < 1.

Package

mgl-cg.

Source

conjugate-gradient.lisp.

Special Variable: *default-sig*

SIG and RHO are the constants controlling the Wolfe-Powell conditions. SIG is the maximum allowed absolute ratio between previous and new slopes (derivatives in the search direction), thus setting SIG to low (positive) values forces higher precision in the line-searches.

Package

mgl-cg.

Source

conjugate-gradient.lisp.

Special Variable: *experiment-random-seed*
Package

mgl-util.

Source

util.lisp.

Special Variable: *infinitely-empty-dataset*

This is the default dataset for MGL-OPT:MINIMIZE. It’s an infinite stream of NILs.

Package

mgl-dataset.

Source

dataset.lisp.

Special Variable: *log-file*
Package

mgl-log.

Source

log.lisp.

Special Variable: *log-time*
Package

mgl-log.

Source

log.lisp.

Special Variable: *no-array-bounds-check*
Package

mgl-util.

Source

util.lisp.

Special Variable: *warp-time*

Controls whether warping is enabled (see @MGL-RNN-TIME-WARP). Don’t enable it for training, as it would make backprop impossible.

Package

mgl-bp.

Source

backprop.lisp.


6.1.3 Macros

Macro: apply-key (key object)
Package

mgl-util.

Source

util.lisp.

Macro: build-fnn ((&key fnn class initargs max-n-stripes name) &body clumps)

Syntactic sugar to assemble FNNs from CLUMPs. Like LET*, it is a sequence of bindings (of symbols to CLUMPs). The names of the clumps created default to the symbol of the binding. In case a clump is not bound to a symbol (because it was created in a nested expression), the local function CLUMP can be used to find the clump with the given name in the fnn being built. Example:

(build-fnn ()
  (features (->input :size n-features))
  (biases (->weight :size n-features))
  (weights (->weight :size (* n-hiddens n-features)))
  (activations0 (->v*m :weights weights :x (clump 'features)))
  (activations (->+ :args (list biases activations0)))
  (output (->sigmoid :x activations)))

Package

mgl-bp.

Source

backprop.lisp.

Macro: build-rnn ((&key rnn class name initargs max-n-stripes max-lag) &body body)

Create an RNN with MAX-N-STRIPES and MAX-LAG whose UNFOLDER is BODY wrapped in a lambda. Bind the symbol given as the RNN argument to the RNN object so that BODY can see it.

Package

mgl-bp.

Source

backprop.lisp.

Macro: data (data &rest options)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Macro: defclass-now (name direct-superclasses direct-slots &rest options)
Package

mgl-util.

Source

util.lisp.

Macro: define-descriptions ((object class &key inheritp) &body descriptions)
Package

mgl-util.

Source

util.lisp.

Macro: define-slots-not-to-be-copied (context class &body slot-names)
Package

mgl-util.

Source

copy.lisp.

Macro: define-slots-to-be-shallow-copied (context class &body slot-names)
Package

mgl-util.

Source

copy.lisp.

Macro: defmaker ((name &key unkeyword-args extra-keyword-args make-instance-args) &body body)
Package

mgl-util.

Source

util.lisp.

Macro: do-batches-for-model ((batch (dataset model)) &body body)

Convenience macro over MAP-BATCHES-FOR-MODEL.

Package

mgl-core.

Source

core.lisp.

Macro: do-clouds ((cloud bm) &body body)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Macro: do-executors ((instances object) &body body)

Convenience macro on top of MAP-OVER-EXECUTORS.

Package

mgl-core.

Source

core.lisp.

Macro: do-gradient-sink (((segment accumulator) sink) &body body)

A convenience macro on top of MAP-GRADIENT-SINK.

Package

mgl-opt.

Source

optimize.lisp.

Macro: do-segment-set ((segment &optional start) segment-set &body body)

Iterate over SEGMENTS in SEGMENT-SET. If START is specified, it is bound to the start index of SEGMENT within SEGMENT-SET. The start index is the sum of the sizes of previous segments.

Package

mgl-opt.

Source

optimize.lisp.

Macro: file (filename &rest options)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Macro: fn (expression &rest options)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Macro: plot (() &body mappings)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Macro: push-all (list place)
Package

mgl-util.

Source

util.lisp.

Macro: repeatably ((&key seed) &body body)
Package

mgl-util.

Source

util.lisp.

Macro: repeatedly (&body body)

Like CONSTANTLY, but evaluates BODY anew each time the returned function is called.

Package

mgl-util.

Source

util.lisp.

Macro: special-case (test &body body)

Let the compiler compile BODY for the case when TEST is true and also when it’s false. The purpose is to allow different constraints to propagate to the two branches allowing them to be more optimized.

Package

mgl-util.

Source

util.lisp.

Macro: splot (() &body mappings)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Macro: the! (&rest args)
Package

mgl-util.

Source

util.lisp.

Macro: while (test &body body)
Package

mgl-util.

Source

util.lisp.

Macro: with-command-stream ((stream) &body body)

Binds *COMMAND-STREAM* to STREAM, routing all command output to STREAM by default.

Package

mgl-gnuplot.

Source

gnuplot.lisp.

Macro: with-copying (&body body)
Package

mgl-util.

Source

copy.lisp.

Macro: with-logging-entry ((stream) &body body)
Package

mgl-log.

Source

log.lisp.

Macro: with-padded-attribute-printing ((attributeds) &body body)

Note the width of values for each attribute key, which is the number of characters in the value’s PRINC-TO-STRING’ed representation. In BODY, if attributes with the same key are printed, they are forced to be at least this wide. This allows for nice, table-like output:

(let ((attributeds
        (list (make-instance 'basic-counter
                             :attributes '(:a 1 :b 23 :c 456))
              (make-instance 'basic-counter
                             :attributes '(:a 123 :b 45 :c 6)))))
  (with-padded-attribute-printing (attributeds)
    (map nil (lambda (attributed)
               (format t "~A~%" attributed))
         attributeds)))
;; 1   23 456: 0.000e+0 (0)
;; 123 45 6  : 0.000e+0 (0)

Package

mgl-core.

Source

counter.lisp.

Macro: with-session ((&key display geometry persistp output error) &body body)

Start gnuplot and bind STREAM and *COMMAND-STREAM* to its standard input. The stream is closed when BODY exits.
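
A minimal usage sketch, assuming the PLOT and DATA helpers documented elsewhere in this section behave as their names suggest; the exact OPTIONS syntax shown is an assumption, not a transcript from the library:

    ;;; Sketch: plot a few points in an interactive gnuplot session.
    ;;; The string OPTIONS argument to DATA is assumed, not verified.
    (with-session ()
      (plot ()
        (data '((0 0) (1 1) (2 4) (3 9))
              "title 'squares' with lines")))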

Package

mgl-gnuplot.

Source

gnuplot.lisp.

Macro: with-stripes (specs &body body)

Bind start and optionally end indices belonging to stripes in striped objects.

(WITH-STRIPES ((STRIPE1 OBJECT1 START1 END1)
(STRIPE2 OBJECT2 START2) ...)
...)

This is how one is supposed to find the index range corresponding to the Nth input in an input lump of a bpn:

(with-stripes ((n input-lump start end))
(loop for i upfrom start below end
do (setf (mref (nodes input-lump) i) 0d0)))

Note how the input lump is striped, but the matrix into which we are indexing (NODES) is not known to WITH-STRIPES. In fact, for lumps the same stripe indices work with NODES and MGL-BP:DERIVATIVES.

Package

mgl-core.

Source

core.lisp.

Macro: with-weights-copied ((from-bpn) &body body)

In BODY, ->WEIGHT first checks whether a weight lump of the same name exists in FROM-BPN and returns it if so; otherwise it creates a weight lump normally. If FROM-BPN is NIL, no weights are copied.

Package

mgl-bp.

Source

lumps.lisp.

Macro: with-zero-on-underflow ((prototype) &body body)
Package

mgl-util.

Source

util.lisp.


6.1.4 Setf expanders

Setf Expander: (setf max-n-stripes) (object)
Package

mgl-core.

Source

core.lisp.

Reader

max-n-stripes (generic function).

Setf Expander: (setf n-stripes) (object)
Package

mgl-core.

Source

core.lisp.

Reader

n-stripes (generic function).


6.1.5 Ordinary functions

Function: ->* (x y &key name size default-value shared-with-clump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->+ (args &key name size default-value shared-with-clump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->abs (x &key name size default-value shared-with-clump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->activation (inputs &key name size peepholes add-bias-p)

Create a subnetwork of class ->ACTIVATION that computes the activation from dense connections from the lumps in INPUTS and elementwise connections from the lumps in PEEPHOLES. Create new ->WEIGHT lumps as necessary. INPUTS and PEEPHOLES can be a single lump or a list of lumps. Finally, if ADD-BIAS-P, an elementwise bias is also added. SIZE must be specified explicitly, because it is not possible to determine it unless there are peephole connections.

```cl-transcript
(->activation (->input :size 10 :name 'input) :name 'h1 :size 4)
==> #<->ACTIVATION (H1 :ACTIVATION) :STRIPES 1/1 :CLUMPS 4>
```

This is the basic workhorse of neural networks: it takes care of the linear transformation whose results are then fed to some non-linearity (->SIGMOID, ->TANH, etc.).

The name of the subnetwork clump is ‘(,NAME :ACTIVATION)‘. The bias weight lump (if any) is named ‘(:BIAS ,NAME)‘. Dense connection weight lumps are named after the input and NAME: ‘(,(NAME INPUT) ,NAME)‘, while peephole weight lumps are named ‘(,(NAME INPUT) ,NAME :PEEPHOLE)‘. This is useful to know if, for example, they are to be initialized differently.

Package

mgl-bp.

Source

lumps.lisp.

Function: ->batch-normalization (scale shift &key name size default-value shared-with-clump dimensions batch-size variance-adjustment population-decay)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->batch-normalized (x &key name size default-value shared-with-clump normalization batch-size variance-adjustment population-decay)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->batch-normalized-activation (inputs &key name size peepholes batch-size variance-adjustment population-decay)

A utility function that creates an ->ACTIVATION, wraps it in ->BATCH-NORMALIZED, and creates the two weight lumps for the scale and shift parameters of its BATCH-NORMALIZATION. ‘(->BATCH-NORMALIZED-ACTIVATION INPUTS :NAME ’H1 :SIZE 10)‘ is equivalent to:

```commonlisp
(->batch-normalized (->activation inputs :name 'h1 :size 10 :add-bias-p nil)
                    :name '(h1 :batch-normalized-activation))
```

Note how biases are turned off, since normalization would cancel them anyway (but a shift is added, which amounts to the same effect).

Package

mgl-bp.

Source

lumps.lisp.

Function: ->dropout (x &key name size default-value shared-with-clump dropout)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->embedding (&key name size default-value shared-with-clump weights input-row-indices)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->exp (x &key name size default-value shared-with-clump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->gaussian-random (&key name size default-value shared-with-clump mean variance variance-for-prediction)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->gp (&key name size default-value shared-with-clump means covariances)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: ->input (&key name size default-value shared-with-clump x dropout)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->loss (x &key name size default-value shared-with-clump importance)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->lstm (inputs &key name cell-init output-init size activation-fn gate-fn input-fn output-fn peepholes)

Create an LSTM layer consisting of input, forget and output gates with which the input, cell state and output are scaled. Many lumps are created; the final one, representing the output of the LSTM, is given NAME. The rest of the lumps are named automatically based on NAME. This function returns only the output lump (‘m‘), but all created lumps are added automatically to the BPN being built.

There are many papers and tutorials on LSTMs. This version is well described in "Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling" (2014, Hasim Sak, Andrew Senior, Francoise Beaufays). Using the notation from that paper:

$$
i_t = s(W\_{ix} x\_t + W\_{im} m\_{t-1} + W\_{ic} \odot
c\_{t-1} + b\_i)
$$

$$
f\_t = s(W\_{fx} x\_t + W\_{fm} m\_{t-1} + W\_{fc} \odot c\_{t-1} + b\_f)
$$

$$
c\_t = f\_t \odot c\_{t-1} + i\_t \odot g(W\_{cx} x\_t +
W\_{cm} m\_{t-1} + b\_c)
$$

$$
o\_t = s(W\_{ox} x\_t + W\_{om} m\_{t-1} + W\_{oc} \odot
c\_t + b\_o)
$$

$$
m\_t = o\_t \odot h(c\_t),
$$

where ‘i‘, ‘f‘, and ‘o‘ are the input, forget and output gates. ‘c‘ is the cell state and ‘m‘ is the actual output.

Weight matrices for connections from ‘c‘ (‘W_ic‘, ‘W_fc‘ and ‘W_oc‘) are diagonal and represented by just the vector of diagonal values. These connections are only added if PEEPHOLES is true.

A notable difference from the paper is that in addition to being a single lump, ‘x_t‘ (INPUTS) can also be a list of lumps. Whenever some activation is to be calculated based on ‘x_t‘, it is going to be the sum of individual activations. For example, ‘W_ix * x_t‘ is really ‘sum_j W_ijx * inputs_j‘.

If CELL-INIT is non-NIL, then it must be a CLUMP of SIZE that stands for the initial state of the value cell (‘c_{-1}‘). A NIL CELL-INIT is equivalent to a state of all zeros.

ACTIVATION-FN defaults to ->ACTIVATION, but it can be for example ->BATCH-NORMALIZED-ACTIVATION. In general, functions like the aforementioned two with signature like (INPUTS &KEY NAME SIZE PEEPHOLES) can be passed as ACTIVATION-FN.

Package

mgl-bp.

Source

lumps.lisp.

Function: ->max (x &key name size default-value shared-with-clump group-size)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->max-channel (x &key name size default-value shared-with-clump group-size)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->min (x &key name size default-value shared-with-clump group-size)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->normalized (x &key name size default-value shared-with-clump group-size scale)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->periodic (x &key name size default-value shared-with-clump period)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: ->ref (&key name size default-value shared-with-clump index into drop-negative-index-p)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: ->relu (x &key name size default-value shared-with-clump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->rep (x n &key name size default-value shared-with-clump)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: ->rough-exponential (x &key name size default-value shared-with-clump signal-variance length-scale roughness)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: ->sample-binary (x &key name size default-value shared-with-clump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->scaled-tanh (x &key name size default-value shared-with-clump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->seq-barrier (&key name size default-value shared-with-clump seq-elt-fn)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->sigmoid (x &key name size default-value shared-with-clump dropout)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->sin (x &key name size default-value shared-with-clump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->softmax-xe-loss (x &key name size default-value shared-with-clump group-size target)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->squared-difference (x y &key name size default-value shared-with-clump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->stretch (x n &key name size default-value shared-with-clump)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: ->sum (x &key name size default-value shared-with-clump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->tanh (x &key name size default-value shared-with-clump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->v*m (x weights &key name size default-value shared-with-clump transpose-weights-p)
Package

mgl-bp.

Source

lumps.lisp.

Function: ->weight (&key name size default-value shared-with-clump dimensions)
Package

mgl-bp.

Source

lumps.lisp.

Function: add-clump (clump bpn)

Add CLUMP to BPN. MAX-N-STRIPES of CLUMP gets set to that of BPN. It is an error to add a clump with a name already used by one of the CLUMPS of BPN.

Package

mgl-bp.

Source

backprop.lisp.

Function: add-confusion-matrix (matrix result-matrix)

Add MATRIX into RESULT-MATRIX.

Package

mgl-core.

Source

classification.lisp.

Function: add-to-running-stat (x stat &key weight)
Package

mgl-util.

Source

util.lisp.

Function: append1 (list obj)
Package

mgl-util.

Source

util.lisp.

Function: applies-to-p (generic-function &rest args)
Package

mgl-util.

Source

util.lisp.

Function: apply-monitors (monitors &rest arguments)

Call APPLY-MONITOR on each monitor in MONITORS and ARGUMENTS. This is how an event is fired.

Package

mgl-core.

Source

monitor.lisp.

Function: arrange-for-clipping-gradients (batch-gd-optimizer l2-upper-bound &key callback)

Make it so that the norm of the batch normalized gradients accumulated by BATCH-GD-OPTIMIZER is clipped to L2-UPPER-BOUND before every update. See CLIP-L2-NORM.

Package

mgl-gd.

Source

gradient-descent.lisp.

Function: arrange-for-renormalizing-activations (bpn optimizer l2-upper-bound)

By pushing a lambda onto the AFTER-UPDATE-HOOK of OPTIMIZER, arrange for all weights being trained by OPTIMIZER to be renormalized (as in RENORMALIZE-ACTIVATIONS with L2-UPPER-BOUND).

It is assumed that the weights either belong to an activation lump or are simply added to the activations (i.e., they are biases).

Package

mgl-bp.

Source

lumps.lisp.

Function: as-column-vector (a)
Package

mgl-util.

Source

util.lisp.

Function: bag (seq fn &key ratio n weight replacement key test random-state)

Sample from SEQ with SAMPLE-FROM (passing RATIO, WEIGHT, REPLACEMENT), or with SAMPLE-STRATIFIED if KEY is not NIL. Call FN with the sample. If N is NIL, keep repeating this until FN performs a non-local exit. Otherwise N must be a non-negative integer: N iterations are performed, and the primary values returned by FN are collected into a list and returned. See SAMPLE-FROM and SAMPLE-STRATIFIED for examples.
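
As a hedged illustration of the protocol described above (the actual elements drawn depend on RANDOM-STATE, so no concrete output is shown):

    ;;; Sketch: draw three half-sized samples and collect FN's results.
    (bag '(0 1 2 3 4 5) #'identity :ratio 1/2 :n 3)
    ;; => a list of three samples, each containing 3 of the 6 elements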

Package

mgl-resample.

Source

resample.lisp.

Function: bag-cv (data fn &key n n-folds folds split-fn pass-fold random-state)

Perform cross-validation on different shuffles of DATA N times and collect the results. Since CROSS-VALIDATE collects the return values of FN, the return value of this function is a list of lists of FN results. If N is NIL, don’t collect anything; just keep doing repeated CVs until FN performs a non-local exit.

The following example simply collects the test and training sets for 2-fold CV repeated 3 times with shuffled data:

```commonlisp
;;; This is non-deterministic.
(bag-cv '(0 1 2 3 4) #'list :n 3 :n-folds 2)
=> ((((2 3 4) (1 0))
     ((1 0) (2 3 4)))
    (((2 1 0) (4 3))
     ((4 3) (2 1 0)))
    (((1 0 3) (2 4))
     ((2 4) (1 0 3))))
```

CV bagging is useful when a single CV is not producing stable results. As an ensemble method, CV bagging has the advantage over bagging that each example will occur the same number of times and after the first CV is complete there is a complete but less reliable estimate for each example which gets refined by further CVs.

Package

mgl-resample.

Source

resample.lisp.

Function: binarize-randomly (x)

Return 1 with probability X and 0 otherwise.

Package

mgl-util.

Source

util.lisp.

Function: binomial-log-likelihood-ratio (k1 n1 k2 n2)

See "Accurate Methods for the Statistics of Surprise and Coincidence" by Ted Dunning (http://citeseer.ist.psu.edu/29096.html).

All classes must have non-zero counts, that is, K1, N1-K1, K2, N2-K2 must be positive integers. To ensure this - and also as a kind of prior - add a small number such as 1 to K1 and K2, and 2 to N1 and N2 before calling.
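
For instance, comparing the rate 10/100 against 30/100 with the smoothing suggested above might look like this (a hypothetical call sketch, not a transcript; the numeric result is not shown because it has not been verified):

    ;;; Add 1 to the counts and 2 to the totals as a smoothing prior.
    (binomial-log-likelihood-ratio (+ 10 1) (+ 100 2)
                                   (+ 30 1) (+ 100 2))
    ;; => a log-likelihood ratio; larger values indicate stronger
    ;;    evidence that the two rates differ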

Package

mgl-util.

Source

util.lisp.

Function: bleu (candidates references &key candidate-key reference-key n)

Compute the [BLEU score](http://en.wikipedia.org/wiki/BLEU) for a
bilingual corpus. BLEU measures how good a translation is compared
to human reference translations.

CANDIDATES (keyed by CANDIDATE-KEY) and REFERENCES (keyed by
REFERENCE-KEY) are sequences of sentences. A sentence is a sequence
of words. Words are compared with EQUAL, and may be any kind of
object (not necessarily strings).

Currently there is no support for multiple reference translations. N
determines the largest n-grams to consider.

The first return value is the BLEU score (between 0 and 1, not as a
percentage). The second value is the sum of the lengths of
CANDIDATES divided by the sum of the lengths of REFERENCES (or NIL,
if the denominator is 0). The third is a list of n-gram
precisions (also between 0 and 1 or NIL), one for each element in
\[1..‘N‘].

This is basically a reimplementation of [multi-bleu.perl](https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl).

```cl-transcript
(bleu '((1 2 3 4) (a b))
      '((1 2 3 4) (1 2)))
=> 0.8408964
=> 1
=> (;; 1-gram precision: 4/6
    2/3
    ;; 2-gram precision: 3/4
    3/4
    ;; 3-gram precision: 2/2
    1
    ;; 4-gram precision: 1/1
    1)
```

Package

mgl-nlp.

Source

nlp.lisp.

Function: call-periodic-fn (n fn &rest args)
Package

mgl-util.

Source

util.lisp.

Function: call-periodic-fn! (n fn &rest args)
Package

mgl-util.

Source

util.lisp.

Function: call-repeatably (fn &key seed)
Package

mgl-util.

Source

util.lisp.

Function: cg (fn w &key max-n-line-searches max-n-evaluations-per-line-search max-n-evaluations sig rho int ext ratio spare-vectors)

CG-OPTIMIZER passes each batch of data to this function with its CG-ARGS passed on.

Minimize a differentiable multivariate function with conjugate gradient. The Polak-Ribiere flavour of conjugate gradients is used to compute search directions, and a line search using quadratic and cubic polynomial approximations and the Wolfe-Powell stopping criteria is used together with the slope ratio method for guessing initial step sizes. Additionally a bunch of checks are made to make sure that exploration is taking place and that extrapolation will not be unboundedly large.

FN is a function of two parameters: WEIGHTS and DERIVATIVES. WEIGHTS is a MAT of the same size as W, and it is where the search starts from. DERIVATIVES is also a MAT of that size, and it is where FN shall place the partial derivatives. FN returns the value of the function being minimized.

CG performs a number of line searches and invokes FN at each step. A line search invokes FN at most MAX-N-EVALUATIONS-PER-LINE-SEARCH times and either succeeds in improving the minimum by a sufficient margin or fails. Note that even a failed line search may improve the minimum and hence change the weights; it's just that the improvement was deemed too small. CG stops when either:

- two line searches fail in a row
- MAX-N-LINE-SEARCHES is reached
- MAX-N-EVALUATIONS is reached

CG returns a MAT that contains the best weights, the minimum, the number of line searches performed, the number of successful line searches and the number of evaluations.

When using MAX-N-EVALUATIONS remember that there is an extra evaluation of FN before the first line search.

SPARE-VECTORS is a list of preallocated MATs of the same size as W. Passing 6 of them covers the current need of the algorithm and it will not cons up vectors of size W at all.

NOTE: If the function terminates within a few iterations, it could be an indication that the function values and derivatives are not consistent (i.e., there may be a bug in the implementation of FN).

SIG and RHO are the constants controlling the Wolfe-Powell conditions. SIG is the maximum allowed absolute ratio between previous and new slopes (derivatives in the search direction), thus setting SIG to low (positive) values forces higher precision in the line searches. RHO is the minimum allowed fraction of the expected improvement (computed from the slope at the initial point of the line search). The constants must satisfy 0 < RHO < SIG < 1. Tuning of SIG (depending on the nature of the function to be optimized) may speed up the minimization; it is probably not worth playing much with RHO.
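
As a sketch of the FN protocol described above, minimizing f(w) = sum_i w_i^2 (whose gradient is 2w) might look like this. MAKE-MAT, COPY!, SCAL! and DOT are assumed here to be the usual MGL-MAT operations; this is an illustration, not part of the documented API of CG itself:

```commonlisp
;; Sketch: minimize f(w) = sum_i w_i^2 starting from (1 2 3).
;; FN writes the gradient 2w into DERIVATIVES and returns f(w).
(let ((w (mgl-mat:make-mat 3 :initial-contents '(1d0 2d0 3d0))))
  (mgl-cg:cg (lambda (weights derivatives)
               (mgl-mat:copy! weights derivatives) ; derivatives <- w
               (mgl-mat:scal! 2d0 derivatives)     ; derivatives <- 2w
               (mgl-mat:dot weights weights))      ; return f(w)
             w))
;; Returns the best weights (a MAT near all zeros), the minimum found,
;; and the line search and evaluation counts as further values.
```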

Package

mgl-cg.

Source

conjugate-gradient.lisp.

Function: chunk-lump-name (chunk-name kind)

The name of the lump that represents CHUNK.

Package

mgl-unroll.

Source

unroll.lisp.

Function: clear-running-stat (stat)
Package

mgl-util.

Source

util.lisp.

Function: clip-l2-norm (mats l2-upper-bound &key callback)

Scale MATS so that their $L_2$ norm does not exceed L2-UPPER-BOUND.

Compute the norm of MATS as if they were a single vector. If the norm is greater than L2-UPPER-BOUND, then scale each matrix destructively by the norm divided by L2-UPPER-BOUND and, if non-NIL, call the function CALLBACK with the scaling factor.

Package

mgl-gd.

Source

gradient-descent.lisp.

Function: cloud-chunk-among-chunks (cloud chunks)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: command (command)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Function: conditioning-cloud-p (cloud)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: confusion-matrix-accuracy (matrix &key filter)

Return the overall accuracy of the results in MATRIX. It's computed as the number of correctly classified cases (hits) divided by the number of cases. Return the number of hits and the number of cases as the second and third value. If a FILTER function is given, then call it with the target and the prediction of each cell. Disregard cells for which FILTER returns NIL.

Precision and recall can be easily computed by giving the right filter, although those are provided in separate convenience functions.

Package

mgl-core.

Source

classification.lisp.

Function: confusion-matrix-precision (matrix prediction)

Return the accuracy over the cases when the classifier said PREDICTION.

Package

mgl-core.

Source

classification.lisp.

Function: confusion-matrix-recall (matrix target)

Return the accuracy over the cases when the correct class is TARGET.

Package

mgl-core.

Source

classification.lisp.

Function: count-features (documents mapper &key key)

Return scored features as an EQUAL hash table whose keys are features of DOCUMENTS and values are counts of occurrences of features. MAPPER takes a function and a document and calls function with features of the document.

```cl-transcript
(sort (alexandria:hash-table-alist
       (count-features '(("hello" "world")
                         ("this" "is" "our" "world"))
                       (lambda (fn document)
                         (map nil fn document))))
      #'string< :key #'car)
=> (("hello" . 1) ("is" . 1) ("our" . 1) ("this" . 1) ("world" . 2))
```

Package

mgl-core.

Source

feature.lisp.

Function: cross-validate (data fn &key n-folds folds split-fn pass-fold)

Map FN over the FOLDS of DATA split with SPLIT-FN and collect the results in a list. The simplest demonstration is:

```cl-transcript
(cross-validate '(0 1 2 3 4)
                (lambda (test training)
                  (list test training))
                :n-folds 5)
=> (((0) (1 2 3 4))
    ((1) (0 2 3 4))
    ((2) (0 1 3 4))
    ((3) (0 1 2 4))
    ((4) (0 1 2 3)))
```

Of course, in practice one would typically train a model and return the trained model and/or its score on TEST. Also, sometimes one may want to do only some of the folds and remember which ones they were:

```cl-transcript
(cross-validate '(0 1 2 3 4)
                (lambda (fold test training)
                  (list :fold fold test training))
                :folds '(2 3)
                :pass-fold t)
=> ((:fold 2 (2) (0 1 3 4))
    (:fold 3 (3) (0 1 2 4)))
```

Finally, the way the data is split can be customized. By default SPLIT-FOLD/MOD is called with the arguments DATA, the fold (from among FOLDS) and N-FOLDS. SPLIT-FOLD/MOD returns two values which are then passed on to FN. One can use SPLIT-FOLD/CONT or SPLIT-STRATIFIED or any other function that works with these arguments. The only real constraint is that FN has to take as many arguments (plus the fold argument if PASS-FOLD) as SPLIT-FN returns.

Package

mgl-resample.

Source

resample.lisp.

Function: data* (data options)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Function: dbm->dbn (dbm &key rbm-class dbn-class dbn-initargs)

Convert DBM to a DBN by discarding intralayer connections and doubling activations of clouds where necessary. If a chunk does not have input from below then scale its input from above by 2; similarly, if a chunk does not have input from above then scale its input from below by 2. By default, weights are shared between clouds and their copies.

For now, unrolling the resulting DBN to a BPN is not supported.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: decay-rate-to-half-life (decay-rate)
Package

mgl-util.

Source

util.lisp.

Function: decay-to-half-life (decay)
Package

mgl-util.

Source

util.lisp.

Function: down-dbm (dbm)

Do a single downward pass in DBM, propagating the mean-field much like performing approximate inference, but in the other direction. Disregard intralayer and upward connections, double activations to chunks having downward connections.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: down-mean-field (dbn)

Propagate the means down from the means of DBN.

Package

mgl-bm.

Source

deep-belief-network.lisp.

Function: end-session (stream)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Function: ensure-softmax-target-matrix (softmax-xe-loss n)

Set TARGET of SOFTMAX-XE-LOSS to a MAT capable of holding the dense target values for N stripes.

Package

mgl-bp.

Source

lumps.lisp.

Function: feature-disambiguities (documents mapper class-fn &key classes)

Return scored features as an EQUAL hash table whose keys are features of DOCUMENTS and values are their _disambiguities_. MAPPER takes a function and a document and calls function with features of the document.

From the paper 'Using Ambiguity Measure Feature Selection Algorithm for Support Vector Machine Classifier'.

Package

mgl-core.

Source

feature.lisp.

Function: feature-llrs (documents mapper class-fn &key classes)

Return scored features as an EQUAL hash table whose keys are features of DOCUMENTS and values are their log likelihood ratios. MAPPER takes a function and a document and calls function with features of the document.

```cl-transcript
(sort (alexandria:hash-table-alist
       (feature-llrs '((:a "hello" "world")
                       (:b "this" "is" "our" "world"))
                     (lambda (fn document)
                       (map nil fn (rest document)))
                     #'first))
      #'string< :key #'car)
=> (("hello" . 2.6032386) ("is" . 2.6032386) ("our" . 2.6032386)
    ("this" . 2.6032386) ("world" . 4.8428774e-8))
```

Package

mgl-core.

Source

feature.lisp.

Function: file* (filename options)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Function: find-clump (name bpn &key errorp)

Find the clump with NAME among CLUMPS of BPN. As always, names are compared with EQUAL. If not found, then return NIL or signal an error depending on ERRORP.

Package

mgl-bp.

Source

backprop.lisp.

Function: flt (x)
Package

mgl-util.

Source

util.lisp.

Function: flt-vector (&rest args)
Package

mgl-util.

Source

util.lisp.

Function: fn* (expression options)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Function: fracture (fractions seq &key weight)

Partition SEQ into a number of subsequences. FRACTIONS is either a positive integer or a list of non-negative real numbers. WEIGHT is NIL or a function that returns a non-negative real number when called with an element from SEQ. If FRACTIONS is a positive integer, then return a list of that many subsequences with equal sum of weights bar rounding errors, else partition SEQ into subsequences where the sum of weights of subsequence I is proportional to element I of FRACTIONS. If WEIGHT is NIL, then each element is assumed to have the same weight.

To split into 5 sequences:

```cl-transcript
(fracture 5 '(0 1 2 3 4 5 6 7 8 9))
=> ((0 1) (2 3) (4 5) (6 7) (8 9))
```

To split into two sequences whose lengths are proportional to 2 and 3:

```cl-transcript
(fracture '(2 3) '(0 1 2 3 4 5 6 7 8 9))
=> ((0 1 2 3) (4 5 6 7 8 9))
```

Package

mgl-resample.

Source

resample.lisp.

Function: fracture-stratified (fractions seq &key key test weight)

Similar to FRACTURE, but also makes sure that keys are evenly distributed among the partitions (see STRATIFY). It can be useful for classification tasks to partition the data set while keeping the distribution of classes the same.

Note that the sets returned are not in random order. In fact, they are sorted internally by KEY.

For example, to make two splits with approximately the same number of even and odd numbers:

```cl-transcript
(fracture-stratified 2 '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)
=> ((0 2 1 3) (4 6 8 5 7 9))
```

Package

mgl-resample.

Source

resample.lisp.

Function: gaussian-random-1 ()
Package

mgl-util.

Source

util.lisp.

Function: gp-confidences-as-plot-data (gp inputs &key means covariances levels-and-options)

Return a list of MGL-GNUPLOT:DATA-MAPPINGs, one for each level in LEVELS-AND-OPTIONS (a list of (LEVEL OPTIONS)). Each mapping contains INPUTS in its first column, and MEANS + LEVEL*VARIANCES in the second.

Package

mgl-gp.

Source

gaussian-process.lisp.

Function: gp-covariances (gp x1 &optional x2)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: gp-means-and-covariances (gp x1 &optional x2)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: gp-samples-as-plot-data (gp inputs &key means covariances options)

Returns a matrix that contains INPUTS in its first column, and a sample taken with SAMPLE-GP in its second.

Package

mgl-gp.

Source

gaussian-process.lisp.

Function: group (seq group-size &key start end)
Package

mgl-util.

Source

util.lisp.

Function: half-life-to-decay (half-life)

b^h=0.5, b=0.5^(1/h)
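
For example, a half-life of 2 corresponds to a per-step decay factor of 0.5^(1/2), by the formula above:

```commonlisp
;; After HALF-LIFE steps of multiplying by the returned decay factor,
;; the quantity is halved: 0.7071... * 0.7071... = 0.5.
(half-life-to-decay 2)
;; => approximately 0.7071
```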

Package

mgl-util.

Source

util.lisp.

Function: half-life-to-decay-rate (half-life)
Package

mgl-util.

Source

util.lisp.

Function: hash-table->vector (hash-table)
Package

mgl-util.

Source

util.lisp.

Function: initialize-fnn-from-bm (fnn bm inits)

Initialize FNN from the weights of BM according to cloud INITS that was returned by UNROLL-DBN or UNROLL-DBM.

Package

mgl-unroll.

Source

unroll.lisp.

Function: inputs->nodes (bm)

Copy the previously clamped INPUTS to NODES as if SET-INPUT were called with the same parameters.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: insert-into-sorted-vector (item vector pred &key key max-length)

Insert ITEM into VECTOR while keeping it sorted by PRED. Extend the vector if needed while respecting MAX-LENGTH.

Package

mgl-util.

Source

util.lisp.

Function: invert-permutation (permutation)
Package

mgl-util.

Source

util.lisp.

Function: lag (name &key lag rnn path)

In RNN, or if RNN is NIL, in the RNN being extended with another
BPN (called _unfolding_), look up the CLUMP with NAME in the BPN that's LAG number of time steps before the BPN being added. If this function is called from UNFOLDER of an RNN (which is what happens behind the scenes in the body of BUILD-RNN), then it returns an opaque object representing a lagged connection to a clump, else it returns the CLUMP itself.

FIXDOC: PATH

Package

mgl-bp.

Source

backprop.lisp.

Function: last1 (seq)
Package

mgl-util.

Source

util.lisp.

Function: list-samples (sampler max-size)

Return a list of samples of length at most MAX-SIZE, or fewer if SAMPLER runs out.

Package

mgl-dataset.

Source

dataset.lisp.

Function: list-segments (gradient-source)

A utility function that returns the list of segments from MAP-SEGMENTS on GRADIENT-SOURCE.

Package

mgl-opt.

Source

optimize.lisp.

Function: load-state (filename object)

Load weights of OBJECT from FILENAME. Return OBJECT.

Package

mgl-core.

Source

core.lisp.

Function: log-mat-room (&key verbose)
Package

mgl-log.

Source

log.lisp.

Function: log-msg (format &rest args)
Package

mgl-log.

Source

log.lisp.

Function: log-padded (attributeds)

Log (see LOG-MSG) ATTRIBUTEDS non-escaped (as in PRINC or ~A) with the output being as table-like as possible.

Package

mgl-core.

Source

counter.lisp.

Function: make-classification-accuracy-monitors (model &key operation-mode attributes label-index-fn)

Return a list of MONITOR objects associated with [CLASSIFICATION-ACCURACY-COUNTER][]s. LABEL-INDEX-FN is a function like LABEL-INDEX. See that function for more.

Implemented in terms of MAKE-CLASSIFICATION-ACCURACY-MONITORS*.

Package

mgl-core.

Source

classification.lisp.

Function: make-confusion-matrix (&key test)

Classes are compared with TEST.

Package

mgl-core.

Source

classification.lisp.

Function: make-cost-monitors (model &key operation-mode attributes)

Return a list of MONITOR objects, each associated with one BASIC-COUNTER with attribute :TYPE "cost". Implemented in terms of MAKE-COST-MONITORS*.

Package

mgl-opt.

Source

optimize.lisp.

Function: make-cross-entropy-monitors (model &key operation-mode attributes label-index-distribution-fn)

Return a list of MONITOR objects associated with [CROSS-ENTROPY-COUNTER][]s. LABEL-INDEX-DISTRIBUTION-FN is a function like LABEL-INDEX-DISTRIBUTION. See that function for more.

Implemented in terms of MAKE-CROSS-ENTROPY-MONITORS*.

Package

mgl-core.

Source

classification.lisp.

Function: make-indexer (scored-features n &key start class)

Take the top N features from SCORED-FEATURES (see @MGL-FEATURE-SELECTION), assign indices to them starting from START. Return an ENCODER/DECODER (or another CLASS) that converts between objects and indices.

Package

mgl-core.

Source

feature.lisp.

Function: make-label-monitors (model &key operation-mode attributes label-index-fn label-index-distribution-fn)

Return classification accuracy and cross-entropy monitors. See MAKE-CLASSIFICATION-ACCURACY-MONITORS and MAKE-CROSS-ENTROPY-MONITORS for a description of parameters.

Package

mgl-core.

Source

classification.lisp.

Function: make-n-gram-mappee (function n)

Make a function of a single argument that's suitable as the function argument to a mapper function. It calls FUNCTION with every N consecutive elements (each n-gram).

```cl-transcript
(map nil (make-n-gram-mappee #'print 3) '(a b c d e))
..
.. (A B C)
.. (B C D)
.. (C D E)
```

Package

mgl-nlp.

Source

nlp.lisp.

Function: make-random-generator (seq &key reorder)

Return a function that returns elements of SEQ in random order without end. When there are no more elements, start over with a different random order.

Package

mgl-util.

Source

util.lisp.

Function: make-random-sampler (seq &key max-n-samples reorder)

Create a sampler that returns elements of SEQ in random order. If MAX-N-SAMPLES is non-nil, then at most MAX-N-SAMPLES are sampled. The first pass is over a shuffled copy of SEQ, and this copy is reshuffled whenever the sampler reaches the end of it. Shuffling is performed by calling the REORDER function.

Package

mgl-dataset.

Source

dataset.lisp.

Function: make-reconstruction-monitors (model &key operation-mode attributes)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: make-sequence-generator (seq)

Return a function that returns elements of SEQ in order without end. When there are no more elements, start over.

Package

mgl-util.

Source

util.lisp.

Function: make-sequence-sampler (seq &key max-n-samples)

Create a sampler that returns elements of SEQ in their original order. If MAX-N-SAMPLES is non-nil, then at most MAX-N-SAMPLES are sampled.
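
For example (assuming, as with MAKE-SEQUENCE-GENERATOR, that the sampler starts over from the beginning of SEQ when it runs out):

```cl-transcript
(list-samples (make-sequence-sampler '(0 1 2) :max-n-samples 5) 10)
=> (0 1 2 0 1)
```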

Package

mgl-dataset.

Source

dataset.lisp.

Function: make-sorted-group-generator (generator pred group-size &key key randomize-size)
Package

mgl-util.

Source

util.lisp.

Function: make-step-monitor-monitors (rnn &key counter-values-fn make-counter)

Return a list of monitors, one for every monitor in STEP-MONITORS
of RNN. These monitors extract the results from their warped counterparts with COUNTER-VALUES-FN and add them to their own counter that's created by MAKE-COUNTER. Wow. Ew. The idea is that one does something like this to monitor warped predictions:

```commonlisp
(let ((*warp-time* t))
  (setf (step-monitors rnn)
        (make-cost-monitors rnn :attributes '(:event "warped pred.")))
  (monitor-bpn-results dataset rnn
                       ;; Just collect and reset the warp
                       ;; monitors after each batch of
                       ;; instances.
                       (make-step-monitor-monitors rnn)))
```

Package

mgl-bp.

Source

backprop.lisp.

Function: map-batches-for-model (fn dataset model)

Call FN with batches of instances from DATASET suitable for MODEL. The number of instances in a batch is MAX-N-STRIPES of MODEL or less if there are no more instances left.

Package

mgl-core.

Source

core.lisp.

Function: map-dataset (fn dataset)

Call FN with each instance in DATASET. This is basically equivalent to iterating over the elements of a sequence or a sampler (see @MGL-SAMPLER).

Package

mgl-dataset.

Source

dataset.lisp.

Function: map-datasets (fn datasets &key impute)

Call FN with a list of instances, one from each dataset in
DATASETS. Return nothing. If IMPUTE is specified then iterate until the largest dataset is consumed imputing IMPUTE for missing values. If IMPUTE is not specified then iterate until the smallest dataset runs out.

```cl-transcript
(map-datasets #'prin1 '((0 1 2) (:a :b)))
.. (0 :A)(1 :B)

(map-datasets #'prin1 '((0 1 2) (:a :b)) :impute nil)
.. (0 :A)(1 :B)(2 NIL)
```

It is of course allowed to mix sequences with samplers:

```cl-transcript
(map-datasets #'prin1
              (list '(0 1 2)
                    (make-sequence-sampler '(:a :b) :max-n-samples 2)))
.. (0 :A)(1 :B)
```

Package

mgl-dataset.

Source

dataset.lisp.

Function: mark-everything-present (object)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: max-position (seq start end)
Package

mgl-util.

Source

util.lisp.

Function: max-row-positions (mat &key start end)

Find the columns with the maximum in each row of the 2d MAT and return them as a list.

Package

mgl-util.

Source

util.lisp.

Function: measure-classification-accuracy (truths predictions &key test truth-key prediction-key weight)

Return the number of correct classifications and as the second value the number of instances (equal to the length of TRUTHS in the non-weighted case). TRUTHS (keyed by TRUTH-KEY) is a sequence of opaque class labels compared with TEST to another sequence of class labels in PREDICTIONS (keyed by PREDICTION-KEY). If WEIGHT is non-nil, then it is a function that returns the weight of an element of TRUTHS. Weighted cases add their weight to both counts (returned as the first and second values) instead of 1 as in the non-weighted case.

Note how the returned values are suitable for MULTIPLE-VALUE-CALL with #’ADD-TO-COUNTER and a CLASSIFICATION-ACCURACY-COUNTER.
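
A minimal sketch, assuming the default TEST compares the labels with EQL:

```commonlisp
;; Two of the three predictions match the true labels, so this
;; returns the number of hits and the number of cases.
(measure-classification-accuracy '(:cat :dog :cat)
                                 '(:cat :dog :dog))
;; => 2, 3
```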

Package

mgl-core.

Source

classification.lisp.

Function: measure-confusion (truths predictions &key test truth-key prediction-key weight)

Create a CONFUSION-MATRIX from TRUTHS and PREDICTIONS.
TRUTHS (keyed by TRUTH-KEY) is a sequence of class labels compared with TEST to another sequence of class labels in PREDICTIONS (keyed by PREDICTION-KEY). If WEIGHT is non-nil, then it is a function that returns the weight of an element of TRUTHS. Weighted cases add their weight to both counts (returned as the first and second values).

Note how the returned confusion matrix can be added to another with ADD-TO-COUNTER.

Package

mgl-core.

Source

classification.lisp.

Function: measure-cross-entropy (truths predictions &key truth-key prediction-key min-prediction-pr)

Return the sum of the cross-entropy between pairs of elements with the same index of TRUTHS and PREDICTIONS. TRUTH-KEY is a function that, when applied to an element of TRUTHS, returns a sequence representing some kind of discrete target distribution (P in the definition below). TRUTH-KEY may be NIL, which is equivalent to the IDENTITY function. PREDICTION-KEY is the same kind of key for PREDICTIONS, but the sequence it returns represents a distribution (Q below) that approximates the true one.

Cross-entropy of the true and approximating distributions is defined as:

cross-entropy(p,q) = - sum_i p(i) * log(q(i))

of which this function returns the sum over the pairs of elements of TRUTHS and PREDICTIONS keyed by TRUTH-KEY and PREDICTION-KEY.

Due to the logarithm, if q(i) is close to zero, we run into numerical problems. To prevent this, all q(i) that are less than MIN-PREDICTION-PR are treated as if they were MIN-PREDICTION-PR.

The second value returned is the sum of p(i) over all TRUTHS and all `I`. This is normally equal to `(LENGTH TRUTHS)`, since elements of TRUTHS represent a probability distribution, but this is not enforced which allows relative importance of elements to be controlled.

The third value returned is a plist that maps each index occurring in the distribution sequences to a list of two elements:

sum_j p_j(i) * log(q_j(i))

and

sum_j p_j(i)

where `J` indexes into TRUTHS and PREDICTIONS.

```cl-transcript
(measure-cross-entropy '((0 1 0)) '((0.1 0.7 0.2)))
=> 0.35667497
=> 1
=> (2 (0.0 0)
    1 (0.35667497 1)
    0 (0.0 0))
```

Note how the returned values are suitable for MULTIPLE-VALUE-CALL with #’ADD-TO-COUNTER and a CROSS-ENTROPY-COUNTER.

Package

mgl-core.

Source

classification.lisp.

Function: measure-roc-auc (predictions pred &key key weight)

Return the area under the ROC curve for PREDICTIONS representing predictions for a binary classification problem. PRED is a predicate function for deciding whether a prediction belongs to the so called positive class. KEY returns a number for each element which is the predictor’s idea of how much that element is likely to belong to the class, although it’s not necessarily a probability.

If WEIGHT is NIL, then all elements of PREDICTIONS count as 1 towards the unnormalized sum within AUC. Else WEIGHT must be a function like KEY, but it should return the importance (a positive real number) of elements. If the weight of a prediction is 2, then it's as if there were another identical copy of that prediction in PREDICTIONS.

The algorithm is based on algorithm 2 in the paper 'An introduction to ROC analysis' by Tom Fawcett.

ROC AUC is equal to the probability of a randomly chosen positive having higher KEY (score) than a randomly chosen negative element. With equal scores in mind, a more precise version is: AUC is the expectation of the above probability over all possible sequences sorted by scores.

Package

mgl-core.

Source

classification.lisp.

Function: merge-cloud-specs (specs default-specs)

Combine cloud SPECS and DEFAULT-SPECS. If the first element of SPECS is :MERGE then merge them else return SPECS. Merging concatenates them but removes those specs from DEFAULT-SPECS that are between chunks that have a spec in SPECS. If a spec has CLASS NIL then it is removed as well. A cloud spec at minimum specifies the name of the chunks it connects:

(:chunk1 inputs :chunk2 features)

in which case it defaults to be a FULL-CLOUD. If that is not desired then the class can be specified:

(:chunk1 inputs :chunk2 features :class factored-cloud)

To remove a cloud from DEFAULT-SPECS use :CLASS NIL:

(:chunk1 inputs :chunk2 features :class nil)

Other initargs are passed as is to MAKE-INSTANCE:

(:chunk1 inputs :chunk2 features :class factored-cloud :rank 10)

You may also pass a CLOUD object as a spec.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: minimize (optimizer gradient-source &key weights dataset)

Minimize the value of the real valued function represented by GRADIENT-SOURCE by updating some of its parameters in WEIGHTS (a MAT or a sequence of MATs). Return WEIGHTS. DATASET (see MGL-DATASET::@MGL-DATASET) is a set of unoptimized parameters of the same function. For example, WEIGHTS may be the weights of a neural network while DATASET is the training set consisting of inputs suitable for SET-INPUT. The default
DATASET, (*INFINITELY-EMPTY-DATASET*), is suitable for when all parameters are optimized, so there is nothing left to come from the environment.

Optimization terminates if DATASET is a sampler and it runs out, or when some other condition is met (see TERMINATION, for example). If DATASET is a SEQUENCE, then it is reused over and over again.

Examples for various optimizers are provided in MGL-GD::@MGL-GD and MGL-CG::@MGL-CG.

Package

mgl-opt.

Source

optimize.lisp.

Function: monitor-bm-mean-field-bottom-up (dataset bm monitors)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: monitor-bm-mean-field-reconstructions (dataset bm monitors &key set-visible-p)

Like COLLECT-BM-MEAN-FIELD-ERRORS but reconstruct the labels even if they were missing.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: monitor-bpn-results (dataset bpn monitors)

For every batch (of size MAX-N-STRIPES of BPN) of instances in DATASET, set the batch as the next input with SET-INPUT, perform a FORWARD pass and apply MONITORS to the BPN (with APPLY-MONITORS). Finally, return the counters of MONITORS. This is built on top of MONITOR-MODEL-RESULTS.

Package

mgl-bp.

Source

backprop.lisp.

Function: monitor-dbn-mean-field-bottom-up (dataset dbn monitors)

Run the mean field up to RBM then down to the bottom and collect the errors with COLLECT-BATCH-ERRORS. By default, return the rmse at each level in the DBN.

Package

mgl-bm.

Source

deep-belief-network.lisp.

Function: monitor-dbn-mean-field-reconstructions (dataset dbn monitors &key set-visible-p)

Run the mean field up to RBM then down to the bottom and collect the errors with COLLECT-BATCH-ERRORS. By default, return the rmse at each level in the DBN.

Package

mgl-bm.

Source

deep-belief-network.lisp.

Function: monitor-model-results (fn dataset model monitors)

Call FN with batches of instances from DATASET until it runs
out (as in DO-BATCHES-FOR-MODEL). FN is supposed to apply MODEL to the batch and return some kind of result (for neural networks, the result is the model state itself). Apply MONITORS to each batch and the result returned by FN for that batch. Finally, return the list of counters of MONITORS.

The purpose of this function is to collect various results and statistics (such as error measures) efficiently by applying the model only once, leaving extraction of quantities of interest from the model’s results to MONITORS.

See the model specific versions of this function such as MGL-BP:MONITOR-BPN-RESULTS.

Package

mgl-core.

Source

monitor.lisp.

Function: monitor-optimization-periodically (optimizer periodic-fns)

For each periodic function in the list of PERIODIC-FNS, add a
monitor to OPTIMIZER’s ON-OPTIMIZATION-STARTED, ON-OPTIMIZATION-FINISHED and ON-N-INSTANCES-CHANGED hooks. The monitors are simple functions that just call each periodic function with the event parameters (OPTIMIZER GRADIENT-SOURCE N-INSTANCES). Return OPTIMIZER.

To log and reset the monitors of the gradient source after every
1000 instances seen by OPTIMIZER:

```commonlisp
(monitor-optimization-periodically
 optimizer '((:fn log-my-test-error :period 2000)
             (:fn reset-optimization-monitors :period 1000
              :last-eval 0)))
```

Note that it's allowed to just pass the initargs for a PERIODIC-FN instead of a PERIODIC-FN object itself. The :LAST-EVAL 0 bit prevents RESET-OPTIMIZATION-MONITORS from being called at the start of the optimization when the monitors are empty anyway.

Package

mgl-opt.

Source

optimize.lisp.

Function: multinomial-log-likelihood-ratio (k1 k2)

See "Accurate Methods for the Statistics of Surprise and Coincidence" by Ted Dunning (http://citeseer.ist.psu.edu/29096.html).

K1 is the number of outcomes in each class. K2 is the same in a possibly different process.

All elements in K1 and K2 are positive integers. To ensure this - and also as a kind of prior - add a small number such as 1 to each element in K1 and K2 before calling.

Package

mgl-util.

Source

util.lisp.

Function: n-rbms (dbn)
Setf Expander: (setf n-rbms) (dbn)
Package

mgl-bm.

Source

deep-belief-network.lisp.

Function: name= (x y)

Return T if X and Y are EQL or if they are structured components whose elements are EQUAL. Strings and bit-vectors are EQUAL if they are the same length and have identical components. Other arrays must be EQ to be EQUAL.

Package

mgl-common.

Alias for

equal.

Function: nodes->inputs (bm)

Copy NODES to INPUTS.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: permute (seq permutation)
Package

mgl-util.

Source

util.lisp.

Function: plot* (mappings)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Function: poisson-random (mean)
Package

mgl-util.

Source

util.lisp.

Function: populate-map-cache (fnn dbm samples &key key convert-to-dbm-sample-fn if-exists periodic-fn)

Populate the CLAMPING-CACHE of the MAP lumps of FNN unrolled from DBM. The values for the MAP lumps are taken from the mean field of the corresponding chunk of the DBM. What happens when the cache already has an entry for a sample is determined by IF-EXISTS: if :SKIP, the default, the cache is unchanged; if :SUPERSEDE, the cache entry is replaced by the calculated contents; if :APPEND, the new (lump array) entries are appended to the existing ones; if :ERROR, an error is signalled.

Package

mgl-unroll.

Source

unroll.lisp.

Function: print-table (list &key stream empty-value repeat-marker compactp new-line-prefix)
Package

mgl-util.

Source

util.lisp.

Function: rank (cloud)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: read-state (object stream)

Read the weights of OBJECT from the bivalent STREAM where weights mean the learnt parameters. There is currently no sanity checking of data which will most certainly change in the future together with the serialization format. Return OBJECT.

Package

mgl-core.

Source

core.lisp.

Function: reconstruction-error (bm)

Return the squared norm of INPUTS - NODES not considering constant or conditioning chunks that aren’t reconstructed in any case. The second value returned is the number of nodes that contributed to the error.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: reconstruction-rmse (chunks)

Return the squared norm of INPUTS - NODES not considering constant or conditioning chunks that aren’t reconstructed in any case. The second value returned is the number of nodes that contributed to the error.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: renormalize-activations (->v*m-lumps l2-upper-bound)

If the l2 norm of the incoming weight vector of a unit is larger than L2-UPPER-BOUND, then renormalize it to L2-UPPER-BOUND. The list of ->V*M-LUMPS is assumed to be eventually fed to the same lump.

To use it, group the activation clumps into the same GD-OPTIMIZER and hang this function on AFTER-UPDATE-HOOK, the latter of which is done for you by ARRANGE-FOR-RENORMALIZING-ACTIVATIONS.

See "Improving neural networks by preventing co-adaptation of feature detectors (Hinton, 2012)", <http://arxiv.org/pdf/1207.0580.pdf>.

Package

mgl-bp.

Source

lumps.lisp.
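
A hand-wired version of this arrangement can be sketched as follows (assuming OPTIMIZER is the GD-OPTIMIZER training the activation clumps and V*M-LUMPS is the list of ->V*M lumps feeding the same activation; normally ARRANGE-FOR-RENORMALIZING-ACTIVATIONS sets this up for you):

```common-lisp
;; After every weight update, cap the L2 norm of each unit's
;; incoming weight vector at 1.
(push (lambda ()
        (mgl-bp:renormalize-activations v*m-lumps 1))
      (mgl-gd:after-update-hook optimizer))
```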

Function: rows-to-arrays (mat)
Package

mgl-util.

Source

util.lisp.

Function: running-stat-mean (stat)
Package

mgl-util.

Source

util.lisp.

Function: running-stat-variance (stat)
Package

mgl-util.

Source

util.lisp.

Function: sample-from (ratio seq &key weight replacement random-state)

Return a sequence constructed by sampling with or without REPLACEMENT from SEQ. The sum of weights in the result sequence will approximately be the sum of weights of SEQ times RATIO. If WEIGHT is NIL then elements are assumed to have equal weights, else WEIGHT should return a non-negative real number when called with an element of SEQ.

To randomly select half of the elements:

```common-lisp
(sample-from 1/2 '(0 1 2 3 4 5))
=> (5 3 2)
```

To randomly select some elements such that the sum of their weights constitute about half of the sum of weights across the whole sequence:

```common-lisp
(sample-from 1/2 '(0 1 2 3 4 5 6 7 8 9) :weight #'identity)
=> ;; sums to 28 that's near 45/2
   (9 4 1 6 8)
```

To sample with replacement (that is, allowing the element to be sampled multiple times):

```common-lisp
(sample-from 1 '(0 1 2 3 4 5) :replacement t)
=> (1 1 5 1 4 4)
```

Package

mgl-resample.

Source

resample.lisp.

Function: sample-hidden (bm)

Generate samples from the probability distribution defined by the chunk type and the mean that resides in NODES.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: sample-stratified (ratio seq &key weight replacement key test random-state)

Like SAMPLE-FROM but makes sure that the weighted proportion of classes in the result is approximately the same as the proportion in SEQ. See STRATIFY for the description of KEY and TEST.

Package

mgl-resample.

Source

resample.lisp.

Function: sample-visible (bm)

Generate samples from the probability distribution defined by the chunk type and the mean that resides in NODES.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: save-state (filename object &key if-exists ensure)

Save weights of OBJECT to FILENAME. If ENSURE, then ENSURE-DIRECTORIES-EXIST is called on FILENAME. IF-EXISTS is passed on to OPEN. Return OBJECT.

Package

mgl-core.

Source

core.lisp.
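
A round trip with READ-STATE can be sketched like this (the filename is illustrative and NET stands for any model with weights):

```common-lisp
;; Persist the learnt parameters, creating directories as needed.
(mgl-core:save-state "/tmp/mgl-example/weights.bin" net
                     :if-exists :supersede :ensure t)
;; Restore them from a stream opened for binary input (READ-STATE
;; expects a bivalent stream; the needed :ELEMENT-TYPE may be
;; implementation-dependent).
(with-open-file (stream "/tmp/mgl-example/weights.bin"
                        :element-type '(unsigned-byte 8))
  (mgl-core:read-state net stream))
```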

Function: scaled-tanh (x)
Package

mgl-util.

Source

util.lisp.

Function: sech (x)
Package

mgl-util.

Source

util.lisp.

Function: segment-set->mat (segment-set mat)

Copy the values of SEGMENT-SET to MAT as if they were concatenated into a single MAT.

Package

mgl-opt.

Source

optimize.lisp.

Function: segment-set<-mat (segment-set mat)

Copy the values of MAT to the weight matrices of SEGMENT-SET as if they were concatenated into a single MAT.

Package

mgl-opt.

Source

optimize.lisp.
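
Together, these two functions let an optimizer that works on a single flat parameter vector interoperate with a model's many weight matrices. A sketch (SEGMENT-SET is assumed to be bound to a SEGMENT-SET object and N to its total size):

```common-lisp
;; Gather all weights into one flat MAT, operate on it as a single
;; parameter vector, then scatter the result back.
(let ((flat (mgl-mat:make-mat n)))
  (mgl-opt:segment-set->mat segment-set flat)
  ;; ... update FLAT ...
  (mgl-opt:segment-set<-mat segment-set flat))
```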

Function: select-random-element (seq)
Package

mgl-util.

Source

util.lisp.

Function: set-dropout-and-rescale-activation-weights (lump dropout fnn)

Set the dropout of LUMP to DROPOUT. Find the activation lump to which LUMP is fed and rescale its weights to compensate. There must be exactly one such activation lump or this function will fail.

Package

mgl-unroll.

Source

unroll.lisp.

Function: set-hidden-mean/1 (bm)

Set NODES of the chunks in the hidden layer to the means of their respective probability distributions.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: set-n-instances (optimizer gradient-source n-instances)

Set [N-INSTANCES][(reader iterative-optimizer)] of OPTIMIZER and fire ON-N-INSTANCES-CHANGED. ITERATIVE-OPTIMIZER subclasses must call this to increment [N-INSTANCES][(reader iterative-optimizer)].

Package

mgl-opt.

Source

optimize.lisp.

Function: set-visible-mean/1 (bm)

Set NODES of the chunks in the visible layer to the means of their respective probability distributions.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: settle-hidden-mean-field (bm &key supervisor)

Convenience function on top of SETTLE-MEAN-FIELD.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: settle-mean-field (chunks bm &key other-chunks supervisor)

Do possibly damped mean field updates on CHUNKS until convergence. Compute V'_{t+1}, what would normally be the means, but average it with the previous value: V_{t+1} = k * V_t + (1 - k) * V'_{t+1}, where k is the damping factor (an FLT between 0 and 1).

Call SUPERVISOR with CHUNKS, BM and the iteration. Settling is finished when SUPERVISOR returns NIL. If SUPERVISOR returns a non-NIL value, then it's taken to be a damping factor. For no damping, return 0.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: settle-visible-mean-field (bm &key supervisor)

Convenience function on top of SETTLE-MEAN-FIELD.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: shuffle (seq)

Copy SEQ and shuffle the copy using the Fisher-Yates algorithm.

Package

mgl-resample.

Source

resample.lisp.

Function: shuffle! (seq)

Shuffle SEQ using the Fisher-Yates algorithm.

Package

mgl-resample.

Source

resample.lisp.

Function: shuffle-groups (seq group-size &key start end)
Package

mgl-util.

Source

util.lisp.

Function: sigmoid (x)
Package

mgl-util.

Source

util.lisp.

Function: sign (x)
Package

mgl-util.

Source

util.lisp.

Function: sorting-permutation (seq pred &key key)
Package

mgl-util.

Source

util.lisp.

Function: split-fold/cont (seq fold n-folds)

Imagine dividing SEQ into N-FOLDS subsequences of the same
size (bar rounding). Return the subsequence of index FOLD as the first value and all the other subsequences concatenated into one as the second value. The order of elements remains stable. This function is suitable as the SPLIT-FN argument of CROSS-VALIDATE.

Package

mgl-resample.

Source

resample.lisp.
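
For example, dividing ten elements into five folds, fold 0 is the first contiguous fifth. Illustrative, not a recorded transcript; the two return values are shown on separate lines:

```common-lisp
(split-fold/cont '(0 1 2 3 4 5 6 7 8 9) 0 5)
=> (0 1)
   (2 3 4 5 6 7 8 9)
```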

Function: split-fold/mod (seq fold n-folds)

Partition SEQ into two sequences: one with the elements of SEQ whose indices have remainder FOLD when divided by N-FOLDS, and a second one with the rest. The second one is the larger set. The order of elements remains stable. This function is suitable as the SPLIT-FN argument of CROSS-VALIDATE.

Package

mgl-resample.

Source

resample.lisp.
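
For example, with FOLD 0 and five folds, the elements at indices 0 and 5 form the first return value. Illustrative, not a recorded transcript; the two return values are shown on separate lines:

```common-lisp
(split-fold/mod '(0 1 2 3 4 5 6 7 8 9) 0 5)
=> (0 5)
   (1 2 3 4 6 7 8 9)
```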

Function: split-plist (list keys)
Package

mgl-util.

Source

util.lisp.

Function: split-stratified (seq fold n-folds &key key test weight)

Split SEQ into N-FOLDS partitions (as in FRACTURE-STRATIFIED). Return the partition of index FOLD as the first value, and the concatenation of the rest as the second value. This function is suitable as the SPLIT-FN argument of CROSS-VALIDATE (most likely as a closure with KEY, TEST, WEIGHT bound).

Package

mgl-resample.

Source

resample.lisp.

Function: splot* (mappings)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Function: spread-strata (seq &key key test)

Return a sequence that’s a reordering of SEQ such that elements belonging to different strata (under KEY and TEST, see STRATIFY) are distributed evenly. The order of elements belonging to the same stratum is unchanged.

For example, to make sure that even and odd numbers are distributed evenly:

```cl-transcript
(spread-strata '(0 2 4 6 8 1 3 5 7 9) :key #'evenp)
=> (0 1 2 3 4 5 6 7 8 9)
```

Same thing with unbalanced classes:

```cl-transcript
(spread-strata (vector 0 2 3 5 6 1 4)
               :key (lambda (x)
                      (if (member x '(1 4)) t nil)))
=> #(0 1 2 3 4 5 6)
```

Package

mgl-resample.

Source

resample.lisp.

Function: start-session (&key binary display geometry persistp output error)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Function: stratify (seq &key key test)

Return the list of strata of SEQ. SEQ is a sequence of elements for which the function KEY returns the class they belong to. Such classes are opaque objects compared for equality with TEST. A stratum is a sequence of elements with the same (under TEST) KEY.

```cl-transcript
(stratify '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)
=> ((0 2 4 6 8) (1 3 5 7 9))
```

Package

mgl-resample.

Source

resample.lisp.

Function: subseq* (sequence start &optional end)
Package

mgl-util.

Source

util.lisp.

Function: supervise-mean-field/default (chunks bm iteration &key node-change-limit n-undamped-iterations n-damped-iterations damping-factor)

A supervisor for SETTLE-MEAN-FIELD. Return NIL if the average of the absolute value of the change in nodes is below NODE-CHANGE-LIMIT. Else return 0 (no damping) for N-UNDAMPED-ITERATIONS, then DAMPING-FACTOR for another N-DAMPED-ITERATIONS, then NIL.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: terminate-optimization-p (n-instances termination)

Utility function for subclasses of ITERATIVE-OPTIMIZER. It returns whether optimization is to be terminated based on N-INSTANCES and TERMINATION that are values of the respective accessors of ITERATIVE-OPTIMIZER.

Package

mgl-opt.

Source

optimize.lisp.

Function: time-step (&key rnn)

Return the time step RNN is currently executing or being unfolded for. It is 0 when the RNN is being unfolded for the first time.

Package

mgl-bp.

Source

backprop.lisp.

Function: try-chance (chance)
Package

mgl-util.

Source

util.lisp.

Function: uninterned-symbol-p (object)
Package

mgl-util.

Source

util.lisp.

Function: unroll-dbm (dbm &key chunks map reconstruction)
Package

mgl-unroll.

Source

unroll.lisp.

Function: unroll-dbn (dbn &key bottom-up-only)

Unroll DBN recursively and turn it into a feed-forward backpropagation network. A single RBM in DBN of the form VISIBLE <-> HIDDEN is transformed into a VISIBLE -> HIDDEN -> RECONSTRUCTION-OF-VISIBLE network. While the undirected connection <-> has a common weight matrix for both directions, in the backprop network the weights pertaining to the ->'s are distinct but are initialized from the same <-> (with one being the transpose of the other).

If BOTTOM-UP-ONLY then don’t generate the part of the network that represents the top-down flow, that is, skip the reconstructions.

Return the backprop network lump definition forms as the first value and, as the second value, 'inits': initialization specifications suitable for INITIALIZE-FNN-FROM-BM.

If there is no corresponding chunk in the layer below or there is no RBM below, then the chunk is translated into an INPUT lump. Desired outputs and error nodes are not added. The first element of RBMS is the topmost one (the last of the DBN), the one that goes into the middle of the backprop network.

Package

mgl-unroll.

Source

unroll.lisp.

Function: up-dbm (dbm)

Do a single upward pass in DBM, performing approximate inference. Disregard intralayer and downward connections, double activations to chunks having upward connections.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: update-gp (gp inputs outputs &key means covariances)

Update GP with the evidence embodied by INPUTS and the corresponding OUTPUTS. Return a new POSTERIOR-GP. If MEANS and COVARIANCES are given, then GP-MEANS-AND-COVARIANCES is not called.

Package

mgl-gp.

Source

gaussian-process.lisp.

Function: warped-time (&key rnn time lag)

Return the index of the BPN in CLUMPS of RNN whose task it is to execute computation at ‘(- (TIME-STEP RNN) LAG)‘. This is normally the same as TIME-STEP (disregarding LAG). That is, CLUMPS can be indexed by TIME-STEP to get the BPN. However, when *WARP-TIME* is true, execution proceeds in a cycle as the structure of the network allows.

Suppose we have a typical RNN that only ever references the previous time step so its MAX-LAG is 1. Its UNFOLDER returns ‘BPN‘s of identical structure bar a shift in their time lagged connections except for the very first, so WARP-START and WARP-LENGTH are both 1. If *WARP-TIME* is NIL, then the mapping from TIME-STEP to the BPN in CLUMPS is straightforward:

    time:   |  0 |  1 |  2 |  3 |  4 |  5
    --------+----+----+----+----+----+----
    warped: |  0 |  1 |  2 |  3 |  4 |  5
    --------+----+----+----+----+----+----
    bpn:    | b0 | b1 | b2 | b3 | b4 | b5

When *WARP-TIME* is true, we reuse the ‘B1‘ - ‘B2‘ bpns in a loop:

    time:   |  0 |  1 |  2 |  3 |  4 |  5
    --------+----+----+----+----+----+----
    warped: |  0 |  1 |  2 |  1 |  2 |  1
    --------+----+----+----+----+----+----
    bpn:    | b0 | b1 | b2 | b1*| b2 | b1*

‘B1*‘ is the same BPN as ‘B1‘, but its connections created by LAG go through warped time and end up referencing ‘B2‘. This way, memory consumption is independent of the number of time steps needed to process a sequence or make predictions.

To be able to pull this trick off, WARP-START and WARP-LENGTH must be specified when the RNN is instantiated. In general, with *WARP-TIME*, ‘(+ WARP-START (MAX 2 WARP-LENGTH))‘ bpns are needed. The 2 comes from the fact that with cycle length 1 a bpn would need to take its input from itself, which is problematic because it has NODES for only one set of values.

Package

mgl-bp.

Source

backprop.lisp.

Function: write-state (object stream)

Write the weights of OBJECT to the bivalent STREAM. Return OBJECT.

Package

mgl-core.

Source

core.lisp.

Function: zip-evenly (seqs &key result-type)

Make a single sequence out of the sequences in SEQS so that in the returned sequence, indices of elements belonging to the same source sequence are spread evenly across the whole range. The result is a list if RESULT-TYPE is LIST and a vector if RESULT-TYPE is VECTOR. If RESULT-TYPE is NIL, then it's determined by the type of the first sequence in SEQS.

```cl-transcript
(zip-evenly '((0 2 4) (1 3)))
=> (0 1 2 3 4)
```

Package

mgl-resample.

Source

resample.lisp.


6.1.6 Generic functions

Generic Function: accumulate-gradients* (gradient-source sink batch multiplier valuep)

Add MULTIPLIER times the sum of the first-order
gradients to the accumulators of SINK (normally accessed with DO-GRADIENT-SINK) and, if VALUEP, return the sum of values of the function being optimized for a BATCH of instances. GRADIENT-SOURCE is the object representing the function being optimized; SINK is the gradient sink.

Note that the number of instances in BATCH may be larger than what GRADIENT-SOURCE processes in one go (in the sense of, say, MAX-N-STRIPES), so DO-BATCHES-FOR-MODEL or something like (GROUP BATCH MAX-N-STRIPES) can be handy.

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: accumulate-gradients* ((learner bp-learner) gradient-sink batch multiplier valuep)
Source

backprop.lisp.

Method: accumulate-gradients* ((learner bm-pcd-learner) gradient-sink batch multiplier valuep)
Source

boltzmann-machine.lisp.

Method: accumulate-gradients* ((learner rbm-cd-learner) gradient-sink batch multiplier valuep)
Source

boltzmann-machine.lisp.

Method: accumulate-gradients* ((learner sparse-bm-learner) sink batch multiplier valuep)
Source

boltzmann-machine.lisp.

Method: accumulate-gradients* ((diff-fn diffun) gradient-sink batch multiplier valuep)
Source

differentiable-function.lisp.

Method: accumulate-gradients* ((optimizer decayed-cg-optimizer-mixin) gradient-source batch multiplier valuep)
Source

conjugate-gradient.lisp.

Generic Function: add-to-counter (counter &rest args)

Add ARGS to COUNTER in some way. See specialized
methods for type-specific documentation. The kind of arguments to be supported is what the measurer functions (see @MGL-MEASURER) that are intended to be paired with the counter return as multiple values.

Package

mgl-core.

Source

counter.lisp.

Methods
Method: add-to-counter ((counter cross-entropy-counter) &rest args)
Source

classification.lisp.

Method: add-to-counter ((counter concat-counter) &rest args)
Method: add-to-counter ((counter basic-counter) &rest args)
Generic Reader: after-update-hook (object)
Generic Writer: (setf after-update-hook) (object)
Package

mgl-gd.

Methods
Reader Method: after-update-hook ((gd-optimizer gd-optimizer))
Writer Method: (setf after-update-hook) ((gd-optimizer gd-optimizer))

A list of functions with no arguments called after each weight update.

Source

gradient-descent.lisp.

Target Slot

after-update-hook.

Generic Function: apply-monitor (monitor &rest arguments)

Apply MONITOR to ARGUMENTS. This sounds fairly
generic, because it is. MONITOR can be anything, even a simple function or symbol, in which case this is just CL:APPLY. See @MGL-MONITOR for more.

Package

mgl-core.

Source

monitor.lisp.

Methods
Method: apply-monitor ((monitor monitor) &rest arguments)
Method: apply-monitor ((monitor function) &rest arguments)
Method: apply-monitor ((monitor symbol) &rest arguments)
Generic Reader: attributes (object)
Generic Writer: (setf attributes) (object)
Package

mgl-core.

Methods
Reader Method: attributes ((attributed attributed))
Writer Method: (setf attributes) ((attributed attributed))

A plist of attribute keys and values.

Source

counter.lisp.

Target Slot

attributes.

Generic Function: backward (clump)

Compute the partial derivatives of the function
represented by CLUMP and add them to DERIVATIVES of the corresponding argument clumps. The DERIVATIVES of CLUMP contains the sum of partial derivatives of all clumps by the corresponding output. This function is intended to be called after a FORWARD pass.

Take the ->SIGMOID clump for example when the network is being applied to a batch of two instances ‘x1‘ and ‘x2‘. ‘x1‘ and ‘x2‘ are set in the ->INPUT lump X. The sigmoid computes ‘1/(1+exp(-x))‘ where ‘X‘ is its only argument clump.

f(x) = 1/(1+exp(-x))

When BACKWARD is called on the sigmoid lump, its DERIVATIVES is a 2x1 MAT object that contains the partial derivatives of the loss function:

dL(x1)/df
dL(x2)/df

The BACKWARD method of the sigmoid then needs to add ‘dL(x1)/dx1‘ and ‘dL(x2)/dx2‘ to the DERIVATIVES of ‘X‘. Since ‘dL(x1)/dx1 = dL(x1)/df * df(x1)/dx1‘ and the first term is what we have in the DERIVATIVES of the sigmoid, it only needs to calculate the second term.

Package

mgl-bp.

Source

backprop.lisp.

Methods
Method: backward ((lump ->periodic))
Source

gaussian-process.lisp.

Method: backward ((lump ->rough-exponential))
Source

gaussian-process.lisp.

Method: backward ((lump ->stretch))
Source

gaussian-process.lisp.

Method: backward ((lump ->rep))
Source

gaussian-process.lisp.

Method: backward ((lump ->ref))
Source

gaussian-process.lisp.

Method: backward ((lump ->gp))
Source

gaussian-process.lisp.

Method: backward ((lump ->constant))
Source

unroll.lisp.

Method: backward ((lump ->seq-barrier))
Source

lumps.lisp.

Method: backward ((lump ->normalized))
Source

lumps.lisp.

Method: backward ((lump ->exp))
Source

lumps.lisp.

Method: backward ((lump ->sin))
Source

lumps.lisp.

Method: backward ((lump ->abs))
Source

lumps.lisp.

Method: backward ((lump ->*))
Source

lumps.lisp.

Method: backward ((lump ->+))
Source

lumps.lisp.

Method: backward ((lump ->v*m))
Source

lumps.lisp.

Method: backward ((lump ->sample-binary))
Source

lumps.lisp.

Method: backward ((lump ->gaussian-random))
Source

lumps.lisp.

Method: backward ((lump ->softmax-xe-loss))
Source

lumps.lisp.

Method: backward ((lump ->squared-difference))
Source

lumps.lisp.

Method: backward :around ((lump ->loss))
Source

lumps.lisp.

Method: backward ((lump ->sum))
Source

lumps.lisp.

Method: backward ((lump ->max-channel))
Source

lumps.lisp.

Method: backward ((lump ->min))
Source

lumps.lisp.

Method: backward ((lump ->max))
Source

lumps.lisp.

Method: backward ((lump ->relu))
Source

lumps.lisp.

Method: backward ((lump ->scaled-tanh))
Source

lumps.lisp.

Method: backward ((lump ->tanh))
Source

lumps.lisp.

Method: backward ((lump ->sigmoid))
Source

lumps.lisp.

Method: backward ((lump ->embedding))
Source

lumps.lisp.

Method: backward ((lump ->batch-normalized))
Source

lumps.lisp.

Method: backward ((lump ->input))
Source

lumps.lisp.

Method: backward ((lump ->dropout))
Source

lumps.lisp.

Method: backward ((lump ->weight))
Source

lumps.lisp.

Method: backward ((bpn bpn))
Generic Reader: bag-of-words-kind (object)
Package

mgl-nlp.

Methods
Reader Method: bag-of-words-kind ((bag-of-words-encoder bag-of-words-encoder))

automatically generated reader method

Source

nlp.lisp.

Target Slot

kind.

Generic Reader: batch-normalization (object)
Package

mgl-bp.

Methods
Reader Method: batch-normalization ((->batch-normalized ->batch-normalized))

The ->BATCH-NORMALIZATION of this lump. May be
shared between multiple ->BATCH-NORMALIZED lumps.

Batch normalization is special in that it has state apart from the computed results (NODES) and its derivatives (DERIVATIVES). This state is the estimated mean and variance of its inputs and they are encapsulated by ->BATCH-NORMALIZATION.

If NORMALIZATION is not given at instantiation, then a new ->BATCH-NORMALIZATION object will be created automatically, passing :BATCH-SIZE, :VARIANCE-ADJUSTMENT, and :POPULATION-DECAY arguments on to ->BATCH-NORMALIZATION. See [BATCH-SIZE][(reader ->batch-normalization)], [VARIANCE-ADJUSTMENT][(reader ->batch-normalization)] and [POPULATION-DECAY][(reader ->batch-normalization)]. New scale and shift weight lumps will be created with names:

    `(,name :scale)
    `(,name :shift)

where ‘NAME‘ is the NAME of this lump.

This default behavior covers the use-case where the statistics kept by ->BATCH-NORMALIZATION are to be shared only between time steps of an RNN.

Source

lumps.lisp.

Target Slot

normalization.

Generic Reader: batch-size (object)
Package

mgl-common.

Source

common.lisp.

Methods
Reader Method: batch-size ((->batch-normalization ->batch-normalization))

Normally all stripes participate in the batch.
Lowering the number of stripes may increase the regularization effect, but it also makes the computation less efficient. By setting BATCH-SIZE to a divisor of N-STRIPES one can decouple the concern of efficiency from that of regularization. The default value, NIL, is equivalent to N-STRIPES. BATCH-SIZE only affects training.

With the special value :USE-POPULATION, instead of the mean and the variance of the current batch, use the population statistics for normalization. This effectively cancels the regularization effect, leaving only the faster learning.

Source

lumps.lisp.

Target Slot

batch-size.

Reader Method: batch-size ((cg-optimizer cg-optimizer))

After having gone through BATCH-SIZE number of
instances, weights are updated. Normally, CG operates on all available data, but it may be useful to introduce some noise into the optimization to reduce overfitting by using smaller batch sizes. If BATCH-SIZE is not set, it is initialized to the size of the dataset at the start of optimization.

Source

conjugate-gradient.lisp.

Target Slot

batch-size.

Reader Method: batch-size ((gd-optimizer gd-optimizer))

After having gone through BATCH-SIZE number of
inputs, weights are updated. With BATCH-SIZE 1, one gets Stochastic Gradient Descent. With BATCH-SIZE equal to the number of instances in the dataset, one gets standard, 'batch' gradient descent. With BATCH-SIZE between these two extremes, one gets the most practical 'mini-batch' compromise.

Source

gradient-descent.lisp.

Target Slot

batch-size.

Generic Writer: (setf batch-size) (object)
Package

mgl-common.

Methods
Writer Method: (setf batch-size) ((cg-optimizer cg-optimizer))

After having gone through BATCH-SIZE number of
instances, weights are updated. Normally, CG operates on all available data, but it may be useful to introduce some noise into the optimization to reduce overfitting by using smaller batch sizes. If BATCH-SIZE is not set, it is initialized to the size of the dataset at the start of optimization.

Source

conjugate-gradient.lisp.

Target Slot

batch-size.

Writer Method: (setf batch-size) ((gd-optimizer gd-optimizer))

After having gone through BATCH-SIZE number of
inputs, weights are updated. With BATCH-SIZE 1, one gets Stochastic Gradient Descent. With BATCH-SIZE equal to the number of instances in the dataset, one gets standard, 'batch' gradient descent. With BATCH-SIZE between these two extremes, one gets the most practical 'mini-batch' compromise.

Source

gradient-descent.lisp.

Target Slot

batch-size.

Generic Reader: before-update-hook (object)
Generic Writer: (setf before-update-hook) (object)
Package

mgl-gd.

Methods
Reader Method: before-update-hook ((batch-gd-optimizer batch-gd-optimizer))
Writer Method: (setf before-update-hook) ((batch-gd-optimizer batch-gd-optimizer))

A list of functions of no parameters. Each
function is called just before a weight update takes place (after accumulated gradients have been divided by the length of the batch). Convenient to hang some additional gradient-accumulating code on.

Source

gradient-descent.lisp.

Target Slot

before-update-hook.
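
Hooking into this looks like the following sketch (OPTIMIZER is assumed to be a BATCH-GD-OPTIMIZER; the hook body is purely illustrative):

```common-lisp
;; Run some code just before each weight update.
(push (lambda ()
        (format *trace-output* "~&about to update weights~%"))
      (mgl-gd:before-update-hook optimizer))
```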

Generic Reader: bm (object)
Package

mgl-bm.

Methods
Reader Method: bm ((bm-learner bm-learner))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

bm.

Generic Reader: bpn (object)
Package

mgl-bp.

Methods
Reader Method: bpn ((bp-learner bp-learner))

The BPN for which this BP-LEARNER provides the gradients.

Source

backprop.lisp.

Target Slot

bpn.

Generic Reader: cg-args (object)
Package

mgl-cg.

Methods
Reader Method: cg-args ((cg-optimizer cg-optimizer))

automatically generated reader method

Source

conjugate-gradient.lisp.

Target Slot

cg-args.

Generic Writer: (setf cg-args) (object)
Package

mgl-cg.

Methods
Writer Method: (setf cg-args) ((cg-optimizer cg-optimizer))

automatically generated writer method

Source

conjugate-gradient.lisp.

Target Slot

cg-args.

Generic Reader: chunk (object)
Package

mgl-bm.

Methods
Reader Method: chunk ((sparsity-gradient-source sparsity-gradient-source))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

chunk.

Generic Reader: chunk1 (object)
Package

mgl-bm.

Methods
Reader Method: chunk1 ((cloud cloud))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

chunk1.

Generic Reader: chunk2 (object)
Package

mgl-bm.

Methods
Reader Method: chunk2 ((cloud cloud))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

chunk2.

Generic Function: chunks (object)
Package

mgl-bm.

Methods
Method: chunks ((dbn dbn))
Source

deep-belief-network.lisp.

Reader Method: chunks ((bm bm))

A list of all the chunks in this BM. It’s VISIBLE-CHUNKS and HIDDEN-CHUNKS appended.

Source

boltzmann-machine.lisp.

Target Slot

chunks.

Generic Reader: clamping-cache (object)
Package

mgl-unroll.

Methods
Reader Method: clamping-cache ((fnn-clamping-cache fnn-clamping-cache))

automatically generated reader method

Source

unroll.lisp.

Target Slot

clamping-cache.

Generic Reader: cloud (object)
Package

mgl-bm.

Methods
Reader Method: cloud ((sparsity-gradient-source sparsity-gradient-source))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

cloud.

Generic Reader: cloud-a (object)
Package

mgl-bm.

Methods
Reader Method: cloud-a ((factored-cloud factored-cloud))

A full cloud whose visible chunk is the same as
the visible chunk of this cloud and whose hidden chunk is the same as the visible chunk of CLOUD-B.

Source

boltzmann-machine.lisp.

Target Slot

cloud-a.

Generic Reader: cloud-b (object)
Package

mgl-bm.

Methods
Reader Method: cloud-b ((factored-cloud factored-cloud))

A full cloud whose hidden chunk is the same as the
hidden chunk of this cloud and whose visible chunk is the same as the hidden chunk of CLOUD-A.

Source

boltzmann-machine.lisp.

Target Slot

cloud-b.

Generic Function: clouds (object)
Package

mgl-bm.

Methods
Method: clouds ((dbn dbn))
Source

deep-belief-network.lisp.

Reader Method: clouds ((bm bm))

Normally, a list of CLOUDS representing the
connections between chunks. During initialization cloud specs are allowed in the list.

Source

boltzmann-machine.lisp.

Target Slot

clouds.

Generic Reader: clouds-up-to-layers (object)
Package

mgl-bm.

Methods
Reader Method: clouds-up-to-layers ((dbm dbm))

Each element of this list is a list of clouds connected from below to the layer of the same index.

Source

boltzmann-machine.lisp.

Target Slot

clouds-up-to-layers.

Generic Reader: clumps (object)
Package

mgl-bp.

Methods
Reader Method: clumps ((bpn bpn))

A topologically sorted adjustable array with a fill
pointer that holds the clumps that make up the network. Clumps are added to it by ADD-CLUMP or, more often, automatically when within a BUILD-FNN or BUILD-RNN. Rarely needed; FIND-CLUMP takes care of most uses.

Source

backprop.lisp.

Target Slot

clumps.

Generic Reader: concatenation-type (object)
Package

mgl-core.

Methods
Reader Method: concatenation-type ((concat-counter concat-counter))

A type designator suitable as the RESULT-TYPE argument to CONCATENATE.

Source

counter.lisp.

Target Slot

concatenation-type.

Generic Reader: conditioning-chunks (object)
Package

mgl-bm.

Methods
Reader Method: conditioning-chunks ((bm bm))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

conditioning-chunks.

Generic Function: confusion-class-name (matrix class)

Name of CLASS for presentation purposes.

Package

mgl-core.

Source

classification.lisp.

Methods
Method: confusion-class-name ((matrix confusion-matrix) class)
Generic Function: confusion-count (matrix target prediction)
Package

mgl-core.

Source

classification.lisp.

Methods
Method: confusion-count ((matrix confusion-matrix) target prediction)
Generic Function: (setf confusion-count) (matrix target prediction)
Package

mgl-core.

Source

classification.lisp.

Methods
Method: (setf confusion-count) ((matrix confusion-matrix) target prediction)
Generic Function: confusion-matrix-classes (matrix)

A list of all classes. The default is to collect
classes from the counts. This can be overridden if, for instance, some classes are not present in the results.

Package

mgl-core.

Source

classification.lisp.

Methods
Method: confusion-matrix-classes ((matrix confusion-matrix))
Generic Function: copy (context object)

Make a deepish copy of OBJECT in CONTEXT.

Package

mgl-util.

Source

copy.lisp.

Methods
Method: copy :around (context object)
Method: copy (context object)
Method: copy (context (cons cons))
Method: copy (context (object standard-object))
Generic Function: copy-object-extra-initargs (context original-object)

Return a list of

Package

mgl-util.

Source

copy.lisp.

Methods
Method: copy-object-extra-initargs ((context (eql mgl-bm:pcd)) (chunk chunk))
Source

boltzmann-machine.lisp.

Method: copy-object-extra-initargs ((context (eql mgl-bm:dbm->dbn)) (chunk chunk))
Source

boltzmann-machine.lisp.

Method: copy-object-extra-initargs (context original-object)
Generic Function: copy-object-slot (context original-object slot-name value)

Return the value of the slot in the copied object
and T, or NIL as the second value if the slot need not be initialized.

Package

mgl-util.

Source

copy.lisp.

Methods
Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original rbm) (slot-name (eql mgl-bm:dbn)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original dbm) (slot-name (eql mgl-bm:hidden-chunks)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original dbm) (slot-name (eql mgl-bm:visible-chunks)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original bm) (slot-name (eql mgl-core:max-n-stripes)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original bm) (slot-name (eql mgl-bm:chunks)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original full-cloud) (slot-name (eql mgl-common:weights)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original cloud) (slot-name (eql mgl-bm::cached-activations2)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original cloud) (slot-name (eql mgl-bm::cached-activations1)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original cloud) (slot-name (eql mgl-bm::cached-version2)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original cloud) (slot-name (eql mgl-bm::cached-version1)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original temporal-chunk) (slot-name (eql mgl-bm::has-inputs-p)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original temporal-chunk) (slot-name (eql mgl-bm::next-node-inputs)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original chunk) (slot-name (eql mgl-bm:indices-present)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original chunk) (slot-name (eql mgl-bm:inputs)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original chunk) (slot-name (eql mgl-bm::old-nodes)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original chunk) (slot-name (eql mgl-bm:means)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:pcd)) (original chunk) (slot-name (eql mgl-common:nodes)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original bm) (slot-name (eql mgl-core:max-n-stripes)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original bm) (slot-name (eql mgl-bm:chunks)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original full-cloud) (slot-name (eql mgl-common:weights)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original cloud) (slot-name (eql mgl-bm::cached-activations2)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original cloud) (slot-name (eql mgl-bm::cached-activations1)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original cloud) (slot-name (eql mgl-bm::cached-version2)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original cloud) (slot-name (eql mgl-bm::cached-version1)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original temporal-chunk) (slot-name (eql mgl-bm::has-inputs-p)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original temporal-chunk) (slot-name (eql mgl-bm::next-node-inputs)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original chunk) (slot-name (eql mgl-bm:indices-present)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original chunk) (slot-name (eql mgl-bm:inputs)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original chunk) (slot-name (eql mgl-bm::old-nodes)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original chunk) (slot-name (eql mgl-bm:means)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot ((context (eql mgl-bm:dbm->dbn)) (original chunk) (slot-name (eql mgl-common:nodes)) value)
Source

boltzmann-machine.lisp.

Method: copy-object-slot (context original-object slot-name value)
Generic Function: cost (model)

Return the value of the cost function being
minimized. Calling this only makes sense in the context of an ongoing optimization (see MINIMIZE). The cost is that of a batch of instances.

Package

mgl-common.

Source

optimize.lisp.

Methods
Method: cost ((lump ->softmax-xe-loss))
Source

lumps.lisp.

Method: cost ((lump ->loss))
Source

lumps.lisp.

Method: cost ((rnn rnn))
Source

backprop.lisp.

Method: cost ((bpn bpn))
Source

backprop.lisp.

Reader Method: cost ((sparsity-gradient-source sparsity-gradient-source))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

cost.

Generic Function: counter (monitor)

Return an object representing the state of MONITOR
or NIL, if it doesn’t have any (say because it’s a simple logging function). Most monitors have counters into which they accumulate results until they are printed and reset. See @MGL-COUNTER for more.

Package

mgl-core.

Source

monitor.lisp.

Methods
Reader Method: counter ((monitor monitor))

The COUNTER of a monitor carries out the
aggregation of results returned by MEASURER. See @MGL-COUNTER for a library of counters.

Target Slot

counter.

Method: counter ((monitor function))
Method: counter ((monitor symbol))
Generic Function: counter-raw-values (counter)

Return any number of values representing the state
of COUNTER in such a way that passing the returned values as arguments to ADD-TO-COUNTER on a fresh instance of the same type recreates the original state.

Package

mgl-core.

Source

counter.lisp.

Methods
Method: counter-raw-values ((counter concat-counter))
Method: counter-raw-values ((counter basic-counter))
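
The round-trip property above can be sketched as follows. This is a hypothetical example: the exact argument convention of ADD-TO-COUNTER for BASIC-COUNTER is an assumption here.

```lisp
;; Hypothetical sketch: capture a counter's state and rebuild an
;; equivalent counter from the raw values.
(let ((counter (make-instance 'mgl-core:basic-counter)))
  ;; Assumed convention: accumulate a sum and a count of instances.
  (mgl-core:add-to-counter counter 6 3)
  (let ((raw (multiple-value-list (mgl-core:counter-raw-values counter)))
        (fresh (make-instance 'mgl-core:basic-counter)))
    ;; Passing the raw values back recreates the original state.
    (apply #'mgl-core:add-to-counter fresh raw)
    (mgl-core:counter-values fresh)))
```
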
Generic Function: counter-values (counter)

Return any number of values representing the state of COUNTER. See specialized methods for type specific documentation.

Package

mgl-core.

Source

counter.lisp.

Methods
Method: counter-values ((counter concat-counter))
Method: counter-values ((counter rmse-counter))
Method: counter-values ((counter basic-counter))
Generic Reader: covariance-lump-name (object)
Package

mgl-gp.

Methods
Reader Method: covariance-lump-name ((fnn-gp fnn-gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

covariance-lump-name.

Generic Reader: cuda-window-start-time (object)
Generic Writer: (setf cuda-window-start-time) (object)
Package

mgl-bp.

Methods
Reader Method: cuda-window-start-time ((rnn rnn))
Writer Method: (setf cuda-window-start-time) ((rnn rnn))

Due to unfolding, the memory footprint of an RNN
is almost linear in the number of time steps (i.e. the max sequence length). For prediction, this is addressed by @MGL-RNN-TIME-WARP. For training, we cannot discard results of previous time steps because they are needed for backpropagation, but we can at least move them out of GPU memory if they are not going to be used for a while and copy them back before they are needed. Obviously, this is only relevant if CUDA is being used.

If CUDA-WINDOW-START-TIME is NIL, then this feature is turned off. Else, during training, at CUDA-WINDOW-START-TIME or later time steps, matrices belonging to non-weight lumps may be forced out of GPU memory and later brought back as needed.

This feature is implemented in terms of MGL-MAT:WITH-SYNCING-CUDA-FACETS that uses CUDA host memory (also known as _page-locked_ or _pinned memory_) to do asynchronous copies concurrently with normal computation. The consequence of this is that it is now main memory usage that’s unbounded, which together with page-locking makes it a potent weapon to bring a machine to a halt. You were warned.

Source

backprop.lisp.

Target Slot

cuda-window-start-time.
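
Since this is an accessor, the window can be set on an existing network. A hypothetical snippet, assuming RNN is bound to an RNN instance:

```lisp
;; Hypothetical: allow matrices of non-weight lumps to be moved out of
;; GPU memory from time step 100 onwards during training.
(setf (mgl-bp:cuda-window-start-time rnn) 100)
;; Setting it to NIL turns the feature off again.
(setf (mgl-bp:cuda-window-start-time rnn) nil)
```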

Generic Reader: damping (object)
Package

mgl-bm.

Methods
Reader Method: damping ((sparsity-gradient-source sparsity-gradient-source))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

damping.

Generic Reader: dbn (object)
Package

mgl-bm.

Methods
Reader Method: dbn ((rbm rbm))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

dbn.

Generic Function: decode (decoder encoded)

Decode ENCODED with DECODER. For a DECODER /
ENCODER pair, ‘(DECODE DECODER (ENCODE ENCODER OBJECT))‘ must be equal in some sense to ‘OBJECT‘.

If DECODER is a function designator, then it’s simply ‘FUNCALL‘ed with ENCODED.

Package

mgl-core.

Source

feature.lisp.

Methods
Method: decode ((indexer encoder/decoder) index)
Method: decode ((decoder symbol) encoded)
Method: decode ((decoder function) encoded)
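
Because function designators are simply FUNCALLed, plain functions can serve as a trivial encoder/decoder pair:

```lisp
;; Function designators are FUNCALLed directly:
(mgl-core:encode #'string-upcase "foo")   ; => "FOO"
(mgl-core:decode #'string-downcase "FOO") ; => "foo"
```
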
Generic Function: default-mean-field-supervisor (bm)

Return a function suitable as the SUPERVISOR
argument for SETTLE-MEAN-FIELD.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: default-mean-field-supervisor ((bm bm))
Generic Function: default-size (lump)

Return a default for the [SIZE][(reader lump)] of
LUMP if one is not supplied at instantiation. The value is often computed based on the sizes of the inputs. This function is for implementing new lump types.

Package

mgl-bp.

Source

lumps.lisp.

Methods
Method: default-size ((lump ->periodic))
Source

gaussian-process.lisp.

Method: default-size ((lump ->rough-exponential))
Source

gaussian-process.lisp.

Method: default-size ((lump ->stretch))
Source

gaussian-process.lisp.

Method: default-size ((lump ->rep))
Source

gaussian-process.lisp.

Method: default-size ((lump ->ref))
Source

gaussian-process.lisp.

Method: default-size ((lump ->gp))
Source

gaussian-process.lisp.

Method: default-size ((lump ->constant))
Source

unroll.lisp.

Method: default-size ((lump ->seq-barrier))
Method: default-size ((lump ->normalized))
Method: default-size ((lump ->exp))
Method: default-size ((lump ->sin))
Method: default-size ((lump ->abs))
Method: default-size ((lump ->*))
Method: default-size ((lump ->+))
Method: default-size ((lump ->v*m))
Method: default-size ((lump ->sample-binary))
Method: default-size ((lump ->softmax-xe-loss))
Method: default-size ((lump ->squared-difference))
Method: default-size ((lump ->sum))
Method: default-size ((lump ->max-channel))
Method: default-size ((lump ->min))
Method: default-size ((lump ->max))
Method: default-size ((lump ->relu))
Method: default-size ((lump ->scaled-tanh))
Method: default-size ((lump ->tanh))
Method: default-size ((lump ->embedding))
Method: default-size ((lump ->batch-normalization))
Method: default-size ((lump ->batch-normalized))
Method: default-size ((lump ->dropout))
Method: default-size (lump)
Generic Reader: default-value (object)
Package

mgl-common.

Source

common.lisp.

Methods
Reader Method: default-value ((lump lump))

Upon creation or resize the lump’s nodes get filled with this value.

Source

lumps.lisp.

Target Slot

default-value.

Reader Method: default-value ((constant-chunk constant-chunk))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

default-value.

Generic Function: derivatives (clump)

Return the MAT object representing the partial
derivatives of the function CLUMP computes. The returned partial derivatives were accumulated by previous BACKWARD calls.

This matrix is shaped like the matrix returned by NODES.

Package

mgl-bp.

Source

backprop.lisp.

Methods
Reader Method: derivatives ((lump lump))

The derivatives computed in the backward pass are
stored here. This matrix is very much like [NODES][(reader lump)] in shape and size.

Source

lumps.lisp.

Target Slot

derivatives.

Method: derivatives ((bpn bpn))
Generic Reader: dimensions (object)
Package

mgl-bp.

Methods
Reader Method: dimensions ((->weight ->weight))

NODES and DERIVATIVES of this lump will be allocated with these dimensions.

Source

lumps.lisp.

Target Slot

dimensions.

Generic Reader: dropout (object)
Generic Writer: (setf dropout) (object)
Package

mgl-bp.

Methods
Reader Method: dropout ((->sigmoid ->sigmoid))
Writer Method: (setf dropout) ((->sigmoid ->sigmoid))

See [DROPOUT][(ACCESSOR ->DROPOUT)].

Source

lumps.lisp.

Target Slot

dropout.

Reader Method: dropout ((->input ->input))
Writer Method: (setf dropout) ((->input ->input))

See [DROPOUT][(ACCESSOR ->DROPOUT)].

Source

lumps.lisp.

Target Slot

dropout.

Reader Method: dropout ((->dropout ->dropout))
Writer Method: (setf dropout) ((->dropout ->dropout))

If non-NIL, then in the forward pass zero out each node in this chunk with DROPOUT probability.

Source

lumps.lisp.

Target Slot

dropout.
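
As the accessor above suggests, dropout can be toggled on an existing lump, for instance to make the forward pass deterministic at prediction time. A hypothetical snippet, assuming DROPOUT-LUMP is bound to a ->DROPOUT instance:

```lisp
;; Hypothetical: disable dropout for prediction ...
(setf (mgl-bp:dropout dropout-lump) nil)
;; ... and re-enable it for training: zero out each node with
;; probability 0.5 in the forward pass.
(setf (mgl-bp:dropout dropout-lump) 0.5)
```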

Generic Function: encode (encoder decoded)

Encode DECODED with ENCODER. This interface is
generic enough to be almost meaningless. See ENCODER/DECODER for a simple example and MGL-NLP:BAG-OF-WORDS-ENCODER for a slightly more involved one.

If ENCODER is a function designator, then it’s simply ‘FUNCALL‘ed with DECODED.

Package

mgl-core.

Source

feature.lisp.

Methods
Method: encode ((encoder bag-of-words-encoder) decoded)
Source

nlp.lisp.

Method: encode ((indexer encoder/decoder) object)
Method: encode ((encoder symbol) decoded)
Method: encode ((encoder function) decoded)
Generic Reader: encoded-feature-test (object)
Package

mgl-nlp.

Methods
Reader Method: encoded-feature-test ((bag-of-words-encoder bag-of-words-encoder))

automatically generated reader method

Source

nlp.lisp.

Target Slot

encoded-feature-test.

Generic Reader: encoded-feature-type (object)
Package

mgl-nlp.

Methods
Reader Method: encoded-feature-type ((bag-of-words-encoder bag-of-words-encoder))

automatically generated reader method

Source

nlp.lisp.

Target Slot

encoded-feature-type.

Generic Reader: feature-encoder (object)
Package

mgl-nlp.

Methods
Reader Method: feature-encoder ((bag-of-words-encoder bag-of-words-encoder))

automatically generated reader method

Source

nlp.lisp.

Target Slot

feature-encoder.

Generic Reader: feature-mapper (object)
Package

mgl-nlp.

Methods
Reader Method: feature-mapper ((bag-of-words-encoder bag-of-words-encoder))

automatically generated reader method

Source

nlp.lisp.

Target Slot

feature-mapper.

Generic Function: find-chunk (name object &key errorp)

Find the chunk in OBJECT whose name is EQUAL to NAME. Signal an error if not found and ERRORP.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: find-chunk (name (dbn dbn) &key errorp)
Source

deep-belief-network.lisp.

Method: find-chunk (name (bm bm) &key errorp)
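
A hypothetical usage sketch, assuming BM is bound to a Boltzmann machine that has a chunk named INPUTS:

```lisp
;; Names are compared with EQUAL.
(mgl-bm:find-chunk 'inputs bm)            ; => the chunk, or NIL
;; With :ERRORP, a missing chunk signals an error instead of
;; returning NIL.
(mgl-bm:find-chunk 'no-such-chunk bm :errorp t)
```
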
Generic Function: find-cloud (name object &key errorp)

Find the cloud in OBJECT whose name is EQUAL to NAME. Signal an error if not found and ERRORP.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: find-cloud (name (dbn dbn) &key errorp)
Source

deep-belief-network.lisp.

Method: find-cloud (name (bm bm) &key errorp)
Generic Function: finishedp (sampler)

See if SAMPLER has run out of examples.

Package

mgl-dataset.

Source

dataset.lisp.

Methods
Method: finishedp ((sampler function-sampler))
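
A hypothetical sketch of draining a FUNCTION-SAMPLER. The :MAX-N-SAMPLES initarg and the SAMPLE function are assumed from elsewhere in MGL-DATASET:

```lisp
;; Hypothetical: collect samples until the sampler runs out.
(let ((sampler (make-instance 'mgl-dataset:function-sampler
                              :generator (lambda () (random 1.0))
                              :max-n-samples 5)))
  (loop until (mgl-dataset:finishedp sampler)
        collect (mgl-dataset:sample sampler)))
```
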
Generic Reader: fn (object)
Package

mgl-common.

Methods
Reader Method: fn ((diffun diffun))

A real valued lisp function. It may have any number of parameters.

Source

differentiable-function.lisp.

Target Slot

fn.

Reader Method: fn ((periodic-fn periodic-fn))

automatically generated reader method

Source

util.lisp.

Target Slot

fn.

Generic Function: forward (clump)

Compute the values of the function represented by
CLUMP for all stripes and place the results into NODES of CLUMP.

Package

mgl-bp.

Source

backprop.lisp.

Methods
Method: forward ((lump ->periodic))
Source

gaussian-process.lisp.

Method: forward ((lump ->rough-exponential))
Source

gaussian-process.lisp.

Method: forward ((lump ->stretch))
Source

gaussian-process.lisp.

Method: forward ((lump ->rep))
Source

gaussian-process.lisp.

Method: forward ((lump ->ref))
Source

gaussian-process.lisp.

Method: forward ((lump ->gp))
Source

gaussian-process.lisp.

Method: forward ((lump ->constant))
Source

unroll.lisp.

Method: forward ((lump ->seq-barrier))
Source

lumps.lisp.

Method: forward ((lump ->normalized))
Source

lumps.lisp.

Method: forward ((lump ->exp))
Source

lumps.lisp.

Method: forward ((lump ->sin))
Source

lumps.lisp.

Method: forward ((lump ->abs))
Source

lumps.lisp.

Method: forward ((lump ->*))
Source

lumps.lisp.

Method: forward ((lump ->+))
Source

lumps.lisp.

Method: forward ((lump ->v*m))
Source

lumps.lisp.

Method: forward ((lump ->sample-binary))
Source

lumps.lisp.

Method: forward ((lump ->gaussian-random))
Source

lumps.lisp.

Method: forward ((lump ->softmax-xe-loss))
Source

lumps.lisp.

Method: forward ((lump ->squared-difference))
Source

lumps.lisp.

Method: forward :around ((lump ->loss))
Source

lumps.lisp.

Method: forward ((lump ->sum))
Source

lumps.lisp.

Method: forward ((lump ->max-channel))
Source

lumps.lisp.

Method: forward ((lump ->min))
Source

lumps.lisp.

Method: forward ((lump ->max))
Source

lumps.lisp.

Method: forward ((lump ->relu))
Source

lumps.lisp.

Method: forward ((lump ->scaled-tanh))
Source

lumps.lisp.

Method: forward ((lump ->tanh))
Source

lumps.lisp.

Method: forward ((lump ->sigmoid))
Source

lumps.lisp.

Method: forward ((lump ->embedding))
Source

lumps.lisp.

Method: forward ((lump ->batch-normalized))
Source

lumps.lisp.

Method: forward ((lump ->input))
Source

lumps.lisp.

Method: forward ((lump ->dropout))
Source

lumps.lisp.

Method: forward ((lump ->weight))
Source

lumps.lisp.

Method: forward ((bpn bpn))
Method: forward :after (clump)
Generic Reader: generator (object)
Package

mgl-dataset.

Methods
Reader Method: generator ((function-sampler function-sampler))

A generator function of no arguments that returns the next sample.

Source

dataset.lisp.

Target Slot

generator.

Generic Function: gp-means (gp x)

Returns the vector of means for the vector of inputs X. X is a vector of arbitrary objects.

Package

mgl-gp.

Source

gaussian-process.lisp.

Methods
Method: gp-means ((fnn fnn-gp) x)
Method: gp-means ((gp posterior-gp) x)
Method: gp-means ((gp prior-gp) x)
Generic Function: gp-means-and-covariances* (gp x1 x2)

Returns two values: the means and the covariances as matrices.

Package

mgl-gp.

Source

gaussian-process.lisp.

Methods
Method: gp-means-and-covariances* ((fnn fnn-gp) x1 x2)
Method: gp-means-and-covariances* ((gp posterior-gp) x1 x2)
Method: gp-means-and-covariances* (gp x1 x2)
Generic Reader: group-size (object)
Package

mgl-common.

Source

common.lisp.

Methods
Reader Method: group-size ((->normalized ->normalized))

automatically generated reader method

Source

lumps.lisp.

Target Slot

group-size.

Reader Method: group-size ((->softmax-xe-loss ->softmax-xe-loss))

The number of elements in a softmax group. This is
the number of classes for classification. Often GROUP-SIZE is equal to SIZE (it is the default), but in general the only constraint is that SIZE is a multiple of GROUP-SIZE.

Source

lumps.lisp.

Target Slot

group-size.
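
For instance, a softmax over 10 classes applied to two groups would have SIZE 20 and GROUP-SIZE 10. A hypothetical sketch using MAKE-INSTANCE directly rather than MGL-BP's lump-building macros:

```lisp
;; Hypothetical: SIZE must be a multiple of GROUP-SIZE.
(make-instance 'mgl-bp:->softmax-xe-loss
               :name 'prediction
               :size 20         ; two softmax groups ...
               :group-size 10)  ; ... of 10 classes each
```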

Reader Method: group-size ((->max-channel ->max-channel))

The number of inputs in each group.

Source

lumps.lisp.

Target Slot

group-size.

Reader Method: group-size ((->min ->min))

The number of inputs in each group.

Source

lumps.lisp.

Target Slot

group-size.

Reader Method: group-size ((->max ->max))

The number of inputs in each group.

Source

lumps.lisp.

Target Slot

group-size.

Reader Method: group-size ((normalized-group-chunk normalized-group-chunk))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

group-size.

Generic Function: hidden-chunks (object)
Package

mgl-bm.

Methods
Method: hidden-chunks ((dbn dbn))
Source

deep-belief-network.lisp.

Reader Method: hidden-chunks ((bm bm))

A list of CHUNKs that are not directly observed. Disjunct from VISIBLE-CHUNKS.

Source

boltzmann-machine.lisp.

Target Slot

hidden-chunks.

Generic Reader: hidden-sampling (object)
Generic Writer: (setf hidden-sampling) (object)
Package

mgl-bm.

Methods
Reader Method: hidden-sampling ((bm-mcmc-learner bm-mcmc-learner))
Writer Method: (setf hidden-sampling) ((bm-mcmc-learner bm-mcmc-learner))

Controls whether and how hidden nodes are sampled
during learning, or whether mean field is used instead. :HALF-HEARTED, the default value, samples the hiddens but uses the hidden means to calculate the effect of the positive and negative phases on the gradient. The default should almost always be preferable to T, as it is a less noisy estimate.

Source

boltzmann-machine.lisp.

Target Slot

hidden-sampling.

Generic Reader: importance (object)
Generic Writer: (setf importance) (object)
Package

mgl-bp.

Methods
Reader Method: importance ((->loss ->loss))
Writer Method: (setf importance) ((->loss ->loss))

This is to support weighted instances, that is,
when not all training instances are equally important. If non-NIL, a 1d MAT with the importances of the stripes of the batch. When IMPORTANCE is given (typically in SET-INPUT), then instead of adding 1 to the derivatives of all stripes, IMPORTANCE is added elementwise.

Source

lumps.lisp.

Target Slot

importance.
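
A hypothetical sketch of weighting a two-instance batch unequally, typically done from a SET-INPUT method; LOSS-LUMP is assumed to be bound to a ->LOSS instance:

```lisp
;; Hypothetical: the second stripe counts four times as much as the
;; first when derivatives are accumulated.
(setf (mgl-bp:importance loss-lump)
      (mgl-mat:make-mat 2 :initial-contents '(0.5 2)))
```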

Generic Reader: importances (object)
Package

mgl-bm.

Methods
Reader Method: importances ((bm bm))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

importances.

Generic Writer: (setf importances) (object)
Package

mgl-bm.

Methods
Writer Method: (setf importances) ((bm bm))

automatically generated writer method

Source

boltzmann-machine.lisp.

Target Slot

importances.

Generic Reader: index (object)
Package

mgl-util.

Methods
Reader Method: index ((->ref ->ref))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

index.

Generic Reader: indices-present (object)
Generic Writer: (setf indices-present) (object)
Package

mgl-bm.

Methods
Reader Method: indices-present ((chunk chunk))
Writer Method: (setf indices-present) ((chunk chunk))

NIL or a simple vector of array indices into the
layer’s NODES. Need not be ordered. SET-INPUT sets it. Note that if it is non-NIL then N-STRIPES must be 1.

Source

boltzmann-machine.lisp.

Target Slot

indices-present.

Generic Function: initialize-gradient-source* (optimizer gradient-source weights dataset)

Called automatically before MINIMIZE* is called,
this function may be specialized if GRADIENT-SOURCE needs some kind of setup.

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: initialize-gradient-source* (optimizer (learner bp-learner) weights dataset)
Source

backprop.lisp.

Method: initialize-gradient-source* (optimizer (learner sparse-bm-learner) weights dataset)
Source

boltzmann-machine.lisp.

Method: initialize-gradient-source* (optimizer gradient-source weights dataset)

The default method does nothing.

Generic Function: initialize-optimizer* (optimizer gradient-source weights dataset)

Called automatically before training starts, this function sets up OPTIMIZER to be suitable for optimizing GRADIENT-SOURCE. It typically creates appropriately sized accumulators for the gradients.

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: initialize-optimizer* ((optimizer cg-optimizer) source weights dataset)
Source

conjugate-gradient.lisp.

Method: initialize-optimizer* ((optimizer segmented-gd-optimizer) source weights dataset)
Source

gradient-descent.lisp.

Method: initialize-optimizer* ((optimizer per-weight-batch-gd-optimizer) source weights dataset)
Source

gradient-descent.lisp.

Method: initialize-optimizer* ((optimizer normalized-batch-gd-optimizer) source weights dataset)
Source

gradient-descent.lisp.

Method: initialize-optimizer* ((optimizer adam-optimizer) source weights dataset)
Source

gradient-descent.lisp.

Method: initialize-optimizer* ((optimizer gd-optimizer) source weights dataset)
Source

gradient-descent.lisp.

Generic Reader: input-row-indices (object)
Generic Writer: (setf input-row-indices) (object)
Package

mgl-bp.

Methods
Reader Method: input-row-indices ((->embedding ->embedding))
Writer Method: (setf input-row-indices) ((->embedding ->embedding))

A sequence of batch size length of row indices. To be set in SET-INPUT.

Source

lumps.lisp.

Target Slot

input-row-indices.

Generic Reader: inputs (object)
Package

mgl-bm.

Methods
Reader Method: inputs ((chunk chunk))

This is where the after method of SET-INPUT saves
the input for later use by RECONSTRUCTION-ERROR and INPUTS->NODES. It is NIL in CONDITIONING-CHUNKS.

Source

boltzmann-machine.lisp.

Target Slot

inputs.

Generic Function: instance-to-executor-parameters (instance cache)

Return the parameters for an executor able to
handle INSTANCE. Called by MAP-OVER-EXECUTORS on CACHE (that’s a PARAMETERIZED-EXECUTOR-CACHE-MIXIN). The returned parameters are keys in an EQUAL parameters->executor hash table.

Package

mgl-core.

Source

core.lisp.

Methods
Method: instance-to-executor-parameters (sample (fnn-gp fnn-gp))
Source

gaussian-process.lisp.

Generic Function: label-index (instance)

Return the label of INSTANCE as a non-negative integer.

Package

mgl-core.

Source

classification.lisp.

Generic Function: label-index-distribution (instance)

Return a one dimensional array of probabilities
representing the distribution of labels. The probability of the label with LABEL-INDEX ‘I‘ is the element at index ‘I‘ of the returned array.

Package

mgl-core.

Source

classification.lisp.

Generic Function: label-index-distributions (result)

Return a sequence of label index distributions for
RESULTS produced by some model for a batch of instances. This is akin to LABEL-INDEX-DISTRIBUTION.

Package

mgl-core.

Source

classification.lisp.

Methods
Method: label-index-distributions ((lump ->softmax-xe-loss))
Source

lumps.lisp.

Method: label-index-distributions ((chunk softmax-label-chunk))
Source

boltzmann-machine.lisp.

Generic Function: label-indices (results)

Return a sequence of label indices for RESULTS
produced by some model for a batch of instances. This is akin to LABEL-INDEX.

Package

mgl-core.

Source

classification.lisp.

Methods
Method: label-indices ((lump ->softmax-xe-loss))
Source

lumps.lisp.

Method: label-indices ((chunk softmax-label-chunk))
Source

boltzmann-machine.lisp.

Generic Reader: last-eval (object)
Package

mgl-util.

Methods
Reader Method: last-eval ((periodic-fn periodic-fn))

automatically generated reader method

Source

util.lisp.

Target Slot

last-eval.

Generic Writer: (setf last-eval) (object)
Package

mgl-util.

Methods
Writer Method: (setf last-eval) ((periodic-fn periodic-fn))

automatically generated writer method

Source

util.lisp.

Target Slot

last-eval.

Generic Reader: layers (object)
Package

mgl-bm.

Methods
Reader Method: layers ((dbm dbm))

A list of layers from bottom up. A layer is a list
of chunks. The layers partition the set of all chunks in the BM. Chunks with no connections to layers below are visible (including constant and conditioning) chunks. The layered structure is used in the single, bottom-up, approximate inference pass. When instantiating a DBM, VISIBLE-CHUNKS and HIDDEN-CHUNKS are inferred from LAYERS and CLOUDS.

Source

boltzmann-machine.lisp.

Target Slot

layers.

Generic Reader: learning-rate (object)
Generic Writer: (setf learning-rate) (object)
Package

mgl-gd.

Methods
Reader Method: learning-rate ((adam-optimizer adam-optimizer))
Writer Method: (setf learning-rate) ((adam-optimizer adam-optimizer))

Same thing as [LEARNING-RATE][(ACCESSOR
GD-OPTIMIZER)] but with the default suggested by the Adam paper.

Source

gradient-descent.lisp.

Target Slot

learning-rate.

Reader Method: learning-rate ((gd-optimizer gd-optimizer))
Writer Method: (setf learning-rate) ((gd-optimizer gd-optimizer))

This is the step size along the gradient. Decrease
it if optimization diverges, increase it if it doesn’t make progress.

Source

gradient-descent.lisp.

Target Slot

learning-rate.

Generic Function: log-cg-batch-done (optimizer gradient-source instances best-w best-f n-line-searches n-succesful-line-searches n-evaluations)

This is a function that can be added to
ON-CG-BATCH-DONE. The default implementation simply logs the event arguments.

Package

mgl-cg.

Source

conjugate-gradient.lisp.

Methods
Method: log-cg-batch-done (optimizer gradient-source instances best-w best-f n-line-searches n-succesful-line-searches n-evaluations)
Generic Function: make-classification-accuracy-monitors* (model operation-mode label-index-fn attributes)

Identical to MAKE-CLASSIFICATION-ACCURACY-MONITORS
bar the keyword arguments. Specialize this to add support for new model types. The default implementation also allows for some extensibility: if LABEL-INDICES is defined on MODEL, then it will be used to extract label indices from model results.

Package

mgl-core.

Source

classification.lisp.

Methods
Method: make-classification-accuracy-monitors* ((bpn bpn) operation-mode label-index-fn attributes)
Source

backprop.lisp.

Method: make-classification-accuracy-monitors* ((dbn dbn) operation-mode label-index-fn attributes)
Source

deep-belief-network.lisp.

Method: make-classification-accuracy-monitors* ((bm bm) operation-mode label-index-fn attributes)
Source

boltzmann-machine.lisp.

Method: make-classification-accuracy-monitors* (object operation-mode label-index-fn attributes)
Generic Function: make-cost-monitors* (model operation-mode attributes)

Identical to MAKE-COST-MONITORS bar the keyword
arguments. Specialize this to add support for new model types.

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: make-cost-monitors* (object operation-mode attributes)
Generic Function: make-cross-entropy-monitors* (model operation-mode label-index-distribution-fn attributes)

Identical to MAKE-CROSS-ENTROPY-MONITORS bar the
keyword arguments. Specialize this to add support for new model types. The default implementation also allows for some extensibility: if LABEL-INDEX-DISTRIBUTIONS is defined on MODEL, then it will be used to extract label distributions from model results.

Package

mgl-core.

Source

classification.lisp.

Methods
Method: make-cross-entropy-monitors* ((bpn bpn) operation-mode label-index-distribution-fn attributes)
Source

backprop.lisp.

Method: make-cross-entropy-monitors* ((dbn dbn) operation-mode label-index-distribution-fn attributes)
Source

deep-belief-network.lisp.

Method: make-cross-entropy-monitors* ((bm bm) operation-mode label-index-distribution-fn attributes)
Source

boltzmann-machine.lisp.

Method: make-cross-entropy-monitors* (object operation-mode label-index-distribution-fn attributes)
Generic Function: make-executor-with-parameters (parameters cache)

Create a new executor for PARAMETERS. CACHE is a PARAMETERIZED-EXECUTOR-CACHE-MIXIN. In the BPN gaussian process example, PARAMETERS would be a list of input dimensions.

Package

mgl-core.

Source

core.lisp.

Generic Function: make-reconstruction-monitors* (model operation-mode attributes)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: make-reconstruction-monitors* ((dbn dbn) operation-mode attributes)
Source

deep-belief-network.lisp.

Method: make-reconstruction-monitors* ((chunk chunk) operation-mode attributes)
Method: make-reconstruction-monitors* ((bm bm) operation-mode attributes)
Generic Function: make-step-monitor-monitor-counter (step-counter)

In an RNN, STEP-COUNTER aggregates results of all
the time steps during the processing of instances in the current batch. Return a new counter into which results from STEP-COUNTER can be accumulated when the processing of the batch is finished. The default implementation creates a copy of STEP-COUNTER.

Package

mgl-bp.

Source

backprop.lisp.

Methods
Method: make-step-monitor-monitor-counter (step-counter)
Generic Function: map-confusion-matrix (fn matrix)

Call FN with [‘TARGET‘][dislocated], PREDICTION,
COUNT parameters for each cell in the confusion matrix. Cells with a zero count may be omitted.

Package

mgl-core.

Source

classification.lisp.

Methods
Method: map-confusion-matrix (fn (matrix confusion-matrix))
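
The signature above translates directly into a usage sketch; MATRIX is assumed to be bound to a CONFUSION-MATRIX instance:

```lisp
;; Print every (possibly non-zero) cell of a confusion matrix.
(mgl-core:map-confusion-matrix
 (lambda (target prediction count)
   (format t "target=~S prediction=~S count=~S~%"
           target prediction count))
 matrix)
```
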
Generic Function: map-gradient-sink (fn sink)

Call FN of lambda list (SEGMENT ACCUMULATOR) on
each segment and their corresponding accumulator MAT in SINK.

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: map-gradient-sink (fn (optimizer cg-optimizer))
Source

conjugate-gradient.lisp.

Method: map-gradient-sink (fn (optimizer segmented-gd-optimizer))
Source

gradient-descent.lisp.

Method: map-gradient-sink (fn (optimizer gd-optimizer))
Source

gradient-descent.lisp.
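
A hypothetical sketch of walking a gradient sink, assuming OPTIMIZER is bound to a GD-OPTIMIZER and that MGL-MAT:MAT-SIZE gives the number of elements in a MAT:

```lisp
;; Hypothetical: total number of accumulated gradient elements.
(let ((total 0))
  (mgl-opt:map-gradient-sink
   (lambda (segment accumulator)
     (declare (ignore segment))
     (incf total (mgl-mat:mat-size accumulator)))
   optimizer)
  total)
```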

Generic Function: map-over-executors (fn instances prototype-executor)

Divide INSTANCES between executors that perform the
same function as PROTOTYPE-EXECUTOR and call FN with each batch of instances and the executor assigned to handle them.

Some objects conflate function and call: the forward pass of a [MGL-BP:BPN][class] computes output from inputs so it is like a function but it also doubles as a function call in the sense that the bpn (function) object changes state during the computation of the output. Hence not even the forward pass of a bpn is thread safe. There is also the restriction that all inputs must be of the same size.

For example, if we have a function that builds a bpn for an input of a certain size, then we can create a factory that creates bpns for a particular call. The factory probably wants to keep the weights the same though. In @MGL-PARAMETERIZED-EXECUTOR-CACHE, MAKE-EXECUTOR-WITH-PARAMETERS is this factory.

Parallelization of execution is another possibility MAP-OVER-EXECUTORS allows, but there is no prebuilt solution for it, yet.

The default implementation simply calls FN with INSTANCES and PROTOTYPE-EXECUTOR.
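A sketch of typical caller-side use (assuming mgl is loaded and BPN is a network): run the forward pass on whichever executor handles each group of instances. With the default method, FN is simply called once with all INSTANCES and BPN itself.

```lisp
;; Sketch: forward each group of instances on its assigned executor.
(defun forward-all (instances bpn)
  (mgl-core:map-over-executors
   (lambda (batch executor)
     (mgl-core:set-input batch executor)
     (mgl-bp:forward executor))
   instances
   bpn))
```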

Package

mgl-core.

Source

core.lisp.

Methods
Method: map-over-executors (fn instances (c parameterized-executor-cache-mixin))
Method: map-over-executors (fn instances object)
Generic Function: map-segment-runs (fn segment)

Call FN with the start and end of intervals of
consecutive indices that are not missing in SEGMENT. Called by optimizers that support partial updates. The default implementation assumes that all weights are present. This only needs to be specialized if one plans to use an optimizer that knows how to deal with unused/missing weights such as MGL-GD:NORMALIZED-BATCH-GD-OPTIMIZER and MGL-GD:PER-WEIGHT-BATCH-GD-OPTIMIZER.
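A sketch (assuming mgl is loaded) that collects the intervals as (START . END) pairs; with the default method this is a single run covering all weights:

```lisp
;; Sketch: collect the index intervals of present weights in SEGMENT.
(defun segment-runs (segment)
  (let ((runs ()))
    (mgl-opt:map-segment-runs
     (lambda (start end)
       (push (cons start end) runs))
     segment)
    (nreverse runs)))
```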

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: map-segment-runs (fn (cloud full-cloud))
Source

boltzmann-machine.lisp.

Method: map-segment-runs (fn segment)
Generic Function: map-segments (fn gradient-source)

Apply FN to each segment of GRADIENT-SOURCE.
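For instance (a sketch, assuming mgl and mgl-mat are loaded), the total number of weights in a gradient source can be counted by visiting each segment:

```lisp
;; Sketch: count the weights in GRADIENT-SOURCE by walking its segments.
(defun count-weights (gradient-source)
  (let ((n 0))
    (mgl-opt:map-segments
     (lambda (segment)
       (incf n (mgl-mat:mat-size (mgl-opt:segment-weights segment))))
     gradient-source)
    n))
```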

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: map-segments (fn (lump ->weight))
Source

lumps.lisp.

Method: map-segments (fn (lump lump))
Source

lumps.lisp.

Method: map-segments (fn (source bp-learner))
Source

backprop.lisp.

Method: map-segments (fn (rnn rnn))
Source

backprop.lisp.

Method: map-segments (fn (bpn bpn))
Source

backprop.lisp.

Method: map-segments (fn (source bm-learner))
Source

boltzmann-machine.lisp.

Method: map-segments (fn (bm bm))
Source

boltzmann-machine.lisp.

Method: map-segments (fn (cloud factored-cloud))
Source

boltzmann-machine.lisp.

Method: map-segments (fn (cloud full-cloud))
Source

boltzmann-machine.lisp.

Method: map-segments (fn (segment-list list))
Generic Reader: max-lag (object)
Package

mgl-bp.

Methods
Reader Method: max-lag ((rnn rnn))

The networks built by UNFOLDER may contain new
weights up to time step MAX-LAG. Beyond that point, all weight lumps must be reappearances of weight lumps with the same name at previous time steps. Most recurrent networks reference only the state of lumps at the previous time step (with the function LAG), hence the default of 1. But it is possible to have connections to arbitrary time steps. The maximum connection lag must be specified when creating the RNN.

Source

backprop.lisp.

Target Slot

max-lag.

Generic Reader: max-n-samples (object)
Package

mgl-dataset.

Methods
Reader Method: max-n-samples ((function-sampler function-sampler))

automatically generated reader method

Source

dataset.lisp.

Target Slot

max-n-samples.

Generic Writer: (setf max-n-samples) (object)
Package

mgl-dataset.

Methods
Writer Method: (setf max-n-samples) ((function-sampler function-sampler))

automatically generated writer method

Source

dataset.lisp.

Target Slot

max-n-samples.

Generic Function: max-n-stripes (object)

The number of stripes with which the OBJECT is capable of dealing simultaneously.

Package

mgl-core.

Source

core.lisp.

Setf expander for this generic function

(setf max-n-stripes).

Methods
Method: max-n-stripes ((weight ->weight))
Source

lumps.lisp.

Method: max-n-stripes ((lump lump))
Source

lumps.lisp.

Reader Method: max-n-stripes ((bpn bpn))

The maximum number of instances the network can
operate on in parallel. Within BUILD-FNN or BUILD-RNN, it defaults to MAX-N-STRIPES of that parent network, else it defaults to 1. When set, MAX-N-STRIPES of all CLUMPS is set to the same value.

Source

backprop.lisp.

Target Slot

max-n-stripes.

Reader Method: max-n-stripes ((dbn dbn))

automatically generated reader method

Source

deep-belief-network.lisp.

Target Slot

max-n-stripes.

Reader Method: max-n-stripes ((bm bm))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

max-n-stripes.

Method: max-n-stripes ((chunk chunk))
Source

boltzmann-machine.lisp.

Generic Reader: mean (object)
Generic Writer: (setf mean) (object)
Package

mgl-bp.

Methods
Reader Method: mean ((->gaussian-random ->gaussian-random))
Writer Method: (setf mean) ((->gaussian-random ->gaussian-random))

The mean of the normal distribution.

Source

lumps.lisp.

Target Slot

mean.

Generic Reader: mean-decay (object)
Generic Writer: (setf mean-decay) (object)
Package

mgl-gd.

Methods
Reader Method: mean-decay ((adam-optimizer adam-optimizer))
Writer Method: (setf mean-decay) ((adam-optimizer adam-optimizer))

A number between 0 and 1 that determines how fast
the estimated mean of derivatives is updated. 0 basically gives you RMSPROP (if VARIANCE-DECAY is not too large) or AdaGrad (if VARIANCE-DECAY is close to 1 and the learning rate is annealed). This is $\beta_1$ in the paper.
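Illustratively, MEAN-DECAY enters the update of the first-moment estimate like this (a scalar sketch of the standard Adam rule, not mgl's MAT-based implementation):

```lisp
;; Scalar sketch of Adam's first-moment update:
;; mean <- mean-decay * mean + (1 - mean-decay) * gradient
(defun update-mean-estimate (mean gradient mean-decay)
  (+ (* mean-decay mean)
     (* (- 1 mean-decay) gradient)))
```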

Source

gradient-descent.lisp.

Target Slot

mean-decay.

Generic Reader: mean-decay-decay (object)
Generic Writer: (setf mean-decay-decay) (object)
Package

mgl-gd.

Methods
Reader Method: mean-decay-decay ((adam-optimizer adam-optimizer))
Writer Method: (setf mean-decay-decay) ((adam-optimizer adam-optimizer))

A value that should be close to 1. MEAN-DECAY is
multiplied by this value after each update. This is $\lambda$ in the paper.

Source

gradient-descent.lisp.

Target Slot

mean-decay-decay.

Generic Reader: mean-lump-name (object)
Package

mgl-gp.

Methods
Reader Method: mean-lump-name ((fnn-gp fnn-gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

mean-lump-name.

Generic Reader: means (object)
Package

mgl-bm.

Methods
Reader Method: means ((chunk chunk))

Saved values of the means (see SET-MEAN) last computed.

Source

boltzmann-machine.lisp.

Target Slot

means.

Generic Reader: measurer (object)
Package

mgl-core.

Methods
Reader Method: measurer ((monitor monitor))

This must itself be a monitor, which only means
that APPLY-MONITOR is defined on it (but see @MGL-MONITORING). The returned values are aggregated by [COUNTER][(READER MONITOR)]. See @MGL-MEASURER for a library of measurers.

Source

monitor.lisp.

Target Slot

measurer.

Generic Function: minimize* (optimizer gradient-source weights dataset)

Called by MINIMIZE after INITIALIZE-OPTIMIZER* and INITIALIZE-GRADIENT-SOURCE*, this generic function is the main extension point for writing optimizers.
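A minimal sketch of that extension point (assuming mgl is loaded; the class name MY-OPTIMIZER is hypothetical):

```lisp
;; Sketch: a do-nothing custom optimizer plugged in via MINIMIZE*.
(defclass my-optimizer () ())

(defmethod mgl-opt:minimize* ((optimizer my-optimizer) gradient-source
                              weights dataset)
  ;; A real optimizer would loop over DATASET, call
  ;; MGL-OPT:ACCUMULATE-GRADIENTS* on GRADIENT-SOURCE, and update WEIGHTS.
  (declare (ignore gradient-source weights dataset))
  nil)
```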

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: minimize* ((optimizer cg-optimizer) gradient-source weights dataset)
Source

conjugate-gradient.lisp.

Method: minimize* ((optimizer base-gd-optimizer) gradient-source weights dataset)
Source

gradient-descent.lisp.

Method: minimize* :around ((optimizer iterative-optimizer) gradient-source weights dataset)
Generic Reader: momentum (object)
Generic Writer: (setf momentum) (object)
Package

mgl-gd.

Methods
Reader Method: momentum ((gd-optimizer gd-optimizer))
Writer Method: (setf momentum) ((gd-optimizer gd-optimizer))

A value in the [0, 1) interval. MOMENTUM times the
previous weight change is added to the gradient. 0 means no momentum.
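Illustratively, the :NORMAL momentum rule for a single weight can be sketched as follows (a scalar sketch, not mgl's actual MAT-based implementation):

```lisp
;; Scalar sketch of the classic momentum update:
;; delta  <- momentum * delta + gradient
;; weight <- weight - learning-rate * delta
(defun momentum-step (weight delta gradient learning-rate momentum)
  (let ((new-delta (+ (* momentum delta) gradient)))
    (values (- weight (* learning-rate new-delta))
            new-delta)))
```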

Source

gradient-descent.lisp.

Target Slot

momentum.

Generic Reader: momentum-type (object)
Package

mgl-gd.

Methods
Reader Method: momentum-type ((gd-optimizer gd-optimizer))

One of :NORMAL, :NESTEROV or :NONE. For pure
optimization Nesterov’s momentum may be better, but it may also increase the chances of overfitting. Using :NONE is equivalent to 0 momentum, but it also uses less memory. Note that with :NONE, MOMENTUM is ignored even if it is non-zero.

Source

gradient-descent.lisp.

Target Slot

momentum-type.

Generic Function: monitors (object)

Return monitors associated with OBJECT. See various
methods such as [MONITORS][(accessor mgl-bp:bp-learner)] for more documentation.

Package

mgl-core.

Source

monitor.lisp.

Methods
Reader Method: monitors ((bp-learner bp-learner))

A list of ‘MONITOR‘s.

Source

backprop.lisp.

Target Slot

monitors.

Reader Method: monitors ((bm-learner bm-learner))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

monitors.

Method: monitors ((optimizer iterative-optimizer))
Source

optimize.lisp.

Generic Writer: (setf monitors) (object)
Package

mgl-core.

Methods
Writer Method: (setf monitors) ((bp-learner bp-learner))

A list of ‘MONITOR‘s.

Source

backprop.lisp.

Target Slot

monitors.

Generic Reader: n-gibbs (object)
Generic Writer: (setf n-gibbs) (object)
Package

mgl-bm.

Methods
Reader Method: n-gibbs ((bm-mcmc-learner bm-mcmc-learner))
Writer Method: (setf n-gibbs) ((bm-mcmc-learner bm-mcmc-learner))

The number of steps of Gibbs sampling to perform.
This is how many full (HIDDEN -> VISIBLE -> HIDDEN) steps are taken for CD learning, and how many times each chunk is sampled for PCD.

Source

boltzmann-machine.lisp.

Target Slot

n-gibbs.

Generic Reader: n-instances (object)
Package

mgl-opt.

Methods
Reader Method: n-instances ((iterative-optimizer iterative-optimizer))

The number of instances this optimizer has seen so far. Incremented automatically during optimization.

Source

optimize.lisp.

Target Slot

n-instances.

Generic Reader: n-particles (object)
Package

mgl-bm.

Methods
Reader Method: n-particles ((bm-pcd-learner bm-pcd-learner))

The number of persistent chains to run. Also known as the number of fantasy particles.

Source

boltzmann-machine.lisp.

Target Slot

n-particles.

Generic Reader: n-samples (object)
Package

mgl-dataset.

Methods
Reader Method: n-samples ((function-sampler function-sampler))
Source

dataset.lisp.

Target Slot

n-samples.

Generic Function: n-stripes (object)

The number of stripes currently present in OBJECT. This is at most MAX-N-STRIPES.

Package

mgl-core.

Source

core.lisp.

Setf expander for this generic function

(setf n-stripes).

Methods
Method: n-stripes ((weight ->weight))
Source

lumps.lisp.

Method: n-stripes ((lump lump))
Source

lumps.lisp.

Reader Method: n-stripes ((bpn bpn))

The current number of instances the network has.
This is automatically set to the number of instances passed to SET-INPUT, so it rarely has to be manipulated directly although it can be set. When set, N-STRIPES of all CLUMPS is set to the same value.

Source

backprop.lisp.

Target Slot

n-stripes.

Method: n-stripes ((dbn dbn))
Source

deep-belief-network.lisp.

Method: n-stripes ((bm bm))
Source

boltzmann-machine.lisp.

Reader Method: n-stripes ((chunk chunk))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

n-stripes.

Generic Reader: n-weight-uses-in-batch (object)
Generic Writer: (setf n-weight-uses-in-batch) (object)
Package

mgl-gd.

Methods
Reader Method: n-weight-uses-in-batch ((per-weight-batch-gd-optimizer per-weight-batch-gd-optimizer))
Writer Method: (setf n-weight-uses-in-batch) ((per-weight-batch-gd-optimizer per-weight-batch-gd-optimizer))

Number of uses of the weight in its current batch.

Source

gradient-descent.lisp.

Target Slot

n-weight-uses-in-batch.

Reader Method: n-weight-uses-in-batch ((normalized-batch-gd-optimizer normalized-batch-gd-optimizer))
Writer Method: (setf n-weight-uses-in-batch) ((normalized-batch-gd-optimizer normalized-batch-gd-optimizer))

Number of uses of the weight in its current batch.

Source

gradient-descent.lisp.

Target Slot

n-weight-uses-in-batch.

Generic Function: name (object)
Package

mgl-common.

Source

common.lisp.

Methods
Method: name ((ref lagged-clump))
Source

lumps.lisp.

Reader Method: name ((clump clump))

automatically generated reader method

Source

backprop.lisp.

Target Slot

name.

Reader Method: name ((cloud cloud))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

name.

Reader Method: name ((chunk chunk))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

name.

Method: name ((attributed attributed))

Return a string assembled from the values of the ATTRIBUTES of ATTRIBUTED. If there are multiple entries with the same key, then they are printed near together.

Values may be padded according to an enclosing WITH-PADDED-ATTRIBUTE-PRINTING.

Source

counter.lisp.

Reader Method: name ((function-sampler function-sampler))

An arbitrary object naming the sampler. Only used for printing the sampler object.

Source

dataset.lisp.

Target Slot

name.

Generic Function: negative-phase (batch learner gradient-sink multiplier)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: negative-phase (batch (learner bm-pcd-learner) gradient-sink multiplier)
Method: negative-phase (batch (learner rbm-cd-learner) gradient-sink multiplier)
Generic Function: nodes (object)

Returns a MAT object representing the state or
result of OBJECT. The first dimension of the returned matrix is equal to the number of stripes.

Package

mgl-common.

Source

common.lisp.

Methods
Reader Method: nodes ((lump lump))

The values computed by the lump in the forward
pass are stored here. It is an ‘N-STRIPES * SIZE‘ matrix that has storage allocated for ‘MAX-N-STRIPES * SIZE‘ elements for non-weight lumps. ->WEIGHT lumps have no stripes nor restrictions on their shape.

Source

lumps.lisp.

Target Slot

nodes.

Method: nodes ((bpn bpn))
Source

backprop.lisp.

Reader Method: nodes ((chunk chunk))

A value for each node in the chunk. First,
activations are put here (weights*inputs) then the mean of the probability distribution is calculated from the activation and finally (optionally) a sample is taken from the probability distribution. All these values are stored in this vector. This is also where SET-INPUT is supposed to clamp the values. Note that not only the values in the matrix but also the matrix object itself can change when the network is used.

Source

boltzmann-machine.lisp.

Target Slot

nodes.

Generic Reader: on-cg-batch-done (object)
Generic Writer: (setf on-cg-batch-done) (object)
Package

mgl-cg.

Methods
Reader Method: on-cg-batch-done ((cg-optimizer cg-optimizer))
Writer Method: (setf on-cg-batch-done) ((cg-optimizer cg-optimizer))

An event hook called when processing a conjugate
gradient batch is done. The handlers on the hook are called with 8 arguments:

(optimizer gradient-source instances
best-w best-f n-line-searches
n-succesful-line-searches n-evaluations)

The latter 5 are the return values of the CG function.

Source

conjugate-gradient.lisp.

Target Slot

on-cg-batch-done.

Generic Reader: on-n-instances-changed (object)
Generic Writer: (setf on-n-instances-changed) (object)
Package

mgl-opt.

Methods
Reader Method: on-n-instances-changed ((iterative-optimizer iterative-optimizer))
Writer Method: (setf on-n-instances-changed) ((iterative-optimizer iterative-optimizer))

An event hook with parameters ‘(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)‘. Called when optimization of a batch of instances is done and N-INSTANCES is incremented.

Source

optimize.lisp.

Target Slot

on-n-instances-changed.

Generic Reader: on-optimization-finished (object)
Generic Writer: (setf on-optimization-finished) (object)
Package

mgl-opt.

Methods
Reader Method: on-optimization-finished ((iterative-optimizer iterative-optimizer))
Writer Method: (setf on-optimization-finished) ((iterative-optimizer iterative-optimizer))

An event hook with parameters ‘(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)‘. Called when optimization has finished.

Source

optimize.lisp.

Target Slot

on-optimization-finished.

Generic Reader: on-optimization-started (object)
Generic Writer: (setf on-optimization-started) (object)
Package

mgl-opt.

Methods
Reader Method: on-optimization-started ((iterative-optimizer iterative-optimizer))
Writer Method: (setf on-optimization-started) ((iterative-optimizer iterative-optimizer))

An event hook with parameters ‘(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)‘. Called after initializations are performed (INITIALIZE-OPTIMIZER*, INITIALIZE-GRADIENT-SOURCE*) but before optimization is started.

Source

optimize.lisp.

Target Slot

on-optimization-started.

Generic Reader: parameter-indices (object)
Package

mgl-diffun.

Methods
Reader Method: parameter-indices ((diffun diffun))

The list of indices of parameters that we don’t
optimize. Values for these will come from the DATASET argument of MINIMIZE.

Source

differentiable-function.lisp.

Target Slot

parameter-indices.

Generic Reader: persistent-chains (object)
Package

mgl-bm.

Methods
Reader Method: persistent-chains ((bm-pcd-learner bm-pcd-learner))

A BM that keeps the states of the persistent
chains (each stripe is a chain), initialized from the BM being trained by COPY with ’PCD as the context. Suitable for training BM and RBM.

Source

boltzmann-machine.lisp.

Target Slot

persistent-chains.

Generic Reader: populate-key (object)
Package

mgl-unroll.

Methods
Reader Method: populate-key ((fnn-clamping-cache fnn-clamping-cache))

automatically generated reader method

Source

unroll.lisp.

Target Slot

populate-key.

Generic Reader: populate-map-cache-lazily-from-dbm (object)
Package

mgl-unroll.

Methods
Reader Method: populate-map-cache-lazily-from-dbm ((fnn-clamping-cache fnn-clamping-cache))

automatically generated reader method

Source

unroll.lisp.

Target Slot

populate-map-cache-lazily-from-dbm.

Generic Reader: population-decay (object)
Package

mgl-bp.

Methods
Reader Method: population-decay ((->batch-normalization ->batch-normalization))

While training, an exponential moving average of
batch means and standard deviations (termed _population statistics_) is updated. When making predictions, normalization is performed using these statistics. These population statistics are persisted by SAVE-STATE.

Source

lumps.lisp.

Target Slot

population-decay.

Generic Function: positive-phase (batch learner gradient-sink multiplier)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: positive-phase (batch (learner bm-pcd-learner) gradient-sink multiplier)
Method: positive-phase (batch (learner rbm-cd-learner) gradient-sink multiplier)
Generic Reader: prior-gp (object)
Package

mgl-gp.

Methods
Reader Method: prior-gp ((posterior-gp posterior-gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

prior-gp.

Generic Reader: rbm (object)
Package

mgl-bm.

Methods
Reader Method: rbm ((rbm-cd-learner rbm-cd-learner))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

bm.

Generic Reader: rbms (object)
Package

mgl-bm.

Methods
Reader Method: rbms ((dbn dbn))

automatically generated reader method

Source

deep-belief-network.lisp.

Target Slot

rbms.

Generic Function: read-state* (object stream context)

This is the extension point for READ-STATE. It is
guaranteed that primary READ-STATE* methods will be called only once for each OBJECT (under EQ). CONTEXT is an opaque object and must be passed on to any recursive READ-STATE* calls.

Package

mgl-core.

Source

core.lisp.

Methods
Method: read-state* ((lump ->batch-normalization) stream context)
Source

lumps.lisp.

Method: read-state* ((lump ->weight) stream context)
Source

lumps.lisp.

Method: read-state* ((lump lump) stream context)
Source

lumps.lisp.

Method: read-state* ((bpn bpn) stream context)
Source

backprop.lisp.

Method: read-state* ((dbn dbn) stream context)
Source

deep-belief-network.lisp.

Method: read-state* ((bm bm) stream context)
Source

boltzmann-machine.lisp.

Method: read-state* ((cloud factored-cloud) stream seen)
Source

boltzmann-machine.lisp.

Method: read-state* ((cloud full-cloud) stream seen)
Source

boltzmann-machine.lisp.

Method: read-state* ((indexer encoder/decoder) stream context)
Source

feature.lisp.

Method: read-state* :around (object stream context)
Generic Function: report-optimization-parameters (optimizer gradient-source)

A utility that’s often called at the start of
optimization (from ON-OPTIMIZATION-STARTED). The default implementation logs the description of GRADIENT-SOURCE (as in DESCRIBE) and OPTIMIZER and calls LOG-MAT-ROOM.

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: report-optimization-parameters (optimizer gradient-source)
Generic Function: reset-counter (counter)

Restore state of COUNTER to what it was just after creation.

Package

mgl-core.

Source

counter.lisp.

Methods
Method: reset-counter ((counter cross-entropy-counter))
Source

classification.lisp.

Method: reset-counter ((counter concat-counter))
Method: reset-counter ((counter basic-counter))
Generic Function: reset-optimization-monitors (optimizer gradient-source)

Report the state of [MONITORS][generic-function] of
OPTIMIZER and GRADIENT-SOURCE and reset their counters. See MONITOR-OPTIMIZATION-PERIODICALLY for an example of how this is used.

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: reset-optimization-monitors ((optimizer iterative-optimizer) gradient-source)

Log the counters of the monitors of OPTIMIZER and GRADIENT-SOURCE and reset them.

Generic Function: sample (sampler)

If SAMPLER has not run out of data (see FINISHEDP),
SAMPLE returns an object that represents a sample from the world to be experienced or, in other words, simply something that can be used as input for training or prediction. It is not allowed to call SAMPLE if SAMPLER is FINISHEDP.
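A sketch (assuming mgl is loaded): a FUNCTION-SAMPLER generating random numbers, capped at 3 samples via MAX-N-SAMPLES, drained with SAMPLE and FINISHEDP.

```lisp
;; Sketch: draw all samples from a capped function-sampler.
(let ((sampler (make-instance 'mgl-dataset:function-sampler
                              :generator (lambda () (random 1.0))
                              :max-n-samples 3)))
  (loop until (mgl-dataset:finishedp sampler)
        collect (mgl-dataset:sample sampler)))
```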

Package

mgl-dataset.

Source

dataset.lisp.

Methods
Method: sample ((sampler function-sampler))
Generic Function: sample-chunk (chunk)

Sample from the probability distribution of CHUNK whose means are in NODES.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: sample-chunk ((chunk conditioning-chunk))
Method: sample-chunk ((chunk sigmoid-chunk))
Method: sample-chunk ((chunk gaussian-chunk))
Method: sample-chunk ((chunk relu-chunk))
Method: sample-chunk ((chunk softmax-chunk))
Method: sample-chunk ((chunk constrained-poisson-chunk))
Generic Reader: scale (object)
Package

mgl-common.

Source

common.lisp.

Methods
Reader Method: scale ((->normalized ->normalized))

The sum of nodes after normalization. Can be
changed during training, for instance when clamping. If it is a vector then its length must be MAX-N-STRIPES, which is automatically maintained.

Source

lumps.lisp.

Target Slot

scale.

Reader Method: scale ((->batch-normalization ->batch-normalization))

A weight lump of the same size as SHIFT. This is $\gamma$ in the paper.

Source

lumps.lisp.

Target Slot

scale.

Reader Method: scale ((normalized-group-chunk normalized-group-chunk))

The sum of the means after normalization. Can be
changed during training, for instance when clamping. If it is a vector then its length must be MAX-N-STRIPES which is automatically maintained when changing the number of stripes.

Source

boltzmann-machine.lisp.

Target Slot

scale.

Generic Writer: (setf scale) (object)
Package

mgl-common.

Methods
Writer Method: (setf scale) ((->normalized ->normalized))

The sum of nodes after normalization. Can be
changed during training, for instance when clamping. If it is a vector then its length must be MAX-N-STRIPES, which is automatically maintained.

Source

lumps.lisp.

Target Slot

scale.

Writer Method: (setf scale) ((normalized-group-chunk normalized-group-chunk))

The sum of the means after normalization. Can be
changed during training, for instance when clamping. If it is a vector then its length must be MAX-N-STRIPES which is automatically maintained when changing the number of stripes.

Source

boltzmann-machine.lisp.

Target Slot

scale.

Generic Function: segment-derivatives (segment)

Return the derivatives matrix of SEGMENT. A segment
doesn’t need to be a MAT object itself. For example, it may be a MGL-BM:CHUNK of a [MGL-BM:BM][CLASS] or a MGL-BP:LUMP of a [MGL-BP:BPN][CLASS] whose DERIVATIVES slot holds the gradient.

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: segment-derivatives ((lump lump))
Source

lumps.lisp.

Generic Reader: segment-filter (object)
Package

mgl-cg.

Methods
Reader Method: segment-filter ((cg-optimizer cg-optimizer))

A predicate function on segments that filters out uninteresting segments. Called from INITIALIZE-OPTIMIZER*.

Source

conjugate-gradient.lisp.

Target Slot

segment-filter.

Generic Reader: segment-set (object)
Package

mgl-opt.

Methods
Reader Method: segment-set ((cg-optimizer cg-optimizer))

Segments to train.

Source

conjugate-gradient.lisp.

Target Slot

segment-set.

Reader Method: segment-set ((gd-optimizer gd-optimizer))

The set of segments that are to be trained. The
ACCUMULATOR, WEIGHT-DELTAS, etc vectors are indexed by SEGMENT-SET indices.

Source

gradient-descent.lisp.

Target Slot

segment-set.

Generic Function: segment-weights (segment)

Return the weight matrix of SEGMENT. A segment
doesn’t need to be a MAT object itself. For example, it may be a MGL-BM:CHUNK of a [MGL-BM:BM][CLASS] or a MGL-BP:LUMP of a [MGL-BP:BPN][CLASS] whose NODES slot holds the weights.

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: segment-weights ((lump lump))
Source

lumps.lisp.

Method: segment-weights ((cloud full-cloud))
Source

boltzmann-machine.lisp.

Method: segment-weights ((mat mat))

When the segment is really a MAT, then just return it.

Generic Reader: segmenter (object)
Package

mgl-gd.

Methods
Reader Method: segmenter ((segmented-gd-optimizer segmented-gd-optimizer))

When this optimizer is initialized it loops over
the segments of the learner with MAP-SEGMENTS. SEGMENTER is a function that is called with each segment and returns an optimizer or NIL. Several segments may be mapped to the same optimizer. After the segment->optimizer mappings are collected, each optimizer is initialized by INITIALIZE-OPTIMIZER* with the list of segments mapped to it.
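A sketch of a SEGMENTER that assigns a higher learning rate to segments named BIAS (assuming mgl is loaded; the name-based dispatch is an assumption about a particular network's naming scheme):

```lisp
;; Sketch: per-segment learning rates via a SEGMENTED-GD-OPTIMIZER.
(make-instance 'mgl-gd:segmented-gd-optimizer
               :segmenter (lambda (segment)
                            (make-instance
                             'mgl-gd:sgd-optimizer
                             :learning-rate
                             (if (eq (mgl-common:name segment) 'bias)
                                 0.1
                                 0.01))))
```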

Source

gradient-descent.lisp.

Target Slot

segmenter.

Generic Function: segments (optimizer)

Several weight matrices known as *segments* can be
optimized by a single optimizer. This function returns them as a list.

Package

mgl-opt.

Source

optimize.lisp.

Methods
Method: segments ((optimizer cg-optimizer))
Source

conjugate-gradient.lisp.

Reader Method: segments ((segmented-gd-optimizer segmented-gd-optimizer))

automatically generated reader method

Source

gradient-descent.lisp.

Target Slot

segments.

Method: segments ((optimizer gd-optimizer))
Source

gradient-descent.lisp.

Reader Method: segments ((segment-set segment-set))

A list of weight matrices.

Target Slot

segments.

Generic Reader: seq-elt-fn (object)
Package

mgl-bp.

Methods
Reader Method: seq-elt-fn ((->seq-barrier ->seq-barrier))

A function of an INDEX argument that returns the lump with that index in some sequence.

Source

lumps.lisp.

Target Slot

seq-elt-fn.

Generic Reader: seq-indices (object)
Generic Writer: (setf seq-indices) (object)
Package

mgl-bp.

Methods
Reader Method: seq-indices ((->seq-barrier ->seq-barrier))
Writer Method: (setf seq-indices) ((->seq-barrier ->seq-barrier))

A sequence of indices whose length is the batch
size. The element at index ‘I‘ is the index to be passed to SEQ-ELT-FN to find the lump whose stripe ‘I‘ is copied to stripe ‘I‘ of this lump.

Source

lumps.lisp.

Target Slot

seq-indices.

Generic Function: set-chunk-mean (chunk)

Set NODES of CHUNK to the means of the probability distribution. When called, NODES contains the activations.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: set-chunk-mean :after ((chunk chunk))
Method: set-chunk-mean ((chunk conditioning-chunk))
Method: set-chunk-mean ((chunk sigmoid-chunk))
Method: set-chunk-mean ((chunk gaussian-chunk))
Method: set-chunk-mean ((chunk relu-chunk))
Method: set-chunk-mean ((chunk normalized-group-chunk))
Method: set-chunk-mean ((chunk exp-normalized-group-chunk))
Generic Function: set-hidden-mean (bm)

Like SET-HIDDEN-MEAN/1, but settle the mean field
if there are hidden-to-hidden connections. For an RBM it trivially calls SET-HIDDEN-MEAN/1, for a DBM it calls UP-DBM before settling.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: set-hidden-mean :around ((bm bm))
Method: set-hidden-mean ((bm bm))
Method: set-hidden-mean ((dbm dbm))
Generic Function: set-input (instances model)

Set INSTANCES as inputs in MODEL. INSTANCES is
always a SEQUENCE of instances even for models not capable of batch operation. It sets N-STRIPES to (LENGTH INSTANCES) in a :BEFORE method.
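A sketch of a typical method (assuming mgl is loaded; MY-FNN and the lump name INPUTS are hypothetical, and MGL-MAT:REPLACE! is assumed to copy a sequence of sequences into a MAT):

```lisp
;; Sketch: clamp numeric instances into an input lump named 'INPUTS.
(defclass my-fnn (mgl-bp:fnn) ())

(defmethod mgl-core:set-input (instances (fnn my-fnn))
  ;; N-STRIPES has already been set to (LENGTH INSTANCES) by the
  ;; :BEFORE method, so each instance fills one stripe.
  (let ((nodes (mgl-common:nodes (mgl-bp:find-clump 'inputs fnn))))
    (mgl-mat:replace! nodes instances)))
```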

Package

mgl-core.

Source

core.lisp.

Methods
Method: set-input :after (samples (fnn fnn-gp))
Source

gaussian-process.lisp.

Method: set-input :before (samples (fnn fnn-clamping-cache))
Source

unroll.lisp.

Method: set-input (instances (rnn rnn))

RNNs operate on batches of instances just like FNNs. But the instances here are like datasets: sequences or samplers that are turned into sequences of batches of instances with MAP-DATASETS :IMPUTE NIL. The batch of instances at index 2 is clamped onto the BPN at time step 2 with SET-INPUT.

When the input sequences in the batch are not of the same length, already exhausted sequences will produce NIL (due to :IMPUTE NIL) above. When such a NIL is clamped with SET-INPUT on a BPN of the RNN, SET-INPUT must set the IMPORTANCE of the ->ERROR lumps to 0 else training would operate on the noise left there by previous invocations.

Source

backprop.lisp.

Method: set-input :around (samples (bpn bpn))
Source

backprop.lisp.

Method: set-input (samples (dbn dbn))
Source

deep-belief-network.lisp.

Method: set-input :before (samples (rbm rbm))
Source

deep-belief-network.lisp.

Method: set-input :after (samples (bm bm))
Source

boltzmann-machine.lisp.

Method: set-input :before (samples (bm bm))
Source

boltzmann-machine.lisp.

Generic Function: set-max-n-stripes (max-n-stripes object)

Allocate the necessary stuff to allow for
MAX-N-STRIPES number of stripes to be worked with simultaneously in OBJECT. This is called when MAX-N-STRIPES is SETF’ed.

Package

mgl-core.

Source

core.lisp.

Methods
Method: set-max-n-stripes (max-n-stripes (lump ->normalized))
Source

lumps.lisp.

Method: set-max-n-stripes (max-n-stripes (weight ->weight))
Source

lumps.lisp.

Method: set-max-n-stripes (max-n-stripes (lump lump))
Source

lumps.lisp.

Method: set-max-n-stripes (max-n-stripes (bpn bpn))
Source

backprop.lisp.

Method: set-max-n-stripes (max-n-stripes (dbn dbn))
Source

deep-belief-network.lisp.

Method: set-max-n-stripes (max-n-stripes (bm bm))
Source

boltzmann-machine.lisp.

Method: set-max-n-stripes (max-n-stripes (cloud factored-cloud))
Source

boltzmann-machine.lisp.

Method: set-max-n-stripes (max-n-stripes (cloud cloud))
Source

boltzmann-machine.lisp.

Method: set-max-n-stripes (max-n-stripes (chunk chunk))
Source

boltzmann-machine.lisp.

Generic Function: set-n-stripes (n-stripes object)

Set the number of stripes (out of MAX-N-STRIPES)
that are in use in OBJECT. This is called when N-STRIPES is SETF’ed.

Package

mgl-core.

Source

core.lisp.

Methods
Method: set-n-stripes (n-stripes (weight ->weight))
Source

lumps.lisp.

Method: set-n-stripes (n-stripes (lump lump))
Source

lumps.lisp.

Method: set-n-stripes (n-stripes (bpn bpn))
Source

backprop.lisp.

Method: set-n-stripes (n-stripes (dbn dbn))
Source

deep-belief-network.lisp.

Method: set-n-stripes (n-stripes (bm bm))
Source

boltzmann-machine.lisp.

Method: set-n-stripes (n-stripes (cloud factored-cloud))
Source

boltzmann-machine.lisp.

Method: set-n-stripes (n-stripes (cloud cloud))
Source

boltzmann-machine.lisp.

Method: set-n-stripes (n-stripes (chunk chunk))
Source

boltzmann-machine.lisp.

Generic Function: set-visible-mean (bm)

Like SET-VISIBLE-MEAN/1, but settle the mean field
if there are visible-to-visible connections. For an RBM it trivially calls SET-VISIBLE-MEAN/1.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: set-visible-mean :around ((bm bm))
Method: set-visible-mean ((bm bm))
Generic Reader: shift (object)
Package

mgl-bp.

Methods
Reader Method: shift ((->batch-normalization ->batch-normalization))

A weight lump of the same size as SCALE. This is $\beta$ in the paper.

Source

lumps.lisp.

Target Slot

shift.

Generic Function: size (object)
Package

mgl-common.

Source

common.lisp.

Methods
Method: size ((ref lagged-clump))
Source

lumps.lisp.

Reader Method: size ((lump lump))

The number of values in a single stripe.

Source

lumps.lisp.

Target Slot

size.

Method: size ((bpn bpn))
Source

backprop.lisp.

Reader Method: size ((chunk chunk))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

size.

Reader Method: size ((segment-set segment-set))

The sum of the sizes of the weight matrices of SEGMENTS.

Source

optimize.lisp.

Target Slot

size.

Generic Function: sort-confusion-classes (matrix classes)

Return a list of CLASSES sorted for presentation purposes.

Package

mgl-core.

Source

classification.lisp.

Methods
Method: sort-confusion-classes ((matrix confusion-matrix) classes)
Generic Reader: sparser (object)
Package

mgl-bm.

Methods
Reader Method: sparser ((sparse-bm-learner sparse-bm-learner))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

sparser.

Generic Reader: step-monitors (object)
Generic Writer: (setf step-monitors) (object)
Package

mgl-bp.

Methods
Reader Method: step-monitors ((rnn rnn))
Writer Method: (setf step-monitors) ((rnn rnn))

During training, unfolded `BPN`s corresponding to
previous time steps may be expensive to get at because they are no longer in GPU memory. This consideration also applies to making predictions, with the additional caveat that with *WARP-TIME* true, previous states are discarded, so it’s not possible to gather statistics after FORWARD has finished.

Add monitor objects to this slot and they will be applied automatically to the RNN after each step when `FORWARD`ing the RNN during training or prediction. To make it easy to switch between sets of monitors, this can also be a symbol or a function instead of a list of monitors. If it’s a symbol, then it’s a designator for its SYMBOL-VALUE. If it’s a function, then it must take no arguments and it’s a designator for its return value.

Source

backprop.lisp.

Target Slot

step-monitors.
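As a hedged sketch of the designator mechanism (the special variable here is a made-up name, not part of MGL):

```commonlisp
;; Hypothetical sketch: STEP-MONITORS accepts a list of monitors, a
;; symbol (a designator for its SYMBOL-VALUE), or a function of no
;; arguments (a designator for its return value).
(defvar *rnn-step-monitors* ())   ; made-up special variable

(setf (step-monitors my-rnn) '*rnn-step-monitors*)

;; Later, set *RNN-STEP-MONITORS* to a list of monitors to start
;; gathering per-step statistics; set it back to () to turn them off.
```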

Generic Function: stripe-end (stripe object)

Return the end index (exclusive) of STRIPE in some array or matrix of OBJECT.

Package

mgl-core.

Source

core.lisp.

Methods
Method: stripe-end (stripe (lump lump))
Source

lumps.lisp.

Method: stripe-end (stripe (chunk chunk))
Source

boltzmann-machine.lisp.

Generic Function: stripe-start (stripe object)

Return the start index of STRIPE in some array or matrix of OBJECT.

Package

mgl-core.

Source

core.lisp.

Methods
Method: stripe-start (stripe (lump lump))
Source

lumps.lisp.

Method: stripe-start (stripe (chunk chunk))
Source

boltzmann-machine.lisp.

Generic Function: stripedp (clump)

For efficiency, the forward and backprop phases
work in batch mode: a number of instances are passed through the network together. Thus clumps must be able to store values and gradients for each of these instances. However, some clumps produce the same result for every instance in a batch: these are the weights, the parameters of the network. STRIPEDP returns true iff CLUMP does not represent weights (i.e. it’s not a ->WEIGHT).

For striped clumps, their NODES and DERIVATIVES are MAT objects with a leading dimension (the number of rows in the 2d case) equal to the number of instances in the batch. Non-striped clumps have no restriction on their shape apart from what their usage dictates.

Package

mgl-bp.

Source

backprop.lisp.

Methods
Method: stripedp ((lump ->weight))
Source

lumps.lisp.

Method: stripedp ((clump clump))
Generic Reader: target (object)
Generic Writer: (setf target) (object)
Package

mgl-common.

Methods
Reader Method: target ((->softmax-xe-loss ->softmax-xe-loss))
Writer Method: (setf target) ((->softmax-xe-loss ->softmax-xe-loss))

Set in SET-INPUT, this is either a MAT of the same
size as the input lump `X`, or, if the target is very sparse, a sequence of batch size length that contains the index-value pairs of non-zero entries:

    (;; first instance in batch has two non-zero targets
     (;; class 10 has 30% expected probability
      (10 . 0.3)
      ;; class 2 has 70% expected probability
      (2 . 0.7))
     ;; second instance in batch puts 100% on class 7
     7
     ;; more instances in the batch follow
     ...)

Actually, in the rare case where [GROUP-SIZE][(reader ->softmax-xe-loss)] is not SIZE (i.e. there are several softmax normalization groups for every example), the length of the above target sequence is BATCH-SIZE * N-GROUPS. Indices are always relative to the start of the group.

If [GROUP-SIZE][(reader ->softmax-xe-loss)] is large (for example, in neural language models with a huge number of words), using sparse targets can make things go much faster, because calculation of the derivative is no longer quadratic.

Giving different weights to training instances is implicitly supported. While target values in a group should sum to 1, multiplying all target values with a weight `W` is equivalent to training that `W` times on the same example.

Source

lumps.lisp.

Target Slot

target.

Reader Method: target ((sparsity-gradient-source sparsity-gradient-source))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

sparsity-target.

Generic Reader: termination (object)
Generic Writer: (setf termination) (object)
Package

mgl-opt.

Methods
Reader Method: termination ((iterative-optimizer iterative-optimizer))
Writer Method: (setf termination) ((iterative-optimizer iterative-optimizer))

If a number, it’s the number of instances to train
on in the sense of N-INSTANCES. If N-INSTANCES is equal to or greater than this value, optimization stops. If TERMINATION is NIL, then optimization continues. If it is T, then optimization stops. If it is a function of no arguments, then its return value is processed as if it were returned by TERMINATION.

Source

optimize.lisp.

Target Slot

termination.
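As a hedged sketch of this protocol (SGD-OPTIMIZER is a GD-OPTIMIZER subclass and thus an ITERATIVE-OPTIMIZER; *STOP-REQUESTED-P* is a made-up variable):

```commonlisp
;; Stop after 100000 training instances.
(make-instance 'sgd-optimizer :termination 100000)

;; A function of no arguments allows dynamic decisions: T stops, NIL
;; continues, and a number is compared against N-INSTANCES as above.
(make-instance 'sgd-optimizer
               :termination (lambda () *stop-requested-p*))
```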

Generic Reader: transpose-weights-p (object)
Package

mgl-bp.

Methods
Reader Method: transpose-weights-p ((->v*m ->v*m))

Determines whether the input is multiplied by WEIGHTS or its transpose.

Source

lumps.lisp.

Target Slot

transpose-weights-p.

Generic Reader: unfolder (object)
Package

mgl-bp.

Methods
Reader Method: unfolder ((rnn rnn))

The UNFOLDER of an RNN is a function of no arguments
that builds and returns a BPN. The unfolder is allowed to create networks with arbitrary topology, even different ones for different [TIME-STEP][]s, with the help of LAG, or nested RNNs. Weights of the same name are shared between the folds. That is, if a ->WEIGHT lump were to be created and a weight lump of the same name already exists, then the existing lump will be added to the BPN created by UNFOLDER.

Source

backprop.lisp.

Target Slot

unfolder.

Generic Reader: use-segment-derivatives-p (object)
Package

mgl-gd.

Methods
Reader Method: use-segment-derivatives-p ((gd-optimizer gd-optimizer))

Save memory if both the gradient source (the model
being optimized) and the optimizer support this feature. It works like this: the accumulator into which the gradient source is asked to place the derivatives of a segment will be SEGMENT-DERIVATIVES of the segment. This allows the optimizer not to allocate an accumulator matrix into which the derivatives are summed.

Source

gradient-descent.lisp.

Target Slot

use-segment-derivatives-p.

Generic Reader: variance (object)
Generic Writer: (setf variance) (object)
Package

mgl-bp.

Methods
Reader Method: variance ((->gaussian-random ->gaussian-random))
Writer Method: (setf variance) ((->gaussian-random ->gaussian-random))

The variance of the normal distribution.

Source

lumps.lisp.

Target Slot

variance.

Generic Reader: variance-adjustment (object)
Generic Writer: (setf variance-adjustment) (object)
Package

mgl-gd.

Methods
Reader Method: variance-adjustment ((adam-optimizer adam-optimizer))
Writer Method: (setf variance-adjustment) ((adam-optimizer adam-optimizer))

Within the bowels of Adam, the estimated mean is
divided by the square root of the estimated variance (per weight), which can lead to numerical problems if the denominator is near zero. To avoid this, VARIANCE-ADJUSTMENT, which should be a small positive number, is added to the denominator. This is `epsilon` in the paper.

Source

gradient-descent.lisp.

Target Slot

variance-adjustment.

Reader Method: variance-adjustment ((->batch-normalization ->batch-normalization))

A small positive real number that’s added to the sample variance. This is $\epsilon$ in the paper.

Source

lumps.lisp.

Target Slot

variance-adjustment.

Generic Reader: variance-decay (object)
Generic Writer: (setf variance-decay) (object)
Package

mgl-gd.

Methods
Reader Method: variance-decay ((adam-optimizer adam-optimizer))
Writer Method: (setf variance-decay) ((adam-optimizer adam-optimizer))

A number between 0 and 1 that determines how fast
the estimated variance of derivatives is updated. This is $\beta_2$ in the paper.

Source

gradient-descent.lisp.

Target Slot

variance-decay.

Generic Reader: variance-for-prediction (object)
Generic Writer: (setf variance-for-prediction) (object)
Package

mgl-bp.

Methods
Reader Method: variance-for-prediction ((->gaussian-random ->gaussian-random))
Writer Method: (setf variance-for-prediction) ((->gaussian-random ->gaussian-random))

If not NIL, then this value overrides VARIANCE
when not in training (i.e. when making predictions).

Source

lumps.lisp.

Target Slot

variance-for-prediction.

Generic Function: visible-chunks (object)
Package

mgl-bm.

Methods
Method: visible-chunks ((dbn dbn))
Source

deep-belief-network.lisp.

Reader Method: visible-chunks ((bm bm))

A list of CHUNKs whose values come from the outside world: SET-INPUT sets them.

Source

boltzmann-machine.lisp.

Target Slot

visible-chunks.

Generic Reader: visible-sampling (object)
Generic Writer: (setf visible-sampling) (object)
Package

mgl-bm.

Methods
Reader Method: visible-sampling ((bm-mcmc-learner bm-mcmc-learner))
Writer Method: (setf visible-sampling) ((bm-mcmc-learner bm-mcmc-learner))

Controls whether visible nodes are sampled during the learning or the mean field is used instead.

Source

boltzmann-machine.lisp.

Target Slot

visible-sampling.

Generic Reader: warp-length (object)
Package

mgl-bp.

Methods
Reader Method: warp-length ((rnn rnn))

An integer such that the BPN that UNFOLDER creates at
time step `I` (where `(<= WARP-START I)`) is identical to the BPN created at time step `(+ WARP-START (MOD (- I WARP-START) WARP-LENGTH))` except for a shift in its time-lagged connections.

Source

backprop.lisp.

Target Slot

warp-length.

Generic Reader: warp-start (object)
Package

mgl-bp.

Methods
Reader Method: warp-start ((rnn rnn))

The TIME-STEP from which UNFOLDER will create
`BPN`s that essentially repeat every WARP-LENGTH steps.

Source

backprop.lisp.

Target Slot

warp-start.

Generic Reader: weight-decay (object)
Generic Writer: (setf weight-decay) (object)
Package

mgl-gd.

Methods
Reader Method: weight-decay ((gd-optimizer gd-optimizer))
Writer Method: (setf weight-decay) ((gd-optimizer gd-optimizer))

An L2 penalty. It discourages large weights, much
like a zero-mean Gaussian prior. WEIGHT-DECAY * WEIGHT is added to the gradient to penalize large weights. It’s as if the function whose minimum is sought had WEIGHT-DECAY * sum_i{0.5 * WEIGHT_i^2} added to it.

Source

gradient-descent.lisp.

Target Slot

weight-decay.
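As a hedged sketch (SGD-OPTIMIZER is assumed to accept these initargs, as with other GD-OPTIMIZER subclasses):

```commonlisp
;; With WEIGHT-DECAY 0.0001, each weight W contributes 0.0001 * W to
;; its own gradient, shrinking large weights over time.
(make-instance 'sgd-optimizer
               :learning-rate 0.1
               :weight-decay 0.0001)
```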

Generic Reader: weight-indices (object)
Package

mgl-diffun.

Methods
Reader Method: weight-indices ((diffun diffun))

The list of indices of parameters to be optimized,
the values of which will come from the [WEIGHTS][dislocated] argument of MINIMIZE.

Source

differentiable-function.lisp.

Target Slot

weight-indices.

Generic Reader: weight-penalty (object)
Generic Writer: (setf weight-penalty) (object)
Package

mgl-gd.

Methods
Reader Method: weight-penalty ((gd-optimizer gd-optimizer))
Writer Method: (setf weight-penalty) ((gd-optimizer gd-optimizer))

An L1 penalty. It encourages sparsity.
SIGN(WEIGHT) * WEIGHT-PENALTY is added to the gradient, pushing the weight towards zero. It’s as if the function whose minimum is sought had WEIGHT-PENALTY * sum_i{abs(WEIGHT_i)} added to it. Putting it on feature biases constitutes a sparsity constraint on the features.

Source

gradient-descent.lisp.

Target Slot

weight-penalty.

Generic Reader: weights (object)
Package

mgl-common.

Source

common.lisp.

Methods
Reader Method: weights ((->v*m ->v*m))

A ->WEIGHT lump.

Source

lumps.lisp.

Target Slot

weights.

Reader Method: weights ((->embedding ->embedding))

A weight lump whose rows indexed by
INPUT-ROW-INDICES are copied to the output of this lump.

Source

lumps.lisp.

Target Slot

weights.

Reader Method: weights ((full-cloud full-cloud))

A chunk is represented as a row vector
(disregarding the multi-striped case). If the visible chunk is 1xN and the hidden is 1xM, then the weight matrix is NxM. Hidden = hidden + weights * visible. Visible = visible + weights^T * hidden.

Source

boltzmann-machine.lisp.

Target Slot

weights.

Reader Method: weights ((cg-optimizer cg-optimizer))

automatically generated reader method

Source

conjugate-gradient.lisp.

Target Slot

weights.

Generic Writer: (setf weights) (object)
Package

mgl-common.

Methods
Writer Method: (setf weights) ((cg-optimizer cg-optimizer))

automatically generated writer method

Source

conjugate-gradient.lisp.

Target Slot

weights.

Generic Function: write-command (command stream)

Write the gnuplot COMMAND to STREAM. Commands are represented by the classes PLOT, SET-COMMAND, etc.

Package

mgl-gnuplot.

Source

gnuplot.lisp.

Methods
Method: write-command ((plot plot) stream)
Method: write-command ((command string) stream)
Generic Function: write-data (data stream)

Serialize DATA to STREAM in gnuplot format.

Package

mgl-gnuplot.

Source

gnuplot.lisp.

Methods
Method: write-data ((list list) stream)
Method: write-data ((array array) stream)
Method: write-data ((mapping data-mapping) stream)
Method: write-data ((mapping file-mapping) stream)
Method: write-data ((mapping function-mapping) stream)
Generic Function: write-mapping (mapping stream)

Write the gnuplot MAPPING to STREAM.

Package

mgl-gnuplot.

Source

gnuplot.lisp.

Methods
Method: write-mapping ((list list) stream)
Method: write-mapping ((array array) stream)
Method: write-mapping ((mapping data-mapping) stream)
Method: write-mapping ((mapping file-mapping) stream)
Method: write-mapping ((mapping function-mapping) stream)
Method: write-mapping ((mapping mapping) stream)
Generic Function: write-state* (object stream context)

This is the extension point for WRITE-STATE. It is
guaranteed that primary WRITE-STATE* methods will be called only once for each OBJECT (under EQ). CONTEXT is an opaque object and must be passed on to any recursive WRITE-STATE* calls.

Package

mgl-core.

Source

core.lisp.

Methods
Method: write-state* ((lump ->batch-normalization) stream context)
Source

lumps.lisp.

Method: write-state* ((lump ->weight) stream context)
Source

lumps.lisp.

Method: write-state* ((lump lump) stream context)
Source

lumps.lisp.

Method: write-state* ((bpn bpn) stream context)
Source

backprop.lisp.

Method: write-state* ((dbn dbn) stream context)
Source

deep-belief-network.lisp.

Method: write-state* ((bm bm) stream context)
Source

boltzmann-machine.lisp.

Method: write-state* ((cloud factored-cloud) stream seen)
Source

boltzmann-machine.lisp.

Method: write-state* ((cloud full-cloud) stream seen)
Source

boltzmann-machine.lisp.

Method: write-state* ((indexer encoder/decoder) stream context)
Source

feature.lisp.

Method: write-state* :around (object stream context)

6.1.7 Standalone methods

Method: describe-object ((optimizer batch-gd-optimizer) g0)
Source

gradient-descent.lisp.

Method: describe-object :after ((optimizer segmented-gd-optimizer) stream)
Source

gradient-descent.lisp.

Method: describe-object ((optimizer segmented-gd-optimizer) g0)
Source

gradient-descent.lisp.

Method: describe-object ((optimizer adam-optimizer) g0)
Source

gradient-descent.lisp.

Method: describe-object ((optimizer gd-optimizer) g0)
Source

gradient-descent.lisp.

Method: describe-object ((rnn rnn) g0)
Source

backprop.lisp.

Method: describe-object ((bpn bpn) g0)
Source

backprop.lisp.

Method: describe-object :after ((learner bp-learner) stream)
Source

backprop.lisp.

Method: describe-object ((learner bp-learner) g0)
Source

backprop.lisp.

Method: describe-object ((optimizer cg-optimizer) g0)
Source

conjugate-gradient.lisp.

Method: describe-object ((cloud factored-cloud) g0)
Source

boltzmann-machine.lisp.

Method: describe-object ((bm bm) g0)
Source

boltzmann-machine.lisp.

Method: describe-object ((chunk chunk) g0)
Source

boltzmann-machine.lisp.

Method: describe-object ((sparsity sparsity-gradient-source) g0)
Source

boltzmann-machine.lisp.

Method: describe-object ((cloud full-cloud) g0)
Source

boltzmann-machine.lisp.

Method: describe-object ((cloud cloud) g0)
Source

boltzmann-machine.lisp.

Method: describe-object ((learner bm-pcd-learner) g0)
Source

boltzmann-machine.lisp.

Method: describe-object ((rbm rbm) g0)
Source

boltzmann-machine.lisp.

Method: describe-object :after ((learner bm-learner) stream)
Source

boltzmann-machine.lisp.

Method: describe-object ((chunk conditioning-chunk) g0)
Source

boltzmann-machine.lisp.

Method: describe-object ((chunk normalized-group-chunk) g0)
Source

boltzmann-machine.lisp.

Method: describe-object ((bm bm-mcmc-learner) g0)
Source

boltzmann-machine.lisp.

Method: describe-object :after ((learner sparse-bm-learner) stream)
Source

boltzmann-machine.lisp.

Method: describe-object ((learner sparse-bm-learner) g0)
Source

boltzmann-machine.lisp.

Method: initialize-instance :after ((segment-set segment-set) &key &allow-other-keys)
Source

optimize.lisp.

Method: initialize-instance :after ((lump ->max-channel) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->sample-binary) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->seq-barrier) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->max) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :around ((clump clump) &key &allow-other-keys)
Source

backprop.lisp.

Method: initialize-instance :after ((lump ->scaled-tanh) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :around ((weight ->weight) &key dimensions size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((rnn rnn) &key &allow-other-keys)
Source

backprop.lisp.

Method: initialize-instance :after ((bpn bpn) &key &allow-other-keys)
Source

backprop.lisp.

Method: initialize-instance :after ((lump ->min) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->squared-difference) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->sum) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->softmax-xe-loss) &key &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->tanh) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->dropout) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->sin) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :before ((lump ->input) &key &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump lump) &key &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->*) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->embedding) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->batch-normalized) &key size batch-size variance-adjustment population-decay &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->+) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->relu) &key size &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((lump ->v*m) &key &allow-other-keys)
Source

lumps.lisp.

Method: initialize-instance :after ((cloud factored-cloud) &key rank &allow-other-keys)
Source

boltzmann-machine.lisp.

Method: initialize-instance :after ((bm bm) &key &allow-other-keys)

Return a BM that consists of VISIBLE-CHUNKS, HIDDEN-CHUNKS and CLOUDS of weights, where CLOUDS is a list of cloud specifications. Names of chunks and clouds must be unique under EQUAL. CLOUDS is merged with DEFAULT-CLOUDS. DEFAULT-CLOUDS defaults to connecting all visible and hidden chunks with FULL-CLOUDS without any intralayer connection. See MERGE-CLOUD-SPECS for the semantics of merging.

Source

boltzmann-machine.lisp.

Method: initialize-instance :after ((sparsity cheating-sparsity-gradient-source) &key &allow-other-keys)
Source

boltzmann-machine.lisp.

Method: initialize-instance :after ((chunk chunk) &key size max-n-stripes &allow-other-keys)
Source

boltzmann-machine.lisp.

Method: initialize-instance :after ((dbn dbn) &key &allow-other-keys)
Source

deep-belief-network.lisp.

Method: initialize-instance :around ((dbn dbn) &key layers clouds-up-to-layers rbm-class &allow-other-keys)
Source

deep-belief-network.lisp.

Method: initialize-instance :after ((cloud full-cloud) &key &allow-other-keys)
Source

boltzmann-machine.lisp.

Method: initialize-instance :after ((sparsity normal-sparsity-gradient-source) &key &allow-other-keys)
Source

boltzmann-machine.lisp.

Method: initialize-instance :after ((learner bm-pcd-learner) &key &allow-other-keys)
Source

boltzmann-machine.lisp.

Method: initialize-instance :after ((rbm rbm) &key &allow-other-keys)
Source

boltzmann-machine.lisp.

Method: initialize-instance :after ((dbm dbm) &key &allow-other-keys)
Source

boltzmann-machine.lisp.

Method: initialize-instance :around ((dbm dbm) &rest initargs &key &allow-other-keys)
Source

boltzmann-machine.lisp.

Method: initialize-instance :after ((attributed attributed) &key prepend-attributes &allow-other-keys)
Source

counter.lisp.

Method: print-object ((optimizer gd-optimizer) stream)
Source

gradient-descent.lisp.

Method: print-object ((sampler function-sampler) stream)
Source

dataset.lisp.

Method: print-object ((set segment-set) stream)
Source

optimize.lisp.

Method: print-object ((rnn rnn) stream)
Source

backprop.lisp.

Method: print-object ((bpn bpn) stream)
Source

backprop.lisp.

Method: print-object ((lump lump) stream)
Source

lumps.lisp.

Method: print-object ((stat running-stat) stream)
Source

util.lisp.

Method: print-object ((cloud factored-cloud) stream)
Source

boltzmann-machine.lisp.

Method: print-object ((bm bm) stream)
Source

boltzmann-machine.lisp.

Method: print-object ((chunk chunk) stream)
Source

boltzmann-machine.lisp.

Method: print-object ((sparsity sparsity-gradient-source) stream)
Source

boltzmann-machine.lisp.

Method: print-object ((cloud full-cloud) stream)
Source

boltzmann-machine.lisp.

Method: print-object ((cloud cloud) stream)
Source

boltzmann-machine.lisp.

Method: print-object ((monitor monitor) stream)
Source

monitor.lisp.

Method: print-object ((attributed attributed) stream)
Source

counter.lisp.

Method: print-object ((counter cross-entropy-counter) stream)
Source

classification.lisp.

Method: print-object ((matrix confusion-matrix) stream)
Source

classification.lisp.

Method: print-object ((counter classification-accuracy-counter) stream)
Source

classification.lisp.

Method: print-object ((counter basic-counter) stream)
Source

counter.lisp.


6.1.8 Classes

Class: ->*

Performs elementwise multiplication on its two
input lumps. The SIZE of this lump is automatically determined from the size of its inputs. Either input can be a ->WEIGHT lump.

```cl-transcript
(->* (->input :size 10) (->weight :size 10 :name 'scale)
     :name 'mult)
==> #<->* MULT :SIZE 10 1/1 :NORM 0.00000>
```

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: y
Initargs

:y

Readers

y.

Writers

This slot is read-only.

Class: ->+

Performs elementwise addition on its input lumps.
The SIZE of this lump is automatically determined from the size of its inputs if there is at least one. If one of the inputs is a ->WEIGHT lump, then it is added to every stripe.

```cl-transcript
(->+ (list (->input :size 10) (->weight :size 10 :name 'bias))
     :name 'plus)
==> #<->+ PLUS :SIZE 10 1/1 :NORM 0.00000>
```

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: args
Initargs

:args

Readers

args.

Writers

This slot is read-only.

Class: ->abs
Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Class: ->activation

Activation subnetworks are built by the function
->ACTIVATION and they have a number of lumps hidden inside them. Ultimately, this subnetwork computes a sum like `sum_i x_i * W_i + sum_j y_j .* V_j + biases`, where the `x_i` are input lumps, the `W_i` are dense matrices representing connections, while the `V_j` are peephole connection vectors that are multiplied elementwise with their corresponding inputs `y_j`.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

bpn.

Class: ->batch-normalization

The primary purpose of this class is to hold the
estimated mean and variance of the inputs to be normalized and allow them to be shared between multiple ->BATCH-NORMALIZED lumps that carry out the computation. These estimations are saved and loaded by SAVE-STATE and LOAD-STATE.

```commonlisp
(->batch-normalization (->weight :name '(h1 :scale) :size 10)
                       (->weight :name '(h1 :shift) :size 10)
                       :name '(h1 :batch-normalization))
```

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

->weight.

Direct methods
Direct slots
Slot: scale

A weight lump of the same size as SHIFT. This is $\gamma$ in the paper.

Package

mgl-common.

Initargs

:scale

Readers

scale.

Writers

This slot is read-only.

Slot: shift

A weight lump of the same size as SCALE. This is $\beta$ in the paper.

Initargs

:shift

Readers

shift.

Writers

This slot is read-only.

Slot: batch-mean
Readers

batch-mean.

Writers

(setf batch-mean).

Slot: batch-variance
Readers

batch-variance.

Writers

(setf batch-variance).

Slot: batch-size

Normally all stripes participate in the batch.
Lowering the number of stripes may increase the regularization effect, but it also makes the computation less efficient. By setting BATCH-SIZE to a divisor of N-STRIPES one can decouple the concern of efficiency from that of regularization. The default value, NIL, is equivalent to N-STRIPES. BATCH-SIZE only affects training.

With the special value :USE-POPULATION, instead of the mean and the variance of the current batch, use the population statistics for normalization. This effectively cancels the regularization effect, leaving only the faster learning.

Package

mgl-common.

Initargs

:batch-size

Readers

batch-size.

Writers

This slot is read-only.

Slot: variance-adjustment

A small positive real number that’s added to the sample variance. This is $\epsilon$ in the paper.

Package

mgl-gd.

Initform

1.0e-4

Initargs

:variance-adjustment

Readers

variance-adjustment.

Writers

This slot is read-only.

Slot: population-mean
Readers

population-mean.

Writers

(setf population-mean).

Slot: population-variance
Readers

population-variance.

Writers

(setf population-variance).

Slot: population-decay

While training, an exponential moving average of
batch means and standard deviances (termed _population statistics_) is updated. When making predictions, normalization is performed using these statistics. These population statistics are persisted by SAVE-STATE.

Initform

0.99

Initargs

:population-decay

Readers

population-decay.

Writers

This slot is read-only.

Slot: n-steps
Initform

0

Readers

n-steps.

Writers

(setf n-steps).

Class: ->batch-normalized

This is an implementation of v3 of the [Batch
Normalization paper](http://arxiv.org/abs/1502.03167). The output of ->BATCH-NORMALIZED is its input normalized so that for all elements the mean across stripes is zero and the variance is 1. That is, the mean of the batch is subtracted from the inputs and they are rescaled by their sample stddev. Actually, after the normalization step the values are rescaled and shifted (but this time with learnt parameters) in order to keep the representational power of the model the same. The primary purpose of this lump is to speed up learning, but it also acts as a regularizer. See the paper for the details.

To normalize the output of [LUMP][dislocated] with no additional regularizer effect:

```commonlisp
(->batch-normalized lump :batch-size :use-population)
```

The above uses an exponential moving average to estimate the mean and variance of batches, and these estimations are used at both training and test time. In contrast to this, the published version uses the sample mean and variance of the current batch at training time, which injects noise into the process. The noise is higher for lower batch sizes and has a regularizing effect. This is the default behavior (equivalent to `:BATCH-SIZE NIL`):

```commonlisp
(->batch-normalized lump)
```

For performance reasons one may wish to process a higher number of instances in a batch (in the sense of N-STRIPES) and still get the regularization effect associated with a lower batch size. This is possible by setting :BATCH-SIZE to a divisor of the number of stripes. Say, the number of stripes is 128, but we want as much regularization as we would get with 32:

```commonlisp
(->batch-normalized lump :batch-size 32)
```

The primary input of ->BATCH-NORMALIZED is often an ->ACTIVATION and its output is fed into an activation function (see @MGL-BP-ACTIVATION-FUNCTIONS).

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: normalization

The ->BATCH-NORMALIZATION of this lump. May be
shared between multiple ->BATCH-NORMALIZED lumps.

Batch normalization is special in that it has state apart from the computed results (NODES) and its derivatives (DERIVATIVES). This state is the estimated mean and variance of its inputs and they are encapsulated by ->BATCH-NORMALIZATION.

If NORMALIZATION is not given at instantiation, then a new ->BATCH-NORMALIZATION object will be created automatically, passing :BATCH-SIZE, :VARIANCE-ADJUSTMENT, and :POPULATION-DECAY arguments on to ->BATCH-NORMALIZATION. See [BATCH-SIZE][(reader ->batch-normalization)], [VARIANCE-ADJUSTMENT][(reader ->batch-normalization)] and [POPULATION-DECAY][(reader ->batch-normalization)]. New scale and shift weight lumps will be created with names:

    `(,name :scale)
    `(,name :shift)

where `NAME` is the NAME of this lump.

This default behavior covers the use-case where the statistics kept by ->BATCH-NORMALIZATION are to be shared only between time steps of an RNN.

Initargs

:normalization

Readers

batch-normalization.

Writers

This slot is read-only.

Class: ->dropout

The output of this lump is identical to its input,
except it randomly zeroes out some of them during training, which acts as a very strong regularizer. See Geoffrey Hinton’s ’Improving neural networks by preventing co-adaptation of feature detectors’.

The SIZE of this lump is the size of its input which is determined automatically.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct subclasses
Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: dropout

If non-NIL, then in the forward pass zero out each node in this chunk with DROPOUT probability.

Type

(or null real)

Initform

0.5

Initargs

:dropout

Readers

dropout.

Writers

(setf dropout).

Slot: mask
Readers

mask.

Writers

This slot is read-only.

Class: ->embedding

Select rows of WEIGHTS, one row for each index in INPUT-ROW-INDICES. This lump is equivalent to adding an ->INPUT lump with a one hot encoding scheme and a ->V*M lump on top of it, but it is more efficient in execution and in memory usage, because it works with a sparse representation of the input.

The SIZE of this lump is the number of columns of WEIGHTS which is determined automatically.

```cl-transcript
(->embedding :weights (->weight :name 'embedding-weights :dimensions '(3 5))
             :name 'embeddings)
==> #<->EMBEDDING EMBEDDINGS :SIZE 5 1/1 :NORM 0.00000>
```
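The equivalence with a one-hot ->INPUT followed by ->V*M can be sketched in plain Python (an illustration of the idea, not mgl code; the weight values are hypothetical, a 3 x 5 matrix as in the example above):

```python
# A hypothetical 3 x 5 weight matrix: 3 vocabulary entries, embedding size 5.
weights = [[float(10 * r + c) for c in range(5)] for r in range(3)]

def embed(weights, row_indices):
    # Copy the rows selected by the indices; this is all the lump does.
    return [weights[i][:] for i in row_indices]

def one_hot_matmul(weights, row_indices):
    # The equivalent dense computation: one-hot vectors times WEIGHTS.
    rows, cols = len(weights), len(weights[0])
    out = []
    for i in row_indices:
        one_hot = [1.0 if j == i else 0.0 for j in range(rows)]
        out.append([sum(one_hot[j] * weights[j][c] for j in range(rows))
                    for c in range(cols)])
    return out
```

The lookup avoids materializing the one-hot vectors and the full matrix product, which is where the memory and speed savings come from.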

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: weights

A weight lump whose rows indexed by
INPUT-ROW-INDICES are copied to the output of this lump.

Package

mgl-common.

Initargs

:weights

Readers

weights.

Writers

This slot is read-only.

Slot: input-row-indices

A sequence of batch size length of row indices. To be set in SET-INPUT.

Initargs

:input-row-indices

Readers

input-row-indices.

Writers

(setf input-row-indices).

Class: ->exp
Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Class: ->gaussian-random

This lump has no input; it produces normally
distributed independent random numbers with MEAN and VARIANCE (or VARIANCE-FOR-PREDICTION). This is a useful building block for noise based regularization methods.

```cl-transcript
(->gaussian-random :size 10 :name 'normal :mean 1 :variance 2)
==> #<->GAUSSIAN-RANDOM NORMAL :SIZE 10 1/1 :NORM 0.00000>
```

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: mean

The mean of the normal distribution.

Type

real

Initform

0

Initargs

:mean

Readers

mean.

Writers

(setf mean).

Slot: variance

The variance of the normal distribution.

Type

real

Initform

1

Initargs

:variance

Readers

variance.

Writers

(setf variance).

Slot: variance-for-prediction

If not NIL, then this value overrides VARIANCE
when not in training (i.e. when making predictions).

Initform

0

Initargs

:variance-for-prediction

Readers

variance-for-prediction.

Writers

(setf variance-for-prediction).

Class: ->gp
Package

mgl-gp.

Source

gaussian-process.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: means
Initargs

:means

Readers

means.

Writers

This slot is read-only.

Slot: covariances
Initargs

:covariances

Readers

covariances.

Writers

This slot is read-only.

Slot: samples
Readers

samples.

Writers

(setf samples).

Slot: posterior-gps
Readers

posterior-gps.

Writers

(setf posterior-gps).

Class: ->input

A lump that has no input lumps, does not change its
values in the forward pass (except when [DROPOUT][(ACCESSOR ->INPUT)] is non-zero), and does not compute derivatives. _Clamp_ inputs on NODES of input lumps in SET-INPUT.

For convenience, ->INPUT can perform dropout itself although it defaults to no dropout.

```cl-transcript
(->input :size 10 :name 'some-input)
==> #<->INPUT SOME-INPUT :SIZE 10 1/1 :NORM 0.00000>
```

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

->dropout.

Direct methods
Direct slots
Slot: dropout

See [DROPOUT][(ACCESSOR ->DROPOUT)].

Readers

dropout.

Writers

(setf dropout).

Class: ->loss

Calculate the loss for the instances in the batch.
The main purpose of this lump is to provide a training signal.

An error lump is usually a leaf in the graph of lumps (i.e. there are no other lumps whose input is this one). The special thing about error lumps is that 1 (but see IMPORTANCE) is added automatically to their derivatives. Error lumps have exactly one node (per stripe) whose value is computed as the sum of nodes in their input lump.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

->sum.

Direct methods
Direct slots
Slot: importance

This is to support weighted instances, that is,
when not all training instances are equally important. If non-NIL, a 1d MAT with the importances of the stripes of the batch. When IMPORTANCE is given (typically in SET-INPUT), then instead of adding 1 to the derivatives of all stripes, IMPORTANCE is added elementwise.

Initargs

:importance

Readers

importance.

Writers

(setf importance).

Class: ->lstm

Long-Short Term Memory subnetworks are built by the
function ->LSTM and they have many lumps hidden inside them. These lumps are packaged into a subnetwork to reduce clutter.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

bpn.

Class: ->max

This is basically maxout without dropout (see http://arxiv.org/abs/1302.4389). It groups its inputs by [GROUP-SIZE][(READER ->MAX)], and outputs the maximum of each group. The SIZE of the output is automatically calculated, it is the size of the input divided by [GROUP-SIZE][(READER ->MAX)].

```cl-transcript
(->max (->input :size 120) :group-size 3 :name 'my-max)
==> #<->MAX MY-MAX :SIZE 40 1/1 :NORM 0.00000 :GROUP-SIZE 3>
```

The advantage of ->MAX over ->RELU is that gradient flow is never stopped, so there is no problem of units getting stuck in the off state.
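The group-wise maximum can be sketched in plain Python (an illustration of the computation, not mgl code):

```python
def max_lump(nodes, group_size):
    # Group the inputs into consecutive runs of GROUP-SIZE and output
    # the maximum of each group, shrinking the size by a factor of
    # GROUP-SIZE.
    assert len(nodes) % group_size == 0
    return [max(nodes[i:i + group_size])
            for i in range(0, len(nodes), group_size)]
```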

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: group-size

The number of inputs in each group.

Package

mgl-common.

Initargs

:group-size

Readers

group-size.

Writers

This slot is read-only.

Class: ->max-channel

Called LWTA (Local Winner Take All) or
Channel-Out (see http://arxiv.org/abs/1312.1909) in the literature, it is basically ->MAX, but instead of producing one output per group, it produces zeros for all units but the one with the maximum value in the group. This allows the next layer to get some information about the path along which information flowed. The SIZE of this lump is the size of its input which is determined automatically.
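The difference from ->MAX can be sketched in plain Python (an illustration, not mgl code): the output keeps the input's size, with the losers zeroed in place.

```python
def max_channel(nodes, group_size):
    # For each group of GROUP-SIZE inputs, keep the maximum in place
    # and zero the rest, so the output has the same size as the input.
    out = []
    for i in range(0, len(nodes), group_size):
        group = nodes[i:i + group_size]
        winner = max(range(len(group)), key=lambda j: group[j])
        out.extend(v if j == winner else 0.0 for j, v in enumerate(group))
    return out
```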

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x

Input comes from here.

Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: group-size

The number of inputs in each group.

Package

mgl-common.

Initargs

:group-size

Readers

group-size.

Writers

This slot is read-only.

Class: ->min

Same as ->MAX, but it computes the MIN of groups. Rarely useful.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: group-size

The number of inputs in each group.

Package

mgl-common.

Initargs

:group-size

Readers

group-size.

Writers

This slot is read-only.

Class: ->normalized
Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: group-size
Package

mgl-common.

Initargs

:group-size

Readers

group-size.

Writers

This slot is read-only.

Slot: scale

The sum of nodes after normalization. Can be
changed during training, for instance when clamping. If it is a vector, then its length must be MAX-N-STRIPES, which is automatically maintained.

Package

mgl-common.

Type

(or real array)

Initform

1

Initargs

:scale

Readers

scale.

Writers

(setf scale).

Class: ->periodic
Package

mgl-gp.

Source

gaussian-process.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: period
Initargs

:period

Readers

period.

Writers

This slot is read-only.

Class: ->ref
Package

mgl-gp.

Source

gaussian-process.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: index
Package

mgl-util.

Initargs

:index

Readers

index.

Writers

This slot is read-only.

Slot: into
Initargs

:into

Readers

into.

Writers

This slot is read-only.

Slot: drop-negative-index-p
Initargs

:drop-negative-index-p

Readers

drop-negative-index-p.

Writers

This slot is read-only.

Class: ->relu

‘max(0,x)‘ activation function. Be careful: relu
units can get stuck in the off state. If they move too far into negative territory, it can be very difficult for them to get out of it. The SIZE of this lump is the size of its input which is determined automatically.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Class: ->rep
Package

mgl-gp.

Source

gaussian-process.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: n
Initargs

:n

Readers

n.

Writers

This slot is read-only.

Class: ->rough-exponential
Package

mgl-gp.

Source

gaussian-process.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: signal-variance
Initargs

:signal-variance

Readers

signal-variance.

Writers

This slot is read-only.

Slot: length-scale
Initargs

:length-scale

Readers

length-scale.

Writers

This slot is read-only.

Slot: roughness
Initargs

:roughness

Readers

roughness.

Writers

This slot is read-only.

Class: ->sample-binary

Treating values of its input as probabilities,
sample independent binomials. Turn true into 1 and false into 0. The SIZE of this lump is determined automatically from the size of its input.

```cl-transcript
(->sample-binary (->input :size 10) :name 'binarized-input)
==> #<->SAMPLE-BINARY BINARIZED-INPUT :SIZE 10 1/1 :NORM 0.00000>
```

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: randoms
Readers

randoms.

Writers

This slot is read-only.

Class: ->scaled-tanh

Pretty much like TANH, but its input and output are
scaled in such a way that the variance of its output is close to 1 if the variance of its input is close to 1, which is a nice property to combat vanishing gradients. The actual function is ‘1.7159 * tanh(2/3 * x)‘. The SIZE of this lump is the size of its input which is determined automatically.
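The function itself is simple enough to write down directly (a plain Python sketch; the constants are chosen so that f(±1) = ±1):

```python
import math

def scaled_tanh(x):
    # LeCun's scaled tanh: 1.7159 * tanh(2/3 * x). The scaling keeps
    # the output variance near 1 for inputs with variance near 1, and
    # the output is bounded by +/- 1.7159.
    return 1.7159 * math.tanh(2.0 / 3.0 * x)
```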

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Class: ->seq-barrier

In an RNN, processing of stripes (instances in the
batch) may require a different number of time steps, so the final state for stripe 0 may be in stripe 0 of some lump L at time step 7, while for stripe 1 it is in stripe 1 of the same lump L at time step 42.

This lump copies the per-stripe states from different lumps into a single lump so that further processing can take place (typically when the RNN is embedded in another network).

The SIZE of this lump is automatically set to the size of the lump returned by ‘(FUNCALL SEQ-ELT-FN 0)‘.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: seq-elt-fn

A function of an INDEX argument that returns the lump with that index in some sequence.

Initargs

:seq-elt-fn

Readers

seq-elt-fn.

Writers

This slot is read-only.

Slot: seq-indices

A sequence of length batch size of indices. The
element at index ‘I‘ is the index to be passed to SEQ-ELT-FN to find the lump whose stripe ‘I‘ is copied to stripe ‘I‘ of this lump.

Readers

seq-indices.

Writers

(setf seq-indices).

Class: ->sigmoid

Applies the ‘1/(1 + e^{-x})‘ function elementwise
to its inputs. This is one of the classic non-linearities for neural networks.

For convenience, ->SIGMOID can perform dropout itself although it defaults to no dropout.

```cl-transcript
(->sigmoid (->activation (->input :size 10) :size 5) :name 'this)
==> #<->SIGMOID THIS :SIZE 5 1/1 :NORM 0.00000>
```

The SIZE of this lump is the size of its input which is determined automatically.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses
Direct methods
Direct slots
Slot: dropout

See [DROPOUT][(ACCESSOR ->DROPOUT)].

Readers

dropout.

Writers

(setf dropout).

Class: ->sin

Applies the SIN function to its input in an
elementwise manner. The SIZE of this lump is the size of its input which is determined automatically.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Class: ->softmax-xe-loss

A specialized lump that computes the softmax of its
input in the forward pass and backpropagates a cross-entropy loss. The advantage of doing these together is numerical stability. The total cross-entropy is the sum of cross-entropies per group of [GROUP-SIZE][(reader ->softmax-xe-loss)] elements:

$$
XE(x) = - \sum_{i=1,g} t_i \ln(s_i),
$$

where ‘g‘ is the number of classes ([GROUP-SIZE][(reader ->softmax-xe-loss)]), ‘t_i‘ are the targets (i.e. the true probabilities of the class, often all zero but one), ‘s_i‘ is the output of softmax calculated from input ‘X‘:

$$
s_i = {softmax}(x_1, x_2, \ldots, x_g) =
\frac{e^{x_i}}{\sum_{j=1,g} e^{x_j}}
$$

In other words, in the forward phase this lump takes input ‘X‘, computes its elementwise EXP, normalizes each group of [GROUP-SIZE][(reader ->softmax-xe-loss)] elements to sum to 1 to get the softmax which is the result that goes into NODES. In the backward phase, there are two sources of gradients: the lumps that use the output of this lump as their input (currently not implemented and would result in an error) and an implicit cross-entropy loss.

One can get the cross-entropy calculated in the most recent forward pass by calling COST on this lump.

This is the most common loss function for classification. In fact, it is nearly ubiquitous. See the @MGL-FNN-TUTORIAL and the @MGL-RNN-TUTORIAL for how this loss and SET-INPUT work together.
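The forward computation for one group can be sketched in plain Python (an illustration of the math, not mgl code; note the usual max-subtraction trick for numerical stability):

```python
import math

def softmax_xe(x, targets):
    # Softmax over one group of inputs plus the cross-entropy
    # XE(x) = -sum_i t_i * ln(s_i) against the target probabilities.
    m = max(x)  # subtracting the max avoids overflow in exp
    exps = [math.exp(v - m) for v in x]
    z = sum(exps)
    softmax = [e / z for e in exps]
    xe = -sum(t * math.log(s) for t, s in zip(targets, softmax) if t > 0)
    return softmax, xe
```

For these two computed together, the derivative of the loss with respect to the inputs reduces to ‘softmax - target‘, which is part of why fusing them is both stable and cheap.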

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: group-size

The number of elements in a softmax group. This is
the number of classes for classification. Often GROUP-SIZE is equal to SIZE (it is the default), but in general the only constraint is that SIZE is a multiple of GROUP-SIZE.

Package

mgl-common.

Initargs

:group-size

Readers

group-size.

Writers

This slot is read-only.

Slot: target

Set in SET-INPUT, this is either a MAT of the same
size as the input lump ‘X‘ or if the target is very sparse, this can also be a sequence of batch size length that contains the index value pairs of non-zero entries:

(;; first instance in batch has two non-zero targets
 (;; class 10 has 30% expected probability
  (10 . 0.3)
  ;; class 2 has 70% expected probability
  (2 . 0.7))
 ;; second instance in batch puts 100% on class 7
 7
 ;; more instances in the batch follow
 ...)

Actually, in the rare case where [GROUP-SIZE][(reader ->softmax-xe-loss)] is not SIZE (i.e. there are several softmax normalization groups for every example), the length of the above target sequence is BATCH-SIZE * N-GROUPS. Indices are always relative to the start of the group.

If [GROUP-SIZE][(reader ->softmax-xe-loss)] is large (for example, in neural language models with a huge number of words), using sparse targets can make things go much faster, because calculation of the derivative is no longer quadratic.

Giving different weights to training instances is implicitly supported. While target values in a group should sum to 1, multiplying all target values with a weight ‘W‘ is equivalent to training that ‘W‘ times on the same example.

Package

mgl-common.

Initargs

:target

Readers

target.

Writers

(setf target).

Slot: derivatives
Class: ->squared-difference

This lump takes two input lumps and calculates
their squared difference ‘(x - y)^2‘ in an elementwise manner. The SIZE of this lump is automatically determined from the size of its inputs. This lump is often fed into ->LOSS that sums the squared differences and makes it part of the function to be minimized.

```cl-transcript
(->loss (->squared-difference (->activation (->input :size 100) :size 10)
                              (->input :name 'target :size 10))
        :name 'squared-error)
==> #<->LOSS SQUARED-ERROR :SIZE 1 1/1 :NORM 0.00000>
```

Currently this lump is not CUDAized, but it will copy data from the GPU if it needs to.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: y
Initargs

:y

Readers

y.

Writers

This slot is read-only.

Class: ->stretch
Package

mgl-gp.

Source

gaussian-process.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: n
Initargs

:n

Readers

n.

Writers

This slot is read-only.

Class: ->sum

Computes the sum of all nodes of its input per stripe. The SIZE of this lump is always 1.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct subclasses

->loss.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Class: ->tanh

Applies the TANH function to its input in an
elementwise manner. The SIZE of this lump is the size of its input which is determined automatically.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Class: ->v*m

Perform ‘X * WEIGHTS‘ where ‘X‘ (the input) is of
size ‘M‘ and WEIGHTS is a ->WEIGHT whose single stripe is taken to be of dimensions ‘M x N‘ stored in row major order. ‘N‘ is the size of this lump. If TRANSPOSE-WEIGHTS-P, then WEIGHTS is ‘N x M‘ and ‘X * WEIGHTS'‘ (multiplication by the transpose) is computed.
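The computation can be sketched in plain Python for a single stripe (an illustration of the math, not mgl code; the function and parameter names are hypothetical):

```python
def v_times_m(x, weights, transpose_weights_p=False):
    # x is a vector of length M. Without transposition, WEIGHTS is an
    # M x N matrix (list of rows) and the result has length N. With
    # TRANSPOSE-WEIGHTS-P, WEIGHTS is N x M and x is multiplied by its
    # transpose, i.e. each row of WEIGHTS is dotted with x.
    if transpose_weights_p:
        return [sum(w * v for w, v in zip(row, x)) for row in weights]
    n = len(weights[0])
    return [sum(x[i] * weights[i][j] for i in range(len(x)))
            for j in range(n)]
```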

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: x
Initargs

:x

Readers

x.

Writers

This slot is read-only.

Slot: weights

A ->WEIGHT lump.

Package

mgl-common.

Type

mgl-bp:->weight

Initargs

:weights

Readers

weights.

Writers

This slot is read-only.

Slot: transpose-weights-p

Determines whether the input is multiplied by WEIGHTS or its transpose.

Initargs

:transpose-weights-p

Readers

transpose-weights-p.

Writers

This slot is read-only.

Class: ->weight

A set of optimizable parameters of some kind. When
a BPN is trained (see @MGL-BP-TRAINING) the NODES of weight lumps will be changed. Weight lumps perform no computation.

Weights can be created by specifying the total size or the dimensions:

```cl-transcript
(dimensions (->weight :size 10 :name 'w))
=> (1 10)
(dimensions (->weight :dimensions '(5 10) :name 'w))
=> (5 10)
```

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

lump.

Direct subclasses

->batch-normalization.

Direct methods
Direct slots
Slot: dimensions

NODES and DERIVATIVES of this lump will be allocated with these dimensions.

Initargs

:dimensions

Readers

dimensions.

Writers

This slot is read-only.

Class: adam-optimizer

Adam is a first-order stochastic gradient descent
optimizer. It maintains internal estimates of the mean and raw variance of each derivative as exponential moving averages. The step it takes is basically ‘M/(sqrt(V)+E)‘ where ‘M‘ is the estimated mean, ‘V‘ is the estimated variance, and ‘E‘ is a small adjustment factor to prevent the gradient from blowing up. See version 5 of the [paper](http://arxiv.org/abs/1412.6980) for more.

Note that using momentum is not supported with Adam. In fact, an error is signalled if MOMENTUM-TYPE is not :NONE.

See @MGL-GD-BATCH-GD-OPTIMIZER for the description of the various options common to all batch based optimizers.
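The update described above can be sketched for a single weight in plain Python (a sketch of the algorithm in the paper with its bias correction, not of mgl's exact implementation; the default constants follow the slots below):

```python
import math

def adam_step(w, grad, m, v, t, lr=2e-4,
              mean_decay=0.9, variance_decay=0.999, variance_adjustment=1e-7):
    # Exponential moving averages of the gradient and its square ...
    m = mean_decay * m + (1 - mean_decay) * grad
    v = variance_decay * v + (1 - variance_decay) * grad * grad
    t += 1
    # ... bias-corrected, then a step of roughly M/(sqrt(V)+E).
    m_hat = m / (1 - mean_decay ** t)
    v_hat = v / (1 - variance_decay ** t)
    w -= lr * m_hat / (math.sqrt(v_hat) + variance_adjustment)
    return w, m, v, t
```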

Package

mgl-gd.

Source

gradient-descent.lisp.

Direct superclasses

batch-gd-optimizer.

Direct methods
Direct slots
Slot: learning-rate

Same thing as [LEARNING-RATE][(ACCESSOR
GD-OPTIMIZER)] but with the default suggested by the Adam paper.

Initform

2.0e-4

Readers

learning-rate.

Writers

(setf learning-rate).

Slot: momentum-type
Initform

:none

Slot: mean-decay

A number between 0 and 1 that determines how fast
the estimated mean of derivatives is updated. 0 basically gives you RMSPROP (if VARIANCE-DECAY is not too large) or ADAGRAD (if VARIANCE-DECAY is close to 1 and the learning rate is annealed). This is $\beta_1$ in the paper.

Initform

0.9

Initargs

:mean-decay

Readers

mean-decay.

Writers

(setf mean-decay).

Slot: mean-decay-decay

A value that should be close to 1. MEAN-DECAY is
multiplied by this value after each update. This is $\lambda$ in the paper.

Initform

(- 1 1.0d-7)

Initargs

:mean-decay-decay

Readers

mean-decay-decay.

Writers

(setf mean-decay-decay).

Slot: variance-decay

A number between 0 and 1 that determines how fast
the estimated variance of derivatives is updated. This is $\beta_2$ in the paper.

Initform

0.999

Initargs

:variance-decay

Readers

variance-decay.

Writers

(setf variance-decay).

Slot: variance-adjustment

Within the bowels of Adam, the estimated mean is
divided by the square root of the estimated variance (per weight), which can lead to numerical problems if the denominator is near zero. To avoid this, VARIANCE-ADJUSTMENT, which should be a small positive number, is added to the denominator. This is ‘epsilon‘ in the paper.

Initform

1.0d-7

Initargs

:variance-adjustment

Readers

variance-adjustment.

Writers

(setf variance-adjustment).

Slot: mean-estimates
Readers

mean-estimates.

Writers

(setf mean-estimates).

Slot: variance-estimates
Readers

variance-estimates.

Writers

(setf variance-estimates).

Slot: adam-time-step
Initform

0

Readers

adam-time-step.

Writers

(setf adam-time-step).

Slot: momentum
Initform

:none

Class: attributed

This is a utility class that all counters subclass.
The ATTRIBUTES plist can hold basically anything. Currently the attributes are only used when printing and they can be specified by the user. The monitor maker functions such as those in @MGL-CLASSIFICATION-MONITOR also add attributes of their own to the counters they create.

With the :PREPEND-ATTRIBUTES initarg, we can easily add new attributes without clobbering those in the :INITFORM, (:TYPE "rmse") in this case.

(princ (make-instance 'rmse-counter
                      :prepend-attributes '(:event "pred."
                                            :dataset "test")))
;; pred. test rmse: 0.000e+0 (0)
=> #<RMSE-COUNTER pred. test rmse: 0.000e+0 (0)>

Package

mgl-core.

Source

counter.lisp.

Direct subclasses
Direct methods
Direct slots
Slot: attributes

A plist of attribute keys and values.

Initargs

:attributes

Readers

attributes.

Writers

(setf attributes).

Class: bag-of-words-encoder

ENCODE all features of a document with a sparse
vector. Get the features of a document from MAPPER, and encode each
feature with FEATURE-ENCODER. FEATURE-ENCODER may return NIL if the feature is not used. The result is a vector of encoded-feature/value conses. Encoded features are unique (under ENCODED-FEATURE-TEST)
within the vector but are in no particular order.

Depending on KIND, value is calculated in various ways:

- For :FREQUENCY it is the number of times the corresponding feature
was found in DOCUMENT.

- For :BINARY it is always 1.

- :NORMALIZED-FREQUENCY and :NORMALIZED-BINARY are like the unnormalized counterparts except that, as a final step, values in
the assembled sparse vector are normalized to sum to 1.

- Finally, :COMPACTED-BINARY is like :BINARY, but the return value
is not a vector of conses, but a vector of element-type ENCODED-FEATURE-TYPE.

```cl-transcript
(let* ((feature-indexer
         (make-indexer
          (alexandria:alist-hash-table '(("I" . 3) ("me" . 2) ("mine" . 1)))
          2))
       (bag-of-words-encoder
         (make-instance 'bag-of-words-encoder
                        :feature-encoder feature-indexer
                        :feature-mapper (lambda (fn document)
                                          (map nil fn document))
                        :kind :frequency)))
  (encode bag-of-words-encoder '("All" "through" "day"
                                 "I" "me" "mine" "I" "me" "mine"
                                 "I" "me" "mine")))
=> #((0 . 3.0d0) (1 . 3.0d0))
```

Package

mgl-nlp.

Source

nlp.lisp.

Direct methods
Direct slots
Slot: feature-encoder
Initargs

:feature-encoder

Readers

feature-encoder.

Writers

This slot is read-only.

Slot: feature-mapper
Initargs

:feature-mapper

Readers

feature-mapper.

Writers

This slot is read-only.

Slot: encoded-feature-test
Initform

(function eql)

Initargs

:encoded-feature-test

Readers

encoded-feature-test.

Writers

This slot is read-only.

Slot: encoded-feature-type
Initform

t

Initargs

:encoded-feature-type

Readers

encoded-feature-type.

Writers

This slot is read-only.

Slot: kind
Type

(member :binary :frequency :normalized-binary :normalized-frequency)

Initform

:binary

Initargs

:kind

Readers

bag-of-words-kind.

Writers

This slot is read-only.

Class: basic-counter

A simple counter whose ADD-TO-COUNTER takes two
additional parameters: increments to the internal sums called the NUMERATOR and DENOMINATOR. COUNTER-VALUES returns two values:

- NUMERATOR divided by DENOMINATOR (or 0 if DENOMINATOR is 0) and

- DENOMINATOR

Here is an example that computes the mean of 5 things received in two batches:

(let ((counter (make-instance 'basic-counter)))
  (add-to-counter counter 6.5 3)
  (add-to-counter counter 3.5 2)
  counter)
=> #<BASIC-COUNTER 2.00000e+0 (5)>

Package

mgl-core.

Source

counter.lisp.

Direct superclasses

attributed.

Direct subclasses
Direct methods
Direct slots
Slot: numerator
Package

common-lisp.

Initform

0

Readers

numerator*.

Writers

This slot is read-only.

Slot: denominator
Package

common-lisp.

Initform

0

Readers

denominator*.

Writers

This slot is read-only.

Class: batch-gd-optimizer

Another abstract base class for gradient based
optimizers that update all weights simultaneously after chewing through BATCH-SIZE inputs. See subclasses SGD-OPTIMIZER, ADAM-OPTIMIZER and NORMALIZED-BATCH-GD-OPTIMIZER.

PER-WEIGHT-BATCH-GD-OPTIMIZER may be a better choice when some weights can go unused for instance due to missing input values.

Package

mgl-gd.

Source

gradient-descent.lisp.

Direct superclasses

gd-optimizer.

Direct subclasses
Direct methods
Direct slots
Slot: n-instances-in-batch
Initform

0

Initargs

:n-instances-in-batch

Readers

n-instances-in-batch.

Writers

(setf n-instances-in-batch).

Slot: before-update-hook

A list of functions of no parameters. Each
function is called just before a weight update takes place (after accumulated gradients have been divided by the length of the batch). Convenient for hanging some additional gradient accumulating code on.

Type

list

Initargs

:before-update-hook

Readers

before-update-hook.

Writers

(setf before-update-hook).

Class: bm

The network is assembled from CHUNKS (nodes of the
same behaviour) and CLOUDs (connections between two chunks). To instantiate, arrange for VISIBLE-CHUNKS, HIDDEN-CHUNKS,
CLOUDS (either as initargs or initforms) to be set.

Usage of CLOUDS is slightly tricky: you may pass a list of CLOUD objects connected to chunks in this network. Alternatively, a cloud spec may stand for a cloud. Also, the initial value of CLOUDS is merged with the default cloud spec list before the final cloud spec list is instantiated. The default cloud spec list is what FULL-CLOUDS-EVERYWHERE returns for VISIBLE-CHUNKS and HIDDEN-CHUNKS. See MERGE-CLOUD-SPECS for the gory details. The initform, '(:MERGE), simply leaves the default cloud specs alone.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct subclasses
Direct methods
Direct slots
Slot: chunks

A list of all the chunks in this BM. It’s VISIBLE-CHUNKS and HIDDEN-CHUNKS appended.

Type

list

Readers

chunks.

Writers

This slot is read-only.

Slot: visible-chunks

A list of CHUNKs whose values come from the outside world: SET-INPUT sets them.

Type

list

Initargs

:visible-chunks

Readers

visible-chunks.

Writers

This slot is read-only.

Slot: hidden-chunks

A list of CHUNKs that are not directly observed. Disjoint from VISIBLE-CHUNKS.

Type

list

Initargs

:hidden-chunks

Readers

hidden-chunks.

Writers

This slot is read-only.

Slot: visible-and-conditioning-chunks
Type

list

Readers

visible-and-conditioning-chunks.

Writers

This slot is read-only.

Slot: hidden-and-conditioning-chunks
Type

list

Readers

hidden-and-conditioning-chunks.

Writers

This slot is read-only.

Slot: conditioning-chunks
Type

list

Readers

conditioning-chunks.

Writers

This slot is read-only.

Slot: clouds

Normally, a list of CLOUDS representing the
connections between chunks. During initialization cloud specs are allowed in the list.

Type

list

Initform

(quote (:merge))

Initargs

:clouds

Readers

clouds.

Writers

This slot is read-only.

Slot: has-hidden-to-hidden-p
Readers

has-hidden-to-hidden-p.

Writers

This slot is read-only.

Slot: has-visible-to-visible-p
Readers

has-visible-to-visible-p.

Writers

This slot is read-only.

Slot: max-n-stripes
Package

mgl-core.

Initform

1

Initargs

:max-n-stripes

Readers

max-n-stripes.

Writers

This slot is read-only.

Slot: importances
Initargs

:importances

Readers

importances.

Writers

(setf importances).

Class: bm-learner
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct subclasses
Direct methods
Direct slots
Slot: bm
Initargs

:bm

Readers

bm.

Writers

This slot is read-only.

Slot: monitors
Package

mgl-core.

Initargs

:monitors

Readers

monitors.

Writers

This slot is read-only.

Class: bm-pcd-learner

Persistent Contrastive Divergence trainer.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses
Direct methods
Direct slots
Slot: n-particles

The number of persistent chains to run. Also known as the number of fantasy particles.

Type

unsigned-byte

Initargs

:n-particles

Readers

n-particles.

Writers

This slot is read-only.

Slot: persistent-chains

A BM that keeps the states of the persistent
chains (each stripe is a chain), initialized from the BM being trained by COPY with 'PCD as the context. Suitable for training BM and RBM.

Type

mgl-bm:bm

Readers

persistent-chains.

Writers

This slot is read-only.

Class: bp-learner
Package

mgl-bp.

Source

backprop.lisp.

Direct methods
Direct slots
Slot: bpn

The BPN for which this BP-LEARNER provides the gradients.

Initargs

:bpn

Readers

bpn.

Writers

This slot is read-only.

Slot: first-trained-clump
Readers

first-trained-clump.

Writers

This slot is read-only.

Slot: monitors

A list of ‘MONITOR‘s.

Package

mgl-core.

Initargs

:monitors

Readers

monitors.

Writers

(setf monitors).

Class: bpn

Abstract base class for FNN and RNN.

Package

mgl-bp.

Source

backprop.lisp.

Direct superclasses

clump.

Direct subclasses
Direct methods
Direct slots
Slot: clumps

A topologically sorted adjustable array with a fill
pointer that holds the clumps that make up the network. Clumps are added to it by ADD-CLUMP or, more often, automatically within a BUILD-FNN or BUILD-RNN. Rarely needed; FIND-CLUMP takes care of most uses.

Type

(array mgl-bp:clump (*))

Initform

(make-array 0 :element-type (quote mgl-bp:clump) :adjustable t :fill-pointer t)

Initargs

:clumps

Readers

clumps.

Writers

This slot is read-only.

Slot: n-stripes

The current number of instances the network has.
This is automatically set to the number of instances passed to SET-INPUT, so it rarely has to be manipulated directly, although it can be set. When set, N-STRIPES of all CLUMPS is set to the same value.

Package

mgl-core.

Type

mgl-util:index

Initform

1

Initargs

:n-stripes

Readers

n-stripes.

Writers

This slot is read-only.

Slot: max-n-stripes

The maximum number of instances the network can
operate on in parallel. Within BUILD-FNN or BUILD-RNN, it defaults to MAX-N-STRIPES of the parent network, else it defaults to 1. When set, MAX-N-STRIPES of all CLUMPS is set to the same value.

Package

mgl-core.

Type

(or mgl-util:index null)

Initargs

:max-n-stripes

Readers

max-n-stripes.

Writers

This slot is read-only.

Slot: last-cost
Initform

(list 0 0)

Readers

last-cost.

Writers

(setf last-cost).

Class: cg-optimizer

Updates all weights simultaneously after chewing through BATCH-SIZE inputs.

Package

mgl-cg.

Source

conjugate-gradient.lisp.

Direct superclasses

iterative-optimizer.

Direct methods
Direct slots
Slot: batch-size

After having gone through BATCH-SIZE
instances, weights are updated. Normally, CG operates on all available data, but it may be useful to introduce some noise into the optimization to reduce overfitting by using smaller batch sizes. If BATCH-SIZE is not set, it is initialized to the size of the dataset at the start of optimization.

Package

mgl-common.

Initargs

:batch-size

Readers

batch-size.

Writers

(setf batch-size).

Slot: cg-args
Initform

(quote nil)

Initargs

:cg-args

Readers

cg-args.

Writers

(setf cg-args).

Slot: segment-filter

A predicate function on segments that filters out uninteresting segments. Called from INITIALIZE-OPTIMIZER*.

Initform

(constantly t)

Initargs

:segment-filter

Readers

segment-filter.

Writers

This slot is read-only.

Slot: on-cg-batch-done

An event hook called when processing a conjugate
gradient batch is done. The handlers on the hook are called with 8 arguments:

(optimizer gradient-source instances
best-w best-f n-line-searches
n-succesful-line-searches n-evaluations)

The latter 5 are the return values of the CG function.
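A handler can be attached by pushing a function onto the hook (a sketch; the logging body is illustrative only):

```commonlisp
;; Attach a logging handler to ON-CG-BATCH-DONE. The hook is a list of
;; functions, so PUSH adds a handler.
(let ((optimizer (make-instance 'mgl-cg:cg-optimizer :batch-size 100)))
  (push (lambda (optimizer gradient-source instances
                 best-w best-f n-line-searches
                 n-succesful-line-searches n-evaluations)
          (declare (ignore optimizer gradient-source instances best-w
                           n-line-searches n-succesful-line-searches
                           n-evaluations))
          (format t "CG batch done, best cost: ~S~%" best-f))
        (mgl-cg:on-cg-batch-done optimizer)))
```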

Initargs

:on-cg-batch-done

Readers

on-cg-batch-done.

Writers

(setf on-cg-batch-done).

Slot: segment-set

Segments to train.

Package

mgl-opt.

Readers

segment-set.

Writers

This slot is read-only.

Slot: weights
Package

mgl-common.

Type

(or mgl-mat:mat null)

Readers

weights.

Writers

(setf weights).

Slot: spare-vectors

Pre-allocated vectors to make CG less consy.

Type

list

Readers

spare-vectors.

Writers

(setf spare-vectors).

Slot: accumulator
Readers

accumulator.

Writers

This slot is read-only.

Class: cheating-sparsity-gradient-source

Like NORMAL-SPARSITY-GRADIENT-SOURCE, but it needs
less memory because it tracks only the average activation level of each node independently (as opposed to simultaneous activations of pairs of nodes), and thus it may produce the wrong gradient. For example, when two connected nodes are each active a lot but never at the same time, it clearly makes little sense to change the weight between them, yet this is exactly what happens.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

sparsity-gradient-source.

Direct methods
Direct slots
Slot: sum1
Initargs

:sum1

Readers

sum1.

Writers

This slot is read-only.

Slot: old-sum1
Initargs

:old-sum1

Readers

old-sum1.

Writers

This slot is read-only.

Slot: sum2
Initargs

:sum2

Readers

sum2.

Writers

This slot is read-only.

Class: chunk

A chunk is a set of nodes of the same type in a Boltzmann Machine. This is an abstract base class.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct subclasses
Direct methods
Direct slots
Slot: name
Package

mgl-common.

Initform

(gensym)

Initargs

:name

Readers

name.

Writers

This slot is read-only.

Slot: size
Package

mgl-common.

Initargs

:size

Readers

size.

Writers

This slot is read-only.

Slot: n-stripes
Package

mgl-core.

Initform

1

Readers

n-stripes.

Writers

This slot is read-only.

Slot: nodes

A value for each node in the chunk. First,
activations (weights * inputs) are put here, then the mean of the probability distribution is calculated from the activation, and finally (optionally) a sample is taken from the probability distribution. All these values are stored in this vector. This is also where SET-INPUT is supposed to clamp the values. Note that not only the values in the matrix but also the matrix object itself can change when the network is used.

Package

mgl-common.

Readers

nodes.

Writers

This slot is read-only.

Slot: old-nodes

The previous value of each node. Used to provide parallel computation semantics when there are intralayer connections. Swapped with NODES or MEANS at times.

Readers

old-nodes.

Writers

This slot is read-only.

Slot: means

Saved values of the means (see SET-MEAN) last computed.

Readers

means.

Writers

This slot is read-only.

Slot: inputs

This is where the after method of SET-INPUT saves
the input for later use by RECONSTRUCTION-ERROR, INPUTS->NODES. It is NIL in CONDITIONING-CHUNKS.

Readers

inputs.

Writers

This slot is read-only.

Slot: random-numbers
Readers

random-numbers.

Writers

(setf random-numbers).

Slot: scratch

Another matrix that parallels NODES. Used as a temporary.

Readers

scratch.

Writers

(setf scratch).

Slot: indices-present

NIL or a simple vector of array indices into the
layer’s NODES. Need not be ordered. SET-INPUT sets it. Note that if it is non-NIL then N-STRIPES must be 1.

Type

(or null mgl-util:index-vector)

Initargs

:indices-present

Readers

indices-present.

Writers

(setf indices-present).

Class: classification-accuracy-counter

A BASIC-COUNTER with "acc." as its :TYPE
attribute and a PRINT-OBJECT method that prints percentages.

Package

mgl-core.

Source

classification.lisp.

Direct superclasses

basic-counter.

Direct methods

print-object.

Direct slots
Slot: attributes
Initform

(quote (:type "acc."))

Class: cloud

A set of connections between two chunks. The chunks
may be the same, be both visible or both hidden subject to constraints imposed by the type of boltzmann machine the cloud is part of.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct subclasses
Direct methods
Direct slots
Slot: name
Package

mgl-common.

Initargs

:name

Readers

name.

Writers

This slot is read-only.

Slot: chunk1
Type

mgl-bm:chunk

Initargs

:chunk1

Readers

chunk1.

Writers

This slot is read-only.

Slot: chunk2
Type

mgl-bm:chunk

Initargs

:chunk2

Readers

chunk2.

Writers

This slot is read-only.

Slot: scale1

When CHUNK1 is being activated count activations coming from this cloud multiplied by SCALE1.

Type

mgl-util:flt

Initform

1.0d0

Initargs

:scale1

Readers

scale1.

Writers

This slot is read-only.

Slot: scale2

When CHUNK2 is being activated count activations coming from this cloud multiplied by SCALE2.

Type

mgl-util:flt

Initform

1.0d0

Initargs

:scale2

Readers

scale2.

Writers

This slot is read-only.

Slot: cached-version1
Initform

(gensym)

Readers

cached-version1.

Writers

(setf cached-version1).

Slot: cached-version2
Initform

(gensym)

Readers

cached-version2.

Writers

(setf cached-version2).

Slot: cached-activations1
Readers

cached-activations1.

Writers

This slot is read-only.

Slot: cached-activations2
Readers

cached-activations2.

Writers

This slot is read-only.

Class: clump

A CLUMP is a LUMP or a [BPN][class]. It represents
a differentiable function. Arguments of clumps are given during instantiation. Some arguments are clumps themselves, so they get permanently wired together like this:

```commonlisp
(->v*m (->input :size 10 :name 'input)
       (->weight :dimensions '(10 20) :name 'weight)
       :name 'activation)
```

The above creates three clumps: the vector-matrix multiplication clump called ‘ACTIVATION‘ holds references to its operands, INPUT and WEIGHT. Note that the example just defines a function; no actual computation has taken place yet.

This wiring of ‘CLUMP‘s is how one builds feed-forward nets (FNN) or recurrent neural networks (RNN), which are ‘CLUMP‘s themselves, so one can build nets in a hierarchical style if desired. Non-composite ‘CLUMP‘s are called LUMP (note the loss of ‘C‘, which stands for composite). The various LUMP subtypes correspond to different layer types (->SIGMOID, ->DROPOUT, ->RELU, ->TANH, etc).

Package

mgl-bp.

Source

backprop.lisp.

Direct subclasses
Direct methods
Direct slots
Slot: name
Package

mgl-common.

Initform

(gensym)

Initargs

:name

Readers

name.

Writers

This slot is read-only.

Class: command

A gnuplot command. There are separate classes for plot/splot, etc.

Package

mgl-gnuplot.

Source

gnuplot.lisp.

Class: concat-counter

A counter that simply concatenates sequences.

```cl-transcript
(let ((counter (make-instance 'concat-counter)))
  (add-to-counter counter '(1 2 3) #(4 5))
  (add-to-counter counter '(6 7))
  (counter-values counter))
=> (1 2 3 4 5 6 7)
```

Package

mgl-core.

Source

counter.lisp.

Direct superclasses

attributed.

Direct methods
Direct slots
Slot: concatenation
Initargs

:concatenation

Readers

concatenation.

Writers

(setf concatenation).

Slot: concatenation-type

A type designator suitable as the RESULT-TYPE argument to CONCATENATE.

Initform

(quote list)

Initargs

:concatenation-type

Readers

concatenation-type.

Writers

This slot is read-only.

Class: conditioning-chunk

Nodes in CONDITIONING-CHUNK never change their
values on their own so they are to be clamped. Including this chunk in the visible layer allows ‘conditional’ RBMs.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

chunk.

Direct subclasses
Direct methods
Class: confusion-matrix

A confusion matrix keeps count of classification
results. The correct class is called ‘target’ and the output of the classifier is called ‘prediction’.
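A minimal sketch, assuming CONFUSION-COUNT is the SETF-able accessor of the count for a (target, prediction) pair:

```commonlisp
;; Record two classifications: one correct (:cat -> :cat) and one
;; incorrect (:cat predicted as :dog).
(let ((matrix (make-instance 'mgl-core:confusion-matrix)))
  (incf (mgl-core:confusion-count matrix :cat :cat))
  (incf (mgl-core:confusion-count matrix :cat :dog))
  matrix)
```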

Package

mgl-core.

Source

classification.lisp.

Direct methods
Direct slots
Slot: counts
Initform

(make-hash-table)

Initargs

:counts

Readers

counts.

Writers

This slot is read-only.

Class: constant-chunk

A special kind of CONDITIONING-CHUNK whose NODES
are always DEFAULT-VALUE. This conveniently allows biases in the opposing layer.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

conditioning-chunk.

Direct methods
Direct slots
Slot: default-value
Package

mgl-common.

Initform

1.0d0

Readers

default-value.

Writers

This slot is read-only.

Class: constrained-poisson-chunk

Poisson units with normalized (EXP ACTIVATION) means.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

exp-normalized-group-chunk.

Direct methods

sample-chunk.

Class: cross-entropy-counter

A BASIC-COUNTER with "xent" as its :TYPE attribute.

Package

mgl-core.

Source

classification.lisp.

Direct superclasses

basic-counter.

Direct methods
Direct slots
Slot: attributes
Initform

(quote (:type "xent"))

Slot: per-label-counters

A hash table mapping labels to the cross entropy counters for samples with that label.

Initform

(make-hash-table)

Initargs

:per-label-counters

Readers

per-label-counters.

Writers

This slot is read-only.

Class: data-mapping

Consider plot ’-’, sin(x). For gnuplot ’-’
indicates inline data that will be read from the command stream after all mappings are read. The DATA slot of this class holds an object that can be serialized by WRITE-DATA.

Package

mgl-gnuplot.

Source

gnuplot.lisp.

Direct superclasses

mapping.

Direct methods
Direct slots
Slot: data
Initargs

:data

Readers

mapping-data.

Writers

This slot is read-only.

Class: dbm

A Deep Boltzmann Machine. See "Deep Boltzmann
Machines" by Ruslan Salakhutdinov and Geoffrey Hinton at <http://www.cs.toronto.edu/~hinton/absps/dbm.pdf>.

To instantiate, set up LAYERS and CLOUDS but not VISIBLE-CHUNKS and HIDDEN-CHUNKS, because contrary to how initialization works in the superclass (BM), the values of these slots are inferred from LAYERS and CLOUDS: chunks without a connection from below are visible while the rest are hidden.

The default cloud spec list is computed by calling FULL-CLOUDS-EVERYWHERE-BETWEEN-LAYERS on LAYERS.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

bm.

Direct methods
Direct slots
Slot: layers

A list of layers from bottom up. A layer is a list
of chunks. The layers partition the set of all chunks in the BM. Chunks with no connections to layers below are visible (including constant and conditioning) chunks. The layered structure is used in the single, bottom-up, approximate inference pass. When instantiating a DBM, VISIBLE-CHUNKS and HIDDEN-CHUNKS are inferred from LAYERS and CLOUDS.

Type

list

Initargs

:layers

Readers

layers.

Writers

This slot is read-only.

Slot: clouds-up-to-layers

Each element of this list is a list of clouds connected from below to the layer of the same index.

Type

list

Readers

clouds-up-to-layers.

Writers

This slot is read-only.

Class: dbn

Deep Belief Network: a stack of RBMs. DBNs with
multiple hidden layers are not Boltzmann Machines. The chunks in the hidden layer of a constituent RBM and the chunks in the visible layer of the RBM on top of it must be EQ for the DBN to consider them the same. Naming them the same is not enough; in fact, all chunks must have unique names under EQUAL as usual.

Similarly to DBMs, DBNs can be constructed using the :LAYERS initarg. When using this feature, a number of RBMs are instantiated. Often one wants to create a DBN that consists of some RBM subclass, this is what the :RBM-CLASS initarg is for.

Package

mgl-bm.

Source

deep-belief-network.lisp.

Direct methods
Direct slots
Slot: rbms
Type

list

Initargs

:rbms

Readers

rbms.

Writers

This slot is read-only.

Slot: inactive-rbms
Type

list

Readers

inactive-rbms.

Writers

This slot is read-only.

Slot: max-n-stripes
Package

mgl-core.

Initform

1

Initargs

:max-n-stripes

Readers

max-n-stripes.

Writers

This slot is read-only.

Class: diffun

DIFFUN dresses a lisp function (in its FN slot) as
a gradient source (see MGL-OPT::@MGL-OPT-GRADIENT-SOURCE), which allows it to be used in MINIMIZE. See the examples in MGL-GD::@MGL-GD and MGL-CG::@MGL-CG.
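For instance, a univariate function can be minimized like this (a sketch; the optimizer settings are illustrative):

```commonlisp
;; Wrap a plain lisp function as a gradient source and minimize it
;; with stochastic gradient descent, starting from x = 8.
(defun square (x) (* x x))

(mgl-opt:minimize (make-instance 'mgl-gd:sgd-optimizer :termination 1000)
                  (make-instance 'mgl-diffun:diffun :fn #'square)
                  :weights (mgl-mat:make-mat 1 :initial-element 8))
```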

Package

mgl-diffun.

Source

differentiable-function.lisp.

Direct methods
Direct slots
Slot: fn

A real valued lisp function. It may have any number of parameters.

Package

mgl-common.

Initargs

:fn

Readers

fn.

Writers

This slot is read-only.

Slot: parameter-indices

The list of indices of parameters that we don’t
optimize. Values for these will come from the DATASET argument of MINIMIZE.

Initargs

:parameter-indices

Readers

parameter-indices.

Writers

This slot is read-only.

Slot: weight-indices

The list of indices of parameters to be optimized,
the values of which will come from the [WEIGHTS][dislocated] argument of MINIMIZE.

Initargs

:weight-indices

Readers

weight-indices.

Writers

This slot is read-only.

Class: encoder/decoder

Implements O(1) ENCODE and DECODE by having an
internal decoded-to-encoded and an encoded-to-decoded EQUAL hash
table. ENCODER/DECODER objects can be saved and loaded (see @MGL-PERSISTENCE) as long as the elements in the hash tables have read/write consistency.

```cl-transcript
(let ((indexer
        (make-indexer
         (alexandria:alist-hash-table '(("I" . 3) ("me" . 2) ("mine" . 1)))
         2)))
  (values (encode indexer "I")
          (encode indexer "me")
          (encode indexer "mine")
          (decode indexer 0)
          (decode indexer 1)
          (decode indexer 2)))
=> 0
=> 1
=> NIL
=> "I"
=> "me"
=> NIL
```

Package

mgl-core.

Source

feature.lisp.

Direct methods
Direct slots
Slot: encodings
Initargs

:encodings

Readers

encodings.

Writers

This slot is read-only.

Slot: decodings
Initargs

:decodings

Readers

decodings.

Writers

This slot is read-only.

Class: exp-normalized-group-chunk

Means are normalized (EXP ACTIVATION).

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

normalized-group-chunk.

Direct subclasses
Direct methods
Class: factored-cloud

Like FULL-CLOUD but the weight matrix is factored
into a product of two matrices: A*B. At activation time, HIDDEN += VISIBLE*A*B.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

cloud.

Direct methods
Direct slots
Slot: cloud-a

A full cloud whose visible chunk is the same as
the visible chunk of this cloud and whose hidden chunk is the same as the visible chunk of CLOUD-B.

Type

mgl-bm:full-cloud

Initargs

:cloud-a

Readers

cloud-a.

Writers

This slot is read-only.

Slot: cloud-b

A full cloud whose hidden chunk is the same as the
hidden chunk of this cloud and whose visible chunk is the same as the hidden chunk of CLOUD-A.

Type

mgl-bm:full-cloud

Initargs

:cloud-b

Readers

cloud-b.

Writers

This slot is read-only.

Class: file-mapping

When the data comes from a file. As in plot ’filename’ title ’something’.

Package

mgl-gnuplot.

Source

gnuplot.lisp.

Direct superclasses

mapping.

Direct methods
Direct slots
Slot: filename
Initargs

:filename

Readers

filename.

Writers

This slot is read-only.

Class: fnn

A feed-forward neural net (as opposed to a recurrent one, see RNN).

Package

mgl-bp.

Source

backprop.lisp.

Direct superclasses

bpn.

Direct subclasses

fnn-gp.

Class: fnn-clamping-cache

This slot is a sample -> (lump array)* list hash
table. Inherit from this, and SET-INPUT will clamp the arrays to the respective lumps for the right sample.

Package

mgl-unroll.

Source

unroll.lisp.

Direct methods
Direct slots
Slot: clamping-cache
Initform

(make-hash-table)

Readers

clamping-cache.

Writers

This slot is read-only.

Slot: populate-key
Initform

(function identity)

Initargs

:populate-key

Readers

populate-key.

Writers

This slot is read-only.

Slot: populate-convert-to-dbm-sample-fn
Initform

(function identity)

Initargs

:populate-convert-to-dbm-sample-fn

Readers

populate-convert-to-dbm-sample-fn.

Writers

This slot is read-only.

Slot: populate-map-cache-lazily-from-dbm
Type

(or null mgl-bm:dbm)

Initargs

:populate-map-cache-lazily-from-dbm

Readers

populate-map-cache-lazily-from-dbm.

Writers

This slot is read-only.

Slot: populate-periodic-fn
Initargs

:populate-periodic-fn

Readers

populate-periodic-fn.

Writers

This slot is read-only.

Class: fnn-gp
Package

mgl-gp.

Source

gaussian-process.lisp.

Direct superclasses
Direct methods
Direct slots
Slot: mean-lump-name
Initargs

:mean-lump-name

Readers

mean-lump-name.

Writers

This slot is read-only.

Slot: covariance-lump-name
Initargs

:covariance-lump-name

Readers

covariance-lump-name.

Writers

This slot is read-only.

Class: full-cloud
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

cloud.

Direct methods
Direct slots
Slot: weights

A chunk is represented as a row vector
(disregarding the multi-striped case). If the visible chunk is 1xN and the hidden is 1xM then the weight matrix is NxM. Hidden = hidden + visible * weights. Visible = visible + hidden * weights^T.

Package

mgl-common.

Initargs

:weights

Readers

weights.

Writers

This slot is read-only.

Class: function-mapping
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Direct superclasses

mapping.

Direct methods
Direct slots
Slot: expression
Initargs

:expression

Readers

function-expression.

Writers

This slot is read-only.

Class: function-sampler

A sampler with a function in its GENERATOR that
produces a stream of samples which may or may not be finite depending on MAX-N-SAMPLES. FINISHEDP returns T iff MAX-N-SAMPLES is non-nil, and it’s not greater than the number of samples generated (N-SAMPLES).

(list-samples (make-instance 'function-sampler
                             :generator (lambda () (random 10))
                             :max-n-samples 5)
              10)
=> (3 5 2 3 3)

Package

mgl-dataset.

Source

dataset.lisp.

Direct methods
Direct slots
Slot: generator

A generator function of no arguments that returns the next sample.

Initargs

:generator

Readers

generator.

Writers

This slot is read-only.

Slot: n-samples
Initform

0

Initargs

:n-samples

Readers

n-samples.

Writers

This slot is read-only.

Slot: max-n-samples
Initargs

:max-n-samples

Readers

max-n-samples.

Writers

(setf max-n-samples).

Slot: name

An arbitrary object naming the sampler. Only used for printing the sampler object.

Package

mgl-common.

Initargs

:name

Readers

name.

Writers

This slot is read-only.

Class: gaussian-chunk

Nodes are real valued. The sample of a node is its activation plus gaussian noise of unit variance.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

chunk.

Direct methods
Class: gp
Package

mgl-gp.

Source

gaussian-process.lisp.

Direct subclasses
Class: iterative-optimizer

An abstract base class of MGL-GD::@MGL-GD and
MGL-CG::@MGL-CG based optimizers that iterate over instances until a termination condition is met.

Package

mgl-opt.

Source

optimize.lisp.

Direct subclasses
Direct methods
Direct slots
Slot: n-instances

The number of instances this optimizer has seen so far. Incremented automatically during optimization.

Initform

0

Initargs

:n-instances

Readers

n-instances.

Writers

This slot is read-only.

Slot: termination

If a number, it’s the number of instances to train
on in the sense of N-INSTANCES. If N-INSTANCES is equal to or greater than this value, optimization stops. If TERMINATION is NIL, then optimization will continue. If it is T, then optimization will stop. If it is a function of no arguments, then its return value is processed as if it was returned by TERMINATION.
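For example (SGD-OPTIMIZER is used for illustration, and *STOP-REQUESTED* is a hypothetical special variable):

```commonlisp
;; Stop after 10000 instances:
(make-instance 'mgl-gd:sgd-optimizer :termination 10000)

;; Or consult a flag each time the termination condition is checked:
(make-instance 'mgl-gd:sgd-optimizer
               :termination (lambda () *stop-requested*))
```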

Initargs

:termination

Readers

termination.

Writers

(setf termination).

Slot: on-optimization-started

An event hook with parameters ‘(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)‘. Called after initializations are performed (INITIALIZE-OPTIMIZER*, INITIALIZE-GRADIENT-SOURCE*) but before optimization is started.

Initargs

:on-optimization-started

Readers

on-optimization-started.

Writers

(setf on-optimization-started).

Slot: on-optimization-finished

An event hook with parameters ‘(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)‘. Called when optimization has finished.

Initargs

:on-optimization-finished

Readers

on-optimization-finished.

Writers

(setf on-optimization-finished).

Slot: on-n-instances-changed

An event hook with parameters ‘(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)‘. Called when optimization of a batch of instances is done and N-INSTANCES is incremented.

Initargs

:on-n-instances-changed

Readers

on-n-instances-changed.

Writers

(setf on-n-instances-changed).

Class: lump

A LUMP is a simple, layerlike component of a neural
network. There are many kinds of lumps, each of which performs a specific operation or just stores inputs and weights. By convention, the names of lumps start with the prefix ‘->‘. Defined as classes, they also have a function of the same name as the class to create them easily. These maker functions typically have keyword arguments corresponding to initargs of the class, with some (mainly the input lumps) turned into normal positional arguments. So instead of having to do

(make-instance ’->tanh :x some-input :name ’my-tanh)

one can simply write

(->tanh some-input :name ’my-tanh)

Lumps instantiated in any way within a BUILD-FNN or BUILD-RNN are automatically added to the network being built.

A lump has its own NODES and DERIVATIVES matrices allocated for it in which the results of the forward and backward passes are stored. This is in contrast to a [BPN][class] whose NODES and DERIVATIVES are those of its last constituent CLUMP.

Since lumps almost always live within a BPN, their [N-STRIPES][(reader bpn)] and [MAX-N-STRIPES][(reader bpn)] are handled automagically behind the scenes.

Package

mgl-bp.

Source

lumps.lisp.

Direct superclasses

clump.

Direct subclasses
Direct methods
Direct slots
Slot: size

The number of values in a single stripe.

Package

mgl-common.

Type

mgl-util:index

Initargs

:size

Readers

size.

Writers

This slot is read-only.

Slot: nodes

The values computed by the lump in the forward
pass are stored here. It is an ‘N-STRIPES * SIZE‘ matrix that has storage allocated for ‘MAX-N-STRIPES * SIZE‘ elements for non-weight lumps. ->WEIGHT lumps have neither stripes nor restrictions on their shape.

Package

mgl-common.

Type

(or mgl-mat:mat null)

Readers

nodes.

Writers

This slot is read-only.

Slot: derivatives

The derivatives computed in the backward pass are
stored here. This matrix is very much like [NODES][(reader lump)] in shape and size.

Type

(or mgl-mat:mat null)

Readers

derivatives.

Writers

This slot is read-only.

Slot: default-value

Upon creation or resize the lump’s nodes get filled with this value.

Package

mgl-common.

Type

real

Initform

0

Initargs

:default-value

Readers

default-value.

Writers

This slot is read-only.

Slot: shared-with-clump
Initargs

:shared-with-clump

Readers

shared-with-clump.

Writers

This slot is read-only.

Class: mapping

A plot can contain multiple mappings. For example
in

plot sin(x) title ’sine’, "datafile" with lines

there is a function and a file mapping. Subclasses of MAPPING represent the function or the data and the options (e.g. ’title’, ’with’ above).

Package

mgl-gnuplot.

Source

gnuplot.lisp.

Direct subclasses
Direct methods
Direct slots
Slot: options
Type

(or list string)

Initargs

:options

Readers

options.

Writers

This slot is read-only.

Class: monitor

A monitor that has another monitor called MEASURER
embedded in it. When this monitor is applied, it applies the measurer and passes the returned values to ADD-TO-COUNTER called on its COUNTER slot. One may further specialize APPLY-MONITOR to change that.

This class is useful when the same event monitor is applied repeatedly over a period and its results must be aggregated, such as when training statistics are being tracked or when predictions are being made. Note that the monitor must be compatible with the event it handles. That is, the embedded MEASURER must be prepared to take the arguments that are documented to come with the event.
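A minimal sketch (assuming ADD-TO-COUNTER on a BASIC-COUNTER takes a sum and a count): a monitor whose measurer passes values straight through and whose counter aggregates them.

```commonlisp
(let ((monitor (make-instance 'mgl-core:monitor
                              :measurer (lambda (x) (values x 1))
                              :counter (make-instance
                                        'mgl-core:basic-counter))))
  (mgl-core:apply-monitor monitor 3)
  (mgl-core:apply-monitor monitor 5)
  ;; COUNTER-VALUES on the embedded counter returns the aggregate.
  (mgl-core:counter-values (mgl-core:counter monitor)))
```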

Package

mgl-core.

Source

monitor.lisp.

Direct methods
Direct slots
Slot: measurer

This must be a monitor itself which only means
that APPLY-MONITOR is defined on it (but see @MGL-MONITORING). The returned values are aggregated by [COUNTER][(READER MONITOR)]. See @MGL-MEASURER for a library of measurers.

Initargs

:measurer

Readers

measurer.

Writers

This slot is read-only.

Slot: counter

The COUNTER of a monitor carries out the
aggregation of results returned by MEASURER. See @MGL-COUNTER for a library of counters.

Initargs

:counter

Readers

counter.

Writers

This slot is read-only.

Class: normal-sparsity-gradient-source

Keeps track of how often pairs of nodes connected by
CLOUD are simultaneously active. If a node in CHUNK deviates from the target sparsity, that is, its average activation is different from the target, then decrease or increase the weight to nodes to which it’s connected by CLOUD in such a way that it will be closer to the target. Smooth the empirical estimates of simultaneous activations in PRODUCTS by DAMPING.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

sparsity-gradient-source.

Direct methods
Direct slots
Slot: products
Initargs

:products

Readers

products.

Writers

This slot is read-only.

Slot: old-products
Initargs

:old-products

Readers

old-products.

Writers

This slot is read-only.

Class: normalized-batch-gd-optimizer

Like BATCH-GD-OPTIMIZER but keeps count of how many
times each weight was used in the batch and divides the accumulated gradient by this count instead of dividing by N-INSTANCES-IN-BATCH. This only makes a difference if there are missing values in the learner that’s being trained. The main feature that distinguishes this class from PER-WEIGHT-BATCH-GD-OPTIMIZER is that batches end at the same time for all weights.

Package

mgl-gd.

Source

gradient-descent.lisp.

Direct superclasses

batch-gd-optimizer.

Direct methods
Direct slots
Slot: n-weight-uses-in-batch

Number of uses of the weight in its current batch.

Readers

n-weight-uses-in-batch.

Writers

(setf n-weight-uses-in-batch).

Class: parameterized-executor-cache-mixin

Mix this into a model and implement
INSTANCE-TO-EXECUTOR-PARAMETERS and MAKE-EXECUTOR-WITH-PARAMETERS, and DO-EXECUTORS will be able to build executors suitable for different instances. The canonical example is using a BPN to compute the means and covariances of a gaussian process. Since each instance is made of a variable number of observations, the size of the input is not constant; thus we have a bpn (an executor) for each input dimension (the parameters).
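The protocol can be sketched like so (MY-MODEL and MAKE-BPN-FOR-SIZE are hypothetical, and the argument order of MAKE-EXECUTOR-WITH-PARAMETERS is an assumption):

```commonlisp
(defclass my-model (mgl-core:parameterized-executor-cache-mixin)
  ())

;; The returned parameters act as the cache key (compared with EQUAL);
;; here, executors are cached per number of observations.
(defmethod mgl-core:instance-to-executor-parameters
    ((model my-model) instance)
  (length instance))

;; Called on a cache miss to build an executor for those parameters.
(defmethod mgl-core:make-executor-with-parameters
    (parameters (model my-model))
  (make-bpn-for-size parameters))
```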

Package

mgl-core.

Source

core.lisp.

Direct methods
Direct slots
Slot: executor-cache
Initform

(make-hash-table :test (function equal))

Readers

executor-cache.

Writers

This slot is read-only.

Class: per-weight-batch-gd-optimizer

This is much like @MGL-GD-BATCH-GD-OPTIMIZER but it
is more clever about when to update weights. Basically every weight has its own batch independent from the batches of others. This has desirable properties. One can for example put two neural networks together without adding any connections between them and the learning will produce results equivalent to the separated case. Also, adding inputs with only missing values does not change anything.

Due to its very non-batch nature, there is no CUDA implementation of this optimizer.

Package

mgl-gd.

Source

gradient-descent.lisp.

Direct superclasses

gd-optimizer.

Direct methods
Direct slots
Slot: n-weight-uses-in-batch

Number of uses of the weight in its current batch.

Readers

n-weight-uses-in-batch.

Writers

(setf n-weight-uses-in-batch).

Class: periodic-fn
Package

mgl-util.

Source

util.lisp.

Direct methods
Direct slots
Slot: period
Initargs

:period

Readers

period.

Writers

This slot is read-only.

Slot: fn
Package

mgl-common.

Initargs

:fn

Readers

fn.

Writers

This slot is read-only.

Slot: last-eval
Initargs

:last-eval

Readers

last-eval.

Writers

(setf last-eval).

Class: plot
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Direct methods
Direct slots
Slot: kind
Type

(member :|2d| :|3d|)

Initform

:|2d|

Initargs

:kind

Readers

kind.

Writers

This slot is read-only.

Slot: mappings
Initargs

:mappings

Readers

mappings.

Writers

(setf mappings).

Class: posterior-gp
Package

mgl-gp.

Source

gaussian-process.lisp.

Direct superclasses

gp.

Direct methods
Direct slots
Slot: prior-gp
Initargs

:prior-gp

Readers

prior-gp.

Writers

This slot is read-only.

Slot: inverted-covariances
Initargs

:inverted-covariances

Readers

inverted-covariances.

Writers

This slot is read-only.

Slot: evidence-inputs
Initargs

:evidence-inputs

Readers

evidence-inputs.

Writers

This slot is read-only.

Slot: evidence-outputs
Initargs

:evidence-outputs

Readers

evidence-outputs.

Writers

This slot is read-only.

Slot: centered-evidence-outputs
Initargs

:centered-evidence-outputs

Readers

centered-evidence-outputs.

Writers

This slot is read-only.

Class: prior-gp

A GP whose mean and covariance are defined by two lisp functions. Can be updated, but it’s not trainable.

Package

mgl-gp.

Source

gaussian-process.lisp.

Direct superclasses

gp.

Direct methods
Direct slots
Slot: mean-fn
Initargs

:mean-fn

Readers

mean-fn.

Writers

This slot is read-only.

Slot: covariance-fn
Initargs

:covariance-fn

Readers

covariance-fn.

Writers

This slot is read-only.

Class: rbm

An RBM is a BM with no intralayer connections. An
RBM when trained with PCD behaves the same as a BM with the same chunks, clouds but it can also be trained by contrastive divergence (see RBM-CD-TRAINER) and stacked in a DBN.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

bm.

Direct methods
Direct slots
Slot: dbn
Type

(or null mgl-bm:dbn)

Readers

dbn.

Writers

This slot is read-only.

Class: rbm-cd-learner

A contrastive divergence based learner for RBMs.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses
Direct methods
Direct slots
Slot: bm
Initargs

:rbm

Readers

rbm.

Writers

This slot is read-only.

Class: relu-chunk
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

chunk.

Direct methods
Class: rmse-counter

A BASIC-COUNTER whose numerator accumulates
the squares of some statistic. It has the attribute :TYPE "rmse". COUNTER-VALUES returns the square root of what BASIC-COUNTER’s COUNTER-VALUES would return.

(let ((counter (make-instance 'rmse-counter)))
  (add-to-counter counter (+ (* 3 3) (* 4 4)) 2)
  counter)
=> #<RMSE-COUNTER rmse: 3.53553e+0 (2)>

Package

mgl-core.

Source

counter.lisp.

Direct superclasses

basic-counter.

Direct methods

counter-values.

Direct slots
Slot: attributes
Initform

(quote (:type "rmse"))

Class: rnn

A recurrent neural net (as opposed to a
feed-forward one). It is typically built with BUILD-RNN, which is no more than a shallow convenience macro.

An RNN takes instances as inputs that are sequences of variable length. At each time step, the next unprocessed elements of these sequences are set as input until all input sequences in the batch run out. To be able to perform backpropagation, all intermediate ‘LUMP‘s must be kept around, so the recursive connections are transformed out by [unfolding](http://en.wikipedia.org/wiki/Backpropagation_through_time) the network. Just how many lumps this means depends on the length of the sequences.

When an RNN is created, ‘MAX-LAG + 1‘ BPNs are instantiated so that all weights are present and one can start training it.

Package

mgl-bp.

Source

backprop.lisp.

Direct superclasses

bpn.

Direct methods
Direct slots
Slot: unfolder

The UNFOLDER of an RNN is a function of no arguments
that builds and returns a BPN. The unfolder is allowed to create networks with arbitrary topology, even different ones for different [TIME-STEP][]s, with the help of LAG, or nested RNNs. Weights of the same name are shared between the folds. That is, if a ->WEIGHT lump were to be created and a weight lump of the same name already exists, then the existing lump is added to the BPN created by UNFOLDER.

Initargs

:unfolder

Readers

unfolder.

Writers

This slot is read-only.

Slot: max-lag

The networks built by UNFOLDER may contain new
weights up to time step MAX-LAG. Beyond that point, all weight lumps must be reappearances of weight lumps with the same name at previous time steps. Most recurrent networks reference only the state of lumps at the previous time step (with the function LAG), hence the default of 1. But it is possible to have connections to arbitrary time steps. The maximum connection lag must be specified when creating the RNN.

Initform

1

Initargs

:max-lag

Readers

max-lag.

Writers

This slot is read-only.

Slot: input-seqs
Readers

input-seqs.

Writers

(setf input-seqs).

Slot: current-time
Initform

0

Readers

current-time.

Writers

(setf current-time).

Slot: max-time
Initform

0

Readers

max-time.

Writers

(setf max-time).

Slot: weight-lumps
Readers

weight-lumps.

Writers

(setf weight-lumps).

Slot: warp-start

The TIME-STEP from which UNFOLDER will create
‘BPN‘s that essentially repeat every WARP-LENGTH steps.

Initform

1

Initargs

:warp-start

Readers

warp-start.

Writers

This slot is read-only.

Slot: warp-length

An integer such that the BPN UNFOLDER creates at
time step ‘I‘ (where ‘(<= WARP-START I)‘) is identical to the BPN created at time step ‘(+ WARP-START (MOD (- I WARP-START) WARP-LENGTH))‘ except for a shift in its time lagged connections.

Initform

1

Initargs

:warp-length

Readers

warp-length.

Writers

This slot is read-only.
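The index arithmetic in the WARP-LENGTH docstring can be spelled out as a small sketch (this helper is hypothetical, not part of mgl):

```lisp
;; Sketch: which earlier time step's BPN structure is reused at time
;; step I once warping is in effect, per the formula above.
(defun warped-step (i warp-start warp-length)
  (if (< i warp-start)
      i
      (+ warp-start (mod (- i warp-start) warp-length))))

;; With WARP-START 1 and WARP-LENGTH 1 (the defaults), every step past
;; the first reuses the structure built at step 1.
```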

Slot: cuda-window-start-time

Due to unfolding, the memory footprint of an RNN
is almost linear in the number of time steps (i.e. the max sequence length). For prediction, this is addressed by @MGL-RNN-TIME-WARP. For training, we cannot discard results of previous time steps because they are needed for backpropagation, but we can at least move them out of GPU memory if they are not going to be used for a while and copy them back before they are needed. Obviously, this is only relevant if CUDA is being used.

If CUDA-WINDOW-START-TIME is NIL, then this feature is turned off. Else, during training, at CUDA-WINDOW-START-TIME or later time steps, matrices belonging to non-weight lumps may be forced out of GPU memory and later brought back as needed.

This feature is implemented in terms of MGL-MAT:WITH-SYNCING-CUDA-FACETS, which uses CUDA host memory (also known as _page-locked_ or _pinned_ memory) to do asynchronous copies concurrently with normal computation. The consequence of this is that it is now main memory usage that’s unbounded, which together with page-locking makes it a potent weapon to bring a machine to a halt. You were warned.

Initform

mgl-bp:*cuda-window-start-time*

Initargs

:cuda-window-start-time

Readers

cuda-window-start-time.

Writers

(setf cuda-window-start-time).

Slot: step-monitors

During training, unfolded ‘BPN‘s corresponding to
previous time steps may be expensive to get at because they are no longer in GPU memory. This consideration also applies to making predictions, with the additional caveat that with *WARP-TIME* true, previous states are discarded so it’s not possible to gather statistics after FORWARD has finished.

Add monitor objects to this slot and they will be automatically applied to the RNN after each step when ‘FORWARD‘ing the RNN during training or prediction. To be able to easily switch between sets of monitors, in addition to a list of monitors, this can be a symbol or a function, too. If it’s a symbol, then it’s a designator for its SYMBOL-VALUE. If it’s a function, then it must have no arguments and it’s a designator for its return value.

Initargs

:step-monitors

Readers

step-monitors.

Writers

(setf step-monitors).

Slot: remove-trailing-nil-instances
Initargs

:remove-trailing-nil-instances

Readers

remove-trailing-nil-instances.

Writers

(setf remove-trailing-nil-instances).

Class: running-stat
Package

mgl-util.

Source

util.lisp.

Direct methods

print-object.

Direct slots
Slot: n
Initform

0

Slot: mean
Initform

0

Slot: m2
Initform

0
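The slot names N, MEAN and M2 suggest an online mean/variance computation in the style of Welford's algorithm. A sketch of one update step under that assumption (this is not mgl code):

```lisp
;; Welford-style update: returns the new N, MEAN and M2 after seeing X.
(defun update-running-stat (n mean m2 x)
  (let* ((n (1+ n))
         (delta (- x mean))          ; distance from the old mean
         (mean (+ mean (/ delta n))) ; new running mean
         (m2 (+ m2 (* delta (- x mean)))))
    (values n mean m2)))

;; The running variance is then M2/N (or M2/(N-1) for the sample variance).
```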

Class: segment-set

This is a utility class for optimizers that have a
list of SEGMENTS (the weights being optimized) and need to copy back and forth between those segments and a single MAT (the accumulator).
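The copy-into-one-flat-array idea can be illustrated with plain vectors standing in for MATs (a sketch, not mgl code):

```lisp
;; Concatenate the contents of several segments into one flat vector,
;; the kind of mediation SEGMENT-SET performs with MATs.
(defun segments->flat (segments)
  (let ((flat (make-array (reduce #'+ segments :key #'length))))
    (loop with start = 0
          for segment in segments
          do (replace flat segment :start1 start)
             (incf start (length segment)))
    flat))
```

Each segment's contribution starts at a fixed offset, which is what the START-INDICES slot below records.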

Package

mgl-opt.

Source

optimize.lisp.

Direct methods
Direct slots
Slot: segments

A list of weight matrices.

Initargs

:segments

Readers

segments.

Writers

This slot is read-only.

Slot: start-indices
Readers

start-indices.

Writers

This slot is read-only.

Slot: size

The sum of the sizes of the weight matrices of SEGMENTS.

Package

mgl-common.

Readers

size.

Writers

This slot is read-only.

Class: segmented-gd-optimizer

An optimizer that delegates training of segments to
other optimizers. Useful for training different segments with different optimizers (which must be capable of working with segmentables) or simply for not training all segments.

Package

mgl-gd.

Source

gradient-descent.lisp.

Direct superclasses

base-gd-optimizer.

Direct methods
Direct slots
Slot: segmenter

When this optimizer is initialized, it loops over
the segments of the learner with MAP-SEGMENTS. SEGMENTER is a function that is called with each segment and returns an optimizer or NIL. Several segments may be mapped to the same optimizer. After the segment->optimizer mappings are collected, each optimizer is initialized by INITIALIZE-OPTIMIZER with the list of segments mapped to it.
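A minimal sketch of a segmenter, using the SEGMENTED-GD-OPTIMIZER and SGD-OPTIMIZER classes documented in this section (the FROZEN-SEGMENT-P predicate is hypothetical, shown only to illustrate returning NIL for untrained segments):

```lisp
;; Delegate every trainable segment to a fresh SGD optimizer; segments
;; for which the segmenter returns NIL are simply not trained.
(make-instance 'mgl-gd:segmented-gd-optimizer
               :segmenter (lambda (segment)
                            (unless (frozen-segment-p segment) ; hypothetical
                              (make-instance 'mgl-gd:sgd-optimizer))))
```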

Initargs

:segmenter

Readers

segmenter.

Writers

This slot is read-only.

Slot: optimizers
Type

list

Readers

optimizers.

Writers

This slot is read-only.

Slot: segments
Package

mgl-opt.

Type

list

Readers

segments.

Writers

This slot is read-only.

Class: sgd-optimizer

With BATCH-SIZE 1 this is Stochastic Gradient
Descent. With higher batch sizes, one gets mini-batch and Batch Gradient Descent.

Assuming that ACCUMULATOR has the sum of gradients for a mini-batch, the weight update looks like this:

$$
\Delta_w^{t+1} = momentum * \Delta_w^t
+ \frac{accumulator}{batchsize}
+ l_2 w + l_1 sign(w)
$$

$$
w^{t+1} = w^{t} - learningrate * \Delta_w,
$$

which is the same as the more traditional formulation:

$$
\Delta_w^{t+1} = momentum * \Delta_w^{t}
+ learningrate * \left(\frac{\frac{df}{dw}}{batchsize}
+ l_2 w + l_1 sign(w)\right)
$$

$$
w^{t+1} = w^{t} - \Delta_w,
$$

but the former works better when the batch size, momentum or learning rate change during the course of optimization. The above is with normal momentum; Nesterov’s momentum (see MOMENTUM-TYPE) is also available.

See @MGL-GD-BATCH-GD-OPTIMIZER for the description of the various options common to all batch based optimizers.
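With hypothetical numbers, a sketch (not mgl code) checking that the two formulations above agree for a single step when the previous delta is zero and the hyperparameters are held constant:

```lisp
(let* ((learning-rate 0.1) (batch-size 10)
       (w 1.0)
       (gradient-sum 2.0) ; the accumulator: summed df/dw over the batch
       (l2 0.01) (l1 0.001)
       ;; the shared regularized gradient term
       (g (+ (/ gradient-sum batch-size) (* l2 w) (* l1 (signum w))))
       ;; first formulation: learning rate applied at the weight update
       (w1 (- w (* learning-rate g)))
       ;; traditional formulation: learning rate folded into the delta
       (w2 (- w (* learning-rate g))))
  (values w1 w2)) ; both 0.9789; they diverge only when the momentum
                  ; history interacts with changing hyperparameters
```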

Package

mgl-gd.

Source

gradient-descent.lisp.

Direct superclasses

batch-gd-optimizer.

Direct methods

maybe-update-weights.

Class: sigmoid-chunk

Nodes in a sigmoid chunk have two possible samples:
0 and 1. The probability of a node being on is given by the sigmoid of its activation.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

chunk.

Direct methods
Class: softmax-chunk

Binary units with normalized (EXP ACTIVATION)
firing probabilities representing a multinomial distribution. That is, samples have exactly one 1 in each group of GROUP-SIZE.
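The per-group normalization can be sketched with standard Common Lisp (GROUP-SOFTMAX is a hypothetical helper, not mgl code):

```lisp
;; Normalized (EXP ACTIVATION) probabilities within one group of
;; GROUP-SIZE activations.
(defun group-softmax (activations)
  (let* ((exps (mapcar #'exp activations))
         (sum (reduce #'+ exps)))
    (mapcar (lambda (e) (/ e sum)) exps)))

;; Sampling then picks exactly one index per group according to these
;; probabilities, so each group of nodes has exactly one 1.
```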

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

exp-normalized-group-chunk.

Direct subclasses

softmax-label-chunk.

Direct methods

sample-chunk.

Class: softmax-label-chunk
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

softmax-chunk.

Direct methods
Class: sparsity-gradient-source
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct subclasses
Direct methods
Direct slots
Slot: cloud
Type

mgl-bm:cloud

Initargs

:cloud

Readers

cloud.

Writers

This slot is read-only.

Slot: chunk
Type

mgl-bm:chunk

Initargs

:chunk

Readers

chunk.

Writers

This slot is read-only.

Slot: sparsity-target
Type

mgl-util:flt

Initargs

:sparsity-target, :target, :sparsity

Readers
Writers

This slot is read-only.

Slot: cost
Package

mgl-common.

Type

mgl-util:flt

Initargs

:cost

Readers

cost.

Writers

This slot is read-only.

Slot: damping
Type

mgl-util:flt

Initargs

:damping

Readers

damping.

Writers

This slot is read-only.

Class: temporal-chunk

After a SET-HIDDEN-MEAN, the means of
HIDDEN-SOURCE-CHUNK are stored in NEXT-NODE-INPUTS and copied onto NODES at the next SET-INPUT. If there are multiple SET-HIDDEN-MEAN calls between two SET-INPUT calls, then only the first set of values is remembered.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

conditioning-chunk.

Direct methods
Direct slots
Slot: hidden-source-chunk
Initargs

:hidden-source-chunk

Readers

hidden-source-chunk.

Writers

This slot is read-only.

Slot: next-node-inputs
Readers

next-node-inputs.

Writers

This slot is read-only.

Slot: has-inputs-p
Readers

has-inputs-p.

Writers

This slot is read-only.


6.1.9 Types

Type: flt ()
Package

mgl-util.

Source

util.lisp.

Type: flt-vector ()
Package

mgl-util.

Source

util.lisp.

Type: index ()
Package

mgl-util.

Source

util.lisp.

Type: index-vector ()
Package

mgl-util.

Source

util.lisp.

Type: positive-flt ()
Package

mgl-util.

Source

util.lisp.


6.2 Internals


6.2.1 Special variables

Special Variable: *attribute-print-widths*
Package

mgl-core.

Source

counter.lisp.

Special Variable: *bpn-being-built*
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: *current-stripe*
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Special Variable: *gnuplot-binary*
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Special Variable: *in-training-p*
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: *lumps-to-copy*
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: *n-softmax-threads*
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: *names-of-nested-bpns-in-rnn*
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: *objects-copied*
Package

mgl-util.

Source

copy.lisp.

Special Variable: *print-label-counters*
Package

mgl-core.

Source

classification.lisp.

Special Variable: *rnn*
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: *versions*
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Special Variable: +default-population-decay+
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: +default-variance-adjustment+
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-attributes
Package

mgl-core.

Source

counter.lisp.

Special Variable: @mgl-bm
Package

mgl.

Source

mgl.lisp.

Special Variable: @mgl-bp
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: @mgl-bp-*-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-+-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-abs-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-activation-functions
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-activation-subnet
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-activations
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-arithmetic
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-batch-normalization
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-dropout-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-embedding-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-exp-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-extension-api
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: @mgl-bp-gaussian-random-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-input-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-inputs
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-loss-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-losses
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-lstm-subnet
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-lumps
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-max-channel-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-max-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-min-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-monitoring
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: @mgl-bp-normalized-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-overview
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: @mgl-bp-relu-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-rnn-operations
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-sample-binary-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-scaled-tanh-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-seq-barrier-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-sigmoid-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-sine-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-softmax-xe-loss-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-squared-difference-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-stochasticity
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-sum-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-tanh-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-training
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: @mgl-bp-utilities
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-v*m-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bp-weight-lump
Package

mgl-bp.

Source

lumps.lisp.

Special Variable: @mgl-bpn
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: @mgl-cg
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Special Variable: @mgl-classification
Package

mgl-core.

Source

classification.lisp.

Special Variable: @mgl-classification-counter
Package

mgl-core.

Source

classification.lisp.

Special Variable: @mgl-classification-measurer
Package

mgl-core.

Source

classification.lisp.

Special Variable: @mgl-classification-monitor
Package

mgl-core.

Source

classification.lisp.

Special Variable: @mgl-code-organization
Package

mgl.

Source

mgl.lisp.

Special Variable: @mgl-common
Package

mgl-common.

Source

common.lisp.

Special Variable: @mgl-confusion-matrix
Package

mgl-core.

Source

classification.lisp.

Special Variable: @mgl-core
Package

mgl-core.

Source

core.lisp.

Special Variable: @mgl-counter
Package

mgl-core.

Source

counter.lisp.

Special Variable: @mgl-counter-classes
Package

mgl-core.

Source

counter.lisp.

Special Variable: @mgl-dataset
Package

mgl-dataset.

Source

dataset.lisp.

Special Variable: @mgl-dependencies
Package

mgl.

Source

mgl.lisp.

Special Variable: @mgl-diffun
Package

mgl-diffun.

Source

differentiable-function.lisp.

Special Variable: @mgl-executors
Package

mgl-core.

Source

core.lisp.

Special Variable: @mgl-feature-encoding
Package

mgl-core.

Source

feature.lisp.

Special Variable: @mgl-feature-selection
Package

mgl-core.

Source

feature.lisp.

Special Variable: @mgl-features
Package

mgl-core.

Source

feature.lisp.

Special Variable: @mgl-fnn
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: @mgl-fnn-tutorial
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: @mgl-gd
Package

mgl-gd.

Source

gradient-descent.lisp.

Special Variable: @mgl-gd-adam-optimizer
Package

mgl-gd.

Source

gradient-descent.lisp.

Special Variable: @mgl-gd-batch-gd-optimizer
Package

mgl-gd.

Source

gradient-descent.lisp.

Special Variable: @mgl-gd-normalized-batch-gd-optimizer
Package

mgl-gd.

Source

gradient-descent.lisp.

Special Variable: @mgl-gd-per-weight-optimization
Package

mgl-gd.

Source

gradient-descent.lisp.

Special Variable: @mgl-gd-segmented-gd-optimizer
Package

mgl-gd.

Source

gradient-descent.lisp.

Special Variable: @mgl-gd-sgd-optimizer
Package

mgl-gd.

Source

gradient-descent.lisp.

Special Variable: @mgl-gd-utilities
Package

mgl-gd.

Source

gradient-descent.lisp.

Special Variable: @mgl-glossary
Package

mgl.

Source

mgl.lisp.

Special Variable: @mgl-gp
Package

mgl.

Source

mgl.lisp.

Special Variable: @mgl-introduction
Package

mgl.

Source

mgl.lisp.

Special Variable: @mgl-log
Package

mgl-log.

Source

log.lisp.

Special Variable: @mgl-manual
Package

mgl.

Source

mgl.lisp.

Special Variable: @mgl-measurer
Package

mgl-core.

Source

measure.lisp.

Special Variable: @mgl-model-stripe
Package

mgl-core.

Source

core.lisp.

Special Variable: @mgl-monitor
Package

mgl-core.

Source

monitor.lisp.

Special Variable: @mgl-monitoring
Package

mgl-core.

Source

monitor.lisp.

Special Variable: @mgl-nlp
Package

mgl-nlp.

Source

nlp.lisp.

Special Variable: @mgl-nlp-bag-of-words
Package

mgl-nlp.

Source

nlp.lisp.

Special Variable: @mgl-opt
Package

mgl-opt.

Source

optimize.lisp.

Special Variable: @mgl-opt-cost
Package

mgl-opt.

Source

optimize.lisp.

Special Variable: @mgl-opt-extension-api
Package

mgl-opt.

Source

optimize.lisp.

Special Variable: @mgl-opt-gradient-sink
Package

mgl-opt.

Source

optimize.lisp.

Special Variable: @mgl-opt-gradient-source
Package

mgl-opt.

Source

optimize.lisp.

Special Variable: @mgl-opt-iterative-optimizer
Package

mgl-opt.

Source

optimize.lisp.

Special Variable: @mgl-opt-optimizer
Package

mgl-opt.

Source

optimize.lisp.

Special Variable: @mgl-overview
Package

mgl.

Source

mgl.lisp.

Special Variable: @mgl-parameterized-executor-cache
Package

mgl-core.

Source

core.lisp.

Special Variable: @mgl-persistence
Package

mgl-core.

Source

core.lisp.

Special Variable: @mgl-resample
Package

mgl-resample.

Source

resample.lisp.

Special Variable: @mgl-resample-bagging
Package

mgl-resample.

Source

resample.lisp.

Special Variable: @mgl-resample-cross-validation
Package

mgl-resample.

Source

resample.lisp.

Special Variable: @mgl-resample-cv-bagging
Package

mgl-resample.

Source

resample.lisp.

Special Variable: @mgl-resample-misc
Package

mgl-resample.

Source

resample.lisp.

Special Variable: @mgl-resample-partitions
Package

mgl-resample.

Source

resample.lisp.

Special Variable: @mgl-resample-shuffling
Package

mgl-resample.

Source

resample.lisp.

Special Variable: @mgl-rnn
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: @mgl-rnn-time-warp
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: @mgl-rnn-tutorial
Package

mgl-bp.

Source

backprop.lisp.

Special Variable: @mgl-sampler
Package

mgl-dataset.

Source

dataset.lisp.

Special Variable: @mgl-sampler-function-sampler
Package

mgl-dataset.

Source

dataset.lisp.


6.2.2 Macros

Macro: define-slot-not-to-be-copied (context class slot-name)
Package

mgl-util.

Source

copy.lisp.

Macro: define-slot-to-be-shallow-copied (context class slot-name)
Package

mgl-util.

Source

copy.lisp.

Macro: do-chunk ((index chunk) &body body)

Iterate over the indices of nodes of CHUNK skipping missing ones.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Macro: do-cloud-runs (((start end) cloud) &body body)

Iterate over consecutive runs of weights present in CLOUD.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Macro: do-cloud/chunk1 ((chunk1-index cloud) &body body)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Macro: do-sparse-targets (((group-start target-index target-value) targets group-size) &body body)
Package

mgl-bp.

Source

lumps.lisp.

Macro: do-stripes ((chunk &optional stripe) &body body)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Macro: maybe-print-unreadable-object ((object stream &key type identity) &body body)
Package

mgl-core.

Source

counter.lisp.

Macro: with-nil-on-arithmetic-error (&body body)
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Macro: with-safe-printing (&body body)
Package

mgl-util.

Source

util.lisp.

Macro: with-sink-accumulator ((accumulator (segment source sink)) &body body)

Bind ACCUMULATOR to the accumulator MAT associated with SEGMENT of SOURCE in SINK. ACCUMULATOR is dynamic extent. This is a convenience macro on top of CALL-WITH-SINK-ACCUMULATOR.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Macro: with-versions ((version objects) &body body)
Package

mgl-bm.

Source

boltzmann-machine.lisp.


6.2.3 Ordinary functions

Function: ->chunk (chunk-designator chunks)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: ->cloud (cloud-designator bm)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: ->clouds (chunks cloud-specs)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: ->constant (&key name size default-value shared-with-clump)
Package

mgl-unroll.

Source

unroll.lisp.

Function: ->description (object description)
Package

mgl-util.

Source

util.lisp.

Function: accumulated-in-sink-p (segment source sink)

See if SEGMENT of SOURCE has an accumulator associated with it in SINK.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: activate-cloud (cloud reversep &key from-fn to-fn)

From CHUNK1 calculate the activations of CHUNK2 and _add_ them to CHUNK2. If REVERSEP, then swap the roles of the chunks. FROM-FN and TO-FN are the accessors to use to get the nodes’ value arrays (one of #’NODES, #’OLD-NODES, #’MEANS). In the simplest case it adds weights (of CLOUD) * OLD-NODES (of CHUNK1) to the nodes of the hidden chunk.
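The "simplest case" is an accumulating matrix-vector product. A sketch with plain arrays standing in for MATs (not mgl code):

```lisp
;; Add WEIGHTS * FROM into TO, i.e. TO[i] += sum_j WEIGHTS[i][j] * FROM[j].
(defun add-activations (weights from to)
  (dotimes (i (array-dimension weights 0))
    (dotimes (j (array-dimension weights 1))
      (incf (aref to i) (* (aref weights i j) (aref from j)))))
  to)
```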

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: adam-update (step-size mean-estimates variance-estimates variance-adjustment weight-deltas)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: add-and-forget-derivatives (bpn gradient-sink multiplier)
Package

mgl-bp.

Source

backprop.lisp.

Function: add-chunk-nodes (chunk from to)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: add-connection (cloud &key from to)
Package

mgl-unroll.

Source

unroll.lisp.

Function: add-diffun-gradients (diff-fn weights args accumulator multiplier)
Package

mgl-diffun.

Source

differentiable-function.lisp.

Function: add-rbm (rbm dbn)
Package

mgl-bm.

Source

deep-belief-network.lisp.

Function: all-document-classes (documents class-fn)
Package

mgl-core.

Source

feature.lisp.

Function: all-rbms (dbn)
Package

mgl-bm.

Source

deep-belief-network.lisp.

Function: attribute-print-width (attribute)
Package

mgl-core.

Source

counter.lisp.

Function: bm-type-name (bm)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: both-cloud-ends-in-p (cloud chunks)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: call-with-weights-copied (from-clump fn)
Package

mgl-bp.

Source

lumps.lisp.

Function: check-dbm-clouds (dbm)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: check-limit (value limit)
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Function: check-no-name-clashes (rbms)
Package

mgl-bm.

Source

deep-belief-network.lisp.

Function: check-no-self-connection (bm)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: check-rnn (name)
Package

mgl-bp.

Source

backprop.lisp.

Function: check-size-and-default-size (lump size)
Package

mgl-bp.

Source

lumps.lisp.

Function: check-stripes (chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: check-valuep (valuep)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: check-weights-not-new (rnn bpn)
Package

mgl-bp.

Source

backprop.lisp.

Function: chunk-activation-lump-name (chunk-name kind)

The name of the lump that computes the activations of CHUNK.

Package

mgl-unroll.

Source

unroll.lisp.

Function: clamp-cached-entry-on-fnn (fnn stripe sample &key key)
Package

mgl-unroll.

Source

unroll.lisp.

Function: cloud-between-chunks-p (cloud chunks1 chunks2)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: cloud-linear-lump-name (cloud-name transposep)

The name of the lump that represents part of the activation of a chunk. CLOUD comes from the rbm in the DBN. TRANSPOSEP determines from which direction the activation crosses the cloud.

Package

mgl-unroll.

Source

unroll.lisp.

Function: cloud-weight-lump-name (cloud-name transposep)

The name of the lump that represents the weights of CLOUD or its transpose. CLOUD comes from the rbm in the DBN.

Package

mgl-unroll.

Source

unroll.lisp.

Function: coerce-to-fresh-vector (seq)
Package

mgl-resample.

Source

resample.lisp.

Function: collect-distinct (seq &key key test)
Package

mgl-resample.

Source

resample.lisp.

Function: collect-map-chunks-and-lumps (fnn dbm)

Return a list of chunk, lump sublists. Elements are MAP lumps in FNN and the corresponding chunk in DBM.

Package

mgl-unroll.

Source

unroll.lisp.

Function: collect-non-constant-mats-for-cuda-sync (bpn)
Package

mgl-bp.

Source

backprop.lisp.

Function: compact-binary-feature-vector (feature-vector element-type)
Package

mgl-nlp.

Source

nlp.lisp.

Function: compute-derivatives (samples optimizer learner)
Package

mgl-bp.

Source

backprop.lisp.

Function: conditioning-chunk-p (chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: conditioning-clouds-to (chunks clouds)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: confusion-matrix-test (matrix)
Package

mgl-core.

Source

classification.lisp.

Function: connects-to-p (chunk chunks clouds)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: copy-chunk-nodes (chunk from to)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: copy-dbm-chunk-to-dbn (chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: copy-dbm-cloud-to-dbn (cloud clouds layer-below layer1 layer2 layer-above)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: copy-incoming (instance)
Package

mgl-unroll.

Source

unroll.lisp.

Function: copy-lagged-clump (instance)
Package

mgl-bp.

Source

backprop.lisp.

Function: copy-lumpy (instance)
Package

mgl-unroll.

Source

unroll.lisp.

Function: cross-entropy-to-perplexity (cross-entropy)
Package

mgl-util.

Source

util.lisp.

Function: cuda-adam-update (step-size mean-estimates variance-estimates variance-adjustment weight-deltas n &key grid-dim block-dim)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: cuda-adam-update-d (step-size mean-estimates variance-estimates variance-adjustment weight-deltas n &key grid-dim block-dim)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: cuda-adam-update-s (step-size mean-estimates variance-estimates variance-adjustment weight-deltas n &key grid-dim block-dim)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: cuda-dropout-xorwow (x n mask dropout-probability &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-dropout-xorwow-d (x n mask dropout-probability &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-dropout-xorwow-s (x n mask dropout-probability &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-exp-normalized (group-size scale x n &key grid-dim block-dim)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: cuda-exp-normalized-d (group-size scale x n &key grid-dim block-dim)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: cuda-exp-normalized-s (group-size scale x n &key grid-dim block-dim)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: cuda-max (group-size x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-max-channel (group-size x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-max-channel-d (group-size x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-max-channel-derivative (group-size x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-max-channel-derivative-d (group-size x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-max-channel-derivative-s (group-size x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-max-channel-s (group-size x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-max-d (group-size x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-max-derivative (group-size x n l ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-max-derivative-d (group-size x n l ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-max-derivative-s (group-size x n l ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-max-s (group-size x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-maybe-renormalize-columns (x n-rows n-columns l2-upper-bound norms &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-maybe-renormalize-columns-d (x n-rows n-columns l2-upper-bound norms &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-maybe-renormalize-columns-s (x n-rows n-columns l2-upper-bound norms &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-maybe-renormalize-rows (x n-rows n-columns l2-upper-bound norms &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-maybe-renormalize-rows-d (x n-rows n-columns l2-upper-bound norms &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-maybe-renormalize-rows-s (x n-rows n-columns l2-upper-bound norms &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-min (group-size x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-min-d (group-size x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-min-derivative (group-size x n l ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-min-derivative-d (group-size x n l ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-min-derivative-s (group-size x n l ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-min-s (group-size x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-rectify (x y n &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-rectify-d (x y n &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-rectify-derivative (xd l ld n &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-rectify-derivative-d (xd l ld n &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-rectify-derivative-s (xd l ld n &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-rectify-s (x y n &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sample-softmax (group-size scale x n randoms &key grid-dim block-dim)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: cuda-sample-softmax-d (group-size scale x n randoms &key grid-dim block-dim)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: cuda-sample-softmax-s (group-size scale x n randoms &key grid-dim block-dim)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: cuda-scaled-tanh! (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-scaled-tanh!-d (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-scaled-tanh!-s (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-scaled-tanh-derivative! (x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-scaled-tanh-derivative!-d (x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-scaled-tanh-derivative!-s (x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sigmoid! (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sigmoid!-d (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sigmoid!-s (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sigmoid-derivative! (l n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sigmoid-derivative!-d (l n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sigmoid-derivative!-s (l n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sin! (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sin!-d (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sin!-s (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sin-derivative! (x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sin-derivative!-d (x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-sin-derivative!-s (x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-softmax-xe (group-size input n output &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-softmax-xe-d (group-size input n output &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-softmax-xe-derivative (group-size xd n target softmax &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-softmax-xe-derivative-d (group-size xd n target softmax &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-softmax-xe-derivative-s (group-size xd n target softmax &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-softmax-xe-derivative/sparse (group-start group-size xd softmax target-index target-value &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-softmax-xe-derivative/sparse-d (group-start group-size xd softmax target-index target-value &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-softmax-xe-derivative/sparse-s (group-start group-size xd softmax target-index target-value &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-softmax-xe-s (group-size input n output &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-tanh! (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-tanh!-d (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-tanh!-s (x n y &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-tanh-derivative! (x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-tanh-derivative!-d (x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: cuda-tanh-derivative!-s (x n ld xd &key grid-dim block-dim)
Package

mgl-bp.

Source

lumps.lisp.

Function: derive-rough-exponential (x &key signal-variance length-scale roughness)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: differentiate-numerically (fn args index &key delta)
Package

mgl-diffun.

Source

differentiable-function.lisp.

Function: document-features (document mapper)
Package

mgl-core.

Source

feature.lisp.

Function: dropout! (x mask dropout-probability &key n)
Package

mgl-bp.

Source

lumps.lisp.

Function: effective-mean-decay (optimizer)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: encode/bag-of-words (document mapper feature-encoder &key kind encoded-feature-type)
Package

mgl-nlp.

Source

nlp.lisp.

Function: ensure-batch-mean (lump subbatch-index)
Package

mgl-bp.

Source

lumps.lisp.

Function: ensure-batch-variance (lump subbatch-index)
Package

mgl-bp.

Source

lumps.lisp.

Function: ensure-lumpy (lumpies &key depth chunk kind)
Package

mgl-unroll.

Source

unroll.lisp.

Function: ensure-mask (lump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ensure-mat-large-enough (mat prototype)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: ensure-mean-estimates (optimizer)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: ensure-population-mean (lump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ensure-population-variance (lump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ensure-random-numbers (chunk &key div)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: ensure-randoms (lump)
Package

mgl-bp.

Source

lumps.lisp.

Function: ensure-rnn-bpn (rnn)
Package

mgl-bp.

Source

backprop.lisp.

Function: ensure-sampler (dataset)
Package

mgl-dataset.

Source

dataset.lisp.

Function: ensure-scratch (chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: ensure-seq (obj)
Package

mgl-opt.

Source

optimize.lisp.

Function: evaluate-diffun (diff-fn weights args)
Package

mgl-diffun.

Source

differentiable-function.lisp.

Function: extract-covariances (lump stripe n-rows n-cols)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: extract-means (lump stripe)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: factored-cloud-shared-chunk (cloud)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: fill-chunk (chunk value &key allp)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: find-activation-lump-for-weight (->weight bpn)
Package

mgl-bp.

Source

lumps.lisp.

Function: find-enough-weights (seq start weight weights-sum weights-sum-limit)
Package

mgl-resample.

Source

resample.lisp.

Function: find-gp-lump (fnn)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: find-lumpy (lumpies &key depth chunk kind)
Package

mgl-unroll.

Source

unroll.lisp.

Function: find-lumpy-by-name (name lumpies)
Package

mgl-unroll.

Source

unroll.lisp.

Function: find-mm-lumps-from (lump bpn)
Package

mgl-unroll.

Source

unroll.lisp.

Function: find-nested-bpn (bpn path)
Package

mgl-bp.

Source

backprop.lisp.

Function: format-description (description stream)
Package

mgl-util.

Source

util.lisp.

Function: format-full-cloud-norm (cloud)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: full-cloud-norm (cloud)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: full-clouds-everywhere (visible-chunks hidden-chunks)

Return a list of cloud specifications suitable for instantiating a BM. Put a cloud between each pair of visible and hidden chunks unless both are conditioning chunks. The names of the clouds are two-element lists of the names of the visible and hidden chunks.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: full-clouds-everywhere-between-layers (layers)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: gaussian-kernel (x1 x2 &key signal-variance bias-variance length-scale roughness)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: get-all (plist indicator)
Package

mgl-core.

Source

counter.lisp.

Function: gp-data-matrix (inputs outputs)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: gp-data-matrix-for-level (inputs means covariances level)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: hijack-means-to-activation (chunks clouds)

Set NODES of CHUNKS to the activations calculated from CLOUDS. Skip chunks that don't need activations. If ADDP is true, don't zero NODES first but add to them.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Reader: incoming-cloud (instance)
Writer: (setf incoming-cloud) (instance)
Package

mgl-unroll.

Source

unroll.lisp.

Target Slot

cloud.

Reader: incoming-from-lumpy (instance)
Writer: (setf incoming-from-lumpy) (instance)
Package

mgl-unroll.

Source

unroll.lisp.

Target Slot

from-lumpy.

Function: incoming-list->fnn-definition (to-lumpy incomings)
Package

mgl-unroll.

Source

unroll.lisp.

Function: incoming-p (object)
Package

mgl-unroll.

Source

unroll.lisp.

Reader: incoming-transposep (instance)
Writer: (setf incoming-transposep) (instance)
Package

mgl-unroll.

Source

unroll.lisp.

Target Slot

transposep.

Function: index-scored-features (scored-features n &key start)

Take scored features as a feature -> score hash table (returned by COUNT-FEATURES or COMPUTE-FEATURE-LLR, for instance) and return a feature -> index hash table that maps the first N (or fewer) features with the highest scores to distinct dense indices starting from START.

Package

mgl-core.

Source

feature.lisp.
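
The idea can be sketched in a few lines of plain Common Lisp. This is a hypothetical illustration, not mgl's implementation; the function name INDEX-TOP-SCORED-FEATURES is made up, and only ALEXANDRIA (an mgl dependency) is assumed.

```lisp
;; Sketch only: keep the N highest-scoring features and assign them
;; dense indices starting from START.
(defun index-top-scored-features (scores n &key (start 0))
  (let ((entries (sort (alexandria:hash-table-alist scores) #'> :key #'cdr))
        (index (make-hash-table :test (hash-table-test scores))))
    (loop for entry in entries
          for i upfrom start
          repeat n
          do (setf (gethash (car entry) index) i))
    index))
```
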

Function: insert-into-executor-cache (parameters cache value)
Package

mgl-core.

Source

core.lisp.

Function: kind-to-string (kind)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Reader: lagged-clump-lag (instance)
Writer: (setf lagged-clump-lag) (instance)
Package

mgl-bp.

Source

backprop.lisp.

Target Slot

lag.

Reader: lagged-clump-name (instance)
Writer: (setf lagged-clump-name) (instance)
Package

mgl-bp.

Source

backprop.lisp.

Target Slot

name.

Function: lagged-clump-p (object)
Package

mgl-bp.

Source

backprop.lisp.

Reader: lagged-clump-path (instance)
Writer: (setf lagged-clump-path) (instance)
Package

mgl-bp.

Source

backprop.lisp.

Target Slot

path.

Function: lisp-adam-update (step-size mean-estimates variance-estimates variance-adjustment weight-deltas n)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: lisp-adam-update-d (step-size mean-estimates variance-estimates variance-adjustment weight-deltas n)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: lisp-adam-update-s (step-size mean-estimates variance-estimates variance-adjustment weight-deltas n)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: lisp-dropout (x start-x n mask start-mask dropout-probability)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-dropout-d (x start-x n mask start-mask dropout-probability)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-dropout-s (x start-x n mask start-mask dropout-probability)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max (group-size x n y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max-channel (group-size x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max-channel-d (group-size x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max-channel-derivative (group-size x start-x n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max-channel-derivative-d (group-size x start-x n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max-channel-derivative-s (group-size x start-x n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max-channel-s (group-size x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max-d (group-size x n y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max-derivative (group-size x n l ld xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max-derivative-d (group-size x n l ld xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max-derivative-s (group-size x n l ld xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-max-s (group-size x n y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-maybe-renormalize-columns (x start-x n-rows n-columns l2-upper-bound norms start-norms)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-maybe-renormalize-columns-d (x start-x n-rows n-columns l2-upper-bound norms start-norms)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-maybe-renormalize-columns-s (x start-x n-rows n-columns l2-upper-bound norms start-norms)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-maybe-renormalize-rows (x start-x n-rows n-columns l2-upper-bound norms start-norms)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-maybe-renormalize-rows-d (x start-x n-rows n-columns l2-upper-bound norms start-norms)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-maybe-renormalize-rows-s (x start-x n-rows n-columns l2-upper-bound norms start-norms)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-min (group-size x n y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-min-d (group-size x n y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-min-derivative (group-size x n l ld xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-min-derivative-d (group-size x n l ld xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-min-derivative-s (group-size x n l ld xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-min-s (group-size x n y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-rectify (x start-x y start-y n)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-rectify-d (x start-x y start-y n)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-rectify-derivative (xd start-xd l start-l ld start-ld n)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-rectify-derivative-d (xd start-xd l start-l ld start-ld n)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-rectify-derivative-s (xd start-xd l start-l ld start-ld n)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-rectify-s (x start-x y start-y n)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-scaled-tanh! (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-scaled-tanh!-d (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-scaled-tanh!-s (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-scaled-tanh-derivative! (x start-l n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-scaled-tanh-derivative!-d (x start-l n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-scaled-tanh-derivative!-s (x start-l n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sigmoid! (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sigmoid!-d (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sigmoid!-s (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sigmoid-derivative! (l start-l n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sigmoid-derivative!-d (l start-l n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sigmoid-derivative!-s (l start-l n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sin! (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sin!-d (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sin!-s (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sin-derivative! (x start-x n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sin-derivative!-d (x start-x n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-sin-derivative!-s (x start-x n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-softmax-xe (group-size x n softmax)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-softmax-xe-d (group-size x n softmax)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-softmax-xe-derivative (group-size xd n target softmax)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-softmax-xe-derivative-d (group-size xd n target softmax)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-softmax-xe-derivative-s (group-size xd n target softmax)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-softmax-xe-derivative/sparse (group-start group-size xd softmax target-index target-value)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-softmax-xe-derivative/sparse-d (group-start group-size xd softmax target-index target-value)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-softmax-xe-derivative/sparse-s (group-start group-size xd softmax target-index target-value)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-softmax-xe-s (group-size x n softmax)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-tanh! (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-tanh!-d (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-tanh!-s (x start-x n y start-y)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-tanh-derivative! (x start-l n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-tanh-derivative!-d (x start-l n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lisp-tanh-derivative!-s (x start-l n ld start-ld xd start-xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: lookup-executor-cache (parameters cache)
Package

mgl-core.

Source

core.lisp.

Function: lumpies->fnn-definition (lumpies)
Package

mgl-unroll.

Source

unroll.lisp.

Function: lumpy-activation-name (lumpy)
Package

mgl-unroll.

Source

unroll.lisp.

Reader: lumpy-chunk (instance)
Writer: (setf lumpy-chunk) (instance)
Package

mgl-unroll.

Source

unroll.lisp.

Target Slot

chunk.

Reader: lumpy-depth (instance)
Writer: (setf lumpy-depth) (instance)
Package

mgl-unroll.

Source

unroll.lisp.

Target Slot

depth.

Reader: lumpy-incomings (instance)
Writer: (setf lumpy-incomings) (instance)
Package

mgl-unroll.

Source

unroll.lisp.

Target Slot

incomings.

Reader: lumpy-kind (instance)
Writer: (setf lumpy-kind) (instance)
Package

mgl-unroll.

Source

unroll.lisp.

Target Slot

kind.

Function: lumpy-name (lumpy)
Package

mgl-unroll.

Source

unroll.lisp.

Reader: lumpy-original (instance)
Writer: (setf lumpy-original) (instance)
Package

mgl-unroll.

Source

unroll.lisp.

Target Slot

original.

Function: lumpy-p (object)
Package

mgl-unroll.

Source

unroll.lisp.

Reader: lumpy-symbol (instance)
Writer: (setf lumpy-symbol) (instance)
Package

mgl-unroll.

Source

unroll.lisp.

Target Slot

symbol.

Function: make-do-cloud/chunk2 (chunk2-index index chunk2-size offset body)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: make-incoming (&key from-lumpy cloud transposep)
Package

mgl-unroll.

Source

unroll.lisp.

Function: make-instance-args (class)
Package

mgl-util.

Source

util.lisp.

Function: make-lagged-clump (&key path name lag)
Package

mgl-bp.

Source

backprop.lisp.

Function: make-lumpy (&key depth chunk incomings original kind symbol)
Package

mgl-unroll.

Source

unroll.lisp.

Function: make-matrix-from-lump-stripe (lump n-rows n-cols stripe)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: make-vector-from-lump-stripe (lump stripe)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: map-sparser (learner sink)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: mat-and-row/column-sum-size (mat-and-row/column-list)
Package

mgl-bp.

Source

lumps.lisp.

Function: maybe-add-name-to-init (init-form symbol)
Package

mgl-bp.

Source

backprop.lisp.

Function: maybe-copy-weight (class-name args)
Package

mgl-bp.

Source

lumps.lisp.

Function: maybe-remember (chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: maybe-renormalize-columns (mat l2-upper-bound norms)
Package

mgl-bp.

Source

lumps.lisp.

Function: maybe-renormalize-rows (mat l2-upper-bound norms)
Package

mgl-bp.

Source

lumps.lisp.

Function: maybe-use-remembered (chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: means-or-samples (learner bm chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: merge-weights-and-arguments (diff-fn weights args)
Package

mgl-diffun.

Source

differentiable-function.lisp.

Function: monitor-optimization-periodically* (optimizer periodic-fn)
Package

mgl-opt.

Source

optimize.lisp.

Function: name-clashes (list)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: negate-vector (v &key result)
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Function: node-change (chunks)

Return the average of the absolute values of NODES - OLD-NODES over CHUNKS. The second value returned is the number of nodes that contributed to the average.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: nodes->means (chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: norm (matrix)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: other-chunk (cloud chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: pad-to-width (value width)
Package

mgl-core.

Source

counter.lisp.

Function: pax-pages ()
Package

mgl.

Source

doc.lisp.

Function: pax-sections ()
Package

mgl.

Source

doc.lisp.

Function: perplexity-to-cross-entropy (perplexity)
Package

mgl-util.

Source

util.lisp.

Function: pipe-to-gnuplot (input &key binary display geometry persistp output error)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Function: plists-to-column-widths (plists)
Package

mgl-core.

Source

counter.lisp.

Function: polack-ribiere (old-df new-df)
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Function: posterior-gp-means-and-covariances (gp x1 x2 &key compute-covariances-p)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: pprint-descriptions (class descriptions stream)
Package

mgl-util.

Source

util.lisp.

Function: previous-rbm (dbn rbm)
Package

mgl-bm.

Source

deep-belief-network.lisp.

Function: print-name (attributed stream)
Package

mgl-core.

Source

counter.lisp.

Function: process-batch (optimizer learner batch weights derivatives)
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Function: rectify! (x y &key n)
Package

mgl-bp.

Source

lumps.lisp.

Function: remove-if* (filter seq)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: remove-trailing-nils (seq)
Package

mgl-bp.

Source

backprop.lisp.

Function: remove-unkeyword-args (args unkeyword-args)
Package

mgl-util.

Source

util.lisp.

Function: renormalize-mats (mat-and-row/column-list l2-upper-bound)
Package

mgl-bp.

Source

lumps.lisp.

Function: resolve-clumps (object)
Package

mgl-bp.

Source

backprop.lisp.

Function: resolve-step-monitors (step-monitors)
Package

mgl-bp.

Source

backprop.lisp.

Function: reverse-map (hash-table)
Package

mgl-core.

Source

feature.lisp.

Function: rnn-backward-cuda-syncs (rnn)
Package

mgl-bp.

Source

backprop.lisp.

Function: rnn-forward-cuda-syncs (rnn)
Package

mgl-bp.

Source

backprop.lisp.

Function: rough-exponential (x &key signal-variance length-scale roughness)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: sample-gp (gp inputs &key means covariances)

Return a sample from the multivariate normal distribution defined by GP at INPUTS as a column vector.

Package

mgl-gp.

Source

gaussian-process.lisp.

Function: sample-with-replacement* (ratio seq &key weight random-state)
Package

mgl-resample.

Source

resample.lisp.

Function: sample-without-replacement* (ratio seq &key weight random-state)
Package

mgl-resample.

Source

resample.lisp.

Function: scaled-tanh! (x y)
Package

mgl-bp.

Source

lumps.lisp.

Function: scaled-tanh-derivative! (x ld xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: set-mean (chunks bm &key other-chunks)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: set-mean* (chunks bm &key other-chunks)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: set-n-rbms (dbn n-rbms)
Package

mgl-bm.

Source

deep-belief-network.lisp.

Function: set-up-n-weight-uses (optimizer)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: shuffle-vector! (vector)
Package

mgl-resample.

Source

resample.lisp.

Function: sigmoid! (x y)
Package

mgl-bp.

Source

lumps.lisp.

Function: sigmoid-derivative! (l ld xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: sin! (x y)
Package

mgl-bp.

Source

lumps.lisp.

Function: sin-derivative! (x ld xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: sort-groups! (seq pred group-size &key key start end)
Package

mgl-util.

Source

util.lisp.

Function: split-by-index (seq pred)

Partition SEQ into two sequences: one with the elements whose indices PRED returns true for, and one with the rest. The order of elements remains stable within the two splits.

Package

mgl-resample.

Source

resample.lisp.
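
A stable index-based partition can be sketched as follows. This is an illustrative version, not mgl's implementation (which also preserves the input sequence type):

```lisp
;; Sketch only: partition SEQ by PRED applied to element indices,
;; preserving relative order within each split.
(defun split-by-index-sketch (seq pred)
  (let ((yes ()) (no ()))
    (loop for element in (coerce seq 'list)
          for index upfrom 0
          do (if (funcall pred index)
                 (push element yes)
                 (push element no)))
    (values (nreverse yes) (nreverse no))))
;; (split-by-index-sketch '(a b c d) #'evenp) => (A C), (B D)
```
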

Function: stable-set-difference (list1 list2)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: stripe-binding (stripe object start &optional end)
Package

mgl-core.

Source

core.lisp.

Function: sum-chunk-nodes-and-old-nodes (chunk node-weight old-node-weight)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: sum-nodes-and-old-nodes (chunks node-weight old-node-weight)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: swap-nodes (chunks)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: swap-nodes* (chunks)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: tanh! (x y)
Package

mgl-bp.

Source

lumps.lisp.

Function: tanh-derivative! (x ld xd)
Package

mgl-bp.

Source

lumps.lisp.

Function: time->string (&optional time)
Package

mgl-log.

Source

log.lisp.

Function: train-batch (optimizer learner batch)
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Function: trivially-map-over-executors (fn instances obj)
Package

mgl-core.

Source

core.lisp.

Function: update-all-weights/adam (optimizer)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: update-all-weights/adam-use-segment-derivatives (optimizer)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: update-all-weights/sgd-nesterov (optimizer)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: update-all-weights/sgd-no-momentum (optimizer)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: update-all-weights/sgd-normal (optimizer)
Package

mgl-gd.

Source

gradient-descent.lisp.

Function: update-direction (s old-df new-df)
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Function: update-gp* (gp inputs outputs means covariances)
Package

mgl-gp.

Source

gaussian-process.lisp.

Function: use-blas-on-chunk-p (chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: v1=v2+c*v3 (v1 v2 c v3)
Package

mgl-cg.

Source

conjugate-gradient.lisp.

Function: version (obj)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: visible-nodes->means (bm)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Function: write-options (options stream)
Package

mgl-gnuplot.

Source

gnuplot.lisp.

Function: zero-chunk (chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.


6.2.4 Generic functions

Generic Function: accumulate-cloud-statistics (learner bm cloud gradient-sink multiplier)

Take the accumulator of LEARNER that corresponds to CLOUD and add MULTIPLIER times the cloud statistics of [persistent] contrastive divergence.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: accumulate-cloud-statistics (learner bm (cloud factored-cloud) gradient-sink multiplier)
Method: accumulate-cloud-statistics (learner bm (cloud full-cloud) gradient-sink multiplier)
Generic Function: accumulate-cloud-statistics* (cloud v1 v2 v1-scratch importances multiplier accumulator)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: accumulate-cloud-statistics* ((cloud full-cloud) v1 v2 v1-scratch importances multiplier accumulator)
Generic Function: accumulate-negative-phase-statistics (learner gradient-sink multiplier)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: accumulate-negative-phase-statistics ((learner bm-pcd-learner) gradient-sink multiplier)
Method: accumulate-negative-phase-statistics ((learner rbm-cd-learner) gradient-sink multiplier)
Generic Function: accumulate-positive-phase-statistics (learner gradient-sink multiplier)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: accumulate-positive-phase-statistics ((learner bm-pcd-learner) gradient-sink multiplier)
Method: accumulate-positive-phase-statistics ((learner rbm-cd-learner) gradient-sink multiplier)
Method: accumulate-positive-phase-statistics ((learner sparse-bm-learner) gradient-sink multiplier)
Generic Function: accumulate-sparsity-statistics (sparsity importances multiplier)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: accumulate-sparsity-statistics ((sparsity normal-sparsity-gradient-source) importances multiplier)
Method: accumulate-sparsity-statistics ((sparsity cheating-sparsity-gradient-source) importances multiplier)
Generic Reader: accumulator (object)
Package

mgl-gd.

Methods
Reader Method: accumulator ((gd-optimizer gd-optimizer))

automatically generated reader method

Source

gradient-descent.lisp.

Target Slot

accumulator.

Generic Writer: (setf accumulator) (object)
Package

mgl-gd.

Methods
Writer Method: (setf accumulator) ((gd-optimizer gd-optimizer))

automatically generated writer method

Source

gradient-descent.lisp.

Target Slot

accumulator.

Generic Reader: accumulator (object)
Package

mgl-cg.

Methods
Reader Method: accumulator ((cg-optimizer cg-optimizer))

automatically generated reader method

Source

conjugate-gradient.lisp.

Target Slot

accumulator.

Generic Function: activate-cloud* (cloud reversep from-chunk to-chunk from-matrix to-matrix)

Like ACTIVATE-CLOUD but without keyword parameters.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: activate-cloud* ((cloud factored-cloud) reversep from-chunk to-chunk from to)
Method: activate-cloud* ((cloud full-cloud) reversep from-chunk to-chunk from to)
Method: activate-cloud* :around (cloud reversep from-chunk to-chunk from-matrix to-matrix)
Method: activate-cloud* :before (cloud reversep from-chunk to-chunk from-matrix to-matrix)
Generic Reader: adam-time-step (object)
Package

mgl-gd.

Methods
Reader Method: adam-time-step ((adam-optimizer adam-optimizer))

automatically generated reader method

Source

gradient-descent.lisp.

Target Slot

adam-time-step.

Generic Writer: (setf adam-time-step) (object)
Package

mgl-gd.

Methods
Writer Method: (setf adam-time-step) ((adam-optimizer adam-optimizer))

automatically generated writer method

Source

gradient-descent.lisp.

Target Slot

adam-time-step.

Generic Function: args (object)
Package

mgl-bp.

Source

lumps.lisp.

Methods
Reader Method: args ((->+ ->+))

automatically generated reader method

Target Slot

args.

Method: args :around (object)
Generic Function: backward-bpn (bpn &key last-clump)

Accumulate derivatives of weights.

Package

mgl-bp.

Source

backprop.lisp.

Methods
Method: backward-bpn ((rnn rnn) &key last-clump)
Method: backward-bpn ((bpn bpn) &key last-clump)
Generic Reader: batch-mean (object)
Package

mgl-bp.

Methods
Reader Method: batch-mean ((->batch-normalization ->batch-normalization))

automatically generated reader method

Source

lumps.lisp.

Target Slot

batch-mean.

Generic Writer: (setf batch-mean) (object)
Package

mgl-bp.

Methods
Writer Method: (setf batch-mean) ((->batch-normalization ->batch-normalization))

automatically generated writer method

Source

lumps.lisp.

Target Slot

batch-mean.

Generic Reader: batch-variance (object)
Package

mgl-bp.

Methods
Reader Method: batch-variance ((->batch-normalization ->batch-normalization))

automatically generated reader method

Source

lumps.lisp.

Target Slot

batch-variance.

Generic Writer: (setf batch-variance) (object)
Package

mgl-bp.

Methods
Writer Method: (setf batch-variance) ((->batch-normalization ->batch-normalization))

automatically generated writer method

Source

lumps.lisp.

Target Slot

batch-variance.

Generic Reader: cached-activations1 (object)
Package

mgl-bm.

Methods
Reader Method: cached-activations1 ((cloud cloud))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

cached-activations1.

Generic Reader: cached-activations2 (object)
Package

mgl-bm.

Methods
Reader Method: cached-activations2 ((cloud cloud))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

cached-activations2.

Generic Reader: cached-version1 (object)
Package

mgl-bm.

Methods
Reader Method: cached-version1 ((cloud cloud))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

cached-version1.

Generic Writer: (setf cached-version1) (object)
Package

mgl-bm.

Methods
Writer Method: (setf cached-version1) ((cloud cloud))

automatically generated writer method

Source

boltzmann-machine.lisp.

Target Slot

cached-version1.

Generic Reader: cached-version2 (object)
Package

mgl-bm.

Methods
Reader Method: cached-version2 ((cloud cloud))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

cached-version2.

Generic Writer: (setf cached-version2) (object)
Package

mgl-bm.

Methods
Writer Method: (setf cached-version2) ((cloud cloud))

automatically generated writer method

Source

boltzmann-machine.lisp.

Target Slot

cached-version2.

Generic Function: call-with-sink-accumulator (fn segment source sink)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: call-with-sink-accumulator (fn cloud (learner bm-pcd-learner) trainer)
Method: call-with-sink-accumulator (fn segment source sink)
Generic Reader: centered-evidence-outputs (object)
Package

mgl-gp.

Methods
Reader Method: centered-evidence-outputs ((posterior-gp posterior-gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

centered-evidence-outputs.

Generic Function: chunk->fnn-definition (chunk sym name size activation-symbol)

Return an FNN definition form (that is, a list of
lump definition forms) for CHUNK that takes a single activation parameter given by the symbol ACTIVATION-SYMBOL with NAME and SIZE. Only called for non-conditioning chunks. The second value is a list of clamp inits, the third a list of inits.

Package

mgl-unroll.

Source

unroll.lisp.

Methods
Method: chunk->fnn-definition ((chunk sigmoid-chunk) sym name size activation-symbol)
Method: chunk->fnn-definition ((chunk gaussian-chunk) sym name size activation-symbol)
Method: chunk->fnn-definition ((chunk relu-chunk) sym name size activation-symbol)
Method: chunk->fnn-definition ((chunk exp-normalized-group-chunk) sym name size activation-symbol)
Generic Reader: concatenation (object)
Package

mgl-core.

Methods
Reader Method: concatenation ((concat-counter concat-counter))

automatically generated reader method

Source

counter.lisp.

Target Slot

concatenation.

Generic Writer: (setf concatenation) (object)
Package

mgl-core.

Methods
Writer Method: (setf concatenation) ((concat-counter concat-counter))

automatically generated writer method

Source

counter.lisp.

Target Slot

concatenation.

Generic Function: copy-nodes (chunk)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: copy-nodes ((chunk factored-cloud-shared-chunk))
Method: copy-nodes ((chunk chunk))
Method: copy-nodes ((chunk conditioning-chunk))
Generic Reader: counts (object)
Package

mgl-core.

Methods
Reader Method: counts ((confusion-matrix confusion-matrix))

automatically generated reader method

Source

classification.lisp.

Target Slot

counts.

Generic Reader: covariance-fn (object)
Package

mgl-gp.

Methods
Reader Method: covariance-fn ((prior-gp prior-gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

covariance-fn.

Generic Reader: covariances (object)
Package

mgl-gp.

Methods
Reader Method: covariances ((->gp ->gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

covariances.

Generic Reader: current-time (object)
Package

mgl-bp.

Methods
Reader Method: current-time ((rnn rnn))

automatically generated reader method

Source

backprop.lisp.

Target Slot

current-time.

Generic Writer: (setf current-time) (object)
Package

mgl-bp.

Methods
Writer Method: (setf current-time) ((rnn rnn))

automatically generated writer method

Source

backprop.lisp.

Target Slot

current-time.

Generic Reader: decodings (object)
Package

mgl-core.

Methods
Reader Method: decodings ((encoder/decoder encoder/decoder))

automatically generated reader method

Source

feature.lisp.

Target Slot

decodings.

Generic Reader: denominator* (object)
Package

mgl-core.

Methods
Reader Method: denominator* ((basic-counter basic-counter))

automatically generated reader method

Source

counter.lisp.

Target Slot

denominator.

Generic Reader: drop-negative-index-p (object)
Package

mgl-gp.

Methods
Reader Method: drop-negative-index-p ((->ref ->ref))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

drop-negative-index-p.

Generic Reader: encodings (object)
Package

mgl-core.

Methods
Reader Method: encodings ((encoder/decoder encoder/decoder))

automatically generated reader method

Source

feature.lisp.

Target Slot

encodings.

Generic Reader: evidence-inputs (object)
Package

mgl-gp.

Methods
Reader Method: evidence-inputs ((posterior-gp posterior-gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

evidence-inputs.

Generic Reader: evidence-outputs (object)
Package

mgl-gp.

Methods
Reader Method: evidence-outputs ((posterior-gp posterior-gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

evidence-outputs.

Generic Reader: executor-cache (object)
Package

mgl-core.

Methods
Reader Method: executor-cache ((parameterized-executor-cache-mixin parameterized-executor-cache-mixin))

automatically generated reader method

Source

core.lisp.

Target Slot

executor-cache.

Generic Reader: filename (object)
Package

mgl-gnuplot.

Methods
Reader Method: filename ((file-mapping file-mapping))

automatically generated reader method

Source

gnuplot.lisp.

Target Slot

filename.

Generic Function: find-one-executor (instance obj)
Package

mgl-core.

Source

core.lisp.

Methods
Method: find-one-executor (instance obj)
Method: find-one-executor :around (instance obj)
Method: find-one-executor (instance (cached parameterized-executor-cache-mixin))
Generic Reader: first-trained-clump (object)
Package

mgl-bp.

Methods
Reader Method: first-trained-clump ((bp-learner bp-learner))

automatically generated reader method

Source

backprop.lisp.

Target Slot

first-trained-clump.

Generic Function: flush-sparsity (sparsity accumulator n-instances-in-batch multiplier)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: flush-sparsity ((sparsity normal-sparsity-gradient-source) accumulator n-instances-in-batch multiplier)
Method: flush-sparsity ((sparsity cheating-sparsity-gradient-source) accumulator n-instances-in-batch multiplier)
Generic Function: forward-bpn (bpn &key from-clump to-clump end-clump)

Propagate the values from the already clamped
inputs and return the total cost of all inputs (i.e. of all stripes) and the sum of importances. These values are also returned by COST until the next forward pass on BPN.

Package

mgl-bp.

Source

backprop.lisp.

Methods
Method: forward-bpn ((rnn rnn) &key from-clump to-clump end-clump)
Method: forward-bpn ((bpn bpn) &key from-clump to-clump end-clump)
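
Taken together with BACKWARD-BPN, a single training step over these internal entry points could be sketched as follows. This is a hypothetical illustration only: TRAIN-STEP is not part of mgl, and user code normally drives these internals through the bp-learner machinery rather than calling them directly.

```lisp
;; Hypothetical sketch, not part of mgl: clamp a batch of instances,
;; run the forward pass, then accumulate derivatives of the weights
;; with the backward pass. FORWARD-BPN returns the total cost over all
;; stripes and the sum of importances.
(defun train-step (bpn instances)
  (mgl-core:set-input instances bpn)
  (multiple-value-bind (cost sum-importances)
      (mgl-bp::forward-bpn bpn)
    (mgl-bp::backward-bpn bpn)
    (values cost sum-importances)))
```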
Generic Reader: function-expression (object)
Package

mgl-gnuplot.

Methods
Reader Method: function-expression ((function-mapping function-mapping))

automatically generated reader method

Source

gnuplot.lisp.

Target Slot

expression.

Generic Function: gp-covariances* (gp x1 x2)

Returns the matrix of covariances between X1 and
X2. X1 and X2 are vectors of arbitrary objects. Noise is assumed to be included in the covariance function.

Package

mgl-gp.

Source

gaussian-process.lisp.

Methods
Method: gp-covariances* ((fnn fnn-gp) x1 x2)
Method: gp-covariances* ((gp posterior-gp) x1 x2)
Method: gp-covariances* ((gp prior-gp) x1 x2)
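
As an illustration of the PRIOR-GP case, a minimal sketch follows. The :MEAN-FN and :COVARIANCE-FN initargs mirror the MEAN-FN and COVARIANCE-FN readers documented in this manual; treating them as MAKE-INSTANCE initargs is an assumption made for the sake of the example.

```lisp
;; Hypothetical sketch: a prior GP with constant zero mean and a
;; squared-exponential covariance. GP-COVARIANCES* on a PRIOR-GP is
;; expected to evaluate COVARIANCE-FN pairwise over the two input
;; vectors and return the covariance matrix; noise, if any, must
;; already be folded into the covariance function.
(let ((gp (make-instance 'mgl-gp:prior-gp
                         :mean-fn (constantly 0)
                         :covariance-fn (lambda (x1 x2)
                                          (exp (- (expt (- x1 x2) 2)))))))
  (mgl-gp::gp-covariances* gp (vector 0d0 1d0) (vector 0d0 0.5d0 1d0)))
```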
Generic Reader: has-hidden-to-hidden-p (object)
Package

mgl-bm.

Methods
Reader Method: has-hidden-to-hidden-p ((bm bm))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

has-hidden-to-hidden-p.

Generic Reader: has-inputs-p (object)
Package

mgl-bm.

Methods
Reader Method: has-inputs-p ((temporal-chunk temporal-chunk))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

has-inputs-p.

Generic Reader: has-visible-to-visible-p (object)
Package

mgl-bm.

Methods
Reader Method: has-visible-to-visible-p ((bm bm))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

has-visible-to-visible-p.

Generic Reader: hidden-and-conditioning-chunks (object)
Package

mgl-bm.

Methods
Reader Method: hidden-and-conditioning-chunks ((bm bm))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

hidden-and-conditioning-chunks.

Generic Reader: hidden-source-chunk (object)
Package

mgl-bm.

Methods
Reader Method: hidden-source-chunk ((temporal-chunk temporal-chunk))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

hidden-source-chunk.

Generic Reader: inactive-rbms (object)
Package

mgl-bm.

Methods
Reader Method: inactive-rbms ((dbn dbn))

automatically generated reader method

Source

deep-belief-network.lisp.

Target Slot

inactive-rbms.

Generic Function: incoming->fnn-defintion (from-lumpy to-lumpy cloud transposep)

Return a list of four elements. The first is a
list of lump definitions that represent the flow from FROM-LUMPY through CLOUD. The chunk of FROM-LUMPY may be either of the end points of CLOUD. The third value is the cloud inits. The fourth is the name of the ‘end’ lump.

Package

mgl-unroll.

Source

unroll.lisp.

Methods
Method: incoming->fnn-defintion (from-lumpy to-lumpy (cloud full-cloud) transposep)
Method: incoming->fnn-defintion (from-lumpy to-lumpy (cloud factored-cloud) transposep)
Generic Function: initialize-from-cloud (fnn cloud args)
Package

mgl-unroll.

Source

unroll.lisp.

Methods
Method: initialize-from-cloud (fnn (cloud full-cloud) args)
Method: initialize-from-cloud (fnn (cloud factored-cloud) args)
Generic Reader: input-seqs (object)
Package

mgl-bp.

Methods
Reader Method: input-seqs ((rnn rnn))

automatically generated reader method

Source

backprop.lisp.

Target Slot

input-seqs.

Generic Writer: (setf input-seqs) (object)
Package

mgl-bp.

Methods
Writer Method: (setf input-seqs) ((rnn rnn))

automatically generated writer method

Source

backprop.lisp.

Target Slot

input-seqs.

Generic Reader: into (object)
Package

mgl-gp.

Methods
Reader Method: into ((->ref ->ref))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

into.

Generic Reader: inverted-covariances (object)
Package

mgl-gp.

Methods
Reader Method: inverted-covariances ((posterior-gp posterior-gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

inverted-covariances.

Generic Reader: kind (object)
Package

mgl-gnuplot.

Methods
Reader Method: kind ((plot plot))

automatically generated reader method

Source

gnuplot.lisp.

Target Slot

kind.

Generic Reader: last-cost (object)
Package

mgl-bp.

Methods
Reader Method: last-cost ((bpn bpn))

automatically generated reader method

Source

backprop.lisp.

Target Slot

last-cost.

Generic Writer: (setf last-cost) (object)
Package

mgl-bp.

Methods
Writer Method: (setf last-cost) ((bpn bpn))

automatically generated writer method

Source

backprop.lisp.

Target Slot

last-cost.

Generic Reader: length-scale (object)
Package

mgl-gp.

Methods
Reader Method: length-scale ((->rough-exponential ->rough-exponential))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

length-scale.

Generic Function: map-clumps (fn clump)
Package

mgl-bp.

Source

backprop.lisp.

Methods
Method: map-clumps (fn (bpn bpn))
Method: map-clumps (fn clump)
Generic Reader: mapping-data (object)
Package

mgl-gnuplot.

Methods
Reader Method: mapping-data ((data-mapping data-mapping))

automatically generated reader method

Source

gnuplot.lisp.

Target Slot

data.

Generic Reader: mappings (object)
Package

mgl-gnuplot.

Methods
Reader Method: mappings ((plot plot))

automatically generated reader method

Source

gnuplot.lisp.

Target Slot

mappings.

Generic Writer: (setf mappings) (object)
Package

mgl-gnuplot.

Methods
Writer Method: (setf mappings) ((plot plot))

automatically generated writer method

Source

gnuplot.lisp.

Target Slot

mappings.

Generic Reader: mask (object)
Package

mgl-bp.

Methods
Reader Method: mask ((->dropout ->dropout))

automatically generated reader method

Source

lumps.lisp.

Target Slot

mask.

Generic Reader: max-time (object)
Package

mgl-bp.

Methods
Reader Method: max-time ((rnn rnn))

automatically generated reader method

Source

backprop.lisp.

Target Slot

max-time.

Generic Writer: (setf max-time) (object)
Package

mgl-bp.

Methods
Writer Method: (setf max-time) ((rnn rnn))

automatically generated writer method

Source

backprop.lisp.

Target Slot

max-time.

Generic Function: maybe-update-weights (optimizer gradient-source n-new-inputs)

Update the weights being trained. N-NEW-INPUTS have been seen since the last time this was called.

Package

mgl-gd.

Source

gradient-descent.lisp.

Methods
Method: maybe-update-weights ((optimizer segmented-gd-optimizer) gradient-source n-new-inputs)
Method: maybe-update-weights ((optimizer per-weight-batch-gd-optimizer) gradient-source n-new-inputs)
Method: maybe-update-weights ((optimizer normalized-batch-gd-optimizer) gradient-source n-new-inputs)
Method: maybe-update-weights ((optimizer adam-optimizer) gradient-source n-new-inputs)
Method: maybe-update-weights ((optimizer sgd-optimizer) gradient-source n-new-inputs)
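
A hypothetical driver loop around this protocol might look like the following; ACCUMULATE-BATCH-GRADIENTS is an invented placeholder, not an mgl function.

```lisp
;; Hypothetical sketch: after the gradients of a batch of instances
;; have been accumulated into the optimizer's accumulator, give the
;; optimizer a chance to apply them. Whether the weights actually
;; change depends on the optimizer's own batch size.
(defun process-batch (optimizer gradient-source batch)
  (accumulate-batch-gradients gradient-source batch) ; placeholder
  (mgl-gd::maybe-update-weights optimizer gradient-source
                                (length batch)))
```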
Generic Reader: mean-estimates (object)
Package

mgl-gd.

Methods
Reader Method: mean-estimates ((adam-optimizer adam-optimizer))

automatically generated reader method

Source

gradient-descent.lisp.

Target Slot

mean-estimates.

Generic Writer: (setf mean-estimates) (object)
Package

mgl-gd.

Methods
Writer Method: (setf mean-estimates) ((adam-optimizer adam-optimizer))

automatically generated writer method

Source

gradient-descent.lisp.

Target Slot

mean-estimates.

Generic Reader: mean-fn (object)
Package

mgl-gp.

Methods
Reader Method: mean-fn ((prior-gp prior-gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

mean-fn.

Generic Reader: means (object)
Package

mgl-gp.

Methods
Reader Method: means ((->gp ->gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

means.

Generic Reader: n (object)
Package

mgl-gp.

Methods
Reader Method: n ((->stretch ->stretch))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

n.

Reader Method: n ((->rep ->rep))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

n.

Generic Reader: n-instances-in-batch (object)
Package

mgl-gd.

Methods
Reader Method: n-instances-in-batch ((batch-gd-optimizer batch-gd-optimizer))

automatically generated reader method

Source

gradient-descent.lisp.

Target Slot

n-instances-in-batch.

Generic Writer: (setf n-instances-in-batch) (object)
Package

mgl-gd.

Methods
Writer Method: (setf n-instances-in-batch) ((batch-gd-optimizer batch-gd-optimizer))

automatically generated writer method

Source

gradient-descent.lisp.

Target Slot

n-instances-in-batch.

Generic Function: n-instances-until-update (optimizer)

Return the largest number of inputs guaranteed not to cause a change in the learner being trained.

Package

mgl-gd.

Source

gradient-descent.lisp.

Methods
Method: n-instances-until-update ((optimizer segmented-gd-optimizer))
Method: n-instances-until-update ((optimizer per-weight-batch-gd-optimizer))
Method: n-instances-until-update ((optimizer normalized-batch-gd-optimizer))
Method: n-instances-until-update ((optimizer batch-gd-optimizer))
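
For a BATCH-GD-OPTIMIZER this is typically the number of instances remaining in the current batch. A driver could use it to size the next chunk of work; NEXT-CHUNK-SIZE below is a hypothetical helper, not part of mgl.

```lisp
;; Hypothetical sketch: never hand out more instances than can be
;; consumed before the next possible weight update, so that monitoring
;; hooks can run exactly at update boundaries.
(defun next-chunk-size (optimizer max-chunk-size)
  (min max-chunk-size
       (mgl-gd::n-instances-until-update optimizer)))
```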
Generic Reader: n-steps (object)
Package

mgl-bp.

Methods
Reader Method: n-steps ((->batch-normalization ->batch-normalization))

automatically generated reader method

Source

lumps.lisp.

Target Slot

n-steps.

Generic Writer: (setf n-steps) (object)
Package

mgl-bp.

Methods
Writer Method: (setf n-steps) ((->batch-normalization ->batch-normalization))

automatically generated writer method

Source

lumps.lisp.

Target Slot

n-steps.

Generic Reader: next-node-inputs (object)
Package

mgl-bm.

Methods
Reader Method: next-node-inputs ((temporal-chunk temporal-chunk))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

next-node-inputs.

Generic Function: non-constant-mats (clump)
Package

mgl-bp.

Source

backprop.lisp.

Methods
Method: non-constant-mats ((lump ->dropout))
Source

lumps.lisp.

Method: non-constant-mats ((lump ->weight))
Source

lumps.lisp.

Method: non-constant-mats ((lump lump))
Source

lumps.lisp.

Method: non-constant-mats ((bpn bpn))
Generic Reader: numerator* (object)
Package

mgl-core.

Methods
Reader Method: numerator* ((basic-counter basic-counter))

automatically generated reader method

Source

counter.lisp.

Target Slot

numerator.

Generic Reader: old-nodes (object)
Package

mgl-bm.

Methods
Reader Method: old-nodes ((chunk chunk))

The previous value of each node. Used to provide parallel computation semantics when there are intralayer connections. Swapped with NODES or MEANS at times.

Source

boltzmann-machine.lisp.

Target Slot

old-nodes.

Generic Reader: old-products (object)
Package

mgl-bm.

Methods
Reader Method: old-products ((normal-sparsity-gradient-source normal-sparsity-gradient-source))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

old-products.

Generic Reader: old-sum1 (object)
Package

mgl-bm.

Methods
Reader Method: old-sum1 ((cheating-sparsity-gradient-source cheating-sparsity-gradient-source))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

old-sum1.

Generic Reader: optimizers (object)
Package

mgl-gd.

Methods
Reader Method: optimizers ((segmented-gd-optimizer segmented-gd-optimizer))

automatically generated reader method

Source

gradient-descent.lisp.

Target Slot

optimizers.

Generic Reader: options (object)
Package

mgl-gnuplot.

Methods
Reader Method: options ((mapping mapping))

automatically generated reader method

Source

gnuplot.lisp.

Target Slot

options.

Generic Reader: per-label-counters (object)
Package

mgl-core.

Methods
Reader Method: per-label-counters ((cross-entropy-counter cross-entropy-counter))

A hash table mapping labels to the cross entropy counters for samples with that label.

Source

classification.lisp.

Target Slot

per-label-counters.

Generic Reader: period (object)
Package

mgl-util.

Methods
Reader Method: period ((periodic-fn periodic-fn))

automatically generated reader method

Source

util.lisp.

Target Slot

period.

Generic Reader: period (object)
Package

mgl-gp.

Methods
Reader Method: period ((->periodic ->periodic))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

period.

Generic Reader: populate-convert-to-dbm-sample-fn (object)
Package

mgl-unroll.

Methods
Reader Method: populate-convert-to-dbm-sample-fn ((fnn-clamping-cache fnn-clamping-cache))

automatically generated reader method

Source

unroll.lisp.

Target Slot

populate-convert-to-dbm-sample-fn.

Generic Reader: populate-periodic-fn (object)
Package

mgl-unroll.

Methods
Reader Method: populate-periodic-fn ((fnn-clamping-cache fnn-clamping-cache))

automatically generated reader method

Source

unroll.lisp.

Target Slot

populate-periodic-fn.

Generic Reader: population-mean (object)
Package

mgl-bp.

Methods
Reader Method: population-mean ((->batch-normalization ->batch-normalization))

automatically generated reader method

Source

lumps.lisp.

Target Slot

population-mean.

Generic Writer: (setf population-mean) (object)
Package

mgl-bp.

Methods
Writer Method: (setf population-mean) ((->batch-normalization ->batch-normalization))

automatically generated writer method

Source

lumps.lisp.

Target Slot

population-mean.

Generic Reader: population-variance (object)
Package

mgl-bp.

Methods
Reader Method: population-variance ((->batch-normalization ->batch-normalization))

automatically generated reader method

Source

lumps.lisp.

Target Slot

population-variance.

Generic Writer: (setf population-variance) (object)
Package

mgl-bp.

Methods
Writer Method: (setf population-variance) ((->batch-normalization ->batch-normalization))

automatically generated writer method

Source

lumps.lisp.

Target Slot

population-variance.

Generic Reader: posterior-gps (object)
Package

mgl-gp.

Methods
Reader Method: posterior-gps ((->gp ->gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

posterior-gps.

Generic Writer: (setf posterior-gps) (object)
Package

mgl-gp.

Methods
Writer Method: (setf posterior-gps) ((->gp ->gp))

automatically generated writer method

Source

gaussian-process.lisp.

Target Slot

posterior-gps.

Generic Function: print-lump-parts (lump stream)
Package

mgl-bp.

Source

lumps.lisp.

Methods
Method: print-lump-parts ((lump ->v*m) stream)
Method: print-lump-parts ((lump ->softmax-xe-loss) stream)
Method: print-lump-parts ((lump ->max-channel) stream)
Method: print-lump-parts ((lump ->min) stream)
Method: print-lump-parts ((lump ->max) stream)
Method: print-lump-parts ((lump ->batch-normalized) stream)
Method: print-lump-parts ((lump ->dropout) stream)
Method: print-lump-parts (lump stream)
Generic Reader: products (object)
Package

mgl-bm.

Methods
Reader Method: products ((normal-sparsity-gradient-source normal-sparsity-gradient-source))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

products.

Generic Reader: random-numbers (object)
Package

mgl-bm.

Methods
Reader Method: random-numbers ((chunk chunk))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

random-numbers.

Generic Writer: (setf random-numbers) (object)
Package

mgl-bm.

Methods
Writer Method: (setf random-numbers) ((chunk chunk))

automatically generated writer method

Source

boltzmann-machine.lisp.

Target Slot

random-numbers.

Generic Reader: randoms (object)
Package

mgl-bp.

Methods
Reader Method: randoms ((->sample-binary ->sample-binary))

automatically generated reader method

Source

lumps.lisp.

Target Slot

randoms.

Generic Reader: remove-trailing-nil-instances (object)
Package

mgl-bp.

Methods
Reader Method: remove-trailing-nil-instances ((rnn rnn))

automatically generated reader method

Source

backprop.lisp.

Target Slot

remove-trailing-nil-instances.

Generic Writer: (setf remove-trailing-nil-instances) (object)
Package

mgl-bp.

Methods
Writer Method: (setf remove-trailing-nil-instances) ((rnn rnn))

automatically generated writer method

Source

backprop.lisp.

Target Slot

remove-trailing-nil-instances.

Generic Function: resize-chunk (chunk size max-n-stripes)
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: resize-chunk ((chunk temporal-chunk) size max-n-stripes)
Method: resize-chunk ((chunk normalized-group-chunk) size max-n-stripes)
Method: resize-chunk ((chunk constant-chunk) size max-n-stripes)
Method: resize-chunk ((chunk chunk) size max-n-stripes)
Generic Function: resolve-clump (rnn ref)
Package

mgl-bp.

Source

backprop.lisp.

Methods
Method: resolve-clump (rnn (clump clump))
Method: resolve-clump (rnn (lagged lagged-clump))
Generic Reader: roughness (object)
Package

mgl-gp.

Methods
Reader Method: roughness ((->rough-exponential ->rough-exponential))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

roughness.

Generic Reader: samples (object)
Package

mgl-gp.

Methods
Reader Method: samples ((->gp ->gp))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

samples.

Generic Writer: (setf samples) (object)
Package

mgl-gp.

Methods
Writer Method: (setf samples) ((->gp ->gp))

automatically generated writer method

Source

gaussian-process.lisp.

Target Slot

samples.

Generic Reader: scale1 (object)
Package

mgl-bm.

Methods
Reader Method: scale1 ((cloud cloud))

When CHUNK1 is being activated, activations coming from this cloud are multiplied by SCALE1.

Source

boltzmann-machine.lisp.

Target Slot

scale1.

Generic Reader: scale2 (object)
Package

mgl-bm.

Methods
Reader Method: scale2 ((cloud cloud))

When CHUNK2 is being activated, activations coming from this cloud are multiplied by SCALE2.

Source

boltzmann-machine.lisp.

Target Slot

scale2.

Generic Reader: scratch (object)
Generic Writer: (setf scratch) (object)
Package

mgl-bm.

Methods
Reader Method: scratch ((chunk chunk))
Writer Method: (setf scratch) ((chunk chunk))

Another matrix that parallels NODES. Used as a temporary.

Source

boltzmann-machine.lisp.

Target Slot

scratch.

Generic Reader: segment-decay-fn (object)
Generic Writer: (setf segment-decay-fn) (object)
Package

mgl-cg.

Methods
Reader Method: segment-decay-fn ((decayed-cg-optimizer-mixin decayed-cg-optimizer-mixin))
Writer Method: (setf segment-decay-fn) ((decayed-cg-optimizer-mixin decayed-cg-optimizer-mixin))

If not NIL, it’s a designator for a function that
returns the decay for a given segment. For convenience NIL is also treated as 0 decay.

Source

conjugate-gradient.lisp.

Target Slot

segment-decay-fn.

Generic Reader: shared-with-clump (object)
Package

mgl-bp.

Methods
Reader Method: shared-with-clump ((lump lump))

automatically generated reader method

Source

lumps.lisp.

Target Slot

shared-with-clump.

Generic Reader: signal-variance (object)
Package

mgl-gp.

Methods
Reader Method: signal-variance ((->rough-exponential ->rough-exponential))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

signal-variance.

Generic Reader: spare-vectors (object)
Generic Writer: (setf spare-vectors) (object)
Package

mgl-cg.

Methods
Reader Method: spare-vectors ((cg-optimizer cg-optimizer))
Writer Method: (setf spare-vectors) ((cg-optimizer cg-optimizer))

Pre-allocated vectors to make CG less consy.

Source

conjugate-gradient.lisp.

Target Slot

spare-vectors.

Generic Reader: sparsity-gradient-sources (object)
Package

mgl-bm.

Methods
Reader Method: sparsity-gradient-sources ((sparse-bm-learner sparse-bm-learner))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

sparsity-gradient-sources.

Generic Reader: sparsity-target (object)
Package

mgl-bm.

Methods
Reader Method: sparsity-target ((sparsity-gradient-source sparsity-gradient-source))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

sparsity-target.

Generic Reader: start-indices (object)
Package

mgl-opt.

Methods
Reader Method: start-indices ((segment-set segment-set))

automatically generated reader method

Source

optimize.lisp.

Target Slot

start-indices.

Generic Reader: sum1 (object)
Package

mgl-bm.

Methods
Reader Method: sum1 ((cheating-sparsity-gradient-source cheating-sparsity-gradient-source))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

sum1.

Generic Reader: sum2 (object)
Package

mgl-bm.

Methods
Reader Method: sum2 ((cheating-sparsity-gradient-source cheating-sparsity-gradient-source))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

sum2.

Generic Reader: variance-estimates (object)
Package

mgl-gd.

Methods
Reader Method: variance-estimates ((adam-optimizer adam-optimizer))

automatically generated reader method

Source

gradient-descent.lisp.

Target Slot

variance-estimates.

Generic Writer: (setf variance-estimates) (object)
Package

mgl-gd.

Methods
Writer Method: (setf variance-estimates) ((adam-optimizer adam-optimizer))

automatically generated writer method

Source

gradient-descent.lisp.

Target Slot

variance-estimates.

Generic Reader: visible-and-conditioning-chunks (object)
Package

mgl-bm.

Methods
Reader Method: visible-and-conditioning-chunks ((bm bm))

automatically generated reader method

Source

boltzmann-machine.lisp.

Target Slot

visible-and-conditioning-chunks.

Generic Reader: weight-deltas (object)
Package

mgl-gd.

Methods
Reader Method: weight-deltas ((gd-optimizer gd-optimizer))

automatically generated reader method

Source

gradient-descent.lisp.

Target Slot

weight-deltas.

Generic Writer: (setf weight-deltas) (object)
Package

mgl-gd.

Methods
Writer Method: (setf weight-deltas) ((gd-optimizer gd-optimizer))

automatically generated writer method

Source

gradient-descent.lisp.

Target Slot

weight-deltas.

Generic Reader: weight-lumps (object)
Package

mgl-bp.

Methods
Reader Method: weight-lumps ((rnn rnn))

automatically generated reader method

Source

backprop.lisp.

Target Slot

weight-lumps.

Generic Writer: (setf weight-lumps) (object)
Package

mgl-bp.

Methods
Writer Method: (setf weight-lumps) ((rnn rnn))

automatically generated writer method

Source

backprop.lisp.

Target Slot

weight-lumps.

Generic Function: x (object)
Package

mgl-bp.

Source

lumps.lisp.

Methods
Reader Method: x ((->normalized ->normalized))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->exp ->exp))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->sin ->sin))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->abs ->abs))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->* ->*))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->v*m ->v*m))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->sample-binary ->sample-binary))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->softmax-xe-loss ->softmax-xe-loss))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->squared-difference ->squared-difference))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->sum ->sum))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->max-channel ->max-channel))

Input comes from here.

Target Slot

x.

Reader Method: x ((->min ->min))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->max ->max))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->relu ->relu))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->scaled-tanh ->scaled-tanh))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->tanh ->tanh))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->batch-normalized ->batch-normalized))

automatically generated reader method

Target Slot

x.

Reader Method: x ((->dropout ->dropout))

automatically generated reader method

Target Slot

x.

Method: x :around (object)
Generic Reader: x (object)
Package

mgl-gp.

Methods
Reader Method: x ((->periodic ->periodic))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

x.

Reader Method: x ((->rough-exponential ->rough-exponential))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

x.

Reader Method: x ((->stretch ->stretch))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

x.

Reader Method: x ((->rep ->rep))

automatically generated reader method

Source

gaussian-process.lisp.

Target Slot

x.

Generic Function: y (object)
Package

mgl-bp.

Source

lumps.lisp.

Methods
Reader Method: y ((->* ->*))

automatically generated reader method

Target Slot

y.

Reader Method: y ((->squared-difference ->squared-difference))

automatically generated reader method

Target Slot

y.

Method: y :around (object)
Generic Function: zero-weight-to-self (cloud)

In a BM, W_{i,i} is always zero.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Methods
Method: zero-weight-to-self ((cloud factored-cloud))
Method: zero-weight-to-self ((cloud full-cloud))
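
The constraint above can be illustrated with a minimal Python sketch (illustrative only, not mgl's Lisp API): zeroing the self-connection entries on the diagonal of a square weight matrix.

```python
def zero_weight_to_self(w):
    """Zero the diagonal entries W[i][i] of a weight matrix given as
    nested lists, so that no node is connected to itself."""
    for i in range(min(len(w), len(w[0]))):
        w[i][i] = 0.0
    return w
```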

6.2.5 Structures

Structure: incoming
Package

mgl-unroll.

Source

unroll.lisp.

Direct superclasses

structure-object.

Direct slots
Slot: from-lumpy
Readers

incoming-from-lumpy.

Writers

(setf incoming-from-lumpy).

Slot: cloud
Package

mgl-bm.

Readers

incoming-cloud.

Writers

(setf incoming-cloud).

Slot: transposep
Readers

incoming-transposep.

Writers

(setf incoming-transposep).

Structure: lagged-clump
Package

mgl-bp.

Source

backprop.lisp.

Direct superclasses

structure-object.

Direct methods
Direct slots
Slot: path
Readers

lagged-clump-path.

Writers

(setf lagged-clump-path).

Slot: name
Package

mgl-common.

Readers

lagged-clump-name.

Writers

(setf lagged-clump-name).

Slot: lag
Readers

lagged-clump-lag.

Writers

(setf lagged-clump-lag).

Structure: lumpy
Package

mgl-unroll.

Source

unroll.lisp.

Direct superclasses

structure-object.

Direct slots
Slot: depth
Readers

lumpy-depth.

Writers

(setf lumpy-depth).

Slot: chunk
Package

mgl-bm.

Readers

lumpy-chunk.

Writers

(setf lumpy-chunk).

Slot: incomings
Initform

(quote nil)

Readers

lumpy-incomings.

Writers

(setf lumpy-incomings).

Slot: original
Readers

lumpy-original.

Writers

(setf lumpy-original).

Slot: kind
Type

(member nil :reconstruction :map)

Readers

lumpy-kind.

Writers

(setf lumpy-kind).

Slot: symbol
Package

common-lisp.

Initform

(gensym)

Readers

lumpy-symbol.

Writers

(setf lumpy-symbol).


6.2.6 Classes

Class: ->constant
Package

mgl-unroll.

Source

unroll.lisp.

Direct superclasses

lump.

Direct methods
Direct slots
Slot: default-value
Package

mgl-common.

Initform

1

Class: base-gd-optimizer
Package

mgl-gd.

Source

gradient-descent.lisp.

Direct superclasses

iterative-optimizer.

Direct subclasses
Direct methods

minimize*.

Class: bm-mcmc-learner

Parameters for Markov Chain Monte Carlo based trainers for BMs.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

bm-learner.

Direct subclasses
Direct methods
Direct slots
Slot: visible-sampling

Controls whether visible nodes are sampled during learning or whether the mean field is used instead.

Initargs

:visible-sampling

Readers

visible-sampling.

Writers

(setf visible-sampling).

Slot: hidden-sampling

Controls whether and how hidden nodes are sampled
during learning, or whether the mean field is used instead. :HALF-HEARTED, the default value, samples the hiddens but uses the hidden means to calculate the effect of the positive and negative phases on the gradient. The default should almost always be preferable to T, as it is a less noisy estimate.

Type

(member nil :half-hearted t)

Initform

:half-hearted

Initargs

:hidden-sampling

Readers

hidden-sampling.

Writers

(setf hidden-sampling).

Slot: n-gibbs

The number of steps of Gibbs sampling to perform.
This is how many full (HIDDEN -> VISIBLE -> HIDDEN) steps are taken for CD learning, and how many times each chunk is sampled for PCD.

Type

(integer 1)

Initform

1

Initargs

:n-gibbs

Readers

n-gibbs.

Writers

(setf n-gibbs).
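
The role of N-GIBBS can be sketched in Python (an illustrative skeleton, not mgl's API): given hypothetical callables that each draw one conditional sample, CD-style learning runs this many full hidden -> visible -> hidden steps.

```python
def run_gibbs_chain(sample_hidden, sample_visible, v0, n_gibbs=1):
    """Starting from visible configuration v0, take N-GIBBS full
    (hidden -> visible -> hidden) steps and return the final visible
    and hidden configurations (the 'negative phase' sample)."""
    h = sample_hidden(v0)          # positive-phase hidden sample
    for _ in range(n_gibbs):
        v = sample_visible(h)      # reconstruct the visibles
        h = sample_hidden(v)       # resample the hiddens
    return v, h
```

With toy deterministic "samplers" one can see that the chain length grows with N-GIBBS.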

Class: decayed-cg-optimizer-mixin

Mix this in before a CG-based optimizer to conveniently add decay on a per-segment basis.

Package

mgl-cg.

Source

conjugate-gradient.lisp.

Direct methods
Direct slots
Slot: segment-decay-fn

If not NIL, it’s a designator for a function that
returns the decay for a given segment. For convenience NIL is also treated as 0 decay.

Initargs

:segment-decay-fn

Readers

segment-decay-fn.

Writers

(setf segment-decay-fn).

Class: factored-cloud-shared-chunk
Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

chunk.

Direct methods

copy-nodes.

Class: gd-optimizer

Gradient descent optimizer with momentum, weight
decay and weight penalty. Batch size and all other parameters can be changed during training. One may even want to subclass this optimizer and define a method on BATCH-SIZE that makes it a function of N-INSTANCES.

Depending on BATCH-SIZE, this may be stochastic or non-stochastic gradient descent.

Package

mgl-gd.

Source

gradient-descent.lisp.

Direct superclasses

base-gd-optimizer.

Direct subclasses
Direct methods
Direct slots
Slot: segment-set

The set of segments that are to be trained. The
ACCUMULATOR, WEIGHT-DELTAS, etc. vectors are indexed by SEGMENT-SET indices.

Package

mgl-opt.

Readers

segment-set.

Writers

This slot is read-only.

Slot: accumulator
Type

mgl-mat:mat

Readers

accumulator.

Writers

(setf accumulator).

Slot: use-segment-derivatives-p

Save memory if both the gradient source (the model
being optimized) and the optimizer support this feature. It works like this: the accumulator into which the gradient source is asked to place the derivatives of a segment will be SEGMENT-DERIVATIVES of the segment. This allows the optimizer not to allocate an accumulator matrix into which the derivatives are summed.

Initargs

:use-segment-derivatives-p

Readers

use-segment-derivatives-p.

Writers

This slot is read-only.

Slot: weight-deltas
Type

mgl-mat:mat

Readers

weight-deltas.

Writers

(setf weight-deltas).

Slot: batch-size

After having gone through BATCH-SIZE number of
inputs, the weights are updated. With BATCH-SIZE 1, one gets Stochastic Gradient Descent. With BATCH-SIZE equal to the number of instances in the dataset, one gets standard, ’batch’ gradient descent. With BATCH-SIZE between these two extremes, one gets the most practical ’mini-batch’ compromise.

Package

mgl-common.

Initform

1

Initargs

:batch-size

Readers

batch-size.

Writers

(setf batch-size).

Slot: learning-rate

This is the step size along the gradient. Decrease
it if optimization diverges, increase it if it doesn’t make progress.

Initform

0.1

Initargs

:learning-rate

Readers

learning-rate.

Writers

(setf learning-rate).

Slot: momentum

A value in the [0, 1) interval. MOMENTUM times the
previous weight change is added to the gradient. 0 means no momentum.

Initform

0

Initargs

:momentum

Readers

momentum.

Writers

(setf momentum).

Slot: momentum-type

One of :NORMAL, :NESTEROV or :NONE. For pure
optimization Nesterov’s momentum may be better, but it may also increase the chances of overfitting. Using :NONE is equivalent to 0 momentum, but it also uses less memory. Note that with :NONE, MOMENTUM is ignored even if it is non-zero.

Type

(member :none :normal :nesterov)

Initform

:normal

Initargs

:momentum-type

Readers

momentum-type.

Writers

This slot is read-only.

Slot: weight-decay

An L2 penalty. It discourages large weights, much
like a zero-mean Gaussian prior. WEIGHT-DECAY * WEIGHT is added to the gradient to penalize large weights. It’s as if the function whose minimum is sought had WEIGHT-DECAY*sum_i{0.5 * WEIGHT_i^2} added to it.

Initform

0

Initargs

:weight-decay

Readers

weight-decay.

Writers

(setf weight-decay).

Slot: weight-penalty

An L1 penalty. It encourages sparsity.
SIGN(WEIGHT) * WEIGHT-PENALTY is added to the gradient, pushing the weight towards zero. It’s as if the function whose minimum is sought had WEIGHT-PENALTY*sum_i{abs(WEIGHT_i)} added to it. Putting it on feature biases constitutes a sparsity constraint on the features.

Initform

0

Initargs

:weight-penalty

Readers

weight-penalty.

Writers

(setf weight-penalty).

Slot: after-update-hook

A list of functions with no arguments called after each weight update.

Type

list

Initargs

:after-update-hook

Readers

after-update-hook.

Writers

(setf after-update-hook).
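
The interaction of the slots above can be sketched as a single scalar update step in Python (a sketch of plain :NORMAL momentum under the stated slot semantics, not mgl's implementation):

```python
def gd_update(weight, delta, gradient,
              learning_rate=0.1, momentum=0.0,
              weight_decay=0.0, weight_penalty=0.0):
    """One weight update: the L2 term WEIGHT-DECAY * WEIGHT and the L1
    term SIGN(WEIGHT) * WEIGHT-PENALTY are added to the gradient,
    MOMENTUM times the previous weight change is added, and the weight
    moves against the result scaled by LEARNING-RATE."""
    sign = (weight > 0) - (weight < 0)
    g = gradient + weight_decay * weight + weight_penalty * sign
    new_delta = momentum * delta + g
    return weight - learning_rate * new_delta, new_delta
```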

Class: normalized-group-chunk

Means are normalized to SCALE within node groups of GROUP-SIZE.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

chunk.

Direct subclasses

exp-normalized-group-chunk.

Direct methods
Direct slots
Slot: scale

The sum of the means after normalization. Can be
changed during training, for instance when clamping. If it is a vector then its length must be MAX-N-STRIPES which is automatically maintained when changing the number of stripes.

Package

mgl-common.

Type

(or mgl-util:flt mgl-mat:mat)

Initform

1.0d0

Initargs

:scale

Readers

scale.

Writers

(setf scale).

Slot: group-size
Package

mgl-common.

Initform

(error "group-size must be specified.")

Initargs

:group-size

Readers

group-size.

Writers

This slot is read-only.
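
The normalization described above can be sketched in Python (illustrative only, assuming positive means so that group sums are non-zero):

```python
def normalize_groups(means, group_size, scale=1.0):
    """Rescale each consecutive group of GROUP-SIZE means so that
    the group sums to SCALE."""
    out = []
    for i in range(0, len(means), group_size):
        group = means[i:i + group_size]
        total = sum(group)
        out.extend(m * scale / total for m in group)
    return out
```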

Class: sparse-bm-learner

Collect the average means over samples in a batch
and adjust the weights in each cloud connected to a chunk so that the chunk’s average mean is closer to SPARSITY-TARGET. This is implemented by keeping track of the average means of the chunks. The derivative is (M* (MATLISP:TRANSPOSE (M.- C1-MEANS TARGET)) C2-MEANS), and it is added to the derivative at the end of the batch. The batch size comes from the superclass.

Package

mgl-bm.

Source

boltzmann-machine.lisp.

Direct superclasses

bm-learner.

Direct subclasses
Direct methods
Direct slots
Slot: sparsity-gradient-sources
Type

list

Readers

sparsity-gradient-sources.

Writers

This slot is read-only.

Slot: sparser
Initargs

:sparser

Readers

sparser.

Writers

This slot is read-only.


6.2.7 Types

Type: limit ()
Package

mgl-cg.

Source

conjugate-gradient.lisp.


Appendix A Indexes


A.1 Concepts


A.2 Functions

Jump to:   (   -  
A   B   C   D   E   F   G   H   I   K   L   M   N   O   P   R   S   T   U   V   W   X   Y   Z  
Index Entry  Section

(
(setf accumulator): Private generic functions
(setf accumulator): Private generic functions
(setf adam-time-step): Private generic functions
(setf adam-time-step): Private generic functions
(setf after-update-hook): Public generic functions
(setf after-update-hook): Public generic functions
(setf attributes): Public generic functions
(setf attributes): Public generic functions
(setf batch-mean): Private generic functions
(setf batch-mean): Private generic functions
(setf batch-size): Public generic functions
(setf batch-size): Public generic functions
(setf batch-size): Public generic functions
(setf batch-variance): Private generic functions
(setf batch-variance): Private generic functions
(setf before-update-hook): Public generic functions
(setf before-update-hook): Public generic functions
(setf cached-version1): Private generic functions
(setf cached-version1): Private generic functions
(setf cached-version2): Private generic functions
(setf cached-version2): Private generic functions
(setf cg-args): Public generic functions
(setf cg-args): Public generic functions
(setf concatenation): Private generic functions
(setf concatenation): Private generic functions
(setf confusion-count): Public generic functions
(setf confusion-count): Public generic functions
(setf cuda-window-start-time): Public generic functions
(setf cuda-window-start-time): Public generic functions
(setf current-time): Private generic functions
(setf current-time): Private generic functions
(setf dropout): Public generic functions
(setf dropout): Public generic functions
(setf dropout): Public generic functions
(setf dropout): Public generic functions
(setf hidden-sampling): Public generic functions
(setf hidden-sampling): Public generic functions
(setf importance): Public generic functions
(setf importance): Public generic functions
(setf importances): Public generic functions
(setf importances): Public generic functions
(setf incoming-cloud): Private ordinary functions
(setf incoming-from-lumpy): Private ordinary functions
(setf incoming-transposep): Private ordinary functions
(setf indices-present): Public generic functions
(setf indices-present): Public generic functions
(setf input-row-indices): Public generic functions
(setf input-row-indices): Public generic functions
(setf input-seqs): Private generic functions
(setf input-seqs): Private generic functions
(setf lagged-clump-lag): Private ordinary functions
(setf lagged-clump-name): Private ordinary functions
(setf lagged-clump-path): Private ordinary functions
(setf last-cost): Private generic functions
(setf last-cost): Private generic functions
(setf last-eval): Public generic functions
(setf last-eval): Public generic functions
(setf learning-rate): Public generic functions
(setf learning-rate): Public generic functions
(setf learning-rate): Public generic functions
(setf lumpy-chunk): Private ordinary functions
(setf lumpy-depth): Private ordinary functions
(setf lumpy-incomings): Private ordinary functions
(setf lumpy-kind): Private ordinary functions
(setf lumpy-original): Private ordinary functions
(setf lumpy-symbol): Private ordinary functions
(setf mappings): Private generic functions
(setf mappings): Private generic functions
(setf max-n-samples): Public generic functions
(setf max-n-samples): Public generic functions
(setf max-n-stripes): Public setf expanders
(setf max-time): Private generic functions
(setf max-time): Private generic functions
(setf mean): Public generic functions
(setf mean): Public generic functions
(setf mean-decay): Public generic functions
(setf mean-decay): Public generic functions
(setf mean-decay-decay): Public generic functions
(setf mean-decay-decay): Public generic functions
(setf mean-estimates): Private generic functions
(setf mean-estimates): Private generic functions
(setf momentum): Public generic functions
(setf momentum): Public generic functions
(setf monitors): Public generic functions
(setf monitors): Public generic functions
(setf n-gibbs): Public generic functions
(setf n-gibbs): Public generic functions
(setf n-instances-in-batch): Private generic functions
(setf n-instances-in-batch): Private generic functions
(setf n-rbms): Public ordinary functions
(setf n-steps): Private generic functions
(setf n-steps): Private generic functions
(setf n-stripes): Public setf expanders
(setf n-weight-uses-in-batch): Public generic functions
(setf n-weight-uses-in-batch): Public generic functions
(setf n-weight-uses-in-batch): Public generic functions
(setf on-cg-batch-done): Public generic functions
(setf on-cg-batch-done): Public generic functions
(setf on-n-instances-changed): Public generic functions
(setf on-n-instances-changed): Public generic functions
(setf on-optimization-finished): Public generic functions
(setf on-optimization-finished): Public generic functions
(setf on-optimization-started): Public generic functions
(setf on-optimization-started): Public generic functions
(setf population-mean): Private generic functions
(setf population-mean): Private generic functions
(setf population-variance): Private generic functions
(setf population-variance): Private generic functions
(setf posterior-gps): Private generic functions
(setf posterior-gps): Private generic functions
(setf random-numbers): Private generic functions
(setf random-numbers): Private generic functions
(setf remove-trailing-nil-instances): Private generic functions
(setf remove-trailing-nil-instances): Private generic functions
(setf samples): Private generic functions
(setf samples): Private generic functions
(setf scale): Public generic functions
(setf scale): Public generic functions
(setf scale): Public generic functions
(setf scratch): Private generic functions
(setf scratch): Private generic functions
(setf segment-decay-fn): Private generic functions
(setf segment-decay-fn): Private generic functions
(setf seq-indices): Public generic functions
(setf seq-indices): Public generic functions
(setf spare-vectors): Private generic functions
(setf spare-vectors): Private generic functions
(setf step-monitors): Public generic functions
(setf step-monitors): Public generic functions
(setf target): Public generic functions
(setf target): Public generic functions
(setf termination): Public generic functions
(setf termination): Public generic functions
(setf variance): Public generic functions
(setf variance): Public generic functions
(setf variance-adjustment): Public generic functions
(setf variance-adjustment): Public generic functions
(setf variance-decay): Public generic functions
(setf variance-decay): Public generic functions
(setf variance-estimates): Private generic functions
(setf variance-estimates): Private generic functions
(setf variance-for-prediction): Public generic functions
(setf variance-for-prediction): Public generic functions
(setf visible-sampling): Public generic functions
(setf visible-sampling): Public generic functions
(setf weight-decay): Public generic functions
(setf weight-decay): Public generic functions
(setf weight-deltas): Private generic functions
(setf weight-deltas): Private generic functions
(setf weight-lumps): Private generic functions
(setf weight-lumps): Private generic functions
(setf weight-penalty): Public generic functions
(setf weight-penalty): Public generic functions
(setf weights): Public generic functions
(setf weights): Public generic functions

-
->*: Public ordinary functions
->+: Public ordinary functions
->abs: Public ordinary functions
->activation: Public ordinary functions
->batch-normalization: Public ordinary functions
->batch-normalized: Public ordinary functions
->batch-normalized-activation: Public ordinary functions
->chunk: Private ordinary functions
->cloud: Private ordinary functions
->clouds: Private ordinary functions
->constant: Private ordinary functions
->description: Private ordinary functions
->dropout: Public ordinary functions
->embedding: Public ordinary functions
->exp: Public ordinary functions
->gaussian-random: Public ordinary functions
->gp: Public ordinary functions
->input: Public ordinary functions
->loss: Public ordinary functions
->lstm: Public ordinary functions
->max: Public ordinary functions
->max-channel: Public ordinary functions
->min: Public ordinary functions
->normalized: Public ordinary functions
->periodic: Public ordinary functions
->ref: Public ordinary functions
->relu: Public ordinary functions
->rep: Public ordinary functions
->rough-exponential: Public ordinary functions
->sample-binary: Public ordinary functions
->scaled-tanh: Public ordinary functions
->seq-barrier: Public ordinary functions
->sigmoid: Public ordinary functions
->sin: Public ordinary functions
->softmax-xe-loss: Public ordinary functions
->squared-difference: Public ordinary functions
->stretch: Public ordinary functions
->sum: Public ordinary functions
->tanh: Public ordinary functions
->v*m: Public ordinary functions
->weight: Public ordinary functions

A
accumulate-cloud-statistics: Private generic functions
accumulate-cloud-statistics: Private generic functions
accumulate-cloud-statistics: Private generic functions
accumulate-cloud-statistics*: Private generic functions
accumulate-cloud-statistics*: Private generic functions
accumulate-gradients*: Public generic functions
accumulate-gradients*: Public generic functions
accumulate-gradients*: Public generic functions
accumulate-gradients*: Public generic functions
accumulate-gradients*: Public generic functions
accumulate-gradients*: Public generic functions
accumulate-gradients*: Public generic functions
accumulate-negative-phase-statistics: Private generic functions
accumulate-negative-phase-statistics: Private generic functions
accumulate-negative-phase-statistics: Private generic functions
accumulate-positive-phase-statistics: Private generic functions
accumulate-positive-phase-statistics: Private generic functions
accumulate-positive-phase-statistics: Private generic functions
accumulate-positive-phase-statistics: Private generic functions
accumulate-sparsity-statistics: Private generic functions
accumulate-sparsity-statistics: Private generic functions
accumulate-sparsity-statistics: Private generic functions
accumulated-in-sink-p: Private ordinary functions
accumulator: Private generic functions
accumulator: Private generic functions
accumulator: Private generic functions
accumulator: Private generic functions
activate-cloud: Private ordinary functions
activate-cloud*: Private generic functions
activate-cloud*: Private generic functions
activate-cloud*: Private generic functions
activate-cloud*: Private generic functions
activate-cloud*: Private generic functions
adam-time-step: Private generic functions
adam-time-step: Private generic functions
adam-update: Private ordinary functions
add-and-forget-derivatives: Private ordinary functions
add-chunk-nodes: Private ordinary functions
add-clump: Public ordinary functions
add-confusion-matrix: Public ordinary functions
add-connection: Private ordinary functions
add-diffun-gradients: Private ordinary functions
add-rbm: Private ordinary functions
add-to-counter: Public generic functions
add-to-counter: Public generic functions
add-to-counter: Public generic functions
add-to-counter: Public generic functions
add-to-running-stat: Public ordinary functions
after-update-hook: Public generic functions
after-update-hook: Public generic functions
all-document-classes: Private ordinary functions
all-rbms: Private ordinary functions
append1: Public ordinary functions
applies-to-p: Public ordinary functions
apply-key: Public macros
apply-monitor: Public generic functions
apply-monitor: Public generic functions
apply-monitor: Public generic functions
apply-monitor: Public generic functions
apply-monitors: Public ordinary functions
args: Private generic functions
args: Private generic functions
args: Private generic functions
arrange-for-clipping-gradients: Public ordinary functions
arrange-for-renormalizing-activations: Public ordinary functions
as-column-vector: Public ordinary functions
attribute-print-width: Private ordinary functions
attributes: Public generic functions
attributes: Public generic functions

B
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward: Public generic functions
backward-bpn: Private generic functions
backward-bpn: Private generic functions
backward-bpn: Private generic functions
bag: Public ordinary functions
bag-cv: Public ordinary functions
bag-of-words-kind: Public generic functions
bag-of-words-kind: Public generic functions
batch-mean: Private generic functions
batch-mean: Private generic functions
batch-normalization: Public generic functions
batch-normalization: Public generic functions
batch-size: Public generic functions
batch-size: Public generic functions
batch-size: Public generic functions
batch-size: Public generic functions
batch-variance: Private generic functions
batch-variance: Private generic functions
before-update-hook: Public generic functions
before-update-hook: Public generic functions
binarize-randomly: Public ordinary functions
binomial-log-likelihood-ratio: Public ordinary functions
bleu: Public ordinary functions
bm: Public generic functions
bm: Public generic functions
bm-type-name: Private ordinary functions
both-cloud-ends-in-p: Private ordinary functions
bpn: Public generic functions
bpn: Public generic functions
build-fnn: Public macros
build-rnn: Public macros

C
cached-activations1: Private generic functions
cached-activations1: Private generic functions
cached-activations2: Private generic functions
cached-activations2: Private generic functions
cached-version1: Private generic functions
cached-version1: Private generic functions
cached-version2: Private generic functions
cached-version2: Private generic functions
call-periodic-fn: Public ordinary functions
call-periodic-fn!: Public ordinary functions
call-repeatably: Public ordinary functions
call-with-sink-accumulator: Private generic functions
call-with-sink-accumulator: Private generic functions
call-with-sink-accumulator: Private generic functions
call-with-weights-copied: Private ordinary functions
centered-evidence-outputs: Private generic functions
centered-evidence-outputs: Private generic functions
cg: Public ordinary functions
cg-args: Public generic functions
cg-args: Public generic functions
check-dbm-clouds: Private ordinary functions
check-limit: Private ordinary functions
check-no-name-clashes: Private ordinary functions
check-no-self-connection: Private ordinary functions
check-rnn: Private ordinary functions
check-size-and-default-size: Private ordinary functions
check-stripes: Private ordinary functions
check-valuep: Private ordinary functions
check-weights-not-new: Private ordinary functions
chunk: Public generic functions
chunk: Public generic functions
chunk->fnn-definition: Private generic functions
chunk->fnn-definition: Private generic functions
chunk->fnn-definition: Private generic functions
chunk->fnn-definition: Private generic functions
chunk->fnn-definition: Private generic functions
chunk-activation-lump-name: Private ordinary functions
chunk-lump-name: Public ordinary functions
chunk1: Public generic functions
chunk1: Public generic functions
chunk2: Public generic functions
chunk2: Public generic functions
chunks: Public generic functions
chunks: Public generic functions
chunks: Public generic functions
clamp-cached-entry-on-fnn: Private ordinary functions
clamping-cache: Public generic functions
clamping-cache: Public generic functions
clear-running-stat: Public ordinary functions
clip-l2-norm: Public ordinary functions
cloud: Public generic functions
cloud: Public generic functions
cloud-a: Public generic functions
cloud-a: Public generic functions
cloud-b: Public generic functions
cloud-b: Public generic functions
cloud-between-chunks-p: Private ordinary functions
cloud-chunk-among-chunks: Public ordinary functions
cloud-linear-lump-name: Private ordinary functions
cloud-weight-lump-name: Private ordinary functions
clouds: Public generic functions
clouds: Public generic functions
clouds: Public generic functions
clouds-up-to-layers: Public generic functions
clouds-up-to-layers: Public generic functions
clumps: Public generic functions
clumps: Public generic functions
coerce-to-fresh-vector: Private ordinary functions
collect-distinct: Private ordinary functions
collect-map-chunks-and-lumps: Private ordinary functions
collect-non-constant-mats-for-cuda-sync: Private ordinary functions
command: Public ordinary functions
compact-binary-feature-vector: Private ordinary functions
compute-derivatives: Private ordinary functions
concatenation: Private generic functions
concatenation: Private generic functions
concatenation-type: Public generic functions
concatenation-type: Public generic functions
conditioning-chunk-p: Private ordinary functions
conditioning-chunks: Public generic functions
conditioning-chunks: Public generic functions
conditioning-cloud-p: Public ordinary functions
conditioning-clouds-to: Private ordinary functions
confusion-class-name: Public generic functions
confusion-class-name: Public generic functions
confusion-count: Public generic functions
confusion-count: Public generic functions
confusion-matrix-accuracy: Public ordinary functions
confusion-matrix-classes: Public generic functions
confusion-matrix-classes: Public generic functions
confusion-matrix-precision: Public ordinary functions
confusion-matrix-recall: Public ordinary functions
confusion-matrix-test: Private ordinary functions
connects-to-p: Private ordinary functions
copy: Public generic functions
copy: Public generic functions
copy: Public generic functions
copy: Public generic functions
copy: Public generic functions
copy-chunk-nodes: Private ordinary functions
copy-dbm-chunk-to-dbn: Private ordinary functions
copy-dbm-cloud-to-dbn: Private ordinary functions
copy-incoming: Private ordinary functions
copy-lagged-clump: Private ordinary functions
copy-lumpy: Private ordinary functions
copy-nodes: Private generic functions
copy-nodes: Private generic functions
copy-nodes: Private generic functions
copy-nodes: Private generic functions
copy-object-extra-initargs: Public generic functions
copy-object-extra-initargs: Public generic functions
copy-object-extra-initargs: Public generic functions
copy-object-extra-initargs: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
copy-object-slot: Public generic functions
cost: Public generic functions
cost: Public generic functions
cost: Public generic functions
cost: Public generic functions
cost: Public generic functions
cost: Public generic functions
count-features: Public ordinary functions
counter: Public generic functions
counter: Public generic functions
counter: Public generic functions
counter: Public generic functions
counter-raw-values: Public generic functions
counter-raw-values: Public generic functions
counter-raw-values: Public generic functions
counter-values: Public generic functions
counter-values: Public generic functions
counter-values: Public generic functions
counter-values: Public generic functions
counts: Private generic functions
counts: Private generic functions
covariance-fn: Private generic functions
covariance-fn: Private generic functions
covariance-lump-name: Public generic functions
covariance-lump-name: Public generic functions
covariances: Private generic functions
covariances: Private generic functions
cross-entropy-to-perplexity: Private ordinary functions
cross-validate: Public ordinary functions
cuda-adam-update: Private ordinary functions
cuda-adam-update-d: Private ordinary functions
cuda-adam-update-s: Private ordinary functions
cuda-dropout-xorwow: Private ordinary functions
cuda-dropout-xorwow-d: Private ordinary functions
cuda-dropout-xorwow-s: Private ordinary functions
cuda-exp-normalized: Private ordinary functions
cuda-exp-normalized-d: Private ordinary functions
cuda-exp-normalized-s: Private ordinary functions
cuda-max: Private ordinary functions
cuda-max-channel: Private ordinary functions
cuda-max-channel-d: Private ordinary functions
cuda-max-channel-derivative: Private ordinary functions
cuda-max-channel-derivative-d: Private ordinary functions
cuda-max-channel-derivative-s: Private ordinary functions
cuda-max-channel-s: Private ordinary functions
cuda-max-d: Private ordinary functions
cuda-max-derivative: Private ordinary functions
cuda-max-derivative-d: Private ordinary functions
cuda-max-derivative-s: Private ordinary functions
cuda-max-s: Private ordinary functions
cuda-maybe-renormalize-columns: Private ordinary functions
cuda-maybe-renormalize-columns-d: Private ordinary functions
cuda-maybe-renormalize-columns-s: Private ordinary functions
cuda-maybe-renormalize-rows: Private ordinary functions
cuda-maybe-renormalize-rows-d: Private ordinary functions
cuda-maybe-renormalize-rows-s: Private ordinary functions
cuda-min: Private ordinary functions
cuda-min-d: Private ordinary functions
cuda-min-derivative: Private ordinary functions
cuda-min-derivative-d: Private ordinary functions
cuda-min-derivative-s: Private ordinary functions
cuda-min-s: Private ordinary functions
cuda-rectify: Private ordinary functions
cuda-rectify-d: Private ordinary functions
cuda-rectify-derivative: Private ordinary functions
cuda-rectify-derivative-d: Private ordinary functions
cuda-rectify-derivative-s: Private ordinary functions
cuda-rectify-s: Private ordinary functions
cuda-sample-softmax: Private ordinary functions
cuda-sample-softmax-d: Private ordinary functions
cuda-sample-softmax-s: Private ordinary functions
cuda-scaled-tanh!: Private ordinary functions
cuda-scaled-tanh!-d: Private ordinary functions
cuda-scaled-tanh!-s: Private ordinary functions
cuda-scaled-tanh-derivative!: Private ordinary functions
cuda-scaled-tanh-derivative!-d: Private ordinary functions
cuda-scaled-tanh-derivative!-s: Private ordinary functions
cuda-sigmoid!: Private ordinary functions
cuda-sigmoid!-d: Private ordinary functions
cuda-sigmoid!-s: Private ordinary functions
cuda-sigmoid-derivative!: Private ordinary functions
cuda-sigmoid-derivative!-d: Private ordinary functions
cuda-sigmoid-derivative!-s: Private ordinary functions
cuda-sin!: Private ordinary functions
cuda-sin!-d: Private ordinary functions
cuda-sin!-s: Private ordinary functions
cuda-sin-derivative!: Private ordinary functions
cuda-sin-derivative!-d: Private ordinary functions
cuda-sin-derivative!-s: Private ordinary functions
cuda-softmax-xe: Private ordinary functions
cuda-softmax-xe-d: Private ordinary functions
cuda-softmax-xe-derivative: Private ordinary functions
cuda-softmax-xe-derivative-d: Private ordinary functions
cuda-softmax-xe-derivative-s: Private ordinary functions
cuda-softmax-xe-derivative/sparse: Private ordinary functions
cuda-softmax-xe-derivative/sparse-d: Private ordinary functions
cuda-softmax-xe-derivative/sparse-s: Private ordinary functions
cuda-softmax-xe-s: Private ordinary functions
cuda-tanh!: Private ordinary functions
cuda-tanh!-d: Private ordinary functions
cuda-tanh!-s: Private ordinary functions
cuda-tanh-derivative!: Private ordinary functions
cuda-tanh-derivative!-d: Private ordinary functions
cuda-tanh-derivative!-s: Private ordinary functions
cuda-window-start-time: Public generic functions
cuda-window-start-time: Public generic functions
current-time: Private generic functions
current-time: Private generic functions

D
damping: Public generic functions
damping: Public generic functions
data: Public macros
data*: Public ordinary functions
dbm->dbn: Public ordinary functions
dbn: Public generic functions
dbn: Public generic functions
decay-rate-to-half-life: Public ordinary functions
decay-to-half-life: Public ordinary functions
decode: Public generic functions
decode: Public generic functions
decode: Public generic functions
decode: Public generic functions
decodings: Private generic functions
decodings: Private generic functions
default-mean-field-supervisor: Public generic functions
default-mean-field-supervisor: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-size: Public generic functions
default-value: Public generic functions
default-value: Public generic functions
default-value: Public generic functions
defclass-now: Public macros
define-descriptions: Public macros
define-slot-not-to-be-copied: Private macros
define-slot-to-be-shallow-copied: Private macros
define-slots-not-to-be-copied: Public macros
define-slots-to-be-shallow-copied: Public macros
defmaker: Public macros
denominator*: Private generic functions
denominator*: Private generic functions
derivatives: Public generic functions
derivatives: Public generic functions
derivatives: Public generic functions
derive-rough-exponential: Private ordinary functions
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
describe-object: Public standalone methods
differentiate-numerically: Private ordinary functions
dimensions: Public generic functions
dimensions: Public generic functions
do-batches-for-model: Public macros
do-chunk: Private macros
do-cloud-runs: Private macros
do-cloud/chunk1: Private macros
do-clouds: Public macros
do-executors: Public macros
do-gradient-sink: Public macros
do-segment-set: Public macros
do-sparse-targets: Private macros
do-stripes: Private macros
document-features: Private ordinary functions
down-dbm: Public ordinary functions
down-mean-field: Public ordinary functions
drop-negative-index-p: Private generic functions
drop-negative-index-p: Private generic functions
dropout: Public generic functions
dropout: Public generic functions
dropout: Public generic functions
dropout: Public generic functions
dropout!: Private ordinary functions

E
effective-mean-decay: Private ordinary functions
encode: Public generic functions
encode: Public generic functions
encode: Public generic functions
encode: Public generic functions
encode: Public generic functions
encode/bag-of-words: Private ordinary functions
encoded-feature-test: Public generic functions
encoded-feature-test: Public generic functions
encoded-feature-type: Public generic functions
encoded-feature-type: Public generic functions
encodings: Private generic functions
encodings: Private generic functions
end-session: Public ordinary functions
ensure-batch-mean: Private ordinary functions
ensure-batch-variance: Private ordinary functions
ensure-lumpy: Private ordinary functions
ensure-mask: Private ordinary functions
ensure-mat-large-enough: Private ordinary functions
ensure-mean-estimates: Private ordinary functions
ensure-population-mean: Private ordinary functions
ensure-population-variance: Private ordinary functions
ensure-random-numbers: Private ordinary functions
ensure-randoms: Private ordinary functions
ensure-rnn-bpn: Private ordinary functions
ensure-sampler: Private ordinary functions
ensure-scratch: Private ordinary functions
ensure-seq: Private ordinary functions
ensure-softmax-target-matrix: Public ordinary functions
evaluate-diffun: Private ordinary functions
evidence-inputs: Private generic functions
evidence-inputs: Private generic functions
evidence-outputs: Private generic functions
evidence-outputs: Private generic functions
executor-cache: Private generic functions
executor-cache: Private generic functions
extract-covariances: Private ordinary functions
extract-means: Private ordinary functions

F
factored-cloud-shared-chunk: Private ordinary functions
feature-disambiguities: Public ordinary functions
feature-encoder: Public generic functions
feature-encoder: Public generic functions
feature-llrs: Public ordinary functions
feature-mapper: Public generic functions
feature-mapper: Public generic functions
file: Public macros
file*: Public ordinary functions
filename: Private generic functions
filename: Private generic functions
fill-chunk: Private ordinary functions
find-activation-lump-for-weight: Private ordinary functions
find-chunk: Public generic functions
find-chunk: Public generic functions
find-chunk: Public generic functions
find-cloud: Public generic functions
find-cloud: Public generic functions
find-cloud: Public generic functions
find-clump: Public ordinary functions
find-enough-weights: Private ordinary functions
find-gp-lump: Private ordinary functions
find-lumpy: Private ordinary functions
find-lumpy-by-name: Private ordinary functions
find-mm-lumps-from: Private ordinary functions
find-nested-bpn: Private ordinary functions
find-one-executor: Private generic functions
find-one-executor: Private generic functions
find-one-executor: Private generic functions
find-one-executor: Private generic functions
finishedp: Public generic functions
finishedp: Public generic functions
first-trained-clump: Private generic functions
first-trained-clump: Private generic functions
flt: Public ordinary functions
flt-vector: Public ordinary functions
flush-sparsity: Private generic functions
flush-sparsity: Private generic functions
flush-sparsity: Private generic functions
fn: Public macros
fn: Public generic functions
fn: Public generic functions
fn: Public generic functions
fn*: Public ordinary functions
format-description: Private ordinary functions
format-full-cloud-norm: Private ordinary functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward: Public generic functions
forward-bpn: Private generic functions
forward-bpn: Private generic functions
forward-bpn: Private generic functions
fracture: Public ordinary functions
fracture-stratified: Public ordinary functions
full-cloud-norm: Private ordinary functions
full-clouds-everywhere: Private ordinary functions
full-clouds-everywhere-between-layers: Private ordinary functions
Function, (setf incoming-cloud): Private ordinary functions
Function, (setf incoming-from-lumpy): Private ordinary functions
Function, (setf incoming-transposep): Private ordinary functions
Function, (setf lagged-clump-lag): Private ordinary functions
Function, (setf lagged-clump-name): Private ordinary functions
Function, (setf lagged-clump-path): Private ordinary functions
Function, (setf lumpy-chunk): Private ordinary functions
Function, (setf lumpy-depth): Private ordinary functions
Function, (setf lumpy-incomings): Private ordinary functions
Function, (setf lumpy-kind): Private ordinary functions
Function, (setf lumpy-original): Private ordinary functions
Function, (setf lumpy-symbol): Private ordinary functions
Function, ->*: Public ordinary functions
Function, ->+: Public ordinary functions
Function, ->abs: Public ordinary functions
Function, ->activation: Public ordinary functions
Function, ->batch-normalization: Public ordinary functions
Function, ->batch-normalized: Public ordinary functions
Function, ->batch-normalized-activation: Public ordinary functions
Function, ->chunk: Private ordinary functions
Function, ->cloud: Private ordinary functions
Function, ->clouds: Private ordinary functions
Function, ->constant: Private ordinary functions
Function, ->description: Private ordinary functions
Function, ->dropout: Public ordinary functions
Function, ->embedding: Public ordinary functions
Function, ->exp: Public ordinary functions
Function, ->gaussian-random: Public ordinary functions
Function, ->gp: Public ordinary functions
Function, ->input: Public ordinary functions
Function, ->loss: Public ordinary functions
Function, ->lstm: Public ordinary functions
Function, ->max: Public ordinary functions
Function, ->max-channel: Public ordinary functions
Function, ->min: Public ordinary functions
Function, ->normalized: Public ordinary functions
Function, ->periodic: Public ordinary functions
Function, ->ref: Public ordinary functions
Function, ->relu: Public ordinary functions
Function, ->rep: Public ordinary functions
Function, ->rough-exponential: Public ordinary functions
Function, ->sample-binary: Public ordinary functions
Function, ->scaled-tanh: Public ordinary functions
Function, ->seq-barrier: Public ordinary functions
Function, ->sigmoid: Public ordinary functions
Function, ->sin: Public ordinary functions
Function, ->softmax-xe-loss: Public ordinary functions
Function, ->squared-difference: Public ordinary functions
Function, ->stretch: Public ordinary functions
Function, ->sum: Public ordinary functions
Function, ->tanh: Public ordinary functions
Function, ->v*m: Public ordinary functions
Function, ->weight: Public ordinary functions
Function, accumulated-in-sink-p: Private ordinary functions
Function, activate-cloud: Private ordinary functions
Function, adam-update: Private ordinary functions
Function, add-and-forget-derivatives: Private ordinary functions
Function, add-chunk-nodes: Private ordinary functions
Function, add-clump: Public ordinary functions
Function, add-confusion-matrix: Public ordinary functions
Function, add-connection: Private ordinary functions
Function, add-diffun-gradients: Private ordinary functions
Function, add-rbm: Private ordinary functions
Function, add-to-running-stat: Public ordinary functions
Function, all-document-classes: Private ordinary functions
Function, all-rbms: Private ordinary functions
Function, append1: Public ordinary functions
Function, applies-to-p: Public ordinary functions
Function, apply-monitors: Public ordinary functions
Function, arrange-for-clipping-gradients: Public ordinary functions
Function, arrange-for-renormalizing-activations: Public ordinary functions
Function, as-column-vector: Public ordinary functions
Function, attribute-print-width: Private ordinary functions
Function, bag: Public ordinary functions
Function, bag-cv: Public ordinary functions
Function, binarize-randomly: Public ordinary functions
Function, binomial-log-likelihood-ratio: Public ordinary functions
Function, bleu: Public ordinary functions
Function, bm-type-name: Private ordinary functions
Function, both-cloud-ends-in-p: Private ordinary functions
Function, call-periodic-fn: Public ordinary functions
Function, call-periodic-fn!: Public ordinary functions
Function, call-repeatably: Public ordinary functions
Function, call-with-weights-copied: Private ordinary functions
Function, cg: Public ordinary functions
Function, check-dbm-clouds: Private ordinary functions
Function, check-limit: Private ordinary functions
Function, check-no-name-clashes: Private ordinary functions
Function, check-no-self-connection: Private ordinary functions
Function, check-rnn: Private ordinary functions
Function, check-size-and-default-size: Private ordinary functions
Function, check-stripes: Private ordinary functions
Function, check-valuep: Private ordinary functions
Function, check-weights-not-new: Private ordinary functions
Function, chunk-activation-lump-name: Private ordinary functions
Function, chunk-lump-name: Public ordinary functions
Function, clamp-cached-entry-on-fnn: Private ordinary functions
Function, clear-running-stat: Public ordinary functions
Function, clip-l2-norm: Public ordinary functions
Function, cloud-between-chunks-p: Private ordinary functions
Function, cloud-chunk-among-chunks: Public ordinary functions
Function, cloud-linear-lump-name: Private ordinary functions
Function, cloud-weight-lump-name: Private ordinary functions
Function, coerce-to-fresh-vector: Private ordinary functions
Function, collect-distinct: Private ordinary functions
Function, collect-map-chunks-and-lumps: Private ordinary functions
Function, collect-non-constant-mats-for-cuda-sync: Private ordinary functions
Function, command: Public ordinary functions
Function, compact-binary-feature-vector: Private ordinary functions
Function, compute-derivatives: Private ordinary functions
Function, conditioning-chunk-p: Private ordinary functions
Function, conditioning-cloud-p: Public ordinary functions
Function, conditioning-clouds-to: Private ordinary functions
Function, confusion-matrix-accuracy: Public ordinary functions
Function, confusion-matrix-precision: Public ordinary functions
Function, confusion-matrix-recall: Public ordinary functions
Function, confusion-matrix-test: Private ordinary functions
Function, connects-to-p: Private ordinary functions
Function, copy-chunk-nodes: Private ordinary functions
Function, copy-dbm-chunk-to-dbn: Private ordinary functions
Function, copy-dbm-cloud-to-dbn: Private ordinary functions
Function, copy-incoming: Private ordinary functions
Function, copy-lagged-clump: Private ordinary functions
Function, copy-lumpy: Private ordinary functions
Function, count-features: Public ordinary functions
Function, cross-entropy-to-perplexity: Private ordinary functions
Function, cross-validate: Public ordinary functions
Function, cuda-adam-update: Private ordinary functions
Function, cuda-adam-update-d: Private ordinary functions
Function, cuda-adam-update-s: Private ordinary functions
Function, cuda-dropout-xorwow: Private ordinary functions
Function, cuda-dropout-xorwow-d: Private ordinary functions
Function, cuda-dropout-xorwow-s: Private ordinary functions
Function, cuda-exp-normalized: Private ordinary functions
Function, cuda-exp-normalized-d: Private ordinary functions
Function, cuda-exp-normalized-s: Private ordinary functions
Function, cuda-max: Private ordinary functions
Function, cuda-max-channel: Private ordinary functions
Function, cuda-max-channel-d: Private ordinary functions
Function, cuda-max-channel-derivative: Private ordinary functions
Function, cuda-max-channel-derivative-d: Private ordinary functions
Function, cuda-max-channel-derivative-s: Private ordinary functions
Function, cuda-max-channel-s: Private ordinary functions
Function, cuda-max-d: Private ordinary functions
Function, cuda-max-derivative: Private ordinary functions
Function, cuda-max-derivative-d: Private ordinary functions
Function, cuda-max-derivative-s: Private ordinary functions
Function, cuda-max-s: Private ordinary functions
Function, cuda-maybe-renormalize-columns: Private ordinary functions
Function, cuda-maybe-renormalize-columns-d: Private ordinary functions
Function, cuda-maybe-renormalize-columns-s: Private ordinary functions
Function, cuda-maybe-renormalize-rows: Private ordinary functions
Function, cuda-maybe-renormalize-rows-d: Private ordinary functions
Function, cuda-maybe-renormalize-rows-s: Private ordinary functions
Function, cuda-min: Private ordinary functions
Function, cuda-min-d: Private ordinary functions
Function, cuda-min-derivative: Private ordinary functions
Function, cuda-min-derivative-d: Private ordinary functions
Function, cuda-min-derivative-s: Private ordinary functions
Function, cuda-min-s: Private ordinary functions
Function, cuda-rectify: Private ordinary functions
Function, cuda-rectify-d: Private ordinary functions
Function, cuda-rectify-derivative: Private ordinary functions
Function, cuda-rectify-derivative-d: Private ordinary functions
Function, cuda-rectify-derivative-s: Private ordinary functions
Function, cuda-rectify-s: Private ordinary functions
Function, cuda-sample-softmax: Private ordinary functions
Function, cuda-sample-softmax-d: Private ordinary functions
Function, cuda-sample-softmax-s: Private ordinary functions
Function, cuda-scaled-tanh!: Private ordinary functions
Function, cuda-scaled-tanh!-d: Private ordinary functions
Function, cuda-scaled-tanh!-s: Private ordinary functions
Function, cuda-scaled-tanh-derivative!: Private ordinary functions
Function, cuda-scaled-tanh-derivative!-d: Private ordinary functions
Function, cuda-scaled-tanh-derivative!-s: Private ordinary functions
Function, cuda-sigmoid!: Private ordinary functions
Function, cuda-sigmoid!-d: Private ordinary functions
Function, cuda-sigmoid!-s: Private ordinary functions
Function, cuda-sigmoid-derivative!: Private ordinary functions
Function, cuda-sigmoid-derivative!-d: Private ordinary functions
Function, cuda-sigmoid-derivative!-s: Private ordinary functions
Function, cuda-sin!: Private ordinary functions
Function, cuda-sin!-d: Private ordinary functions
Function, cuda-sin!-s: Private ordinary functions
Function, cuda-sin-derivative!: Private ordinary functions
Function, cuda-sin-derivative!-d: Private ordinary functions
Function, cuda-sin-derivative!-s: Private ordinary functions
Function, cuda-softmax-xe: Private ordinary functions
Function, cuda-softmax-xe-d: Private ordinary functions
Function, cuda-softmax-xe-derivative: Private ordinary functions
Function, cuda-softmax-xe-derivative-d: Private ordinary functions
Function, cuda-softmax-xe-derivative-s: Private ordinary functions
Function, cuda-softmax-xe-derivative/sparse: Private ordinary functions
Function, cuda-softmax-xe-derivative/sparse-d: Private ordinary functions
Function, cuda-softmax-xe-derivative/sparse-s: Private ordinary functions
Function, cuda-softmax-xe-s: Private ordinary functions
Function, cuda-tanh!: Private ordinary functions
Function, cuda-tanh!-d: Private ordinary functions
Function, cuda-tanh!-s: Private ordinary functions
Function, cuda-tanh-derivative!: Private ordinary functions
Function, cuda-tanh-derivative!-d: Private ordinary functions
Function, cuda-tanh-derivative!-s: Private ordinary functions
Function, data*: Public ordinary functions
Function, dbm->dbn: Public ordinary functions
Function, decay-rate-to-half-life: Public ordinary functions
Function, decay-to-half-life: Public ordinary functions
Function, derive-rough-exponential: Private ordinary functions
Function, differentiate-numerically: Private ordinary functions
Function, document-features: Private ordinary functions
Function, down-dbm: Public ordinary functions
Function, down-mean-field: Public ordinary functions
Function, dropout!: Private ordinary functions
Function, effective-mean-decay: Private ordinary functions
Function, encode/bag-of-words: Private ordinary functions
Function, end-session: Public ordinary functions
Function, ensure-batch-mean: Private ordinary functions
Function, ensure-batch-variance: Private ordinary functions
Function, ensure-lumpy: Private ordinary functions
Function, ensure-mask: Private ordinary functions
Function, ensure-mat-large-enough: Private ordinary functions
Function, ensure-mean-estimates: Private ordinary functions
Function, ensure-population-mean: Private ordinary functions
Function, ensure-population-variance: Private ordinary functions
Function, ensure-random-numbers: Private ordinary functions
Function, ensure-randoms: Private ordinary functions
Function, ensure-rnn-bpn: Private ordinary functions
Function, ensure-sampler: Private ordinary functions
Function, ensure-scratch: Private ordinary functions
Function, ensure-seq: Private ordinary functions
Function, ensure-softmax-target-matrix: Public ordinary functions
Function, evaluate-diffun: Private ordinary functions
Function, extract-covariances: Private ordinary functions
Function, extract-means: Private ordinary functions
Function, factored-cloud-shared-chunk: Private ordinary functions
Function, feature-disambiguities: Public ordinary functions
Function, feature-llrs: Public ordinary functions
Function, file*: Public ordinary functions
Function, fill-chunk: Private ordinary functions
Function, find-activation-lump-for-weight: Private ordinary functions
Function, find-clump: Public ordinary functions
Function, find-enough-weights: Private ordinary functions
Function, find-gp-lump: Private ordinary functions
Function, find-lumpy: Private ordinary functions
Function, find-lumpy-by-name: Private ordinary functions
Function, find-mm-lumps-from: Private ordinary functions
Function, find-nested-bpn: Private ordinary functions
Function, flt: Public ordinary functions
Function, flt-vector: Public ordinary functions
Function, fn*: Public ordinary functions
Function, format-description: Private ordinary functions
Function, format-full-cloud-norm: Private ordinary functions
Function, fracture: Public ordinary functions
Function, fracture-stratified: Public ordinary functions
Function, full-cloud-norm: Private ordinary functions
Function, full-clouds-everywhere: Private ordinary functions
Function, full-clouds-everywhere-between-layers: Private ordinary functions
Function, gaussian-kernel: Private ordinary functions
Function, gaussian-random-1: Public ordinary functions
Function, get-all: Private ordinary functions
Function, gp-confidences-as-plot-data: Public ordinary functions
Function, gp-covariances: Public ordinary functions
Function, gp-data-matrix: Private ordinary functions
Function, gp-data-matrix-for-level: Private ordinary functions
Function, gp-means-and-covariances: Public ordinary functions
Function, gp-samples-as-plot-data: Public ordinary functions
Function, group: Public ordinary functions
Function, half-life-to-decay: Public ordinary functions
Function, half-life-to-decay-rate: Public ordinary functions
Function, hash-table->vector: Public ordinary functions
Function, hijack-means-to-activation: Private ordinary functions
Function, incoming-cloud: Private ordinary functions
Function, incoming-from-lumpy: Private ordinary functions
Function, incoming-list->fnn-definition: Private ordinary functions
Function, incoming-p: Private ordinary functions
Function, incoming-transposep: Private ordinary functions
Function, index-scored-features: Private ordinary functions
Function, initialize-fnn-from-bm: Public ordinary functions
Function, inputs->nodes: Public ordinary functions
Function, insert-into-executor-cache: Private ordinary functions
Function, insert-into-sorted-vector: Public ordinary functions
Function, invert-permutation: Public ordinary functions
Function, kind-to-string: Private ordinary functions
Function, lag: Public ordinary functions
Function, lagged-clump-lag: Private ordinary functions
Function, lagged-clump-name: Private ordinary functions
Function, lagged-clump-p: Private ordinary functions
Function, lagged-clump-path: Private ordinary functions
Function, last1: Public ordinary functions
Function, lisp-adam-update: Private ordinary functions
Function, lisp-adam-update-d: Private ordinary functions
Function, lisp-adam-update-s: Private ordinary functions
Function, lisp-dropout: Private ordinary functions
Function, lisp-dropout-d: Private ordinary functions
Function, lisp-dropout-s: Private ordinary functions
Function, lisp-max: Private ordinary functions
Function, lisp-max-channel: Private ordinary functions
Function, lisp-max-channel-d: Private ordinary functions
Function, lisp-max-channel-derivative: Private ordinary functions
Function, lisp-max-channel-derivative-d: Private ordinary functions
Function, lisp-max-channel-derivative-s: Private ordinary functions
Function, lisp-max-channel-s: Private ordinary functions
Function, lisp-max-d: Private ordinary functions
Function, lisp-max-derivative: Private ordinary functions
Function, lisp-max-derivative-d: Private ordinary functions
Function, lisp-max-derivative-s: Private ordinary functions
Function, lisp-max-s: Private ordinary functions
Function, lisp-maybe-renormalize-columns: Private ordinary functions
Function, lisp-maybe-renormalize-columns-d: Private ordinary functions
Function, lisp-maybe-renormalize-columns-s: Private ordinary functions
Function, lisp-maybe-renormalize-rows: Private ordinary functions
Function, lisp-maybe-renormalize-rows-d: Private ordinary functions
Function, lisp-maybe-renormalize-rows-s: Private ordinary functions
Function, lisp-min: Private ordinary functions
Function, lisp-min-d: Private ordinary functions
Function, lisp-min-derivative: Private ordinary functions
Function, lisp-min-derivative-d: Private ordinary functions
Function, lisp-min-derivative-s: Private ordinary functions
Function, lisp-min-s: Private ordinary functions
Function, lisp-rectify: Private ordinary functions
Function, lisp-rectify-d: Private ordinary functions
Function, lisp-rectify-derivative: Private ordinary functions
Function, lisp-rectify-derivative-d: Private ordinary functions
Function, lisp-rectify-derivative-s: Private ordinary functions
Function, lisp-rectify-s: Private ordinary functions
Function, lisp-scaled-tanh!: Private ordinary functions
Function, lisp-scaled-tanh!-d: Private ordinary functions
Function, lisp-scaled-tanh!-s: Private ordinary functions
Function, lisp-scaled-tanh-derivative!: Private ordinary functions
Function, lisp-scaled-tanh-derivative!-d: Private ordinary functions
Function, lisp-scaled-tanh-derivative!-s: Private ordinary functions
Function, lisp-sigmoid!: Private ordinary functions
Function, lisp-sigmoid!-d: Private ordinary functions
Function, lisp-sigmoid!-s: Private ordinary functions
Function, lisp-sigmoid-derivative!: Private ordinary functions
Function, lisp-sigmoid-derivative!-d: Private ordinary functions
Function, lisp-sigmoid-derivative!-s: Private ordinary functions
Function, lisp-sin!: Private ordinary functions
Function, lisp-sin!-d: Private ordinary functions
Function, lisp-sin!-s: Private ordinary functions
Function, lisp-sin-derivative!: Private ordinary functions
Function, lisp-sin-derivative!-d: Private ordinary functions
Function, lisp-sin-derivative!-s: Private ordinary functions
Function, lisp-softmax-xe: Private ordinary functions
Function, lisp-softmax-xe-d: Private ordinary functions
Function, lisp-softmax-xe-derivative: Private ordinary functions
Function, lisp-softmax-xe-derivative-d: Private ordinary functions
Function, lisp-softmax-xe-derivative-s: Private ordinary functions
Function, lisp-softmax-xe-derivative/sparse: Private ordinary functions
Function, lisp-softmax-xe-derivative/sparse-d: Private ordinary functions
Function, lisp-softmax-xe-derivative/sparse-s: Private ordinary functions
Function, lisp-softmax-xe-s: Private ordinary functions
Function, lisp-tanh!: Private ordinary functions
Function, lisp-tanh!-d: Private ordinary functions
Function, lisp-tanh!-s: Private ordinary functions
Function, lisp-tanh-derivative!: Private ordinary functions
Function, lisp-tanh-derivative!-d: Private ordinary functions
Function, lisp-tanh-derivative!-s: Private ordinary functions
Function, list-samples: Public ordinary functions
Function, list-segments: Public ordinary functions
Function, load-state: Public ordinary functions
Function, log-mat-room: Public ordinary functions
Function, log-msg: Public ordinary functions
Function, log-padded: Public ordinary functions
Function, lookup-executor-cache: Private ordinary functions
Function, lumpies->fnn-definition: Private ordinary functions
Function, lumpy-activation-name: Private ordinary functions
Function, lumpy-chunk: Private ordinary functions
Function, lumpy-depth: Private ordinary functions
Function, lumpy-incomings: Private ordinary functions
Function, lumpy-kind: Private ordinary functions
Function, lumpy-name: Private ordinary functions
Function, lumpy-original: Private ordinary functions
Function, lumpy-p: Private ordinary functions
Function, lumpy-symbol: Private ordinary functions
Function, make-classification-accuracy-monitors: Public ordinary functions
Function, make-confusion-matrix: Public ordinary functions
Function, make-cost-monitors: Public ordinary functions
Function, make-cross-entropy-monitors: Public ordinary functions
Function, make-do-cloud/chunk2: Private ordinary functions
Function, make-incoming: Private ordinary functions
Function, make-indexer: Public ordinary functions
Function, make-instance-args: Private ordinary functions
Function, make-label-monitors: Public ordinary functions
Function, make-lagged-clump: Private ordinary functions
Function, make-lumpy: Private ordinary functions
Function, make-matrix-from-lump-stripe: Private ordinary functions
Function, make-n-gram-mappee: Public ordinary functions
Function, make-random-generator: Public ordinary functions
Function, make-random-sampler: Public ordinary functions
Function, make-reconstruction-monitors: Public ordinary functions
Function, make-sequence-generator: Public ordinary functions
Function, make-sequence-sampler: Public ordinary functions
Function, make-sorted-group-generator: Public ordinary functions
Function, make-step-monitor-monitors: Public ordinary functions
Function, make-vector-from-lump-stripe: Private ordinary functions
Function, map-batches-for-model: Public ordinary functions
Function, map-dataset: Public ordinary functions
Function, map-datasets: Public ordinary functions
Function, map-sparser: Private ordinary functions
Function, mark-everything-present: Public ordinary functions
Function, mat-and-row/column-sum-size: Private ordinary functions
Function, max-position: Public ordinary functions
Function, max-row-positions: Public ordinary functions
Function, maybe-add-name-to-init: Private ordinary functions
Function, maybe-copy-weight: Private ordinary functions
Function, maybe-remember: Private ordinary functions
Function, maybe-renormalize-columns: Private ordinary functions
Function, maybe-renormalize-rows: Private ordinary functions
Function, maybe-use-remembered: Private ordinary functions
Function, means-or-samples: Private ordinary functions
Function, measure-classification-accuracy: Public ordinary functions
Function, measure-confusion: Public ordinary functions
Function, measure-cross-entropy: Public ordinary functions
Function, measure-roc-auc: Public ordinary functions
Function, merge-cloud-specs: Public ordinary functions
Function, merge-weights-and-arguments: Private ordinary functions
Function, minimize: Public ordinary functions
Function, monitor-bm-mean-field-bottom-up: Public ordinary functions
Function, monitor-bm-mean-field-reconstructions: Public ordinary functions
Function, monitor-bpn-results: Public ordinary functions
Function, monitor-dbn-mean-field-bottom-up: Public ordinary functions
Function, monitor-dbn-mean-field-reconstructions: Public ordinary functions
Function, monitor-model-results: Public ordinary functions
Function, monitor-optimization-periodically: Public ordinary functions
Function, monitor-optimization-periodically*: Private ordinary functions
Function, multinomial-log-likelihood-ratio: Public ordinary functions
Function, n-rbms: Public ordinary functions
Function, name-clashes: Private ordinary functions
Function, name=: Public ordinary functions
Function, negate-vector: Private ordinary functions
Function, node-change: Private ordinary functions
Function, nodes->inputs: Public ordinary functions
Function, nodes->means: Private ordinary functions
Function, norm: Private ordinary functions
Function, other-chunk: Private ordinary functions
Function, pad-to-width: Private ordinary functions
Function, pax-pages: Private ordinary functions
Function, pax-sections: Private ordinary functions
Function, permute: Public ordinary functions
Function, perplexity-to-cross-entropy: Private ordinary functions
Function, pipe-to-gnuplot: Private ordinary functions
Function, plists-to-column-widths: Private ordinary functions
Function, plot*: Public ordinary functions
Function, poisson-random: Public ordinary functions
Function, polack-ribiere: Private ordinary functions
Function, populate-map-cache: Public ordinary functions
Function, posterior-gp-means-and-covariances: Private ordinary functions
Function, pprint-descriptions: Private ordinary functions
Function, previous-rbm: Private ordinary functions
Function, print-name: Private ordinary functions
Function, print-table: Public ordinary functions
Function, process-batch: Private ordinary functions
Function, rank: Public ordinary functions
Function, read-state: Public ordinary functions
Function, reconstruction-error: Public ordinary functions
Function, reconstruction-rmse: Public ordinary functions
Function, rectify!: Private ordinary functions
Function, remove-if*: Private ordinary functions
Function, remove-trailing-nils: Private ordinary functions
Function, remove-unkeyword-args: Private ordinary functions
Function, renormalize-activations: Public ordinary functions
Function, renormalize-mats: Private ordinary functions
Function, resolve-clumps: Private ordinary functions
Function, resolve-step-monitors: Private ordinary functions
Function, reverse-map: Private ordinary functions
Function, rnn-backward-cuda-syncs: Private ordinary functions
Function, rnn-forward-cuda-syncs: Private ordinary functions
Function, rough-exponential: Private ordinary functions
Function, rows-to-arrays: Public ordinary functions
Function, running-stat-mean: Public ordinary functions
Function, running-stat-variance: Public ordinary functions
Function, sample-from: Public ordinary functions
Function, sample-gp: Private ordinary functions
Function, sample-hidden: Public ordinary functions
Function, sample-stratified: Public ordinary functions
Function, sample-visible: Public ordinary functions
Function, sample-with-replacement*: Private ordinary functions
Function, sample-without-replacement*: Private ordinary functions
Function, save-state: Public ordinary functions
Function, scaled-tanh: Public ordinary functions
Function, scaled-tanh!: Private ordinary functions
Function, scaled-tanh-derivative!: Private ordinary functions
Function, sech: Public ordinary functions
Function, segment-set->mat: Public ordinary functions
Function, segment-set<-mat: Public ordinary functions
Function, select-random-element: Public ordinary functions
Function, set-dropout-and-rescale-activation-weights: Public ordinary functions
Function, set-hidden-mean/1: Public ordinary functions
Function, set-mean: Private ordinary functions
Function, set-mean*: Private ordinary functions
Function, set-n-instances: Public ordinary functions
Function, set-n-rbms: Private ordinary functions
Function, set-up-n-weight-uses: Private ordinary functions
Function, set-visible-mean/1: Public ordinary functions
Function, settle-hidden-mean-field: Public ordinary functions
Function, settle-mean-field: Public ordinary functions
Function, settle-visible-mean-field: Public ordinary functions
Function, shuffle: Public ordinary functions
Function, shuffle!: Public ordinary functions
Function, shuffle-groups: Public ordinary functions
Function, shuffle-vector!: Private ordinary functions
Function, sigmoid: Public ordinary functions
Function, sigmoid!: Private ordinary functions
Function, sigmoid-derivative!: Private ordinary functions
Function, sign: Public ordinary functions
Function, sin!: Private ordinary functions
Function, sin-derivative!: Private ordinary functions
Function, sort-groups!: Private ordinary functions
Function, sorting-permutation: Public ordinary functions
Function, split-by-index: Private ordinary functions
Function, split-fold/cont: Public ordinary functions
Function, split-fold/mod: Public ordinary functions
Function, split-plist: Public ordinary functions
Function, split-stratified: Public ordinary functions
Function, splot*: Public ordinary functions
Function, spread-strata: Public ordinary functions
Function, stable-set-difference: Private ordinary functions
Function, start-session: Public ordinary functions
Function, stratify: Public ordinary functions
Function, stripe-binding: Private ordinary functions
Function, subseq*: Public ordinary functions
Function, sum-chunk-nodes-and-old-nodes: Private ordinary functions
Function, sum-nodes-and-old-nodes: Private ordinary functions
Function, supervise-mean-field/default: Public ordinary functions
Function, swap-nodes: Private ordinary functions
Function, swap-nodes*: Private ordinary functions
Function, tanh!: Private ordinary functions
Function, tanh-derivative!: Private ordinary functions
Function, terminate-optimization-p: Public ordinary functions
Function, time->string: Private ordinary functions
Function, time-step: Public ordinary functions
Function, train-batch: Private ordinary functions
Function, trivially-map-over-executors: Private ordinary functions
Function, try-chance: Public ordinary functions
Function, uninterned-symbol-p: Public ordinary functions
Function, unroll-dbm: Public ordinary functions
Function, unroll-dbn: Public ordinary functions
Function, up-dbm: Public ordinary functions
Function, update-all-weights/adam: Private ordinary functions
Function, update-all-weights/adam-use-segment-derivatives: Private ordinary functions
Function, update-all-weights/sgd-nesterov: Private ordinary functions
Function, update-all-weights/sgd-no-momentum: Private ordinary functions
Function, update-all-weights/sgd-normal: Private ordinary functions
Function, update-direction: Private ordinary functions
Function, update-gp: Public ordinary functions
Function, update-gp*: Private ordinary functions
Function, use-blas-on-chunk-p: Private ordinary functions
Function, v1=v2+c*v3: Private ordinary functions
Function, version: Private ordinary functions
Function, visible-nodes->means: Private ordinary functions
Function, warped-time: Public ordinary functions
Function, write-options: Private ordinary functions
Function, write-state: Public ordinary functions
Function, zero-chunk: Private ordinary functions
Function, zip-evenly: Public ordinary functions
function-expression: Private generic functions

G
gaussian-kernel: Private ordinary functions
gaussian-random-1: Public ordinary functions
generator: Public generic functions
Generic Function, (setf accumulator): Private generic functions
Generic Function, (setf adam-time-step): Private generic functions
Generic Function, (setf after-update-hook): Public generic functions
Generic Function, (setf attributes): Public generic functions
Generic Function, (setf batch-mean): Private generic functions
Generic Function, (setf batch-size): Public generic functions
Generic Function, (setf batch-variance): Private generic functions
Generic Function, (setf before-update-hook): Public generic functions
Generic Function, (setf cached-version1): Private generic functions
Generic Function, (setf cached-version2): Private generic functions
Generic Function, (setf cg-args): Public generic functions
Generic Function, (setf concatenation): Private generic functions
Generic Function, (setf confusion-count): Public generic functions
Generic Function, (setf cuda-window-start-time): Public generic functions
Generic Function, (setf current-time): Private generic functions
Generic Function, (setf dropout): Public generic functions
Generic Function, (setf hidden-sampling): Public generic functions
Generic Function, (setf importance): Public generic functions
Generic Function, (setf importances): Public generic functions
Generic Function, (setf indices-present): Public generic functions
Generic Function, (setf input-row-indices): Public generic functions
Generic Function, (setf input-seqs): Private generic functions
Generic Function, (setf last-cost): Private generic functions
Generic Function, (setf last-eval): Public generic functions
Generic Function, (setf learning-rate): Public generic functions
Generic Function, (setf mappings): Private generic functions
Generic Function, (setf max-n-samples): Public generic functions
Generic Function, (setf max-time): Private generic functions
Generic Function, (setf mean): Public generic functions
Generic Function, (setf mean-decay): Public generic functions
Generic Function, (setf mean-decay-decay): Public generic functions
Generic Function, (setf mean-estimates): Private generic functions
Generic Function, (setf momentum): Public generic functions
Generic Function, (setf monitors): Public generic functions
Generic Function, (setf n-gibbs): Public generic functions
Generic Function, (setf n-instances-in-batch): Private generic functions
Generic Function, (setf n-steps): Private generic functions
Generic Function, (setf n-weight-uses-in-batch): Public generic functions
Generic Function, (setf on-cg-batch-done): Public generic functions
Generic Function, (setf on-n-instances-changed): Public generic functions
Generic Function, (setf on-optimization-finished): Public generic functions
Generic Function, (setf on-optimization-started): Public generic functions
Generic Function, (setf population-mean): Private generic functions
Generic Function, (setf population-variance): Private generic functions
Generic Function, (setf posterior-gps): Private generic functions
Generic Function, (setf random-numbers): Private generic functions
Generic Function, (setf remove-trailing-nil-instances): Private generic functions
Generic Function, (setf samples): Private generic functions
Generic Function, (setf scale): Public generic functions
Generic Function, (setf scratch): Private generic functions
Generic Function, (setf segment-decay-fn): Private generic functions
Generic Function, (setf seq-indices): Public generic functions
Generic Function, (setf spare-vectors): Private generic functions
Generic Function, (setf step-monitors): Public generic functions
Generic Function, (setf target): Public generic functions
Generic Function, (setf termination): Public generic functions
Generic Function, (setf variance): Public generic functions
Generic Function, (setf variance-adjustment): Public generic functions
Generic Function, (setf variance-decay): Public generic functions
Generic Function, (setf variance-estimates): Private generic functions
Generic Function, (setf variance-for-prediction): Public generic functions
Generic Function, (setf visible-sampling): Public generic functions
Generic Function, (setf weight-decay): Public generic functions
Generic Function, (setf weight-deltas): Private generic functions
Generic Function, (setf weight-lumps): Private generic functions
Generic Function, (setf weight-penalty): Public generic functions
Generic Function, (setf weights): Public generic functions
Generic Function, accumulate-cloud-statistics: Private generic functions
Generic Function, accumulate-cloud-statistics*: Private generic functions
Generic Function, accumulate-gradients*: Public generic functions
Generic Function, accumulate-negative-phase-statistics: Private generic functions
Generic Function, accumulate-positive-phase-statistics: Private generic functions
Generic Function, accumulate-sparsity-statistics: Private generic functions
Generic Function, accumulator: Private generic functions
Generic Function, activate-cloud*: Private generic functions
Generic Function, adam-time-step: Private generic functions
Generic Function, add-to-counter: Public generic functions
Generic Function, after-update-hook: Public generic functions
Generic Function, apply-monitor: Public generic functions
Generic Function, args: Private generic functions
Generic Function, attributes: Public generic functions
Generic Function, backward: Public generic functions
Generic Function, backward-bpn: Private generic functions
Generic Function, bag-of-words-kind: Public generic functions
Generic Function, batch-mean: Private generic functions
Generic Function, batch-normalization: Public generic functions
Generic Function, batch-size: Public generic functions
Generic Function, batch-variance: Private generic functions
Generic Function, before-update-hook: Public generic functions
Generic Function, bm: Public generic functions
Generic Function, bpn: Public generic functions
Generic Function, cached-activations1: Private generic functions
Generic Function, cached-activations2: Private generic functions
Generic Function, cached-version1: Private generic functions
Generic Function, cached-version2: Private generic functions
Generic Function, call-with-sink-accumulator: Private generic functions
Generic Function, centered-evidence-outputs: Private generic functions
Generic Function, cg-args: Public generic functions
Generic Function, chunk: Public generic functions
Generic Function, chunk->fnn-definition: Private generic functions
Generic Function, chunk1: Public generic functions
Generic Function, chunk2: Public generic functions
Generic Function, chunks: Public generic functions
Generic Function, clamping-cache: Public generic functions
Generic Function, cloud: Public generic functions
Generic Function, cloud-a: Public generic functions
Generic Function, cloud-b: Public generic functions
Generic Function, clouds: Public generic functions
Generic Function, clouds-up-to-layers: Public generic functions
Generic Function, clumps: Public generic functions
Generic Function, concatenation: Private generic functions
Generic Function, concatenation-type: Public generic functions
Generic Function, conditioning-chunks: Public generic functions
Generic Function, confusion-class-name: Public generic functions
Generic Function, confusion-count: Public generic functions
Generic Function, confusion-matrix-classes: Public generic functions
Generic Function, copy: Public generic functions
Generic Function, copy-nodes: Private generic functions
Generic Function, copy-object-extra-initargs: Public generic functions
Generic Function, copy-object-slot: Public generic functions
Generic Function, cost: Public generic functions
Generic Function, counter: Public generic functions
Generic Function, counter-raw-values: Public generic functions
Generic Function, counter-values: Public generic functions
Generic Function, counts: Private generic functions
Generic Function, covariance-fn: Private generic functions
Generic Function, covariance-lump-name: Public generic functions
Generic Function, covariances: Private generic functions
Generic Function, cuda-window-start-time: Public generic functions
Generic Function, current-time: Private generic functions
Generic Function, damping: Public generic functions
Generic Function, dbn: Public generic functions
Generic Function, decode: Public generic functions
Generic Function, decodings: Private generic functions
Generic Function, default-mean-field-supervisor: Public generic functions
Generic Function, default-size: Public generic functions
Generic Function, default-value: Public generic functions
Generic Function, denominator*: Private generic functions
Generic Function, derivatives: Public generic functions
Generic Function, dimensions: Public generic functions
Generic Function, drop-negative-index-p: Private generic functions
Generic Function, dropout: Public generic functions
Generic Function, encode: Public generic functions
Generic Function, encoded-feature-test: Public generic functions
Generic Function, encoded-feature-type: Public generic functions
Generic Function, encodings: Private generic functions
Generic Function, evidence-inputs: Private generic functions
Generic Function, evidence-outputs: Private generic functions
Generic Function, executor-cache: Private generic functions
Generic Function, feature-encoder: Public generic functions
Generic Function, feature-mapper: Public generic functions
Generic Function, filename: Private generic functions
Generic Function, find-chunk: Public generic functions
Generic Function, find-cloud: Public generic functions
Generic Function, find-one-executor: Private generic functions
Generic Function, finishedp: Public generic functions
Generic Function, first-trained-clump: Private generic functions
Generic Function, flush-sparsity: Private generic functions
Generic Function, fn: Public generic functions
Generic Function, forward: Public generic functions
Generic Function, forward-bpn: Private generic functions
Generic Function, function-expression: Private generic functions
Generic Function, generator: Public generic functions
Generic Function, gp-covariances*: Private generic functions
Generic Function, gp-means: Public generic functions
Generic Function, gp-means-and-covariances*: Public generic functions
Generic Function, group-size: Public generic functions
Generic Function, has-hidden-to-hidden-p: Private generic functions
Generic Function, has-inputs-p: Private generic functions
Generic Function, has-visible-to-visible-p: Private generic functions
Generic Function, hidden-and-conditioning-chunks: Private generic functions
Generic Function, hidden-chunks: Public generic functions
Generic Function, hidden-sampling: Public generic functions
Generic Function, hidden-source-chunk: Private generic functions
Generic Function, importance: Public generic functions
Generic Function, importances: Public generic functions
Generic Function, inactive-rbms: Private generic functions
Generic Function, incoming->fnn-defintion: Private generic functions
Generic Function, index: Public generic functions
Generic Function, indices-present: Public generic functions
Generic Function, initialize-from-cloud: Private generic functions
Generic Function, initialize-gradient-source*: Public generic functions
Generic Function, initialize-optimizer*: Public generic functions
Generic Function, input-row-indices: Public generic functions
Generic Function, input-seqs: Private generic functions
Generic Function, inputs: Public generic functions
Generic Function, instance-to-executor-parameters: Public generic functions
Generic Function, into: Private generic functions
Generic Function, inverted-covariances: Private generic functions
Generic Function, kind: Private generic functions
Generic Function, label-index: Public generic functions
Generic Function, label-index-distribution: Public generic functions
Generic Function, label-index-distributions: Public generic functions
Generic Function, label-indices: Public generic functions
Generic Function, last-cost: Private generic functions
Generic Function, last-eval: Public generic functions
Generic Function, layers: Public generic functions
Generic Function, learning-rate: Public generic functions
Generic Function, length-scale: Private generic functions
Generic Function, log-cg-batch-done: Public generic functions
Generic Function, make-classification-accuracy-monitors*: Public generic functions
Generic Function, make-cost-monitors*: Public generic functions
Generic Function, make-cross-entropy-monitors*: Public generic functions
Generic Function, make-executor-with-parameters: Public generic functions
Generic Function, make-reconstruction-monitors*: Public generic functions
Generic Function, make-step-monitor-monitor-counter: Public generic functions
Generic Function, map-clumps: Private generic functions
Generic Function, map-confusion-matrix: Public generic functions
Generic Function, map-gradient-sink: Public generic functions
Generic Function, map-over-executors: Public generic functions
Generic Function, map-segment-runs: Public generic functions
Generic Function, map-segments: Public generic functions
Generic Function, mapping-data: Private generic functions
Generic Function, mappings: Private generic functions
Generic Function, mask: Private generic functions
Generic Function, max-lag: Public generic functions
Generic Function, max-n-samples: Public generic functions
Generic Function, max-n-stripes: Public generic functions
Generic Function, max-time: Private generic functions
Generic Function, maybe-update-weights: Private generic functions
Generic Function, mean: Public generic functions
Generic Function, mean-decay: Public generic functions
Generic Function, mean-decay-decay: Public generic functions
Generic Function, mean-estimates: Private generic functions
Generic Function, mean-fn: Private generic functions
Generic Function, mean-lump-name: Public generic functions
Generic Function, means: Public generic functions
Generic Function, means: Private generic functions
Generic Function, measurer: Public generic functions
Generic Function, minimize*: Public generic functions
Generic Function, momentum: Public generic functions
Generic Function, momentum-type: Public generic functions
Generic Function, monitors: Public generic functions
Generic Function, n: Private generic functions
Generic Function, n-gibbs: Public generic functions
Generic Function, n-instances: Public generic functions
Generic Function, n-instances-in-batch: Private generic functions
Generic Function, n-instances-until-update: Private generic functions
Generic Function, n-particles: Public generic functions
Generic Function, n-samples: Public generic functions
Generic Function, n-steps: Private generic functions
Generic Function, n-stripes: Public generic functions
Generic Function, n-weight-uses-in-batch: Public generic functions
Generic Function, name: Public generic functions
Generic Function, negative-phase: Public generic functions
Generic Function, next-node-inputs: Private generic functions
Generic Function, nodes: Public generic functions
Generic Function, non-constant-mats: Private generic functions
Generic Function, numerator*: Private generic functions
Generic Function, old-nodes: Private generic functions
Generic Function, old-products: Private generic functions
Generic Function, old-sum1: Private generic functions
Generic Function, on-cg-batch-done: Public generic functions
Generic Function, on-n-instances-changed: Public generic functions
Generic Function, on-optimization-finished: Public generic functions
Generic Function, on-optimization-started: Public generic functions
Generic Function, optimizers: Private generic functions
Generic Function, options: Private generic functions
Generic Function, parameter-indices: Public generic functions
Generic Function, per-label-counters: Private generic functions
Generic Function, period: Private generic functions
Generic Function, persistent-chains: Public generic functions
Generic Function, populate-convert-to-dbm-sample-fn: Private generic functions
Generic Function, populate-key: Public generic functions
Generic Function, populate-map-cache-lazily-from-dbm: Public generic functions
Generic Function, populate-periodic-fn: Private generic functions
Generic Function, population-decay: Public generic functions
Generic Function, population-mean: Private generic functions
Generic Function, population-variance: Private generic functions
Generic Function, positive-phase: Public generic functions
Generic Function, posterior-gps: Private generic functions
Generic Function, print-lump-parts: Private generic functions
Generic Function, prior-gp: Public generic functions
Generic Function, products: Private generic functions
Generic Function, random-numbers: Private generic functions
Generic Function, randoms: Private generic functions
Generic Function, rbm: Public generic functions
Generic Function, rbms: Public generic functions
Generic Function, read-state*: Public generic functions
Generic Function, remove-trailing-nil-instances: Private generic functions
Generic Function, report-optimization-parameters: Public generic functions
Generic Function, reset-counter: Public generic functions
Generic Function, reset-optimization-monitors: Public generic functions
Generic Function, resize-chunk: Private generic functions
Generic Function, resolve-clump: Private generic functions
Generic Function, roughness: Private generic functions
Generic Function, sample: Public generic functions
Generic Function, sample-chunk: Public generic functions
Generic Function, samples: Private generic functions
Generic Function, scale: Public generic functions
Generic Function, scale1: Private generic functions
Generic Function, scale2: Private generic functions
Generic Function, scratch: Private generic functions
Generic Function, segment-decay-fn: Private generic functions
Generic Function, segment-derivatives: Public generic functions
Generic Function, segment-filter: Public generic functions
Generic Function, segment-set: Public generic functions
Generic Function, segment-weights: Public generic functions
Generic Function, segmenter: Public generic functions
Generic Function, segments: Public generic functions
Generic Function, seq-elt-fn: Public generic functions
Generic Function, seq-indices: Public generic functions
Generic Function, set-chunk-mean: Public generic functions
Generic Function, set-hidden-mean: Public generic functions
Generic Function, set-input: Public generic functions
Generic Function, set-max-n-stripes: Public generic functions
Generic Function, set-n-stripes: Public generic functions
Generic Function, set-visible-mean: Public generic functions
Generic Function, shared-with-clump: Private generic functions
Generic Function, shift: Public generic functions
Generic Function, signal-variance: Private generic functions
Generic Function, size: Public generic functions
Generic Function, sort-confusion-classes: Public generic functions
Generic Function, spare-vectors: Private generic functions
Generic Function, sparser: Public generic functions
Generic Function, sparsity-gradient-sources: Private generic functions
Generic Function, sparsity-target: Private generic functions
Generic Function, start-indices: Private generic functions
Generic Function, step-monitors: Public generic functions
Generic Function, stripe-end: Public generic functions
Generic Function, stripe-start: Public generic functions
Generic Function, stripedp: Public generic functions
Generic Function, sum1: Private generic functions
Generic Function, sum2: Private generic functions
Generic Function, target: Public generic functions
Generic Function, termination: Public generic functions
Generic Function, transpose-weights-p: Public generic functions
Generic Function, unfolder: Public generic functions
Generic Function, use-segment-derivatives-p: Public generic functions
Generic Function, variance: Public generic functions
Generic Function, variance-adjustment: Public generic functions
Generic Function, variance-decay: Public generic functions
Generic Function, variance-estimates: Private generic functions
Generic Function, variance-for-prediction: Public generic functions
Generic Function, visible-and-conditioning-chunks: Private generic functions
Generic Function, visible-chunks: Public generic functions
Generic Function, visible-sampling: Public generic functions
Generic Function, warp-length: Public generic functions
Generic Function, warp-start: Public generic functions
Generic Function, weight-decay: Public generic functions
Generic Function, weight-deltas: Private generic functions
Generic Function, weight-indices: Public generic functions
Generic Function, weight-lumps: Private generic functions
Generic Function, weight-penalty: Public generic functions
Generic Function, weights: Public generic functions
Generic Function, write-command: Public generic functions
Generic Function, write-data: Public generic functions
Generic Function, write-mapping: Public generic functions
Generic Function, write-state*: Public generic functions
Generic Function, x: Private generic functions
Generic Function, x: Private generic functions
Generic Function, y: Private generic functions
Generic Function, zero-weight-to-self: Private generic functions
get-all: Private ordinary functions
gp-confidences-as-plot-data: Public ordinary functions
gp-covariances: Public ordinary functions
gp-covariances*: Private generic functions
gp-covariances*: Private generic functions
gp-covariances*: Private generic functions
gp-covariances*: Private generic functions
gp-data-matrix: Private ordinary functions
gp-data-matrix-for-level: Private ordinary functions
gp-means: Public generic functions
gp-means: Public generic functions
gp-means: Public generic functions
gp-means: Public generic functions
gp-means-and-covariances: Public ordinary functions
gp-means-and-covariances*: Public generic functions
gp-means-and-covariances*: Public generic functions
gp-means-and-covariances*: Public generic functions
gp-means-and-covariances*: Public generic functions
gp-samples-as-plot-data: Public ordinary functions
group: Public ordinary functions
group-size: Public generic functions
group-size: Public generic functions
group-size: Public generic functions
group-size: Public generic functions
group-size: Public generic functions
group-size: Public generic functions
group-size: Public generic functions

H
half-life-to-decay: Public ordinary functions
half-life-to-decay-rate: Public ordinary functions
has-hidden-to-hidden-p: Private generic functions
has-hidden-to-hidden-p: Private generic functions
has-inputs-p: Private generic functions
has-inputs-p: Private generic functions
has-visible-to-visible-p: Private generic functions
has-visible-to-visible-p: Private generic functions
hash-table->vector: Public ordinary functions
hidden-and-conditioning-chunks: Private generic functions
hidden-and-conditioning-chunks: Private generic functions
hidden-chunks: Public generic functions
hidden-chunks: Public generic functions
hidden-chunks: Public generic functions
hidden-sampling: Public generic functions
hidden-sampling: Public generic functions
hidden-source-chunk: Private generic functions
hidden-source-chunk: Private generic functions
hijack-means-to-activation: Private ordinary functions

I
importance: Public generic functions
importance: Public generic functions
importances: Public generic functions
importances: Public generic functions
inactive-rbms: Private generic functions
inactive-rbms: Private generic functions
incoming->fnn-defintion: Private generic functions
incoming->fnn-defintion: Private generic functions
incoming->fnn-defintion: Private generic functions
incoming-cloud: Private ordinary functions
incoming-from-lumpy: Private ordinary functions
incoming-list->fnn-definition: Private ordinary functions
incoming-p: Private ordinary functions
incoming-transposep: Private ordinary functions
index: Public generic functions
index: Public generic functions
index-scored-features: Private ordinary functions
indices-present: Public generic functions
indices-present: Public generic functions
initialize-fnn-from-bm: Public ordinary functions
initialize-from-cloud: Private generic functions
initialize-from-cloud: Private generic functions
initialize-from-cloud: Private generic functions
initialize-gradient-source*: Public generic functions
initialize-gradient-source*: Public generic functions
initialize-gradient-source*: Public generic functions
initialize-gradient-source*: Public generic functions
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-instance: Public standalone methods
initialize-optimizer*: Public generic functions
initialize-optimizer*: Public generic functions
initialize-optimizer*: Public generic functions
initialize-optimizer*: Public generic functions
initialize-optimizer*: Public generic functions
initialize-optimizer*: Public generic functions
initialize-optimizer*: Public generic functions
input-row-indices: Public generic functions
input-row-indices: Public generic functions
input-seqs: Private generic functions
input-seqs: Private generic functions
inputs: Public generic functions
inputs: Public generic functions
inputs->nodes: Public ordinary functions
insert-into-executor-cache: Private ordinary functions
insert-into-sorted-vector: Public ordinary functions
instance-to-executor-parameters: Public generic functions
instance-to-executor-parameters: Public generic functions
into: Private generic functions
into: Private generic functions
invert-permutation: Public ordinary functions
inverted-covariances: Private generic functions
inverted-covariances: Private generic functions

K
kind: Private generic functions
kind: Private generic functions
kind-to-string: Private ordinary functions

L
label-index: Public generic functions
label-index-distribution: Public generic functions
label-index-distributions: Public generic functions
label-index-distributions: Public generic functions
label-index-distributions: Public generic functions
label-indices: Public generic functions
label-indices: Public generic functions
label-indices: Public generic functions
lag: Public ordinary functions
lagged-clump-lag: Private ordinary functions
lagged-clump-name: Private ordinary functions
lagged-clump-p: Private ordinary functions
lagged-clump-path: Private ordinary functions
last-cost: Private generic functions
last-cost: Private generic functions
last-eval: Public generic functions
last-eval: Public generic functions
last1: Public ordinary functions
layers: Public generic functions
layers: Public generic functions
learning-rate: Public generic functions
learning-rate: Public generic functions
learning-rate: Public generic functions
length-scale: Private generic functions
length-scale: Private generic functions
lisp-adam-update: Private ordinary functions
lisp-adam-update-d: Private ordinary functions
lisp-adam-update-s: Private ordinary functions
lisp-dropout: Private ordinary functions
lisp-dropout-d: Private ordinary functions
lisp-dropout-s: Private ordinary functions
lisp-max: Private ordinary functions
lisp-max-channel: Private ordinary functions
lisp-max-channel-d: Private ordinary functions
lisp-max-channel-derivative: Private ordinary functions
lisp-max-channel-derivative-d: Private ordinary functions
lisp-max-channel-derivative-s: Private ordinary functions
lisp-max-channel-s: Private ordinary functions
lisp-max-d: Private ordinary functions
lisp-max-derivative: Private ordinary functions
lisp-max-derivative-d: Private ordinary functions
lisp-max-derivative-s: Private ordinary functions
lisp-max-s: Private ordinary functions
lisp-maybe-renormalize-columns: Private ordinary functions
lisp-maybe-renormalize-columns-d: Private ordinary functions
lisp-maybe-renormalize-columns-s: Private ordinary functions
lisp-maybe-renormalize-rows: Private ordinary functions
lisp-maybe-renormalize-rows-d: Private ordinary functions
lisp-maybe-renormalize-rows-s: Private ordinary functions
lisp-min: Private ordinary functions
lisp-min-d: Private ordinary functions
lisp-min-derivative: Private ordinary functions
lisp-min-derivative-d: Private ordinary functions
lisp-min-derivative-s: Private ordinary functions
lisp-min-s: Private ordinary functions
lisp-rectify: Private ordinary functions
lisp-rectify-d: Private ordinary functions
lisp-rectify-derivative: Private ordinary functions
lisp-rectify-derivative-d: Private ordinary functions
lisp-rectify-derivative-s: Private ordinary functions
lisp-rectify-s: Private ordinary functions
lisp-scaled-tanh!: Private ordinary functions
lisp-scaled-tanh!-d: Private ordinary functions
lisp-scaled-tanh!-s: Private ordinary functions
lisp-scaled-tanh-derivative!: Private ordinary functions
lisp-scaled-tanh-derivative!-d: Private ordinary functions
lisp-scaled-tanh-derivative!-s: Private ordinary functions
lisp-sigmoid!: Private ordinary functions
lisp-sigmoid!-d: Private ordinary functions
lisp-sigmoid!-s: Private ordinary functions
lisp-sigmoid-derivative!: Private ordinary functions
lisp-sigmoid-derivative!-d: Private ordinary functions
lisp-sigmoid-derivative!-s: Private ordinary functions
lisp-sin!: Private ordinary functions
lisp-sin!-d: Private ordinary functions
lisp-sin!-s: Private ordinary functions
lisp-sin-derivative!: Private ordinary functions
lisp-sin-derivative!-d: Private ordinary functions
lisp-sin-derivative!-s: Private ordinary functions
lisp-softmax-xe: Private ordinary functions
lisp-softmax-xe-d: Private ordinary functions
lisp-softmax-xe-derivative: Private ordinary functions
lisp-softmax-xe-derivative-d: Private ordinary functions
lisp-softmax-xe-derivative-s: Private ordinary functions
lisp-softmax-xe-derivative/sparse: Private ordinary functions
lisp-softmax-xe-derivative/sparse-d: Private ordinary functions
lisp-softmax-xe-derivative/sparse-s: Private ordinary functions
lisp-softmax-xe-s: Private ordinary functions
lisp-tanh!: Private ordinary functions
lisp-tanh!-d: Private ordinary functions
lisp-tanh!-s: Private ordinary functions
lisp-tanh-derivative!: Private ordinary functions
lisp-tanh-derivative!-d: Private ordinary functions
lisp-tanh-derivative!-s: Private ordinary functions
list-samples: Public ordinary functions
list-segments: Public ordinary functions
load-state: Public ordinary functions
log-cg-batch-done: Public generic functions
log-cg-batch-done: Public generic functions
log-mat-room: Public ordinary functions
log-msg: Public ordinary functions
log-padded: Public ordinary functions
lookup-executor-cache: Private ordinary functions
lumpies->fnn-definition: Private ordinary functions
lumpy-activation-name: Private ordinary functions
lumpy-chunk: Private ordinary functions
lumpy-depth: Private ordinary functions
lumpy-incomings: Private ordinary functions
lumpy-kind: Private ordinary functions
lumpy-name: Private ordinary functions
lumpy-original: Private ordinary functions
lumpy-p: Private ordinary functions
lumpy-symbol: Private ordinary functions

M
Macro, apply-key: Public macros
Macro, build-fnn: Public macros
Macro, build-rnn: Public macros
Macro, data: Public macros
Macro, defclass-now: Public macros
Macro, define-descriptions: Public macros
Macro, define-slot-not-to-be-copied: Private macros
Macro, define-slot-to-be-shallow-copied: Private macros
Macro, define-slots-not-to-be-copied: Public macros
Macro, define-slots-to-be-shallow-copied: Public macros
Macro, defmaker: Public macros
Macro, do-batches-for-model: Public macros
Macro, do-chunk: Private macros
Macro, do-cloud-runs: Private macros
Macro, do-cloud/chunk1: Private macros
Macro, do-clouds: Public macros
Macro, do-executors: Public macros
Macro, do-gradient-sink: Public macros
Macro, do-segment-set: Public macros
Macro, do-sparse-targets: Private macros
Macro, do-stripes: Private macros
Macro, file: Public macros
Macro, fn: Public macros
Macro, maybe-print-unreadable-object: Private macros
Macro, plot: Public macros
Macro, push-all: Public macros
Macro, repeatably: Public macros
Macro, repeatedly: Public macros
Macro, special-case: Public macros
Macro, splot: Public macros
Macro, the!: Public macros
Macro, while: Public macros
Macro, with-command-stream: Public macros
Macro, with-copying: Public macros
Macro, with-logging-entry: Public macros
Macro, with-nil-on-arithmetic-error: Private macros
Macro, with-padded-attribute-printing: Public macros
Macro, with-safe-printing: Private macros
Macro, with-session: Public macros
Macro, with-sink-accumulator: Private macros
Macro, with-stripes: Public macros
Macro, with-versions: Private macros
Macro, with-weights-copied: Public macros
Macro, with-zero-on-underflow: Public macros
make-classification-accuracy-monitors: Public ordinary functions
make-classification-accuracy-monitors*: Public generic functions
make-classification-accuracy-monitors*: Public generic functions
make-classification-accuracy-monitors*: Public generic functions
make-classification-accuracy-monitors*: Public generic functions
make-classification-accuracy-monitors*: Public generic functions
make-confusion-matrix: Public ordinary functions
make-cost-monitors: Public ordinary functions
make-cost-monitors*: Public generic functions
make-cost-monitors*: Public generic functions
make-cross-entropy-monitors: Public ordinary functions
make-cross-entropy-monitors*: Public generic functions
make-cross-entropy-monitors*: Public generic functions
make-cross-entropy-monitors*: Public generic functions
make-cross-entropy-monitors*: Public generic functions
make-cross-entropy-monitors*: Public generic functions
make-do-cloud/chunk2: Private ordinary functions
make-executor-with-parameters: Public generic functions
make-incoming: Private ordinary functions
make-indexer: Public ordinary functions
make-instance-args: Private ordinary functions
make-label-monitors: Public ordinary functions
make-lagged-clump: Private ordinary functions
make-lumpy: Private ordinary functions
make-matrix-from-lump-stripe: Private ordinary functions
make-n-gram-mappee: Public ordinary functions
make-random-generator: Public ordinary functions
make-random-sampler: Public ordinary functions
make-reconstruction-monitors: Public ordinary functions
make-reconstruction-monitors*: Public generic functions
make-reconstruction-monitors*: Public generic functions
make-reconstruction-monitors*: Public generic functions
make-reconstruction-monitors*: Public generic functions
make-sequence-generator: Public ordinary functions
make-sequence-sampler: Public ordinary functions
make-sorted-group-generator: Public ordinary functions
make-step-monitor-monitor-counter: Public generic functions
make-step-monitor-monitor-counter: Public generic functions
make-step-monitor-monitors: Public ordinary functions
make-vector-from-lump-stripe: Private ordinary functions
map-batches-for-model: Public ordinary functions
map-clumps: Private generic functions
map-clumps: Private generic functions
map-clumps: Private generic functions
map-confusion-matrix: Public generic functions
map-confusion-matrix: Public generic functions
map-dataset: Public ordinary functions
map-datasets: Public ordinary functions
map-gradient-sink: Public generic functions
map-gradient-sink: Public generic functions
map-gradient-sink: Public generic functions
map-gradient-sink: Public generic functions
map-over-executors: Public generic functions
map-over-executors: Public generic functions
map-over-executors: Public generic functions
map-segment-runs: Public generic functions
map-segment-runs: Public generic functions
map-segment-runs: Public generic functions
map-segments: Public generic functions
map-segments: Public generic functions
map-segments: Public generic functions
map-segments: Public generic functions
map-segments: Public generic functions
map-segments: Public generic functions
map-segments: Public generic functions
map-segments: Public generic functions
map-segments: Public generic functions
map-segments: Public generic functions
map-segments: Public generic functions
map-sparser: Private ordinary functions
mapping-data: Private generic functions
mapping-data: Private generic functions
mappings: Private generic functions
mappings: Private generic functions
mark-everything-present: Public ordinary functions
mask: Private generic functions
mask: Private generic functions
mat-and-row/column-sum-size: Private ordinary functions
max-lag: Public generic functions
max-lag: Public generic functions
max-n-samples: Public generic functions
max-n-samples: Public generic functions
max-n-stripes: Public generic functions
max-n-stripes: Public generic functions
max-n-stripes: Public generic functions
max-n-stripes: Public generic functions
max-n-stripes: Public generic functions
max-n-stripes: Public generic functions
max-n-stripes: Public generic functions
max-position: Public ordinary functions
max-row-positions: Public ordinary functions
max-time: Private generic functions
max-time: Private generic functions
maybe-add-name-to-init: Private ordinary functions
maybe-copy-weight: Private ordinary functions
maybe-print-unreadable-object: Private macros
maybe-remember: Private ordinary functions
maybe-renormalize-columns: Private ordinary functions
maybe-renormalize-rows: Private ordinary functions
maybe-update-weights: Private generic functions
maybe-update-weights: Private generic functions
maybe-update-weights: Private generic functions
maybe-update-weights: Private generic functions
maybe-update-weights: Private generic functions
maybe-update-weights: Private generic functions
maybe-use-remembered: Private ordinary functions
mean: Public generic functions
mean: Public generic functions
mean-decay: Public generic functions
mean-decay: Public generic functions
mean-decay-decay: Public generic functions
mean-decay-decay: Public generic functions
mean-estimates: Private generic functions
mean-estimates: Private generic functions
mean-fn: Private generic functions
mean-fn: Private generic functions
mean-lump-name: Public generic functions
mean-lump-name: Public generic functions
means: Public generic functions
means: Public generic functions
means: Private generic functions
means: Private generic functions
means-or-samples: Private ordinary functions
measure-classification-accuracy: Public ordinary functions
measure-confusion: Public ordinary functions
measure-cross-entropy: Public ordinary functions
measure-roc-auc: Public ordinary functions
measurer: Public generic functions
measurer: Public generic functions
merge-cloud-specs: Public ordinary functions
merge-weights-and-arguments: Private ordinary functions
Method, (setf accumulator): Private generic functions
Method, (setf adam-time-step): Private generic functions
Method, (setf after-update-hook): Public generic functions
Method, (setf attributes): Public generic functions
Method, (setf batch-mean): Private generic functions
Method, (setf batch-size): Public generic functions
Method, (setf batch-size): Public generic functions
Method, (setf batch-variance): Private generic functions
Method, (setf before-update-hook): Public generic functions
Method, (setf cached-version1): Private generic functions
Method, (setf cached-version2): Private generic functions
Method, (setf cg-args): Public generic functions
Method, (setf concatenation): Private generic functions
Method, (setf confusion-count): Public generic functions
Method, (setf cuda-window-start-time): Public generic functions
Method, (setf current-time): Private generic functions
Method, (setf dropout): Public generic functions
Method, (setf dropout): Public generic functions
Method, (setf dropout): Public generic functions
Method, (setf hidden-sampling): Public generic functions
Method, (setf importance): Public generic functions
Method, (setf importances): Public generic functions
Method, (setf indices-present): Public generic functions
Method, (setf input-row-indices): Public generic functions
Method, (setf input-seqs): Private generic functions
Method, (setf last-cost): Private generic functions
Method, (setf last-eval): Public generic functions
Method, (setf learning-rate): Public generic functions
Method, (setf learning-rate): Public generic functions
Method, (setf mappings): Private generic functions
Method, (setf max-n-samples): Public generic functions
Method, (setf max-time): Private generic functions
Method, (setf mean): Public generic functions
Method, (setf mean-decay): Public generic functions
Method, (setf mean-decay-decay): Public generic functions
Method, (setf mean-estimates): Private generic functions
Method, (setf momentum): Public generic functions
Method, (setf monitors): Public generic functions
Method, (setf n-gibbs): Public generic functions
Method, (setf n-instances-in-batch): Private generic functions
Method, (setf n-steps): Private generic functions
Method, (setf n-weight-uses-in-batch): Public generic functions
Method, (setf n-weight-uses-in-batch): Public generic functions
Method, (setf on-cg-batch-done): Public generic functions
Method, (setf on-n-instances-changed): Public generic functions
Method, (setf on-optimization-finished): Public generic functions
Method, (setf on-optimization-started): Public generic functions
Method, (setf population-mean): Private generic functions
Method, (setf population-variance): Private generic functions
Method, (setf posterior-gps): Private generic functions
Method, (setf random-numbers): Private generic functions
Method, (setf remove-trailing-nil-instances): Private generic functions
Method, (setf samples): Private generic functions
Method, (setf scale): Public generic functions
Method, (setf scale): Public generic functions
Method, (setf scratch): Private generic functions
Method, (setf segment-decay-fn): Private generic functions
Method, (setf seq-indices): Public generic functions
Method, (setf spare-vectors): Private generic functions
Method, (setf step-monitors): Public generic functions
Method, (setf target): Public generic functions
Method, (setf termination): Public generic functions
Method, (setf variance): Public generic functions
Method, (setf variance-adjustment): Public generic functions
Method, (setf variance-decay): Public generic functions
Method, (setf variance-estimates): Private generic functions
Method, (setf variance-for-prediction): Public generic functions
Method, (setf visible-sampling): Public generic functions
Method, (setf weight-decay): Public generic functions
Method, (setf weight-deltas): Private generic functions
Method, (setf weight-lumps): Private generic functions
Method, (setf weight-penalty): Public generic functions
Method, (setf weights): Public generic functions
Method, accumulate-cloud-statistics: Private generic functions
Method, accumulate-cloud-statistics: Private generic functions
Method, accumulate-cloud-statistics*: Private generic functions
Method, accumulate-gradients*: Public generic functions
Method, accumulate-gradients*: Public generic functions
Method, accumulate-gradients*: Public generic functions
Method, accumulate-gradients*: Public generic functions
Method, accumulate-gradients*: Public generic functions
Method, accumulate-gradients*: Public generic functions
Method, accumulate-negative-phase-statistics: Private generic functions
Method, accumulate-negative-phase-statistics: Private generic functions
Method, accumulate-positive-phase-statistics: Private generic functions
Method, accumulate-positive-phase-statistics: Private generic functions
Method, accumulate-positive-phase-statistics: Private generic functions
Method, accumulate-sparsity-statistics: Private generic functions
Method, accumulate-sparsity-statistics: Private generic functions
Method, accumulator: Private generic functions
Method, accumulator: Private generic functions
Method, activate-cloud*: Private generic functions
Method, activate-cloud*: Private generic functions
Method, activate-cloud*: Private generic functions
Method, activate-cloud*: Private generic functions
Method, adam-time-step: Private generic functions
Method, add-to-counter: Public generic functions
Method, add-to-counter: Public generic functions
Method, add-to-counter: Public generic functions
Method, after-update-hook: Public generic functions
Method, apply-monitor: Public generic functions
Method, apply-monitor: Public generic functions
Method, apply-monitor: Public generic functions
Method, args: Private generic functions
Method, args: Private generic functions
Method, attributes: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward: Public generic functions
Method, backward-bpn: Private generic functions
Method, backward-bpn: Private generic functions
Method, bag-of-words-kind: Public generic functions
Method, batch-mean: Private generic functions
Method, batch-normalization: Public generic functions
Method, batch-size: Public generic functions
Method, batch-size: Public generic functions
Method, batch-size: Public generic functions
Method, batch-variance: Private generic functions
Method, before-update-hook: Public generic functions
Method, bm: Public generic functions
Method, bpn: Public generic functions
Method, cached-activations1: Private generic functions
Method, cached-activations2: Private generic functions
Method, cached-version1: Private generic functions
Method, cached-version2: Private generic functions
Method, call-with-sink-accumulator: Private generic functions
Method, call-with-sink-accumulator: Private generic functions
Method, centered-evidence-outputs: Private generic functions
Method, cg-args: Public generic functions
Method, chunk: Public generic functions
Method, chunk->fnn-definition: Private generic functions
Method, chunk->fnn-definition: Private generic functions
Method, chunk->fnn-definition: Private generic functions
Method, chunk->fnn-definition: Private generic functions
Method, chunk1: Public generic functions
Method, chunk2: Public generic functions
Method, chunks: Public generic functions
Method, chunks: Public generic functions
Method, clamping-cache: Public generic functions
Method, cloud: Public generic functions
Method, cloud-a: Public generic functions
Method, cloud-b: Public generic functions
Method, clouds: Public generic functions
Method, clouds: Public generic functions
Method, clouds-up-to-layers: Public generic functions
Method, clumps: Public generic functions
Method, concatenation: Private generic functions
Method, concatenation-type: Public generic functions
Method, conditioning-chunks: Public generic functions
Method, confusion-class-name: Public generic functions
Method, confusion-count: Public generic functions
Method, confusion-matrix-classes: Public generic functions
Method, copy: Public generic functions
Method, copy: Public generic functions
Method, copy: Public generic functions
Method, copy: Public generic functions
Method, copy-nodes: Private generic functions
Method, copy-nodes: Private generic functions
Method, copy-nodes: Private generic functions
Method, copy-object-extra-initargs: Public generic functions
Method, copy-object-extra-initargs: Public generic functions
Method, copy-object-extra-initargs: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, copy-object-slot: Public generic functions
Method, cost: Public generic functions
Method, cost: Public generic functions
Method, cost: Public generic functions
Method, cost: Public generic functions
Method, cost: Public generic functions
Method, counter: Public generic functions
Method, counter: Public generic functions
Method, counter: Public generic functions
Method, counter-raw-values: Public generic functions
Method, counter-raw-values: Public generic functions
Method, counter-values: Public generic functions
Method, counter-values: Public generic functions
Method, counter-values: Public generic functions
Method, counts: Private generic functions
Method, covariance-fn: Private generic functions
Method, covariance-lump-name: Public generic functions
Method, covariances: Private generic functions
Method, cuda-window-start-time: Public generic functions
Method, current-time: Private generic functions
Method, damping: Public generic functions
Method, dbn: Public generic functions
Method, decode: Public generic functions
Method, decode: Public generic functions
Method, decode: Public generic functions
Method, decodings: Private generic functions
Method, default-mean-field-supervisor: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-size: Public generic functions
Method, default-value: Public generic functions
Method, default-value: Public generic functions
Method, denominator*: Private generic functions
Method, derivatives: Public generic functions
Method, derivatives: Public generic functions
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, describe-object: Public standalone methods
Method, dimensions: Public generic functions
Method, drop-negative-index-p: Private generic functions
Method, dropout: Public generic functions
Method, dropout: Public generic functions
Method, dropout: Public generic functions
Method, encode: Public generic functions
Method, encode: Public generic functions
Method, encode: Public generic functions
Method, encode: Public generic functions
Method, encoded-feature-test: Public generic functions
Method, encoded-feature-type: Public generic functions
Method, encodings: Private generic functions
Method, evidence-inputs: Private generic functions
Method, evidence-outputs: Private generic functions
Method, executor-cache: Private generic functions
Method, feature-encoder: Public generic functions
Method, feature-mapper: Public generic functions
Method, filename: Private generic functions
Method, find-chunk: Public generic functions
Method, find-chunk: Public generic functions
Method, find-cloud: Public generic functions
Method, find-cloud: Public generic functions
Method, find-one-executor: Private generic functions
Method, find-one-executor: Private generic functions
Method, find-one-executor: Private generic functions
Method, finishedp: Public generic functions
Method, first-trained-clump: Private generic functions
Method, flush-sparsity: Private generic functions
Method, flush-sparsity: Private generic functions
Method, fn: Public generic functions
Method, fn: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward: Public generic functions
Method, forward-bpn: Private generic functions
Method, forward-bpn: Private generic functions
Method, function-expression: Private generic functions
Method, generator: Public generic functions
Method, gp-covariances*: Private generic functions
Method, gp-covariances*: Private generic functions
Method, gp-covariances*: Private generic functions
Method, gp-means: Public generic functions
Method, gp-means: Public generic functions
Method, gp-means: Public generic functions
Method, gp-means-and-covariances*: Public generic functions
Method, gp-means-and-covariances*: Public generic functions
Method, gp-means-and-covariances*: Public generic functions
Method, group-size: Public generic functions
Method, group-size: Public generic functions
Method, group-size: Public generic functions
Method, group-size: Public generic functions
Method, group-size: Public generic functions
Method, group-size: Public generic functions
Method, has-hidden-to-hidden-p: Private generic functions
Method, has-inputs-p: Private generic functions
Method, has-visible-to-visible-p: Private generic functions
Method, hidden-and-conditioning-chunks: Private generic functions
Method, hidden-chunks: Public generic functions
Method, hidden-chunks: Public generic functions
Method, hidden-sampling: Public generic functions
Method, hidden-source-chunk: Private generic functions
Method, importance: Public generic functions
Method, importances: Public generic functions
Method, inactive-rbms: Private generic functions
Method, incoming->fnn-defintion: Private generic functions
Method, incoming->fnn-defintion: Private generic functions
Method, index: Public generic functions
Method, indices-present: Public generic functions
Method, initialize-from-cloud: Private generic functions
Method, initialize-from-cloud: Private generic functions
Method, initialize-gradient-source*: Public generic functions
Method, initialize-gradient-source*: Public generic functions
Method, initialize-gradient-source*: Public generic functions
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-instance: Public standalone methods
Method, initialize-optimizer*: Public generic functions
Method, initialize-optimizer*: Public generic functions
Method, initialize-optimizer*: Public generic functions
Method, initialize-optimizer*: Public generic functions
Method, initialize-optimizer*: Public generic functions
Method, initialize-optimizer*: Public generic functions
Method, input-row-indices: Public generic functions
Method, input-seqs: Private generic functions
Method, inputs: Public generic functions
Method, instance-to-executor-parameters: Public generic functions
Method, into: Private generic functions
Method, inverted-covariances: Private generic functions
Method, kind: Private generic functions
Method, label-index-distributions: Public generic functions
Method, label-index-distributions: Public generic functions
Method, label-indices: Public generic functions
Method, label-indices: Public generic functions
Method, last-cost: Private generic functions
Method, last-eval: Public generic functions
Method, layers: Public generic functions
Method, learning-rate: Public generic functions
Method, learning-rate: Public generic functions
Method, length-scale: Private generic functions
Method, log-cg-batch-done: Public generic functions
Method, make-classification-accuracy-monitors*: Public generic functions
Method, make-classification-accuracy-monitors*: Public generic functions
Method, make-classification-accuracy-monitors*: Public generic functions
Method, make-classification-accuracy-monitors*: Public generic functions
Method, make-cost-monitors*: Public generic functions
Method, make-cross-entropy-monitors*: Public generic functions
Method, make-cross-entropy-monitors*: Public generic functions
Method, make-cross-entropy-monitors*: Public generic functions
Method, make-cross-entropy-monitors*: Public generic functions
Method, make-reconstruction-monitors*: Public generic functions
Method, make-reconstruction-monitors*: Public generic functions
Method, make-reconstruction-monitors*: Public generic functions
Method, make-step-monitor-monitor-counter: Public generic functions
Method, map-clumps: Private generic functions
Method, map-clumps: Private generic functions
Method, map-confusion-matrix: Public generic functions
Method, map-gradient-sink: Public generic functions
Method, map-gradient-sink: Public generic functions
Method, map-gradient-sink: Public generic functions
Method, map-over-executors: Public generic functions
Method, map-over-executors: Public generic functions
Method, map-segment-runs: Public generic functions
Method, map-segment-runs: Public generic functions
Method, map-segments: Public generic functions
Method, map-segments: Public generic functions
Method, map-segments: Public generic functions
Method, map-segments: Public generic functions
Method, map-segments: Public generic functions
Method, map-segments: Public generic functions
Method, map-segments: Public generic functions
Method, map-segments: Public generic functions
Method, map-segments: Public generic functions
Method, map-segments: Public generic functions
Method, mapping-data: Private generic functions
Method, mappings: Private generic functions
Method, mask: Private generic functions
Method, max-lag: Public generic functions
Method, max-n-samples: Public generic functions
Method, max-n-stripes: Public generic functions
Method, max-n-stripes: Public generic functions
Method, max-n-stripes: Public generic functions
Method, max-n-stripes: Public generic functions
Method, max-n-stripes: Public generic functions
Method, max-n-stripes: Public generic functions
Method, max-time: Private generic functions
Method, maybe-update-weights: Private generic functions
Method, maybe-update-weights: Private generic functions
Method, maybe-update-weights: Private generic functions
Method, maybe-update-weights: Private generic functions
Method, maybe-update-weights: Private generic functions
Method, mean: Public generic functions
Method, mean-decay: Public generic functions
Method, mean-decay-decay: Public generic functions
Method, mean-estimates: Private generic functions
Method, mean-fn: Private generic functions
Method, mean-lump-name: Public generic functions
Method, means: Public generic functions
Method, means: Private generic functions
Method, measurer: Public generic functions
Method, minimize*: Public generic functions
Method, minimize*: Public generic functions
Method, minimize*: Public generic functions
Method, momentum: Public generic functions
Method, momentum-type: Public generic functions
Method, monitors: Public generic functions
Method, monitors: Public generic functions
Method, monitors: Public generic functions
Method, n: Private generic functions
Method, n: Private generic functions
Method, n-gibbs: Public generic functions
Method, n-instances: Public generic functions
Method, n-instances-in-batch: Private generic functions
Method, n-instances-until-update: Private generic functions
Method, n-instances-until-update: Private generic functions
Method, n-instances-until-update: Private generic functions
Method, n-instances-until-update: Private generic functions
Method, n-particles: Public generic functions
Method, n-samples: Public generic functions
Method, n-steps: Private generic functions
Method, n-stripes: Public generic functions
Method, n-stripes: Public generic functions
Method, n-stripes: Public generic functions
Method, n-stripes: Public generic functions
Method, n-stripes: Public generic functions
Method, n-stripes: Public generic functions
Method, n-weight-uses-in-batch: Public generic functions
Method, n-weight-uses-in-batch: Public generic functions
Method, name: Public generic functions
Method, name: Public generic functions
Method, name: Public generic functions
Method, name: Public generic functions
Method, name: Public generic functions
Method, name: Public generic functions
Method, negative-phase: Public generic functions
Method, negative-phase: Public generic functions
Method, next-node-inputs: Private generic functions
Method, nodes: Public generic functions
Method, nodes: Public generic functions
Method, nodes: Public generic functions
Method, non-constant-mats: Private generic functions
Method, non-constant-mats: Private generic functions
Method, non-constant-mats: Private generic functions
Method, non-constant-mats: Private generic functions
Method, numerator*: Private generic functions
Method, old-nodes: Private generic functions
Method, old-products: Private generic functions
Method, old-sum1: Private generic functions
Method, on-cg-batch-done: Public generic functions
Method, on-n-instances-changed: Public generic functions
Method, on-optimization-finished: Public generic functions
Method, on-optimization-started: Public generic functions
Method, optimizers: Private generic functions
Method, options: Private generic functions
Method, parameter-indices: Public generic functions
Method, per-label-counters: Private generic functions
Method, period: Private generic functions
Method, period: Private generic functions
Method, persistent-chains: Public generic functions
Method, populate-convert-to-dbm-sample-fn: Private generic functions
Method, populate-key: Public generic functions
Method, populate-map-cache-lazily-from-dbm: Public generic functions
Method, populate-periodic-fn: Private generic functions
Method, population-decay: Public generic functions
Method, population-mean: Private generic functions
Method, population-variance: Private generic functions
Method, positive-phase: Public generic functions
Method, positive-phase: Public generic functions
Method, posterior-gps: Private generic functions
Method, print-lump-parts: Private generic functions
Method, print-lump-parts: Private generic functions
Method, print-lump-parts: Private generic functions
Method, print-lump-parts: Private generic functions
Method, print-lump-parts: Private generic functions
Method, print-lump-parts: Private generic functions
Method, print-lump-parts: Private generic functions
Method, print-lump-parts: Private generic functions
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, print-object: Public standalone methods
Method, prior-gp: Public generic functions
Method, products: Private generic functions
Method, random-numbers: Private generic functions
Method, randoms: Private generic functions
Method, rbm: Public generic functions
Method, rbms: Public generic functions
Method, read-state*: Public generic functions
Method, read-state*: Public generic functions
Method, read-state*: Public generic functions
Method, read-state*: Public generic functions
Method, read-state*: Public generic functions
Method, read-state*: Public generic functions
Method, read-state*: Public generic functions
Method, read-state*: Public generic functions
Method, read-state*: Public generic functions
Method, read-state*: Public generic functions
Method, remove-trailing-nil-instances: Private generic functions
Method, report-optimization-parameters: Public generic functions
Method, reset-counter: Public generic functions
Method, reset-counter: Public generic functions
Method, reset-counter: Public generic functions
Method, reset-optimization-monitors: Public generic functions
Method, resize-chunk: Private generic functions
Method, resize-chunk: Private generic functions
Method, resize-chunk: Private generic functions
Method, resize-chunk: Private generic functions
Method, resolve-clump: Private generic functions
Method, resolve-clump: Private generic functions
Method, roughness: Private generic functions
Method, sample: Public generic functions
Method, sample-chunk: Public generic functions
Method, sample-chunk: Public generic functions
Method, sample-chunk: Public generic functions
Method, sample-chunk: Public generic functions
Method, sample-chunk: Public generic functions
Method, sample-chunk: Public generic functions
Method, samples: Private generic functions
Method, scale: Public generic functions
Method, scale: Public generic functions
Method, scale: Public generic functions
Method, scale1: Private generic functions
Method, scale2: Private generic functions
Method, scratch: Private generic functions
Method, segment-decay-fn: Private generic functions
Method, segment-derivatives: Public generic functions
Method, segment-filter: Public generic functions
Method, segment-set: Public generic functions
Method, segment-set: Public generic functions
Method, segment-weights: Public generic functions
Method, segment-weights: Public generic functions
Method, segment-weights: Public generic functions
Method, segmenter: Public generic functions
Method, segments: Public generic functions
Method, segments: Public generic functions
Method, segments: Public generic functions
Method, segments: Public generic functions
Method, seq-elt-fn: Public generic functions
Method, seq-indices: Public generic functions
Method, set-chunk-mean: Public generic functions
Method, set-chunk-mean: Public generic functions
Method, set-chunk-mean: Public generic functions
Method, set-chunk-mean: Public generic functions
Method, set-chunk-mean: Public generic functions
Method, set-chunk-mean: Public generic functions
Method, set-chunk-mean: Public generic functions
Method, set-hidden-mean: Public generic functions
Method, set-hidden-mean: Public generic functions
Method, set-hidden-mean: Public generic functions
Method, set-input: Public generic functions
Method, set-input: Public generic functions
Method, set-input: Public generic functions
Method, set-input: Public generic functions
Method, set-input: Public generic functions
Method, set-input: Public generic functions
Method, set-input: Public generic functions
Method, set-input: Public generic functions
Method, set-max-n-stripes: Public generic functions
Method, set-max-n-stripes: Public generic functions
Method, set-max-n-stripes: Public generic functions
Method, set-max-n-stripes: Public generic functions
Method, set-max-n-stripes: Public generic functions
Method, set-max-n-stripes: Public generic functions
Method, set-max-n-stripes: Public generic functions
Method, set-max-n-stripes: Public generic functions
Method, set-max-n-stripes: Public generic functions
Method, set-n-stripes: Public generic functions
Method, set-n-stripes: Public generic functions
Method, set-n-stripes: Public generic functions
Method, set-n-stripes: Public generic functions
Method, set-n-stripes: Public generic functions
Method, set-n-stripes: Public generic functions
Method, set-n-stripes: Public generic functions
Method, set-n-stripes: Public generic functions
Method, set-visible-mean: Public generic functions
Method, set-visible-mean: Public generic functions
Method, shared-with-clump: Private generic functions
Method, shift: Public generic functions
Method, signal-variance: Private generic functions
Method, size: Public generic functions
Method, size: Public generic functions
Method, size: Public generic functions
Method, size: Public generic functions
Method, size: Public generic functions
Method, sort-confusion-classes: Public generic functions
Method, spare-vectors: Private generic functions
Method, sparser: Public generic functions
Method, sparsity-gradient-sources: Private generic functions
Method, sparsity-target: Private generic functions
Method, start-indices: Private generic functions
Method, step-monitors: Public generic functions
Method, stripe-end: Public generic functions
Method, stripe-end: Public generic functions
Method, stripe-start: Public generic functions
Method, stripe-start: Public generic functions
Method, stripedp: Public generic functions
Method, stripedp: Public generic functions
Method, sum1: Private generic functions
Method, sum2: Private generic functions
Method, target: Public generic functions
Method, target: Public generic functions
Method, termination: Public generic functions
Method, transpose-weights-p: Public generic functions
Method, unfolder: Public generic functions
Method, use-segment-derivatives-p: Public generic functions
Method, variance: Public generic functions
Method, variance-adjustment: Public generic functions
Method, variance-adjustment: Public generic functions
Method, variance-decay: Public generic functions
Method, variance-estimates: Private generic functions
Method, variance-for-prediction: Public generic functions
Method, visible-and-conditioning-chunks: Private generic functions
Method, visible-chunks: Public generic functions
Method, visible-chunks: Public generic functions
Method, visible-sampling: Public generic functions
Method, warp-length: Public generic functions
Method, warp-start: Public generic functions
Method, weight-decay: Public generic functions
Method, weight-deltas: Private generic functions
Method, weight-indices: Public generic functions
Method, weight-lumps: Private generic functions
Method, weight-penalty: Public generic functions
Method, weights: Public generic functions
Method, weights: Public generic functions
Method, weights: Public generic functions
Method, weights: Public generic functions
Method, write-command: Public generic functions
Method, write-command: Public generic functions
Method, write-data: Public generic functions
Method, write-data: Public generic functions
Method, write-data: Public generic functions
Method, write-data: Public generic functions
Method, write-data: Public generic functions
Method, write-mapping: Public generic functions
Method, write-mapping: Public generic functions
Method, write-mapping: Public generic functions
Method, write-mapping: Public generic functions
Method, write-mapping: Public generic functions
Method, write-mapping: Public generic functions
Method, write-state*: Public generic functions
Method, write-state*: Public generic functions
Method, write-state*: Public generic functions
Method, write-state*: Public generic functions
Method, write-state*: Public generic functions
Method, write-state*: Public generic functions
Method, write-state*: Public generic functions
Method, write-state*: Public generic functions
Method, write-state*: Public generic functions
Method, write-state*: Public generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, x: Private generic functions
Method, y: Private generic functions
Method, y: Private generic functions
Method, y: Private generic functions
Method, zero-weight-to-self: Private generic functions
Method, zero-weight-to-self: Private generic functions
minimize: Public ordinary functions
minimize*: Public generic functions
minimize*: Public generic functions
minimize*: Public generic functions
minimize*: Public generic functions
momentum: Public generic functions
momentum: Public generic functions
momentum-type: Public generic functions
momentum-type: Public generic functions
monitor-bm-mean-field-bottom-up: Public ordinary functions
monitor-bm-mean-field-reconstructions: Public ordinary functions
monitor-bpn-results: Public ordinary functions
monitor-dbn-mean-field-bottom-up: Public ordinary functions
monitor-dbn-mean-field-reconstructions: Public ordinary functions
monitor-model-results: Public ordinary functions
monitor-optimization-periodically: Public ordinary functions
monitor-optimization-periodically*: Private ordinary functions
monitors: Public generic functions
monitors: Public generic functions
monitors: Public generic functions
monitors: Public generic functions
multinomial-log-likelihood-ratio: Public ordinary functions

N
n: Private generic functions
n: Private generic functions
n: Private generic functions
n-gibbs: Public generic functions
n-gibbs: Public generic functions
n-instances: Public generic functions
n-instances: Public generic functions
n-instances-in-batch: Private generic functions
n-instances-in-batch: Private generic functions
n-instances-until-update: Private generic functions
n-instances-until-update: Private generic functions
n-instances-until-update: Private generic functions
n-instances-until-update: Private generic functions
n-instances-until-update: Private generic functions
n-particles: Public generic functions
n-particles: Public generic functions
n-rbms: Public ordinary functions
n-samples: Public generic functions
n-samples: Public generic functions
n-steps: Private generic functions
n-steps: Private generic functions
n-stripes: Public generic functions
n-stripes: Public generic functions
n-stripes: Public generic functions
n-stripes: Public generic functions
n-stripes: Public generic functions
n-stripes: Public generic functions
n-stripes: Public generic functions
n-weight-uses-in-batch: Public generic functions
n-weight-uses-in-batch: Public generic functions
n-weight-uses-in-batch: Public generic functions
name: Public generic functions
name: Public generic functions
name: Public generic functions
name: Public generic functions
name: Public generic functions
name: Public generic functions
name: Public generic functions
name-clashes: Private ordinary functions
name=: Public ordinary functions
negate-vector: Private ordinary functions
negative-phase: Public generic functions
negative-phase: Public generic functions
negative-phase: Public generic functions
next-node-inputs: Private generic functions
next-node-inputs: Private generic functions
node-change: Private ordinary functions
nodes: Public generic functions
nodes: Public generic functions
nodes: Public generic functions
nodes: Public generic functions
nodes->inputs: Public ordinary functions
nodes->means: Private ordinary functions
non-constant-mats: Private generic functions
non-constant-mats: Private generic functions
non-constant-mats: Private generic functions
non-constant-mats: Private generic functions
non-constant-mats: Private generic functions
norm: Private ordinary functions
numerator*: Private generic functions
numerator*: Private generic functions

O
old-nodes: Private generic functions
old-nodes: Private generic functions
old-products: Private generic functions
old-products: Private generic functions
old-sum1: Private generic functions
old-sum1: Private generic functions
on-cg-batch-done: Public generic functions
on-cg-batch-done: Public generic functions
on-n-instances-changed: Public generic functions
on-n-instances-changed: Public generic functions
on-optimization-finished: Public generic functions
on-optimization-finished: Public generic functions
on-optimization-started: Public generic functions
on-optimization-started: Public generic functions
optimizers: Private generic functions
optimizers: Private generic functions
options: Private generic functions
options: Private generic functions
other-chunk: Private ordinary functions

P
pad-to-width: Private ordinary functions
parameter-indices: Public generic functions
parameter-indices: Public generic functions
pax-pages: Private ordinary functions
pax-sections: Private ordinary functions
per-label-counters: Private generic functions
per-label-counters: Private generic functions
period: Private generic functions
period: Private generic functions
period: Private generic functions
period: Private generic functions
permute: Public ordinary functions
perplexity-to-cross-entropy: Private ordinary functions
persistent-chains: Public generic functions
persistent-chains: Public generic functions
pipe-to-gnuplot: Private ordinary functions
plists-to-column-widths: Private ordinary functions
plot: Public macros
plot*: Public ordinary functions
poisson-random: Public ordinary functions
polack-ribiere: Private ordinary functions
populate-convert-to-dbm-sample-fn: Private generic functions
populate-convert-to-dbm-sample-fn: Private generic functions
populate-key: Public generic functions
populate-key: Public generic functions
populate-map-cache: Public ordinary functions
populate-map-cache-lazily-from-dbm: Public generic functions
populate-map-cache-lazily-from-dbm: Public generic functions
populate-periodic-fn: Private generic functions
populate-periodic-fn: Private generic functions
population-decay: Public generic functions
population-decay: Public generic functions
population-mean: Private generic functions
population-mean: Private generic functions
population-variance: Private generic functions
population-variance: Private generic functions
positive-phase: Public generic functions
positive-phase: Public generic functions
positive-phase: Public generic functions
posterior-gp-means-and-covariances: Private ordinary functions
posterior-gps: Private generic functions
posterior-gps: Private generic functions
pprint-descriptions: Private ordinary functions
previous-rbm: Private ordinary functions
print-lump-parts: Private generic functions
print-lump-parts: Private generic functions
print-lump-parts: Private generic functions
print-lump-parts: Private generic functions
print-lump-parts: Private generic functions
print-lump-parts: Private generic functions
print-lump-parts: Private generic functions
print-lump-parts: Private generic functions
print-lump-parts: Private generic functions
print-name: Private ordinary functions
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-object: Public standalone methods
print-table: Public ordinary functions
prior-gp: Public generic functions
prior-gp: Public generic functions
process-batch: Private ordinary functions
products: Private generic functions
products: Private generic functions
push-all: Public macros

R
random-numbers: Private generic functions
random-numbers: Private generic functions
randoms: Private generic functions
randoms: Private generic functions
rank: Public ordinary functions
rbm: Public generic functions
rbm: Public generic functions
rbms: Public generic functions
rbms: Public generic functions
read-state: Public ordinary functions
read-state*: Public generic functions
read-state*: Public generic functions
read-state*: Public generic functions
read-state*: Public generic functions
read-state*: Public generic functions
read-state*: Public generic functions
read-state*: Public generic functions
read-state*: Public generic functions
read-state*: Public generic functions
read-state*: Public generic functions
read-state*: Public generic functions
reconstruction-error: Public ordinary functions
reconstruction-rmse: Public ordinary functions
rectify!: Private ordinary functions
remove-if*: Private ordinary functions
remove-trailing-nil-instances: Private generic functions
remove-trailing-nil-instances: Private generic functions
remove-trailing-nils: Private ordinary functions
remove-unkeyword-args: Private ordinary functions
renormalize-activations: Public ordinary functions
renormalize-mats: Private ordinary functions
repeatably: Public macros
repeatedly: Public macros
report-optimization-parameters: Public generic functions
report-optimization-parameters: Public generic functions
reset-counter: Public generic functions
reset-counter: Public generic functions
reset-counter: Public generic functions
reset-counter: Public generic functions
reset-optimization-monitors: Public generic functions
reset-optimization-monitors: Public generic functions
resize-chunk: Private generic functions
resize-chunk: Private generic functions
resize-chunk: Private generic functions
resize-chunk: Private generic functions
resize-chunk: Private generic functions
resolve-clump: Private generic functions
resolve-clump: Private generic functions
resolve-clump: Private generic functions
resolve-clumps: Private ordinary functions
resolve-step-monitors: Private ordinary functions
reverse-map: Private ordinary functions
rnn-backward-cuda-syncs: Private ordinary functions
rnn-forward-cuda-syncs: Private ordinary functions
rough-exponential: Private ordinary functions
roughness: Private generic functions
roughness: Private generic functions
rows-to-arrays: Public ordinary functions
running-stat-mean: Public ordinary functions
running-stat-variance: Public ordinary functions

S
sample: Public generic functions
sample: Public generic functions
sample-chunk: Public generic functions
sample-chunk: Public generic functions
sample-chunk: Public generic functions
sample-chunk: Public generic functions
sample-chunk: Public generic functions
sample-chunk: Public generic functions
sample-chunk: Public generic functions
sample-from: Public ordinary functions
sample-gp: Private ordinary functions
sample-hidden: Public ordinary functions
sample-stratified: Public ordinary functions
sample-visible: Public ordinary functions
sample-with-replacement*: Private ordinary functions
sample-without-replacement*: Private ordinary functions
samples: Private generic functions
samples: Private generic functions
save-state: Public ordinary functions
scale: Public generic functions
scale: Public generic functions
scale: Public generic functions
scale: Public generic functions
scale1: Private generic functions
scale1: Private generic functions
scale2: Private generic functions
scale2: Private generic functions
scaled-tanh: Public ordinary functions
scaled-tanh!: Private ordinary functions
scaled-tanh-derivative!: Private ordinary functions
scratch: Private generic functions
scratch: Private generic functions
sech: Public ordinary functions
segment-decay-fn: Private generic functions
segment-decay-fn: Private generic functions
segment-derivatives: Public generic functions
segment-derivatives: Public generic functions
segment-filter: Public generic functions
segment-filter: Public generic functions
segment-set: Public generic functions
segment-set: Public generic functions
segment-set: Public generic functions
segment-set->mat: Public ordinary functions
segment-set<-mat: Public ordinary functions
segment-weights: Public generic functions
segment-weights: Public generic functions
segment-weights: Public generic functions
segment-weights: Public generic functions
segmenter: Public generic functions
segmenter: Public generic functions
segments: Public generic functions
segments: Public generic functions
segments: Public generic functions
segments: Public generic functions
segments: Public generic functions
select-random-element: Public ordinary functions
seq-elt-fn: Public generic functions
seq-elt-fn: Public generic functions
seq-indices: Public generic functions
seq-indices: Public generic functions
set-chunk-mean: Public generic functions
set-chunk-mean: Public generic functions
set-chunk-mean: Public generic functions
set-chunk-mean: Public generic functions
set-chunk-mean: Public generic functions
set-chunk-mean: Public generic functions
set-chunk-mean: Public generic functions
set-chunk-mean: Public generic functions
set-dropout-and-rescale-activation-weights: Public ordinary functions
set-hidden-mean: Public generic functions
set-hidden-mean: Public generic functions
set-hidden-mean: Public generic functions
set-hidden-mean: Public generic functions
set-hidden-mean/1: Public ordinary functions
set-input: Public generic functions
set-input: Public generic functions
set-input: Public generic functions
set-input: Public generic functions
set-input: Public generic functions
set-input: Public generic functions
set-input: Public generic functions
set-input: Public generic functions
set-input: Public generic functions
set-max-n-stripes: Public generic functions
set-max-n-stripes: Public generic functions
set-max-n-stripes: Public generic functions
set-max-n-stripes: Public generic functions
set-max-n-stripes: Public generic functions
set-max-n-stripes: Public generic functions
set-max-n-stripes: Public generic functions
set-max-n-stripes: Public generic functions
set-max-n-stripes: Public generic functions
set-max-n-stripes: Public generic functions
set-mean: Private ordinary functions
set-mean*: Private ordinary functions
set-n-instances: Public ordinary functions
set-n-rbms: Private ordinary functions
set-n-stripes: Public generic functions
set-n-stripes: Public generic functions
set-n-stripes: Public generic functions
set-n-stripes: Public generic functions
set-n-stripes: Public generic functions
set-n-stripes: Public generic functions
set-n-stripes: Public generic functions
set-n-stripes: Public generic functions
set-n-stripes: Public generic functions
set-up-n-weight-uses: Private ordinary functions
set-visible-mean: Public generic functions
set-visible-mean: Public generic functions
set-visible-mean: Public generic functions
set-visible-mean/1: Public ordinary functions
Setf Expander, (setf max-n-stripes): Public setf expanders
Setf Expander, (setf n-rbms): Public setf expanders
Setf Expander, (setf n-stripes): Public setf expanders
settle-hidden-mean-field: Public ordinary functions
settle-mean-field: Public ordinary functions
settle-visible-mean-field: Public ordinary functions
shared-with-clump: Private generic functions
shared-with-clump: Private generic functions
shift: Public generic functions
shift: Public generic functions
shuffle: Public ordinary functions
shuffle!: Public ordinary functions
shuffle-groups: Public ordinary functions
shuffle-vector!: Private ordinary functions
sigmoid: Public ordinary functions
sigmoid!: Private ordinary functions
sigmoid-derivative!: Private ordinary functions
sign: Public ordinary functions
signal-variance: Private generic functions
signal-variance: Private generic functions
sin!: Private ordinary functions
sin-derivative!: Private ordinary functions
size: Public generic functions
size: Public generic functions
size: Public generic functions
size: Public generic functions
size: Public generic functions
size: Public generic functions
sort-confusion-classes: Public generic functions
sort-confusion-classes: Public generic functions
sort-groups!: Private ordinary functions
sorting-permutation: Public ordinary functions
spare-vectors: Private generic functions
spare-vectors: Private generic functions
sparser: Public generic functions
sparser: Public generic functions
sparsity-gradient-sources: Private generic functions
sparsity-gradient-sources: Private generic functions
sparsity-target: Private generic functions
sparsity-target: Private generic functions
special-case: Public macros
split-by-index: Private ordinary functions
split-fold/cont: Public ordinary functions
split-fold/mod: Public ordinary functions
split-plist: Public ordinary functions
split-stratified: Public ordinary functions
splot: Public macros
splot*: Public ordinary functions
spread-strata: Public ordinary functions
stable-set-difference: Private ordinary functions
start-indices: Private generic functions
start-indices: Private generic functions
start-session: Public ordinary functions
step-monitors: Public generic functions
step-monitors: Public generic functions
stratify: Public ordinary functions
stripe-binding: Private ordinary functions
stripe-end: Public generic functions
stripe-end: Public generic functions
stripe-end: Public generic functions
stripe-start: Public generic functions
stripe-start: Public generic functions
stripe-start: Public generic functions
stripedp: Public generic functions
stripedp: Public generic functions
stripedp: Public generic functions
subseq*: Public ordinary functions
sum-chunk-nodes-and-old-nodes: Private ordinary functions
sum-nodes-and-old-nodes: Private ordinary functions
sum1: Private generic functions
sum1: Private generic functions
sum2: Private generic functions
sum2: Private generic functions
supervise-mean-field/default: Public ordinary functions
swap-nodes: Private ordinary functions
swap-nodes*: Private ordinary functions

T
tanh!: Private ordinary functions
tanh-derivative!: Private ordinary functions
target: Public generic functions
target: Public generic functions
target: Public generic functions
terminate-optimization-p: Public ordinary functions
termination: Public generic functions
termination: Public generic functions
the!: Public macros
time->string: Private ordinary functions
time-step: Public ordinary functions
train-batch: Private ordinary functions
transpose-weights-p: Public generic functions
transpose-weights-p: Public generic functions
trivially-map-over-executors: Private ordinary functions
try-chance: Public ordinary functions

U
unfolder: Public generic functions
unfolder: Public generic functions
uninterned-symbol-p: Public ordinary functions
unroll-dbm: Public ordinary functions
unroll-dbn: Public ordinary functions
up-dbm: Public ordinary functions
update-all-weights/adam: Private ordinary functions
update-all-weights/adam-use-segment-derivatives: Private ordinary functions
update-all-weights/sgd-nesterov: Private ordinary functions
update-all-weights/sgd-no-momentum: Private ordinary functions
update-all-weights/sgd-normal: Private ordinary functions
update-direction: Private ordinary functions
update-gp: Public ordinary functions
update-gp*: Private ordinary functions
use-blas-on-chunk-p: Private ordinary functions
use-segment-derivatives-p: Public generic functions
use-segment-derivatives-p: Public generic functions

V
v1=v2+c*v3: Private ordinary functions
variance: Public generic functions
variance: Public generic functions
variance-adjustment: Public generic functions
variance-adjustment: Public generic functions
variance-adjustment: Public generic functions
variance-decay: Public generic functions
variance-decay: Public generic functions
variance-estimates: Private generic functions
variance-estimates: Private generic functions
variance-for-prediction: Public generic functions
variance-for-prediction: Public generic functions
version: Private ordinary functions
visible-and-conditioning-chunks: Private generic functions
visible-and-conditioning-chunks: Private generic functions
visible-chunks: Public generic functions
visible-chunks: Public generic functions
visible-chunks: Public generic functions
visible-nodes->means: Private ordinary functions
visible-sampling: Public generic functions
visible-sampling: Public generic functions

W
warp-length: Public generic functions
warp-length: Public generic functions
warp-start: Public generic functions
warp-start: Public generic functions
warped-time: Public ordinary functions
weight-decay: Public generic functions
weight-decay: Public generic functions
weight-deltas: Private generic functions
weight-deltas: Private generic functions
weight-indices: Public generic functions
weight-indices: Public generic functions
weight-lumps: Private generic functions
weight-lumps: Private generic functions
weight-penalty: Public generic functions
weight-penalty: Public generic functions
weights: Public generic functions
weights: Public generic functions
weights: Public generic functions
weights: Public generic functions
weights: Public generic functions
while: Public macros
with-command-stream: Public macros
with-copying: Public macros
with-logging-entry: Public macros
with-nil-on-arithmetic-error: Private macros
with-padded-attribute-printing: Public macros
with-safe-printing: Private macros
with-session: Public macros
with-sink-accumulator: Private macros
with-stripes: Public macros
with-versions: Private macros
with-weights-copied: Public macros
with-zero-on-underflow: Public macros
write-command: Public generic functions
write-command: Public generic functions
write-command: Public generic functions
write-data: Public generic functions
write-data: Public generic functions
write-data: Public generic functions
write-data: Public generic functions
write-data: Public generic functions
write-data: Public generic functions
write-mapping: Public generic functions
write-mapping: Public generic functions
write-mapping: Public generic functions
write-mapping: Public generic functions
write-mapping: Public generic functions
write-mapping: Public generic functions
write-mapping: Public generic functions
write-options: Private ordinary functions
write-state: Public ordinary functions
write-state*: Public generic functions
write-state*: Public generic functions
write-state*: Public generic functions
write-state*: Public generic functions
write-state*: Public generic functions
write-state*: Public generic functions
write-state*: Public generic functions
write-state*: Public generic functions
write-state*: Public generic functions
write-state*: Public generic functions
write-state*: Public generic functions

X
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions
x: Private generic functions

Y
y: Private generic functions
y: Private generic functions
y: Private generic functions
y: Private generic functions

Z
zero-chunk: Private ordinary functions
zero-weight-to-self: Private generic functions
zero-weight-to-self: Private generic functions
zero-weight-to-self: Private generic functions
zip-evenly: Public ordinary functions


A.3 Variables

Jump to:   *   +   @  
A   B   C   D   E   F   G   H   I   K   L   M   N   O   P   R   S   T   U   V   W   X   Y  
Index Entry  Section

*
*attribute-print-widths*: Private special variables
*bpn-being-built*: Private special variables
*command-stream*: Public special variables
*cuda-window-start-time*: Public special variables
*current-stripe*: Private special variables
*default-ext*: Public special variables
*default-int*: Public special variables
*default-max-n-evaluations*: Public special variables
*default-max-n-evaluations-per-line-search*: Public special variables
*default-max-n-line-searches*: Public special variables
*default-ratio*: Public special variables
*default-rho*: Public special variables
*default-sig*: Public special variables
*experiment-random-seed*: Public special variables
*gnuplot-binary*: Private special variables
*in-training-p*: Private special variables
*infinitely-empty-dataset*: Public special variables
*log-file*: Public special variables
*log-time*: Public special variables
*lumps-to-copy*: Private special variables
*n-softmax-threads*: Private special variables
*names-of-nested-bpns-in-rnn*: Private special variables
*no-array-bounds-check*: Public special variables
*objects-copied*: Private special variables
*print-label-counters*: Private special variables
*rnn*: Private special variables
*versions*: Private special variables
*warp-time*: Public special variables

+
+default-population-decay+: Private special variables
+default-variance-adjustment+: Private special variables

@
@mgl-attributes: Private special variables
@mgl-bm: Private special variables
@mgl-bp: Private special variables
@mgl-bp-*-lump: Private special variables
@mgl-bp-+-lump: Private special variables
@mgl-bp-abs-lump: Private special variables
@mgl-bp-activation-functions: Private special variables
@mgl-bp-activation-subnet: Private special variables
@mgl-bp-activations: Private special variables
@mgl-bp-arithmetic: Private special variables
@mgl-bp-batch-normalization: Private special variables
@mgl-bp-dropout-lump: Private special variables
@mgl-bp-embedding-lump: Private special variables
@mgl-bp-exp-lump: Private special variables
@mgl-bp-extension-api: Private special variables
@mgl-bp-gaussian-random-lump: Private special variables
@mgl-bp-input-lump: Private special variables
@mgl-bp-inputs: Private special variables
@mgl-bp-loss-lump: Private special variables
@mgl-bp-losses: Private special variables
@mgl-bp-lstm-subnet: Private special variables
@mgl-bp-lump: Private special variables
@mgl-bp-lumps: Private special variables
@mgl-bp-max-channel-lump: Private special variables
@mgl-bp-max-lump: Private special variables
@mgl-bp-min-lump: Private special variables
@mgl-bp-monitoring: Private special variables
@mgl-bp-normalized-lump: Private special variables
@mgl-bp-overview: Private special variables
@mgl-bp-relu-lump: Private special variables
@mgl-bp-rnn-operations: Private special variables
@mgl-bp-sample-binary-lump: Private special variables
@mgl-bp-scaled-tanh-lump: Private special variables
@mgl-bp-seq-barrier-lump: Private special variables
@mgl-bp-sigmoid-lump: Private special variables
@mgl-bp-sine-lump: Private special variables
@mgl-bp-softmax-xe-loss-lump: Private special variables
@mgl-bp-squared-difference-lump: Private special variables
@mgl-bp-stochasticity: Private special variables
@mgl-bp-sum-lump: Private special variables
@mgl-bp-tanh-lump: Private special variables
@mgl-bp-training: Private special variables
@mgl-bp-utilities: Private special variables
@mgl-bp-v*m-lump: Private special variables
@mgl-bp-weight-lump: Private special variables
@mgl-bpn: Private special variables
@mgl-cg: Private special variables
@mgl-classification: Private special variables
@mgl-classification-counter: Private special variables
@mgl-classification-measurer: Private special variables
@mgl-classification-monitor: Private special variables
@mgl-code-organization: Private special variables
@mgl-common: Private special variables
@mgl-confusion-matrix: Private special variables
@mgl-core: Private special variables
@mgl-counter: Private special variables
@mgl-counter-classes: Private special variables
@mgl-dataset: Private special variables
@mgl-dependencies: Private special variables
@mgl-diffun: Private special variables
@mgl-executors: Private special variables
@mgl-feature-encoding: Private special variables
@mgl-feature-selection: Private special variables
@mgl-features: Private special variables
@mgl-fnn: Private special variables
@mgl-fnn-tutorial: Private special variables
@mgl-gd: Private special variables
@mgl-gd-adam-optimizer: Private special variables
@mgl-gd-batch-gd-optimizer: Private special variables
@mgl-gd-normalized-batch-gd-optimizer: Private special variables
@mgl-gd-per-weight-optimization: Private special variables
@mgl-gd-segmented-gd-optimizer: Private special variables
@mgl-gd-sgd-optimizer: Private special variables
@mgl-gd-utilities: Private special variables
@mgl-glossary: Private special variables
@mgl-gp: Private special variables
@mgl-introduction: Private special variables
@mgl-links: Private special variables
@mgl-log: Private special variables
@mgl-manual: Private special variables
@mgl-measurer: Private special variables
@mgl-model-stripe: Private special variables
@mgl-monitor: Private special variables
@mgl-monitoring: Private special variables
@mgl-nlp: Private special variables
@mgl-nlp-bag-of-words: Private special variables
@mgl-opt: Private special variables
@mgl-opt-cost: Private special variables
@mgl-opt-extension-api: Private special variables
@mgl-opt-gradient-sink: Private special variables
@mgl-opt-gradient-source: Private special variables
@mgl-opt-iterative-optimizer: Private special variables
@mgl-opt-optimizer: Private special variables
@mgl-overview: Private special variables
@mgl-parameterized-executor-cache: Private special variables
@mgl-persistence: Private special variables
@mgl-resample: Private special variables
@mgl-resample-bagging: Private special variables
@mgl-resample-cross-validation: Private special variables
@mgl-resample-cv-bagging: Private special variables
@mgl-resample-misc: Private special variables
@mgl-resample-partitions: Private special variables
@mgl-resample-shuffling: Private special variables
@mgl-rnn: Private special variables
@mgl-rnn-time-warp: Private special variables
@mgl-rnn-tutorial: Private special variables
@mgl-sampler: Private special variables
@mgl-sampler-function-sampler: Private special variables

A
accumulator: Public classes
accumulator: Private classes
adam-time-step: Public classes
after-update-hook: Private classes
args: Public classes
attributes: Public classes
attributes: Public classes
attributes: Public classes
attributes: Public classes

B
batch-mean: Public classes
batch-size: Public classes
batch-size: Public classes
batch-size: Private classes
batch-variance: Public classes
before-update-hook: Public classes
bm: Public classes
bm: Public classes
bpn: Public classes

C
cached-activations1: Public classes
cached-activations2: Public classes
cached-version1: Public classes
cached-version2: Public classes
centered-evidence-outputs: Public classes
cg-args: Public classes
chunk: Public classes
chunk: Private structures
chunk1: Public classes
chunk2: Public classes
chunks: Public classes
clamping-cache: Public classes
cloud: Public classes
cloud: Private structures
cloud-a: Public classes
cloud-b: Public classes
clouds: Public classes
clouds-up-to-layers: Public classes
clumps: Public classes
concatenation: Public classes
concatenation-type: Public classes
conditioning-chunks: Public classes
Constant, flt-ctype: Public constants
Constant, least-negative-flt: Public constants
Constant, least-positive-flt: Public constants
Constant, most-negative-flt: Public constants
Constant, most-positive-flt: Public constants
cost: Public classes
counter: Public classes
counts: Public classes
covariance-fn: Public classes
covariance-lump-name: Public classes
covariances: Public classes
cuda-window-start-time: Public classes
current-time: Public classes

D
damping: Public classes
data: Public classes
dbn: Public classes
decodings: Public classes
default-value: Public classes
default-value: Public classes
default-value: Private classes
denominator: Public classes
depth: Private structures
derivatives: Public classes
derivatives: Public classes
dimensions: Public classes
drop-negative-index-p: Public classes
dropout: Public classes
dropout: Public classes
dropout: Public classes

E
encoded-feature-test: Public classes
encoded-feature-type: Public classes
encodings: Public classes
evidence-inputs: Public classes
evidence-outputs: Public classes
executor-cache: Public classes
expression: Public classes

F
feature-encoder: Public classes
feature-mapper: Public classes
filename: Public classes
first-trained-clump: Public classes
flt-ctype: Public constants
fn: Public classes
fn: Public classes
from-lumpy: Private structures

G
generator: Public classes
group-size: Public classes
group-size: Public classes
group-size: Public classes
group-size: Public classes
group-size: Public classes
group-size: Private classes

H
has-hidden-to-hidden-p: Public classes
has-inputs-p: Public classes
has-visible-to-visible-p: Public classes
hidden-and-conditioning-chunks: Public classes
hidden-chunks: Public classes
hidden-sampling: Private classes
hidden-source-chunk: Public classes

I
importance: Public classes
importances: Public classes
inactive-rbms: Public classes
incomings: Private structures
index: Public classes
indices-present: Public classes
input-row-indices: Public classes
input-seqs: Public classes
inputs: Public classes
into: Public classes
inverted-covariances: Public classes

K
kind: Public classes
kind: Public classes
kind: Private structures

L
lag: Private structures
last-cost: Public classes
last-eval: Public classes
layers: Public classes
learning-rate: Public classes
learning-rate: Private classes
least-negative-flt: Public constants
least-positive-flt: Public constants
length-scale: Public classes

M
m2: Public classes
mappings: Public classes
mask: Public classes
max-lag: Public classes
max-n-samples: Public classes
max-n-stripes: Public classes
max-n-stripes: Public classes
max-n-stripes: Public classes
max-time: Public classes
mean: Public classes
mean: Public classes
mean-decay: Public classes
mean-decay-decay: Public classes
mean-estimates: Public classes
mean-fn: Public classes
mean-lump-name: Public classes
means: Public classes
means: Public classes
measurer: Public classes
momentum: Public classes
momentum: Private classes
momentum-type: Public classes
momentum-type: Private classes
monitors: Public classes
monitors: Public classes
most-negative-flt: Public constants
most-positive-flt: Public constants

N
n: Public classes
n: Public classes
n: Public classes
n-gibbs: Private classes
n-instances: Public classes
n-instances-in-batch: Public classes
n-particles: Public classes
n-samples: Public classes
n-steps: Public classes
n-stripes: Public classes
n-stripes: Public classes
n-weight-uses-in-batch: Public classes
n-weight-uses-in-batch: Public classes
name: Public classes
name: Public classes
name: Public classes
name: Public classes
name: Private structures
next-node-inputs: Public classes
nodes: Public classes
nodes: Public classes
normalization: Public classes
numerator: Public classes

O
old-nodes: Public classes
old-products: Public classes
old-sum1: Public classes
on-cg-batch-done: Public classes
on-n-instances-changed: Public classes
on-optimization-finished: Public classes
on-optimization-started: Public classes
optimizers: Public classes
options: Public classes
original: Private structures

P
parameter-indices: Public classes
path: Private structures
per-label-counters: Public classes
period: Public classes
period: Public classes
persistent-chains: Public classes
populate-convert-to-dbm-sample-fn: Public classes
populate-key: Public classes
populate-map-cache-lazily-from-dbm: Public classes
populate-periodic-fn: Public classes
population-decay: Public classes
population-mean: Public classes
population-variance: Public classes
posterior-gps: Public classes
prior-gp: Public classes
products: Public classes

R
random-numbers: Public classes
randoms: Public classes
rbms: Public classes
remove-trailing-nil-instances: Public classes
roughness: Public classes

S
samples: Public classes
scale: Public classes
scale: Public classes
scale: Private classes
scale1: Public classes
scale2: Public classes
scratch: Public classes
segment-decay-fn: Private classes
segment-filter: Public classes
segment-set: Public classes
segment-set: Private classes
segmenter: Public classes
segments: Public classes
segments: Public classes
seq-elt-fn: Public classes
seq-indices: Public classes
shared-with-clump: Public classes
shift: Public classes
signal-variance: Public classes
size: Public classes
size: Public classes
size: Public classes
Slot, accumulator: Public classes
Slot, accumulator: Private classes
Slot, adam-time-step: Public classes
Slot, after-update-hook: Private classes
Slot, args: Public classes
Slot, attributes: Public classes
Slot, attributes: Public classes
Slot, attributes: Public classes
Slot, attributes: Public classes
Slot, batch-mean: Public classes
Slot, batch-size: Public classes
Slot, batch-size: Public classes
Slot, batch-size: Private classes
Slot, batch-variance: Public classes
Slot, before-update-hook: Public classes
Slot, bm: Public classes
Slot, bm: Public classes
Slot, bpn: Public classes
Slot, cached-activations1: Public classes
Slot, cached-activations2: Public classes
Slot, cached-version1: Public classes
Slot, cached-version2: Public classes
Slot, centered-evidence-outputs: Public classes
Slot, cg-args: Public classes
Slot, chunk: Public classes
Slot, chunk: Private structures
Slot, chunk1: Public classes
Slot, chunk2: Public classes
Slot, chunks: Public classes
Slot, clamping-cache: Public classes
Slot, cloud: Public classes
Slot, cloud: Private structures
Slot, cloud-a: Public classes
Slot, cloud-b: Public classes
Slot, clouds: Public classes
Slot, clouds-up-to-layers: Public classes
Slot, clumps: Public classes
Slot, concatenation: Public classes
Slot, concatenation-type: Public classes
Slot, conditioning-chunks: Public classes
Slot, cost: Public classes
Slot, counter: Public classes
Slot, counts: Public classes
Slot, covariance-fn: Public classes
Slot, covariance-lump-name: Public classes
Slot, covariances: Public classes
Slot, cuda-window-start-time: Public classes
Slot, current-time: Public classes
Slot, damping: Public classes
Slot, data: Public classes
Slot, dbn: Public classes
Slot, decodings: Public classes
Slot, default-value: Public classes
Slot, default-value: Public classes
Slot, default-value: Private classes
Slot, denominator: Public classes
Slot, depth: Private structures
Slot, derivatives: Public classes
Slot, derivatives: Public classes
Slot, dimensions: Public classes
Slot, drop-negative-index-p: Public classes
Slot, dropout: Public classes
Slot, dropout: Public classes
Slot, dropout: Public classes
Slot, encoded-feature-test: Public classes
Slot, encoded-feature-type: Public classes
Slot, encodings: Public classes
Slot, evidence-inputs: Public classes
Slot, evidence-outputs: Public classes
Slot, executor-cache: Public classes
Slot, expression: Public classes
Slot, feature-encoder: Public classes
Slot, feature-mapper: Public classes
Slot, filename: Public classes
Slot, first-trained-clump: Public classes
Slot, fn: Public classes
Slot, fn: Public classes
Slot, from-lumpy: Private structures
Slot, generator: Public classes
Slot, group-size: Public classes
Slot, group-size: Public classes
Slot, group-size: Public classes
Slot, group-size: Public classes
Slot, group-size: Public classes
Slot, group-size: Private classes
Slot, has-hidden-to-hidden-p: Public classes
Slot, has-inputs-p: Public classes
Slot, has-visible-to-visible-p: Public classes
Slot, hidden-and-conditioning-chunks: Public classes
Slot, hidden-chunks: Public classes
Slot, hidden-sampling: Private classes
Slot, hidden-source-chunk: Public classes
Slot, importance: Public classes
Slot, importances: Public classes
Slot, inactive-rbms: Public classes
Slot, incomings: Private structures
Slot, index: Public classes
Slot, indices-present: Public classes
Slot, input-row-indices: Public classes
Slot, input-seqs: Public classes
Slot, inputs: Public classes
Slot, into: Public classes
Slot, inverted-covariances: Public classes
Slot, kind: Public classes
Slot, kind: Public classes
Slot, kind: Private structures
Slot, lag: Private structures
Slot, last-cost: Public classes
Slot, last-eval: Public classes
Slot, layers: Public classes
Slot, learning-rate: Public classes
Slot, learning-rate: Private classes
Slot, length-scale: Public classes
Slot, m2: Public classes
Slot, mappings: Public classes
Slot, mask: Public classes
Slot, max-lag: Public classes
Slot, max-n-samples: Public classes
Slot, max-n-stripes: Public classes
Slot, max-n-stripes: Public classes
Slot, max-n-stripes: Public classes
Slot, max-time: Public classes
Slot, mean: Public classes
Slot, mean: Public classes
Slot, mean-decay: Public classes
Slot, mean-decay-decay: Public classes
Slot, mean-estimates: Public classes
Slot, mean-fn: Public classes
Slot, mean-lump-name: Public classes
Slot, means: Public classes
Slot, means: Public classes
Slot, measurer: Public classes
Slot, momentum: Public classes
Slot, momentum: Private classes
Slot, momentum-type: Public classes
Slot, momentum-type: Private classes
Slot, monitors: Public classes
Slot, monitors: Public classes
Slot, n: Public classes
Slot, n: Public classes
Slot, n: Public classes
Slot, n-gibbs: Private classes
Slot, n-instances: Public classes
Slot, n-instances-in-batch: Public classes
Slot, n-particles: Public classes
Slot, n-samples: Public classes
Slot, n-steps: Public classes
Slot, n-stripes: Public classes
Slot, n-stripes: Public classes
Slot, n-weight-uses-in-batch: Public classes
Slot, n-weight-uses-in-batch: Public classes
Slot, name: Public classes
Slot, name: Public classes
Slot, name: Public classes
Slot, name: Public classes
Slot, name: Private structures
Slot, next-node-inputs: Public classes
Slot, nodes: Public classes
Slot, nodes: Public classes
Slot, normalization: Public classes
Slot, numerator: Public classes
Slot, old-nodes: Public classes
Slot, old-products: Public classes
Slot, old-sum1: Public classes
Slot, on-cg-batch-done: Public classes
Slot, on-n-instances-changed: Public classes
Slot, on-optimization-finished: Public classes
Slot, on-optimization-started: Public classes
Slot, optimizers: Public classes
Slot, options: Public classes
Slot, original: Private structures
Slot, parameter-indices: Public classes
Slot, path: Private structures
Slot, per-label-counters: Public classes
Slot, period: Public classes
Slot, period: Public classes
Slot, persistent-chains: Public classes
Slot, populate-convert-to-dbm-sample-fn: Public classes
Slot, populate-key: Public classes
Slot, populate-map-cache-lazily-from-dbm: Public classes
Slot, populate-periodic-fn: Public classes
Slot, population-decay: Public classes
Slot, population-mean: Public classes
Slot, population-variance: Public classes
Slot, posterior-gps: Public classes
Slot, prior-gp: Public classes
Slot, products: Public classes
Slot, random-numbers: Public classes
Slot, randoms: Public classes
Slot, rbms: Public classes
Slot, remove-trailing-nil-instances: Public classes
Slot, roughness: Public classes
Slot, samples: Public classes
Slot, scale: Public classes
Slot, scale: Public classes
Slot, scale: Private classes
Slot, scale1: Public classes
Slot, scale2: Public classes
Slot, scratch: Public classes
Slot, segment-decay-fn: Private classes
Slot, segment-filter: Public classes
Slot, segment-set: Public classes
Slot, segment-set: Private classes
Slot, segmenter: Public classes
Slot, segments: Public classes
Slot, segments: Public classes
Slot, seq-elt-fn: Public classes
Slot, seq-indices: Public classes
Slot, shared-with-clump: Public classes
Slot, shift: Public classes
Slot, signal-variance: Public classes
Slot, size: Public classes
Slot, size: Public classes
Slot, size: Public classes
Slot, spare-vectors: Public classes
Slot, sparser: Private classes
Slot, sparsity-gradient-sources: Private classes
Slot, sparsity-target: Public classes
Slot, start-indices: Public classes
Slot, step-monitors: Public classes
Slot, sum1: Public classes
Slot, sum2: Public classes
Slot, symbol: Private structures
Slot, target: Public classes
Slot, termination: Public classes
Slot, transpose-weights-p: Public classes
Slot, transposep: Private structures
Slot, unfolder: Public classes
Slot, use-segment-derivatives-p: Private classes
Slot, variance: Public classes
Slot, variance-adjustment: Public classes
Slot, variance-adjustment: Public classes
Slot, variance-decay: Public classes
Slot, variance-estimates: Public classes
Slot, variance-for-prediction: Public classes
Slot, visible-and-conditioning-chunks: Public classes
Slot, visible-chunks: Public classes
Slot, visible-sampling: Private classes
Slot, warp-length: Public classes
Slot, warp-start: Public classes
Slot, weight-decay: Private classes
Slot, weight-deltas: Private classes
Slot, weight-indices: Public classes
Slot, weight-lumps: Public classes
Slot, weight-penalty: Private classes
Slot, weights: Public classes
Slot, weights: Public classes
Slot, weights: Public classes
Slot, weights: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, x: Public classes
Slot, y: Public classes
Slot, y: Public classes
spare-vectors: Public classes
sparser: Private classes
sparsity-gradient-sources: Private classes
sparsity-target: Public classes
Special Variable, *attribute-print-widths*: Private special variables
Special Variable, *bpn-being-built*: Private special variables
Special Variable, *command-stream*: Public special variables
Special Variable, *cuda-window-start-time*: Public special variables
Special Variable, *current-stripe*: Private special variables
Special Variable, *default-ext*: Public special variables
Special Variable, *default-int*: Public special variables
Special Variable, *default-max-n-evaluations*: Public special variables
Special Variable, *default-max-n-evaluations-per-line-search*: Public special variables
Special Variable, *default-max-n-line-searches*: Public special variables
Special Variable, *default-ratio*: Public special variables
Special Variable, *default-rho*: Public special variables
Special Variable, *default-sig*: Public special variables
Special Variable, *experiment-random-seed*: Public special variables
Special Variable, *gnuplot-binary*: Private special variables
Special Variable, *in-training-p*: Private special variables
Special Variable, *infinitely-empty-dataset*: Public special variables
Special Variable, *log-file*: Public special variables
Special Variable, *log-time*: Public special variables
Special Variable, *lumps-to-copy*: Private special variables
Special Variable, *n-softmax-threads*: Private special variables
Special Variable, *names-of-nested-bpns-in-rnn*: Private special variables
Special Variable, *no-array-bounds-check*: Public special variables
Special Variable, *objects-copied*: Private special variables
Special Variable, *print-label-counters*: Private special variables
Special Variable, *rnn*: Private special variables
Special Variable, *versions*: Private special variables
Special Variable, *warp-time*: Public special variables
Special Variable, +default-population-decay+: Private special variables
Special Variable, +default-variance-adjustment+: Private special variables
Special Variable, @mgl-attributes: Private special variables
Special Variable, @mgl-bm: Private special variables
Special Variable, @mgl-bp: Private special variables
Special Variable, @mgl-bp-*-lump: Private special variables
Special Variable, @mgl-bp-+-lump: Private special variables
Special Variable, @mgl-bp-abs-lump: Private special variables
Special Variable, @mgl-bp-activation-functions: Private special variables
Special Variable, @mgl-bp-activation-subnet: Private special variables
Special Variable, @mgl-bp-activations: Private special variables
Special Variable, @mgl-bp-arithmetic: Private special variables
Special Variable, @mgl-bp-batch-normalization: Private special variables
Special Variable, @mgl-bp-dropout-lump: Private special variables
Special Variable, @mgl-bp-embedding-lump: Private special variables
Special Variable, @mgl-bp-exp-lump: Private special variables
Special Variable, @mgl-bp-extension-api: Private special variables
Special Variable, @mgl-bp-gaussian-random-lump: Private special variables
Special Variable, @mgl-bp-input-lump: Private special variables
Special Variable, @mgl-bp-inputs: Private special variables
Special Variable, @mgl-bp-loss-lump: Private special variables
Special Variable, @mgl-bp-losses: Private special variables
Special Variable, @mgl-bp-lstm-subnet: Private special variables
Special Variable, @mgl-bp-lump: Private special variables
Special Variable, @mgl-bp-lumps: Private special variables
Special Variable, @mgl-bp-max-channel-lump: Private special variables
Special Variable, @mgl-bp-max-lump: Private special variables
Special Variable, @mgl-bp-min-lump: Private special variables
Special Variable, @mgl-bp-monitoring: Private special variables
Special Variable, @mgl-bp-normalized-lump: Private special variables
Special Variable, @mgl-bp-overview: Private special variables
Special Variable, @mgl-bp-relu-lump: Private special variables
Special Variable, @mgl-bp-rnn-operations: Private special variables
Special Variable, @mgl-bp-sample-binary-lump: Private special variables
Special Variable, @mgl-bp-scaled-tanh-lump: Private special variables
Special Variable, @mgl-bp-seq-barrier-lump: Private special variables
Special Variable, @mgl-bp-sigmoid-lump: Private special variables
Special Variable, @mgl-bp-sine-lump: Private special variables
Special Variable, @mgl-bp-softmax-xe-loss-lump: Private special variables
Special Variable, @mgl-bp-squared-difference-lump: Private special variables
Special Variable, @mgl-bp-stochasticity: Private special variables
Special Variable, @mgl-bp-sum-lump: Private special variables
Special Variable, @mgl-bp-tanh-lump: Private special variables
Special Variable, @mgl-bp-training: Private special variables
Special Variable, @mgl-bp-utilities: Private special variables
Special Variable, @mgl-bp-v*m-lump: Private special variables
Special Variable, @mgl-bp-weight-lump: Private special variables
Special Variable, @mgl-bpn: Private special variables
Special Variable, @mgl-cg: Private special variables
Special Variable, @mgl-classification: Private special variables
Special Variable, @mgl-classification-counter: Private special variables
Special Variable, @mgl-classification-measurer: Private special variables
Special Variable, @mgl-classification-monitor: Private special variables
Special Variable, @mgl-code-organization: Private special variables
Special Variable, @mgl-common: Private special variables
Special Variable, @mgl-confusion-matrix: Private special variables
Special Variable, @mgl-core: Private special variables
Special Variable, @mgl-counter: Private special variables
Special Variable, @mgl-counter-classes: Private special variables
Special Variable, @mgl-dataset: Private special variables
Special Variable, @mgl-dependencies: Private special variables
Special Variable, @mgl-diffun: Private special variables
Special Variable, @mgl-executors: Private special variables
Special Variable, @mgl-feature-encoding: Private special variables
Special Variable, @mgl-feature-selection: Private special variables
Special Variable, @mgl-features: Private special variables
Special Variable, @mgl-fnn: Private special variables
Special Variable, @mgl-fnn-tutorial: Private special variables
Special Variable, @mgl-gd: Private special variables
Special Variable, @mgl-gd-adam-optimizer: Private special variables
Special Variable, @mgl-gd-batch-gd-optimizer: Private special variables
Special Variable, @mgl-gd-normalized-batch-gd-optimizer: Private special variables
Special Variable, @mgl-gd-per-weight-optimization: Private special variables
Special Variable, @mgl-gd-segmented-gd-optimizer: Private special variables
Special Variable, @mgl-gd-sgd-optimizer: Private special variables
Special Variable, @mgl-gd-utilities: Private special variables
Special Variable, @mgl-glossary: Private special variables
Special Variable, @mgl-gp: Private special variables
Special Variable, @mgl-introduction: Private special variables
Special Variable, @mgl-links: Private special variables
Special Variable, @mgl-log: Private special variables
Special Variable, @mgl-manual: Private special variables
Special Variable, @mgl-measurer: Private special variables
Special Variable, @mgl-model-stripe: Private special variables
Special Variable, @mgl-monitor: Private special variables
Special Variable, @mgl-monitoring: Private special variables
Special Variable, @mgl-nlp: Private special variables
Special Variable, @mgl-nlp-bag-of-words: Private special variables
Special Variable, @mgl-opt: Private special variables
Special Variable, @mgl-opt-cost: Private special variables
Special Variable, @mgl-opt-extension-api: Private special variables
Special Variable, @mgl-opt-gradient-sink: Private special variables
Special Variable, @mgl-opt-gradient-source: Private special variables
Special Variable, @mgl-opt-iterative-optimizer: Private special variables
Special Variable, @mgl-opt-optimizer: Private special variables
Special Variable, @mgl-overview: Private special variables
Special Variable, @mgl-parameterized-executor-cache: Private special variables
Special Variable, @mgl-persistence: Private special variables
Special Variable, @mgl-resample: Private special variables
Special Variable, @mgl-resample-bagging: Private special variables
Special Variable, @mgl-resample-cross-validation: Private special variables
Special Variable, @mgl-resample-cv-bagging: Private special variables
Special Variable, @mgl-resample-misc: Private special variables
Special Variable, @mgl-resample-partitions: Private special variables
Special Variable, @mgl-resample-shuffling: Private special variables
Special Variable, @mgl-rnn: Private special variables
Special Variable, @mgl-rnn-time-warp: Private special variables
Special Variable, @mgl-rnn-tutorial: Private special variables
Special Variable, @mgl-sampler: Private special variables
Special Variable, @mgl-sampler-function-sampler: Private special variables
start-indices: Public classes
step-monitors: Public classes
sum1: Public classes
sum2: Public classes
symbol: Private structures

T
target: Public classes
termination: Public classes
transpose-weights-p: Public classes
transposep: Private structures

U
unfolder: Public classes
use-segment-derivatives-p: Private classes

V
variance: Public classes
variance-adjustment: Public classes
variance-adjustment: Public classes
variance-decay: Public classes
variance-estimates: Public classes
variance-for-prediction: Public classes
visible-and-conditioning-chunks: Public classes
visible-chunks: Public classes
visible-sampling: Private classes

W
warp-length: Public classes
warp-start: Public classes
weight-decay: Private classes
weight-deltas: Private classes
weight-indices: Public classes
weight-lumps: Public classes
weight-penalty: Private classes
weights: Public classes
weights: Public classes
weights: Public classes
weights: Public classes

X
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes
x: Public classes

Y
y: Public classes
y: Public classes


A.4 Data types

Jump to:   -  
A   B   C   D   E   F   G   I   L   M   N   O   P   R   S   T   U  
Index Entry  Section

-
->*: Public classes
->+: Public classes
->abs: Public classes
->activation: Public classes
->batch-normalization: Public classes
->batch-normalized: Public classes
->constant: Private classes
->dropout: Public classes
->embedding: Public classes
->exp: Public classes
->gaussian-random: Public classes
->gp: Public classes
->input: Public classes
->loss: Public classes
->lstm: Public classes
->max: Public classes
->max-channel: Public classes
->min: Public classes
->normalized: Public classes
->periodic: Public classes
->ref: Public classes
->relu: Public classes
->rep: Public classes
->rough-exponential: Public classes
->sample-binary: Public classes
->scaled-tanh: Public classes
->seq-barrier: Public classes
->sigmoid: Public classes
->sin: Public classes
->softmax-xe-loss: Public classes
->squared-difference: Public classes
->stretch: Public classes
->sum: Public classes
->tanh: Public classes
->v*m: Public classes
->weight: Public classes

A
adam-optimizer: Public classes
attributed: Public classes

B
backprop.lisp: The mgl/src/backprop․lisp file
bag-of-words-encoder: Public classes
base-gd-optimizer: Private classes
basic-counter: Public classes
batch-gd-optimizer: Public classes
bm: Public classes
bm-learner: Public classes
bm-mcmc-learner: Private classes
bm-pcd-learner: Public classes
boltzmann-machine.lisp: The mgl/src/boltzmann-machine․lisp file
bp-learner: Public classes
bpn: Public classes

C
cg-optimizer: Public classes
cheating-sparsity-gradient-source: Public classes
chunk: Public classes
Class, ->*: Public classes
Class, ->+: Public classes
Class, ->abs: Public classes
Class, ->activation: Public classes
Class, ->batch-normalization: Public classes
Class, ->batch-normalized: Public classes
Class, ->constant: Private classes
Class, ->dropout: Public classes
Class, ->embedding: Public classes
Class, ->exp: Public classes
Class, ->gaussian-random: Public classes
Class, ->gp: Public classes
Class, ->input: Public classes
Class, ->loss: Public classes
Class, ->lstm: Public classes
Class, ->max: Public classes
Class, ->max-channel: Public classes
Class, ->min: Public classes
Class, ->normalized: Public classes
Class, ->periodic: Public classes
Class, ->ref: Public classes
Class, ->relu: Public classes
Class, ->rep: Public classes
Class, ->rough-exponential: Public classes
Class, ->sample-binary: Public classes
Class, ->scaled-tanh: Public classes
Class, ->seq-barrier: Public classes
Class, ->sigmoid: Public classes
Class, ->sin: Public classes
Class, ->softmax-xe-loss: Public classes
Class, ->squared-difference: Public classes
Class, ->stretch: Public classes
Class, ->sum: Public classes
Class, ->tanh: Public classes
Class, ->v*m: Public classes
Class, ->weight: Public classes
Class, adam-optimizer: Public classes
Class, attributed: Public classes
Class, bag-of-words-encoder: Public classes
Class, base-gd-optimizer: Private classes
Class, basic-counter: Public classes
Class, batch-gd-optimizer: Public classes
Class, bm: Public classes
Class, bm-learner: Public classes
Class, bm-mcmc-learner: Private classes
Class, bm-pcd-learner: Public classes
Class, bp-learner: Public classes
Class, bpn: Public classes
Class, cg-optimizer: Public classes
Class, cheating-sparsity-gradient-source: Public classes
Class, chunk: Public classes
Class, classification-accuracy-counter: Public classes
Class, cloud: Public classes
Class, clump: Public classes
Class, command: Public classes
Class, concat-counter: Public classes
Class, conditioning-chunk: Public classes
Class, confusion-matrix: Public classes
Class, constant-chunk: Public classes
Class, constrained-poisson-chunk: Public classes
Class, cross-entropy-counter: Public classes
Class, data-mapping: Public classes
Class, dbm: Public classes
Class, dbn: Public classes
Class, decayed-cg-optimizer-mixin: Private classes
Class, diffun: Public classes
Class, encoder/decoder: Public classes
Class, exp-normalized-group-chunk: Public classes
Class, factored-cloud: Public classes
Class, factored-cloud-shared-chunk: Private classes
Class, file-mapping: Public classes
Class, fnn: Public classes
Class, fnn-clamping-cache: Public classes
Class, fnn-gp: Public classes
Class, full-cloud: Public classes
Class, function-mapping: Public classes
Class, function-sampler: Public classes
Class, gaussian-chunk: Public classes
Class, gd-optimizer: Private classes
Class, gp: Public classes
Class, iterative-optimizer: Public classes
Class, lump: Public classes
Class, mapping: Public classes
Class, monitor: Public classes
Class, normal-sparsity-gradient-source: Public classes
Class, normalized-batch-gd-optimizer: Public classes
Class, normalized-group-chunk: Private classes
Class, parameterized-executor-cache-mixin: Public classes
Class, per-weight-batch-gd-optimizer: Public classes
Class, periodic-fn: Public classes
Class, plot: Public classes
Class, posterior-gp: Public classes
Class, prior-gp: Public classes
Class, rbm: Public classes
Class, rbm-cd-learner: Public classes
Class, relu-chunk: Public classes
Class, rmse-counter: Public classes
Class, rnn: Public classes
Class, running-stat: Public classes
Class, segment-set: Public classes
Class, segmented-gd-optimizer: Public classes
Class, sgd-optimizer: Public classes
Class, sigmoid-chunk: Public classes
Class, softmax-chunk: Public classes
Class, softmax-label-chunk: Public classes
Class, sparse-bm-learner: Private classes
Class, sparsity-gradient-source: Public classes
Class, temporal-chunk: Public classes
classification-accuracy-counter: Public classes
classification.lisp: The mgl/src/classification․lisp file
cloud: Public classes
clump: Public classes
command: Public classes
common.lisp: The mgl/src/common․lisp file
concat-counter: Public classes
conditioning-chunk: Public classes
confusion-matrix: Public classes
conjugate-gradient.lisp: The mgl/src/conjugate-gradient․lisp file
constant-chunk: Public classes
constrained-poisson-chunk: Public classes
copy.lisp: The mgl/src/copy․lisp file
core.lisp: The mgl/src/core․lisp file
counter.lisp: The mgl/src/counter․lisp file
cross-entropy-counter: Public classes

D
data-mapping: Public classes
dataset.lisp: The mgl/src/dataset․lisp file
dbm: Public classes
dbn: Public classes
decayed-cg-optimizer-mixin: Private classes
deep-belief-network.lisp: The mgl/src/deep-belief-network․lisp file
differentiable-function.lisp: The mgl/src/differentiable-function․lisp file
diffun: Public classes
doc.lisp: The mgl/src/doc․lisp file

E
encoder/decoder: Public classes
exp-normalized-group-chunk: Public classes

F
factored-cloud: Public classes
factored-cloud-shared-chunk: Private classes
feature.lisp: The mgl/src/feature․lisp file
File, backprop.lisp: The mgl/src/backprop․lisp file
File, boltzmann-machine.lisp: The mgl/src/boltzmann-machine․lisp file
File, classification.lisp: The mgl/src/classification․lisp file
File, common.lisp: The mgl/src/common․lisp file
File, conjugate-gradient.lisp: The mgl/src/conjugate-gradient․lisp file
File, copy.lisp: The mgl/src/copy․lisp file
File, core.lisp: The mgl/src/core․lisp file
File, counter.lisp: The mgl/src/counter․lisp file
File, dataset.lisp: The mgl/src/dataset․lisp file
File, deep-belief-network.lisp: The mgl/src/deep-belief-network․lisp file
File, differentiable-function.lisp: The mgl/src/differentiable-function․lisp file
File, doc.lisp: The mgl/src/doc․lisp file
File, feature.lisp: The mgl/src/feature․lisp file
File, gaussian-process.lisp: The mgl/src/gaussian-process․lisp file
File, gnuplot.lisp: The mgl-gnuplot/src/gnuplot․lisp file
File, gradient-descent.lisp: The mgl/src/gradient-descent․lisp file
File, log.lisp: The mgl/src/log․lisp file
File, lumps.lisp: The mgl/src/lumps․lisp file
File, measure.lisp: The mgl/src/measure․lisp file
File, mgl-gnuplot.asd: The mgl-gnuplot/mgl-gnuplot․asd file
File, mgl.asd: The mgl/mgl․asd file
File, mgl.lisp: The mgl/src/mgl․lisp file
File, monitor.lisp: The mgl/src/monitor․lisp file
File, nlp.lisp: The mgl/src/nlp․lisp file
File, optimize.lisp: The mgl/src/optimize․lisp file
File, package.lisp: The mgl/src/package․lisp file
File, package.lisp: The mgl-gnuplot/src/package․lisp file
File, resample.lisp: The mgl/src/resample․lisp file
File, unroll.lisp: The mgl/src/unroll․lisp file
File, util.lisp: The mgl/src/util․lisp file
file-mapping: Public classes
flt: Public types
flt-vector: Public types
fnn: Public classes
fnn-clamping-cache: Public classes
fnn-gp: Public classes
full-cloud: Public classes
function-mapping: Public classes
function-sampler: Public classes

G
gaussian-chunk: Public classes
gaussian-process.lisp: The mgl/src/gaussian-process․lisp file
gd-optimizer: Private classes
gnuplot.lisp: The mgl-gnuplot/src/gnuplot․lisp file
gp: Public classes
gradient-descent.lisp: The mgl/src/gradient-descent․lisp file

I
incoming: Private structures
index: Public types
index-vector: Public types
iterative-optimizer: Public classes

L
lagged-clump: Private structures
limit: Private types
log.lisp: The mgl/src/log․lisp file
lump: Public classes
lumps.lisp: The mgl/src/lumps․lisp file
lumpy: Private structures

M
mapping: Public classes
measure.lisp: The mgl/src/measure․lisp file
mgl: The mgl system
mgl: The mgl package
mgl-bm: The mgl-bm package
mgl-bp: The mgl-bp package
mgl-cg: The mgl-cg package
mgl-common: The mgl-common package
mgl-core: The mgl-core package
mgl-dataset: The mgl-dataset package
mgl-diffun: The mgl-diffun package
mgl-gd: The mgl-gd package
mgl-gnuplot: The mgl-gnuplot system
mgl-gnuplot: The mgl-gnuplot package
mgl-gnuplot.asd: The mgl-gnuplot/mgl-gnuplot․asd file
mgl-gp: The mgl-gp package
mgl-log: The mgl-log package
mgl-nlp: The mgl-nlp package
mgl-opt: The mgl-opt package
mgl-resample: The mgl-resample package
mgl-unroll: The mgl-unroll package
mgl-util: The mgl-util package
mgl.asd: The mgl/mgl.asd file
mgl.lisp: The mgl/src/mgl.lisp file
Module, src: The mgl/src module
Module, src: The mgl-gnuplot/src module
monitor: Public classes
monitor.lisp: The mgl/src/monitor.lisp file

N
nlp.lisp: The mgl/src/nlp.lisp file
normal-sparsity-gradient-source: Public classes
normalized-batch-gd-optimizer: Public classes
normalized-group-chunk: Private classes

O
optimize.lisp: The mgl/src/optimize.lisp file

P
Package, mgl: The mgl package
Package, mgl-bm: The mgl-bm package
Package, mgl-bp: The mgl-bp package
Package, mgl-cg: The mgl-cg package
Package, mgl-common: The mgl-common package
Package, mgl-core: The mgl-core package
Package, mgl-dataset: The mgl-dataset package
Package, mgl-diffun: The mgl-diffun package
Package, mgl-gd: The mgl-gd package
Package, mgl-gnuplot: The mgl-gnuplot package
Package, mgl-gp: The mgl-gp package
Package, mgl-log: The mgl-log package
Package, mgl-nlp: The mgl-nlp package
Package, mgl-opt: The mgl-opt package
Package, mgl-resample: The mgl-resample package
Package, mgl-unroll: The mgl-unroll package
Package, mgl-util: The mgl-util package
package.lisp: The mgl/src/package.lisp file
package.lisp: The mgl-gnuplot/src/package.lisp file
parameterized-executor-cache-mixin: Public classes
per-weight-batch-gd-optimizer: Public classes
periodic-fn: Public classes
plot: Public classes
positive-flt: Public types
posterior-gp: Public classes
prior-gp: Public classes

R
rbm: Public classes
rbm-cd-learner: Public classes
relu-chunk: Public classes
resample.lisp: The mgl/src/resample.lisp file
rmse-counter: Public classes
rnn: Public classes
running-stat: Public classes

S
segment-set: Public classes
segmented-gd-optimizer: Public classes
sgd-optimizer: Public classes
sigmoid-chunk: Public classes
softmax-chunk: Public classes
softmax-label-chunk: Public classes
sparse-bm-learner: Private classes
sparsity-gradient-source: Public classes
src: The mgl/src module
src: The mgl-gnuplot/src module
Structure, incoming: Private structures
Structure, lagged-clump: Private structures
Structure, lumpy: Private structures
System, mgl: The mgl system
System, mgl-gnuplot: The mgl-gnuplot system

T
temporal-chunk: Public classes
Type, flt: Public types
Type, flt-vector: Public types
Type, index: Public types
Type, index-vector: Public types
Type, limit: Private types
Type, positive-flt: Public types

U
unroll.lisp: The mgl/src/unroll.lisp file
util.lisp: The mgl/src/util.lisp file