This is the mgl Reference Manual, version 0.1.0, generated automatically by Declt version 4.0 beta 2 "William Riker" on Sun Dec 15 07:03:13 2024 GMT+0.
The main system appears first, followed by any subsystem dependency.
mgl
MGL is a machine learning library for backpropagation
neural networks, boltzmann machines, gaussian processes and more.
Gábor Melis <mega@retes.hu>
(GIT https://github.com/melisgl/mgl.git)
MIT, see COPYING.
0.1.0
alexandria
(system).
closer-mop
(system).
array-operations
(system).
lla
(system).
cl-reexport
(system).
mgl-gnuplot
(system).
mgl-mat
(system).
mgl-pax
(system).
num-utils
(system).
named-readtables
(system).
pythonic-string-reader
(system).
swank
(system).
src
(module).
mgl-gnuplot
Gábor Melis
MIT
external-program
(system).
alexandria
(system).
src
(module).
Modules are listed depth-first from the systems' component trees.
mgl/src
mgl
(system).
package.lisp
(file).
common.lisp
(file).
resample.lisp
(file).
util.lisp
(file).
log.lisp
(file).
dataset.lisp
(file).
copy.lisp
(file).
core.lisp
(file).
feature.lisp
(file).
monitor.lisp
(file).
counter.lisp
(file).
measure.lisp
(file).
classification.lisp
(file).
optimize.lisp
(file).
gradient-descent.lisp
(file).
conjugate-gradient.lisp
(file).
differentiable-function.lisp
(file).
boltzmann-machine.lisp
(file).
deep-belief-network.lisp
(file).
backprop.lisp
(file).
lumps.lisp
(file).
unroll.lisp
(file).
gaussian-process.lisp
(file).
nlp.lisp
(file).
mgl.lisp
(file).
doc.lisp
(file).
mgl-gnuplot/src
mgl-gnuplot
(system).
package.lisp
(file).
gnuplot.lisp
(file).
Files are sorted by type and then listed depth-first from the systems' component trees.
mgl/mgl.asd
mgl-gnuplot/mgl-gnuplot.asd
mgl/src/package.lisp
mgl/src/common.lisp
mgl/src/resample.lisp
mgl/src/util.lisp
mgl/src/log.lisp
mgl/src/dataset.lisp
mgl/src/copy.lisp
mgl/src/core.lisp
mgl/src/feature.lisp
mgl/src/monitor.lisp
mgl/src/counter.lisp
mgl/src/measure.lisp
mgl/src/classification.lisp
mgl/src/optimize.lisp
mgl/src/gradient-descent.lisp
mgl/src/conjugate-gradient.lisp
mgl/src/differentiable-function.lisp
mgl/src/boltzmann-machine.lisp
mgl/src/deep-belief-network.lisp
mgl/src/backprop.lisp
mgl/src/lumps.lisp
mgl/src/unroll.lisp
mgl/src/gaussian-process.lisp
mgl/src/nlp.lisp
mgl/src/mgl.lisp
mgl/src/doc.lisp
mgl-gnuplot/src/package.lisp
mgl-gnuplot/src/gnuplot.lisp
mgl/src/package.lisp
src
(module).
mgl-gd.
mgl-resample.
mgl-dataset.
mgl-opt.
mgl-nlp.
mgl-bp.
mgl-cg.
mgl-unroll.
mgl-log.
mgl-diffun.
mgl-util.
mgl-gp.
mgl-common.
mgl-bm.
mgl.
mgl-core.
mgl/src/common.lisp
package.lisp
(file).
src
(module).
batch-size
(generic reader).
default-value
(generic reader).
group-size
(generic reader).
name
(generic function).
nodes
(generic function).
scale
(generic reader).
size
(generic function).
weights
(generic reader).
@mgl-common
(special variable).
mgl/src/resample.lisp
common.lisp
(file).
src
(module).
bag
(function).
bag-cv
(function).
cross-validate
(function).
fracture
(function).
fracture-stratified
(function).
sample-from
(function).
sample-stratified
(function).
shuffle
(function).
shuffle!
(function).
split-fold/cont
(function).
split-fold/mod
(function).
split-stratified
(function).
spread-strata
(function).
stratify
(function).
zip-evenly
(function).
@mgl-resample
(special variable).
@mgl-resample-bagging
(special variable).
@mgl-resample-cross-validation
(special variable).
@mgl-resample-cv-bagging
(special variable).
@mgl-resample-misc
(special variable).
@mgl-resample-partitions
(special variable).
@mgl-resample-shuffling
(special variable).
coerce-to-fresh-vector
(function).
collect-distinct
(function).
find-enough-weights
(function).
sample-with-replacement*
(function).
sample-without-replacement*
(function).
shuffle-vector!
(function).
split-by-index
(function).
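The exported functions above fall into the shuffling, partitioning, cross-validation, and bagging groups named by the @MGL-RESAMPLE-* sections. As a minimal sketch of how they compose (assuming SPLIT-FOLD/MOD's default two-way split into a test fold and the remaining training data; the fold count and the body of the fold function are illustrative):

  ;; Shuffle, then run 5-fold cross-validation. A real fold
  ;; function would train on TRAINING and evaluate on TEST;
  ;; this one just collects the fold sizes.
  (mgl-resample:cross-validate
   (mgl-resample:shuffle '(0 1 2 3 4 5 6 7 8 9))
   (lambda (test training)
     (list (length test) (length training)))
   :n-folds 5)
  ;; => ((2 8) (2 8) (2 8) (2 8) (2 8))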
mgl/src/util.lisp
resample.lisp
(file).
src
(module).
*experiment-random-seed*
(special variable).
*no-array-bounds-check*
(special variable).
add-to-running-stat
(function).
append1
(function).
applies-to-p
(function).
apply-key
(macro).
as-column-vector
(function).
binarize-randomly
(function).
binomial-log-likelihood-ratio
(function).
call-periodic-fn
(function).
call-periodic-fn!
(function).
call-repeatably
(function).
clear-running-stat
(function).
decay-rate-to-half-life
(function).
decay-to-half-life
(function).
defclass-now
(macro).
define-descriptions
(macro).
defmaker
(macro).
flt
(function).
flt
(type).
flt-ctype
(constant).
flt-vector
(function).
flt-vector
(type).
fn
(reader method).
gaussian-random-1
(function).
group
(function).
half-life-to-decay
(function).
half-life-to-decay-rate
(function).
hash-table->vector
(function).
index
(type).
index-vector
(type).
insert-into-sorted-vector
(function).
invert-permutation
(function).
last-eval
(reader method).
(setf last-eval)
(writer method).
last1
(function).
least-negative-flt
(constant).
least-positive-flt
(constant).
make-random-generator
(function).
make-sequence-generator
(function).
make-sorted-group-generator
(function).
max-position
(function).
max-row-positions
(function).
most-negative-flt
(constant).
most-positive-flt
(constant).
multinomial-log-likelihood-ratio
(function).
periodic-fn
(class).
permute
(function).
poisson-random
(function).
positive-flt
(type).
print-object
(method).
print-table
(function).
push-all
(macro).
repeatably
(macro).
repeatedly
(macro).
rows-to-arrays
(function).
running-stat
(class).
running-stat-mean
(function).
running-stat-variance
(function).
scaled-tanh
(function).
sech
(function).
select-random-element
(function).
shuffle-groups
(function).
sigmoid
(function).
sign
(function).
sorting-permutation
(function).
special-case
(macro).
split-plist
(function).
subseq*
(function).
the!
(macro).
try-chance
(function).
uninterned-symbol-p
(function).
while
(macro).
with-zero-on-underflow
(macro).
->description
(function).
cross-entropy-to-perplexity
(function).
format-description
(function).
make-instance-args
(function).
period
(reader method).
perplexity-to-cross-entropy
(function).
pprint-descriptions
(function).
remove-unkeyword-args
(function).
sort-groups!
(function).
with-safe-printing
(macro).
mgl/src/log.lisp
util.lisp
(file).
src
(module).
*log-file*
(special variable).
*log-time*
(special variable).
log-mat-room
(function).
log-msg
(function).
with-logging-entry
(macro).
@mgl-log
(special variable).
time->string
(function).
mgl/src/dataset.lisp
log.lisp
(file).
src
(module).
*infinitely-empty-dataset*
(special variable).
finishedp
(generic function).
function-sampler
(class).
generator
(reader method).
list-samples
(function).
make-random-sampler
(function).
make-sequence-sampler
(function).
map-dataset
(function).
map-datasets
(function).
max-n-samples
(reader method).
(setf max-n-samples)
(writer method).
n-samples
(reader method).
name
(reader method).
print-object
(method).
sample
(generic function).
@mgl-dataset
(special variable).
@mgl-sampler
(special variable).
@mgl-sampler-function-sampler
(special variable).
ensure-sampler
(function).
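Samplers are the library's way of streaming instances to training and monitoring code. A small sketch (the sequence is illustrative; the wrap-around behavior in the final comment is what MAX-N-SAMPLES makes visible):

  ;; A sampler over a three-element sequence, capped at five samples.
  (let ((sampler (mgl-dataset:make-sequence-sampler
                  '(0 1 2) :max-n-samples 5)))
    (mgl-dataset:list-samples sampler 10))
  ;; => (0 1 2 0 1)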
mgl/src/copy.lisp
dataset.lisp
(file).
src
(module).
copy
(generic function).
copy-object-extra-initargs
(generic function).
copy-object-slot
(generic function).
define-slots-not-to-be-copied
(macro).
define-slots-to-be-shallow-copied
(macro).
with-copying
(macro).
*objects-copied*
(special variable).
define-slot-not-to-be-copied
(macro).
define-slot-to-be-shallow-copied
(macro).
mgl/src/core.lisp
copy.lisp
(file).
src
(module).
do-batches-for-model
(macro).
do-executors
(macro).
instance-to-executor-parameters
(generic function).
load-state
(function).
make-executor-with-parameters
(generic function).
map-batches-for-model
(function).
map-over-executors
(generic function).
(setf max-n-stripes)
(setf expander).
max-n-stripes
(generic function).
(setf n-stripes)
(setf expander).
n-stripes
(generic function).
parameterized-executor-cache-mixin
(class).
read-state
(function).
read-state*
(generic function).
save-state
(function).
set-input
(generic function).
set-max-n-stripes
(generic function).
set-n-stripes
(generic function).
stripe-end
(generic function).
stripe-start
(generic function).
with-stripes
(macro).
write-state
(function).
write-state*
(generic function).
@mgl-core
(special variable).
@mgl-executors
(special variable).
@mgl-model-stripe
(special variable).
@mgl-parameterized-executor-cache
(special variable).
@mgl-persistence
(special variable).
executor-cache
(reader method).
find-one-executor
(generic function).
insert-into-executor-cache
(function).
lookup-executor-cache
(function).
stripe-binding
(function).
trivially-map-over-executors
(function).
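SAVE-STATE, LOAD-STATE, WRITE-STATE and READ-STATE belong to the @MGL-PERSISTENCE section: they serialize the weights of a model rather than the model object itself. A sketch (MODEL and the filename are hypothetical; the object being restored must have the same structure as the one that was saved):

  ;; Save the weights of MODEL, then restore them into a freshly
  ;; constructed model of identical structure.
  (mgl-core:save-state "model.state" model)
  ;; ... later ...
  (mgl-core:load-state "model.state" model)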
mgl/src/feature.lisp
core.lisp
(file).
src
(module).
count-features
(function).
decode
(generic function).
encode
(generic function).
encoder/decoder
(class).
feature-disambiguities
(function).
feature-llrs
(function).
make-indexer
(function).
read-state*
(method).
write-state*
(method).
@mgl-feature-encoding
(special variable).
@mgl-feature-selection
(special variable).
@mgl-features
(special variable).
all-document-classes
(function).
decodings
(reader method).
document-features
(function).
encodings
(reader method).
index-scored-features
(function).
reverse-map
(function).
mgl/src/monitor.lisp
feature.lisp
(file).
src
(module).
apply-monitor
(generic function).
apply-monitors
(function).
counter
(generic function).
measurer
(reader method).
monitor
(class).
monitor-model-results
(function).
monitors
(generic function).
print-object
(method).
@mgl-monitor
(special variable).
@mgl-monitoring
(special variable).
mgl/src/counter.lisp
monitor.lisp
(file).
src
(module).
add-to-counter
(generic function).
attributed
(class).
attributes
(reader method).
(setf attributes)
(writer method).
basic-counter
(class).
concat-counter
(class).
concatenation-type
(reader method).
counter-raw-values
(generic function).
counter-values
(generic function).
initialize-instance
(method).
log-padded
(function).
name
(method).
print-object
(method).
print-object
(method).
reset-counter
(generic function).
rmse-counter
(class).
with-padded-attribute-printing
(macro).
*attribute-print-widths*
(special variable).
@mgl-attributes
(special variable).
@mgl-counter
(special variable).
@mgl-counter-classes
(special variable).
attribute-print-width
(function).
concatenation
(reader method).
(setf concatenation)
(writer method).
denominator*
(reader method).
get-all
(function).
maybe-print-unreadable-object
(macro).
numerator*
(reader method).
pad-to-width
(function).
plists-to-column-widths
(function).
print-name
(function).
mgl/src/measure.lisp
counter.lisp
(file).
src
(module).
@mgl-measurer
(special variable).
mgl/src/classification.lisp
measure.lisp
(file).
src
(module).
add-confusion-matrix
(function).
add-to-counter
(method).
classification-accuracy-counter
(class).
confusion-class-name
(generic function).
confusion-count
(generic function).
(setf confusion-count)
(generic function).
confusion-matrix
(class).
confusion-matrix-accuracy
(function).
confusion-matrix-classes
(generic function).
confusion-matrix-precision
(function).
confusion-matrix-recall
(function).
cross-entropy-counter
(class).
label-index
(generic function).
label-index-distribution
(generic function).
label-index-distributions
(generic function).
label-indices
(generic function).
make-classification-accuracy-monitors
(function).
make-classification-accuracy-monitors*
(generic function).
make-confusion-matrix
(function).
make-cross-entropy-monitors
(function).
make-cross-entropy-monitors*
(generic function).
make-label-monitors
(function).
map-confusion-matrix
(generic function).
measure-classification-accuracy
(function).
measure-confusion
(function).
measure-cross-entropy
(function).
measure-roc-auc
(function).
print-object
(method).
print-object
(method).
print-object
(method).
reset-counter
(method).
sort-confusion-classes
(generic function).
*print-label-counters*
(special variable).
@mgl-classification
(special variable).
@mgl-classification-counter
(special variable).
@mgl-classification-measurer
(special variable).
@mgl-classification-monitor
(special variable).
@mgl-confusion-matrix
(special variable).
confusion-matrix-test
(function).
counts
(reader method).
per-label-counters
(reader method).
mgl/src/optimize.lisp
classification.lisp
(file).
src
(module).
accumulate-gradients*
(generic function).
cost
(generic function).
do-gradient-sink
(macro).
do-segment-set
(macro).
initialize-gradient-source*
(generic function).
initialize-instance
(method).
initialize-optimizer*
(generic function).
iterative-optimizer
(class).
list-segments
(function).
make-cost-monitors
(function).
make-cost-monitors*
(generic function).
map-gradient-sink
(generic function).
map-segment-runs
(generic function).
map-segments
(generic function).
minimize
(function).
minimize*
(generic function).
monitor-optimization-periodically
(function).
monitors
(method).
n-instances
(reader method).
on-n-instances-changed
(reader method).
(setf on-n-instances-changed)
(writer method).
on-optimization-finished
(reader method).
(setf on-optimization-finished)
(writer method).
on-optimization-started
(reader method).
(setf on-optimization-started)
(writer method).
print-object
(method).
report-optimization-parameters
(generic function).
reset-optimization-monitors
(generic function).
segment-derivatives
(generic function).
segment-set
(class).
segment-set->mat
(function).
segment-set<-mat
(function).
segment-weights
(generic function).
segments
(generic function).
set-n-instances
(function).
size
(reader method).
terminate-optimization-p
(function).
termination
(reader method).
(setf termination)
(writer method).
@mgl-opt
(special variable).
@mgl-opt-cost
(special variable).
@mgl-opt-extension-api
(special variable).
@mgl-opt-gradient-sink
(special variable).
@mgl-opt-gradient-source
(special variable).
@mgl-opt-iterative-optimizer
(special variable).
@mgl-opt-optimizer
(special variable).
ensure-seq
(function).
monitor-optimization-periodically*
(function).
start-indices
(reader method).
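MINIMIZE drives an ITERATIVE-OPTIMIZER over a gradient source, and MONITOR-OPTIMIZATION-PERIODICALLY attaches periodic callbacks to the optimizer. A sketch of the latter (OPTIMIZER and LOG-MY-TEST-ERROR are hypothetical; the period values are illustrative):

  ;; Log test error every 10000 instances and reset the
  ;; optimization monitors every 1000, starting at instance 0.
  (mgl-opt:monitor-optimization-periodically
   optimizer
   '((:fn log-my-test-error :period 10000)
     (:fn mgl-opt:reset-optimization-monitors
      :period 1000 :last-eval 0)))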
mgl/src/gradient-descent.lisp
optimize.lisp
(file).
src
(module).
adam-optimizer
(class).
after-update-hook
(reader method).
(setf after-update-hook)
(writer method).
arrange-for-clipping-gradients
(function).
batch-gd-optimizer
(class).
batch-size
(reader method).
(setf batch-size)
(writer method).
before-update-hook
(reader method).
(setf before-update-hook)
(writer method).
clip-l2-norm
(function).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
initialize-optimizer*
(method).
initialize-optimizer*
(method).
initialize-optimizer*
(method).
initialize-optimizer*
(method).
initialize-optimizer*
(method).
learning-rate
(reader method).
learning-rate
(reader method).
(setf learning-rate)
(writer method).
(setf learning-rate)
(writer method).
map-gradient-sink
(method).
map-gradient-sink
(method).
mean-decay
(reader method).
(setf mean-decay)
(writer method).
mean-decay-decay
(reader method).
(setf mean-decay-decay)
(writer method).
minimize*
(method).
momentum
(reader method).
(setf momentum)
(writer method).
momentum-type
(reader method).
n-weight-uses-in-batch
(reader method).
n-weight-uses-in-batch
(reader method).
(setf n-weight-uses-in-batch)
(writer method).
(setf n-weight-uses-in-batch)
(writer method).
normalized-batch-gd-optimizer
(class).
per-weight-batch-gd-optimizer
(class).
print-object
(method).
segment-set
(reader method).
segmented-gd-optimizer
(class).
segmenter
(reader method).
segments
(reader method).
segments
(method).
sgd-optimizer
(class).
use-segment-derivatives-p
(reader method).
variance-adjustment
(reader method).
(setf variance-adjustment)
(writer method).
variance-decay
(reader method).
(setf variance-decay)
(writer method).
weight-decay
(reader method).
(setf weight-decay)
(writer method).
weight-penalty
(reader method).
(setf weight-penalty)
(writer method).
@mgl-gd
(special variable).
@mgl-gd-adam-optimizer
(special variable).
@mgl-gd-batch-gd-optimizer
(special variable).
@mgl-gd-normalized-batch-gd-optimizer
(special variable).
@mgl-gd-per-weight-optimization
(special variable).
@mgl-gd-segmented-gd-optimizer
(special variable).
@mgl-gd-sgd-optimizer
(special variable).
@mgl-gd-utilities
(special variable).
accumulator
(reader method).
(setf accumulator)
(writer method).
adam-time-step
(reader method).
(setf adam-time-step)
(writer method).
adam-update
(function).
base-gd-optimizer
(class).
cuda-adam-update
(function).
cuda-adam-update-d
(function).
cuda-adam-update-s
(function).
effective-mean-decay
(function).
ensure-mean-estimates
(function).
gd-optimizer
(class).
lisp-adam-update
(function).
lisp-adam-update-d
(function).
lisp-adam-update-s
(function).
maybe-update-weights
(generic function).
mean-estimates
(reader method).
(setf mean-estimates)
(writer method).
n-instances-in-batch
(reader method).
(setf n-instances-in-batch)
(writer method).
n-instances-until-update
(generic function).
optimizers
(reader method).
set-up-n-weight-uses
(function).
update-all-weights/adam
(function).
update-all-weights/adam-use-segment-derivatives
(function).
update-all-weights/sgd-nesterov
(function).
update-all-weights/sgd-no-momentum
(function).
update-all-weights/sgd-normal
(function).
variance-estimates
(reader method).
(setf variance-estimates)
(writer method).
weight-deltas
(reader method).
(setf weight-deltas)
(writer method).
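The gradient descent optimizers above plug into MINIMIZE from the previous file. A sketch of training a backprop network with plain SGD (FNN and SAMPLER are hypothetical, and the initargs are assumed to mirror the accessor names listed above, as elsewhere in the library):

  ;; Wrap an SGD optimizer for all segments of FNN and train on
  ;; instances drawn from SAMPLER.
  (mgl-opt:minimize
   (make-instance 'mgl-gd:segmented-gd-optimizer
                  :segmenter (constantly
                              (make-instance 'mgl-gd:sgd-optimizer
                                             :learning-rate 0.01
                                             :momentum 0.9
                                             :batch-size 100)))
   (make-instance 'mgl-bp:bp-learner :bpn fnn)
   :dataset sampler)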
mgl/src/conjugate-gradient.lisp
gradient-descent.lisp
(file).
src
(module).
*default-ext*
(special variable).
*default-int*
(special variable).
*default-max-n-evaluations*
(special variable).
*default-max-n-evaluations-per-line-search*
(special variable).
*default-max-n-line-searches*
(special variable).
*default-ratio*
(special variable).
*default-rho*
(special variable).
*default-sig*
(special variable).
accumulate-gradients*
(method).
batch-size
(reader method).
(setf batch-size)
(writer method).
cg
(function).
cg-args
(reader method).
(setf cg-args)
(writer method).
cg-optimizer
(class).
describe-object
(method).
initialize-optimizer*
(method).
log-cg-batch-done
(generic function).
map-gradient-sink
(method).
minimize*
(method).
on-cg-batch-done
(reader method).
(setf on-cg-batch-done)
(writer method).
segment-filter
(reader method).
segment-set
(reader method).
segments
(method).
weights
(reader method).
(setf weights)
(writer method).
@mgl-cg
(special variable).
accumulator
(reader method).
check-limit
(function).
decayed-cg-optimizer-mixin
(class).
limit
(type).
negate-vector
(function).
polack-ribiere
(function).
process-batch
(function).
segment-decay-fn
(reader method).
(setf segment-decay-fn)
(writer method).
spare-vectors
(reader method).
(setf spare-vectors)
(writer method).
train-batch
(function).
update-direction
(function).
v1=v2+c*v3
(function).
with-nil-on-arithmetic-error
(macro).
mgl/src/differentiable-function.lisp
conjugate-gradient.lisp
(file).
src
(module).
accumulate-gradients*
(method).
diffun
(class).
fn
(reader method).
parameter-indices
(reader method).
weight-indices
(reader method).
@mgl-diffun
(special variable).
add-diffun-gradients
(function).
differentiate-numerically
(function).
evaluate-diffun
(function).
merge-weights-and-arguments
(function).
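DIFFUN wraps an ordinary Lisp function so that some of its arguments become optimizable weights, the rest being filled in from each training instance, with gradients obtained numerically (see DIFFERENTIATE-NUMERICALLY above). A sketch (the cost function is illustrative, and the initargs are assumed to mirror the FN, WEIGHT-INDICES and PARAMETER-INDICES readers):

  ;; Treat both arguments of SIMPLE-COST as weights and minimize
  ;; it with the conjugate gradient optimizer; the default empty
  ;; dataset suffices because there are no per-instance parameters.
  (defun simple-cost (x y)
    (+ (expt (- x 1) 2) (expt (- y 2) 2)))

  (mgl-opt:minimize
   (make-instance 'mgl-cg:cg-optimizer
                  :batch-size 1
                  :termination 1000)  ; stop after 1000 instances
   (make-instance 'mgl-diffun:diffun
                  :fn #'simple-cost
                  :weight-indices '(0 1)))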
mgl/src/boltzmann-machine.lisp
differentiable-function.lisp
(file).
src
(module).
accumulate-gradients*
(method).
accumulate-gradients*
(method).
accumulate-gradients*
(method).
bm
(reader method).
bm
(class).
bm-learner
(class).
bm-pcd-learner
(class).
cheating-sparsity-gradient-source
(class).
chunk
(reader method).
chunk
(class).
chunk1
(reader method).
chunk2
(reader method).
chunks
(reader method).
cloud
(reader method).
cloud
(class).
cloud-a
(reader method).
cloud-b
(reader method).
cloud-chunk-among-chunks
(function).
clouds
(reader method).
clouds-up-to-layers
(reader method).
conditioning-chunk
(class).
conditioning-chunks
(reader method).
conditioning-cloud-p
(function).
constant-chunk
(class).
constrained-poisson-chunk
(class).
copy-object-extra-initargs
(method).
copy-object-extra-initargs
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
copy-object-slot
(method).
cost
(reader method).
damping
(reader method).
dbm
(class).
dbm->dbn
(function).
dbn
(reader method).
default-mean-field-supervisor
(generic function).
default-value
(reader method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
do-clouds
(macro).
down-dbm
(function).
exp-normalized-group-chunk
(class).
factored-cloud
(class).
find-chunk
(generic function).
find-cloud
(generic function).
full-cloud
(class).
gaussian-chunk
(class).
group-size
(reader method).
hidden-chunks
(reader method).
hidden-sampling
(reader method).
(setf hidden-sampling)
(writer method).
importances
(reader method).
(setf importances)
(writer method).
indices-present
(reader method).
(setf indices-present)
(writer method).
initialize-gradient-source*
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
inputs
(reader method).
inputs->nodes
(function).
label-index-distributions
(method).
label-indices
(method).
layers
(reader method).
make-classification-accuracy-monitors*
(method).
make-cross-entropy-monitors*
(method).
make-reconstruction-monitors
(function).
make-reconstruction-monitors*
(generic function).
map-segment-runs
(method).
map-segments
(method).
map-segments
(method).
map-segments
(method).
map-segments
(method).
mark-everything-present
(function).
max-n-stripes
(reader method).
max-n-stripes
(method).
means
(reader method).
merge-cloud-specs
(function).
monitor-bm-mean-field-bottom-up
(function).
monitor-bm-mean-field-reconstructions
(function).
monitors
(reader method).
n-gibbs
(reader method).
(setf n-gibbs)
(writer method).
n-particles
(reader method).
n-stripes
(method).
n-stripes
(reader method).
name
(reader method).
name
(reader method).
negative-phase
(generic function).
nodes
(reader method).
nodes->inputs
(function).
normal-sparsity-gradient-source
(class).
persistent-chains
(reader method).
positive-phase
(generic function).
print-object
(method).
print-object
(method).
print-object
(method).
print-object
(method).
print-object
(method).
print-object
(method).
rank
(function).
rbm
(reader method).
rbm
(class).
rbm-cd-learner
(class).
read-state*
(method).
read-state*
(method).
read-state*
(method).
reconstruction-error
(function).
reconstruction-rmse
(function).
relu-chunk
(class).
sample-chunk
(generic function).
sample-hidden
(function).
sample-visible
(function).
scale
(reader method).
(setf scale)
(writer method).
segment-weights
(method).
set-chunk-mean
(generic function).
set-hidden-mean
(generic function).
set-hidden-mean/1
(function).
set-input
(method).
set-input
(method).
set-max-n-stripes
(method).
set-max-n-stripes
(method).
set-max-n-stripes
(method).
set-max-n-stripes
(method).
set-n-stripes
(method).
set-n-stripes
(method).
set-n-stripes
(method).
set-n-stripes
(method).
set-visible-mean
(generic function).
set-visible-mean/1
(function).
settle-hidden-mean-field
(function).
settle-mean-field
(function).
settle-visible-mean-field
(function).
sigmoid-chunk
(class).
size
(reader method).
softmax-chunk
(class).
softmax-label-chunk
(class).
sparser
(reader method).
sparsity-gradient-source
(class).
stripe-end
(method).
stripe-start
(method).
supervise-mean-field/default
(function).
target
(reader method).
temporal-chunk
(class).
up-dbm
(function).
visible-chunks
(reader method).
visible-sampling
(reader method).
(setf visible-sampling)
(writer method).
weights
(reader method).
write-state*
(method).
write-state*
(method).
write-state*
(method).
*current-stripe*
(special variable).
*versions*
(special variable).
->chunk
(function).
->cloud
(function).
->clouds
(function).
accumulate-cloud-statistics
(generic function).
accumulate-cloud-statistics*
(generic function).
accumulate-negative-phase-statistics
(generic function).
accumulate-positive-phase-statistics
(generic function).
accumulate-sparsity-statistics
(generic function).
accumulated-in-sink-p
(function).
activate-cloud
(function).
activate-cloud*
(generic function).
add-chunk-nodes
(function).
bm-mcmc-learner
(class).
bm-type-name
(function).
both-cloud-ends-in-p
(function).
cached-activations1
(reader method).
cached-activations2
(reader method).
cached-version1
(reader method).
(setf cached-version1)
(writer method).
cached-version2
(reader method).
(setf cached-version2)
(writer method).
call-with-sink-accumulator
(generic function).
check-dbm-clouds
(function).
check-no-self-connection
(function).
check-stripes
(function).
check-valuep
(function).
cloud-between-chunks-p
(function).
conditioning-chunk-p
(function).
conditioning-clouds-to
(function).
connects-to-p
(function).
copy-chunk-nodes
(function).
copy-dbm-chunk-to-dbn
(function).
copy-dbm-cloud-to-dbn
(function).
copy-nodes
(generic function).
cuda-exp-normalized
(function).
cuda-exp-normalized-d
(function).
cuda-exp-normalized-s
(function).
cuda-sample-softmax
(function).
cuda-sample-softmax-d
(function).
cuda-sample-softmax-s
(function).
do-chunk
(macro).
do-cloud-runs
(macro).
do-cloud/chunk1
(macro).
do-stripes
(macro).
ensure-mat-large-enough
(function).
ensure-random-numbers
(function).
ensure-scratch
(function).
factored-cloud-shared-chunk
(function).
factored-cloud-shared-chunk
(class).
fill-chunk
(function).
flush-sparsity
(generic function).
format-full-cloud-norm
(function).
full-cloud-norm
(function).
full-clouds-everywhere
(function).
full-clouds-everywhere-between-layers
(function).
has-hidden-to-hidden-p
(reader method).
has-inputs-p
(reader method).
has-visible-to-visible-p
(reader method).
hidden-and-conditioning-chunks
(reader method).
hidden-source-chunk
(reader method).
hijack-means-to-activation
(function).
make-do-cloud/chunk2
(function).
map-sparser
(function).
maybe-remember
(function).
maybe-use-remembered
(function).
means-or-samples
(function).
name-clashes
(function).
next-node-inputs
(reader method).
node-change
(function).
nodes->means
(function).
norm
(function).
normalized-group-chunk
(class).
old-nodes
(reader method).
old-products
(reader method).
old-sum1
(reader method).
other-chunk
(function).
products
(reader method).
random-numbers
(reader method).
(setf random-numbers)
(writer method).
remove-if*
(function).
resize-chunk
(generic function).
scale1
(reader method).
scale2
(reader method).
scratch
(reader method).
(setf scratch)
(writer method).
set-mean
(function).
set-mean*
(function).
sparse-bm-learner
(class).
sparsity-gradient-sources
(reader method).
sparsity-target
(reader method).
stable-set-difference
(function).
sum-chunk-nodes-and-old-nodes
(function).
sum-nodes-and-old-nodes
(function).
sum1
(reader method).
sum2
(reader method).
swap-nodes
(function).
swap-nodes*
(function).
use-blas-on-chunk-p
(function).
version
(function).
visible-and-conditioning-chunks
(reader method).
visible-nodes->means
(function).
with-sink-accumulator
(macro).
with-versions
(macro).
zero-chunk
(function).
zero-weight-to-self
(generic function).
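The Boltzmann machine API builds networks from named chunks connected by clouds. A sketch of constructing a single RBM (chunk names and sizes are illustrative, and the initargs are assumed from the VISIBLE-CHUNKS and HIDDEN-CHUNKS readers; the constant chunks supply the biases):

  ;; An RBM with 784 sigmoid visible units and 100 sigmoid hidden
  ;; units, each layer with its own constant bias chunk.
  (make-instance 'mgl-bm:rbm
                 :visible-chunks (list
                                  (make-instance 'mgl-bm:constant-chunk
                                                 :name 'c0)
                                  (make-instance 'mgl-bm:sigmoid-chunk
                                                 :name 'inputs :size 784))
                 :hidden-chunks (list
                                 (make-instance 'mgl-bm:constant-chunk
                                                :name 'c1)
                                 (make-instance 'mgl-bm:sigmoid-chunk
                                                :name 'features :size 100)))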
mgl/src/deep-belief-network.lisp
boltzmann-machine.lisp
(file).
src
(module).
chunks
(method).
clouds
(method).
dbn
(class).
down-mean-field
(function).
find-chunk
(method).
find-cloud
(method).
hidden-chunks
(method).
initialize-instance
(method).
initialize-instance
(method).
make-classification-accuracy-monitors*
(method).
make-cross-entropy-monitors*
(method).
make-reconstruction-monitors*
(method).
max-n-stripes
(reader method).
monitor-dbn-mean-field-bottom-up
(function).
monitor-dbn-mean-field-reconstructions
(function).
(setf n-rbms)
(setf expander).
n-rbms
(function).
n-stripes
(method).
rbms
(reader method).
read-state*
(method).
set-input
(method).
set-input
(method).
set-max-n-stripes
(method).
set-n-stripes
(method).
visible-chunks
(method).
write-state*
(method).
add-rbm
(function).
all-rbms
(function).
check-no-name-clashes
(function).
inactive-rbms
(reader method).
previous-rbm
(function).
set-n-rbms
(function).
mgl/src/backprop.lisp
deep-belief-network.lisp
(file).
src
(module).
*cuda-window-start-time*
(special variable).
*warp-time*
(special variable).
accumulate-gradients*
(method).
add-clump
(function).
backward
(generic function).
bp-learner
(class).
bpn
(reader method).
bpn
(class).
build-fnn
(macro).
build-rnn
(macro).
clump
(class).
clumps
(reader method).
cost
(method).
cost
(method).
cuda-window-start-time
(reader method).
(setf cuda-window-start-time)
(writer method).
derivatives
(generic function).
describe-object
(method).
describe-object
(method).
describe-object
(method).
describe-object
(method).
find-clump
(function).
fnn
(class).
forward
(generic function).
initialize-gradient-source*
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
lag
(function).
make-classification-accuracy-monitors*
(method).
make-cross-entropy-monitors*
(method).
make-step-monitor-monitor-counter
(generic function).
make-step-monitor-monitors
(function).
map-segments
(method).
map-segments
(method).
map-segments
(method).
max-lag
(reader method).
max-n-stripes
(reader method).
monitor-bpn-results
(function).
monitors
(reader method).
(setf monitors)
(writer method).
n-stripes
(reader method).
name
(reader method).
nodes
(method).
print-object
(method).
print-object
(method).
read-state*
(method).
rnn
(class).
set-input
(method).
set-input
(method).
set-max-n-stripes
(method).
set-n-stripes
(method).
size
(method).
step-monitors
(reader method).
(setf step-monitors)
(writer method).
stripedp
(generic function).
time-step
(function).
unfolder
(reader method).
warp-length
(reader method).
warp-start
(reader method).
warped-time
(function).
write-state*
(method).
*bpn-being-built*
(special variable).
*in-training-p*
(special variable).
*names-of-nested-bpns-in-rnn*
(special variable).
*rnn*
(special variable).
@mgl-bp
(special variable).
@mgl-bp-extension-api
(special variable).
@mgl-bp-monitoring
(special variable).
@mgl-bp-overview
(special variable).
@mgl-bp-training
(special variable).
@mgl-bpn
(special variable).
@mgl-fnn
(special variable).
@mgl-fnn-tutorial
(special variable).
@mgl-rnn
(special variable).
@mgl-rnn-time-warp
(special variable).
@mgl-rnn-tutorial
(special variable).
add-and-forget-derivatives
(function).
backward-bpn
(generic function).
check-rnn
(function).
check-weights-not-new
(function).
collect-non-constant-mats-for-cuda-sync
(function).
compute-derivatives
(function).
copy-lagged-clump
(function).
current-time
(reader method).
(setf current-time)
(writer method).
ensure-rnn-bpn
(function).
find-nested-bpn
(function).
first-trained-clump
(reader method).
forward-bpn
(generic function).
input-seqs
(reader method).
(setf input-seqs)
(writer method).
lagged-clump
(structure).
lagged-clump-lag
(reader).
(setf lagged-clump-lag)
(writer).
lagged-clump-name
(reader).
(setf lagged-clump-name)
(writer).
lagged-clump-p
(function).
lagged-clump-path
(reader).
(setf lagged-clump-path)
(writer).
last-cost
(reader method).
(setf last-cost)
(writer method).
make-lagged-clump
(function).
map-clumps
(generic function).
max-time
(reader method).
(setf max-time)
(writer method).
maybe-add-name-to-init
(function).
non-constant-mats
(generic function).
remove-trailing-nil-instances
(reader method).
(setf remove-trailing-nil-instances)
(writer method).
remove-trailing-nils
(function).
resolve-clump
(generic function).
resolve-clumps
(function).
resolve-step-monitors
(function).
rnn-backward-cuda-syncs
(function).
rnn-forward-cuda-syncs
(function).
weight-lumps
(reader method).
(setf weight-lumps)
(writer method).
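BUILD-FNN is the entry point for defining feed-forward networks: each clause names a clump, and the lump constructors from the next file wire them together. A sketch (layer sizes and the :MAX-N-STRIPES value are illustrative):

  ;; A small classifier: 2 inputs -> 16 rectified linear units
  ;; -> 3-way softmax with cross-entropy loss.
  (mgl-bp:build-fnn (:class 'mgl-bp:fnn :max-n-stripes 50)
    (input (mgl-bp:->input :size 2))
    (hidden-activation (mgl-bp:->activation input :size 16))
    (hidden (mgl-bp:->relu hidden-activation))
    (output-activation (mgl-bp:->activation hidden :size 3))
    (output (mgl-bp:->softmax-xe-loss output-activation)))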
mgl/src/lumps.lisp
backprop.lisp
(file).
src
(module).
->*
(function).
->*
(class).
->+
(function).
->+
(class).
->abs
(function).
->abs
(class).
->activation
(function).
->activation
(class).
->batch-normalization
(function).
->batch-normalization
(class).
->batch-normalized
(function).
->batch-normalized
(class).
->batch-normalized-activation
(function).
->dropout
(function).
->dropout
(class).
->embedding
(function).
->embedding
(class).
->exp
(function).
->exp
(class).
->gaussian-random
(function).
->gaussian-random
(class).
->input
(function).
->input
(class).
->loss
(function).
->loss
(class).
->lstm
(function).
->lstm
(class).
->max
(function).
->max
(class).
->max-channel
(function).
->max-channel
(class).
->min
(function).
->min
(class).
->normalized
(function).
->normalized
(class).
->relu
(function).
->relu
(class).
->sample-binary
(function).
->sample-binary
(class).
->scaled-tanh
(function).
->scaled-tanh
(class).
->seq-barrier
(function).
->seq-barrier
(class).
->sigmoid
(function).
->sigmoid
(class).
->sin
(function).
->sin
(class).
->softmax-xe-loss
(function).
->softmax-xe-loss
(class).
->squared-difference
(function).
->squared-difference
(class).
->sum
(function).
->sum
(class).
->tanh
(function).
->tanh
(class).
->v*m
(function).
->v*m
(class).
->weight
(function).
->weight
(class).
arrange-for-renormalizing-activations
(function).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
batch-normalization
(reader method).
batch-size
(reader method).
cost
(method).
cost
(method).
default-size
(generic function).
default-value
(reader method).
derivatives
(reader method).
dimensions
(reader method).
dropout
(reader method).
dropout
(reader method).
dropout
(reader method).
(setf dropout)
(writer method).
(setf dropout)
(writer method).
(setf dropout)
(writer method).
ensure-softmax-target-matrix
(function).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
group-size
(reader method).
group-size
(reader method).
group-size
(reader method).
group-size
(reader method).
group-size
(reader method).
importance
(reader method).
(setf importance)
(writer method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
initialize-instance
(method).
input-row-indices
(reader method).
(setf input-row-indices)
(writer method).
label-index-distributions
(method).
label-indices
(method).
lump
(class).
map-segments
(method).
map-segments
(method).
max-n-stripes
(method).
max-n-stripes
(method).
mean
(reader method).
(setf mean)
(writer method).
n-stripes
(method).
n-stripes
(method).
name
(method).
nodes
(reader method).
population-decay
(reader method).
print-object
(method).
read-state*
(method).
read-state*
(method).
read-state*
(method).
renormalize-activations
(function).
scale
(reader method).
scale
(reader method).
(setf scale)
(writer method).
segment-derivatives
(method).
segment-weights
(method).
seq-elt-fn
(reader method).
seq-indices
(reader method).
(setf seq-indices)
(writer method).
set-max-n-stripes
(method).
set-max-n-stripes
(method).
set-max-n-stripes
(method).
set-n-stripes
(method).
set-n-stripes
(method).
shift
(reader method).
size
(method).
size
(reader method).
stripe-end
(method).
stripe-start
(method).
stripedp
(method).
target
(reader method).
(setf target)
(writer method).
transpose-weights-p
(reader method).
variance
(reader method).
(setf variance)
(writer method).
variance-adjustment
(reader method).
variance-for-prediction
(reader method).
(setf variance-for-prediction)
(writer method).
weights
(reader method).
weights
(reader method).
with-weights-copied
(macro).
write-state*
(method).
write-state*
(method).
write-state*
(method).
*lumps-to-copy*
(special variable).
*n-softmax-threads*
(special variable).
+default-population-decay+
(special variable).
+default-variance-adjustment+
(special variable).
@mgl-bp-*-lump
(special variable).
@mgl-bp-+-lump
(special variable).
@mgl-bp-abs-lump
(special variable).
@mgl-bp-activation-functions
(special variable).
@mgl-bp-activation-subnet
(special variable).
@mgl-bp-activations
(special variable).
@mgl-bp-arithmetic
(special variable).
@mgl-bp-batch-normalization
(special variable).
@mgl-bp-dropout-lump
(special variable).
@mgl-bp-embedding-lump
(special variable).
@mgl-bp-exp-lump
(special variable).
@mgl-bp-gaussian-random-lump
(special variable).
@mgl-bp-input-lump
(special variable).
@mgl-bp-inputs
(special variable).
@mgl-bp-loss-lump
(special variable).
@mgl-bp-losses
(special variable).
@mgl-bp-lstm-subnet
(special variable).
@mgl-bp-lump
(special variable).
@mgl-bp-lumps
(special variable).
@mgl-bp-max-channel-lump
(special variable).
@mgl-bp-max-lump
(special variable).
@mgl-bp-min-lump
(special variable).
@mgl-bp-normalized-lump
(special variable).
@mgl-bp-relu-lump
(special variable).
@mgl-bp-rnn-operations
(special variable).
@mgl-bp-sample-binary-lump
(special variable).
@mgl-bp-scaled-tanh-lump
(special variable).
@mgl-bp-seq-barrier-lump
(special variable).
@mgl-bp-sigmoid-lump
(special variable).
@mgl-bp-sine-lump
(special variable).
@mgl-bp-softmax-xe-loss-lump
(special variable).
@mgl-bp-squared-difference-lump
(special variable).
@mgl-bp-stochasticity
(special variable).
@mgl-bp-sum-lump
(special variable).
@mgl-bp-tanh-lump
(special variable).
@mgl-bp-utilities
(special variable).
@mgl-bp-v*m-lump
(special variable).
@mgl-bp-weight-lump
(special variable).
args
(generic function).
batch-mean
(reader method).
(setf batch-mean)
(writer method).
batch-variance
(reader method).
(setf batch-variance)
(writer method).
call-with-weights-copied
(function).
check-size-and-default-size
(function).
cuda-dropout-xorwow
(function).
cuda-dropout-xorwow-d
(function).
cuda-dropout-xorwow-s
(function).
cuda-max
(function).
cuda-max-channel
(function).
cuda-max-channel-d
(function).
cuda-max-channel-derivative
(function).
cuda-max-channel-derivative-d
(function).
cuda-max-channel-derivative-s
(function).
cuda-max-channel-s
(function).
cuda-max-d
(function).
cuda-max-derivative
(function).
cuda-max-derivative-d
(function).
cuda-max-derivative-s
(function).
cuda-max-s
(function).
cuda-maybe-renormalize-columns
(function).
cuda-maybe-renormalize-columns-d
(function).
cuda-maybe-renormalize-columns-s
(function).
cuda-maybe-renormalize-rows
(function).
cuda-maybe-renormalize-rows-d
(function).
cuda-maybe-renormalize-rows-s
(function).
cuda-min
(function).
cuda-min-d
(function).
cuda-min-derivative
(function).
cuda-min-derivative-d
(function).
cuda-min-derivative-s
(function).
cuda-min-s
(function).
cuda-rectify
(function).
cuda-rectify-d
(function).
cuda-rectify-derivative
(function).
cuda-rectify-derivative-d
(function).
cuda-rectify-derivative-s
(function).
cuda-rectify-s
(function).
cuda-scaled-tanh!
(function).
cuda-scaled-tanh!-d
(function).
cuda-scaled-tanh!-s
(function).
cuda-scaled-tanh-derivative!
(function).
cuda-scaled-tanh-derivative!-d
(function).
cuda-scaled-tanh-derivative!-s
(function).
cuda-sigmoid!
(function).
cuda-sigmoid!-d
(function).
cuda-sigmoid!-s
(function).
cuda-sigmoid-derivative!
(function).
cuda-sigmoid-derivative!-d
(function).
cuda-sigmoid-derivative!-s
(function).
cuda-sin!
(function).
cuda-sin!-d
(function).
cuda-sin!-s
(function).
cuda-sin-derivative!
(function).
cuda-sin-derivative!-d
(function).
cuda-sin-derivative!-s
(function).
cuda-softmax-xe
(function).
cuda-softmax-xe-d
(function).
cuda-softmax-xe-derivative
(function).
cuda-softmax-xe-derivative-d
(function).
cuda-softmax-xe-derivative-s
(function).
cuda-softmax-xe-derivative/sparse
(function).
cuda-softmax-xe-derivative/sparse-d
(function).
cuda-softmax-xe-derivative/sparse-s
(function).
cuda-softmax-xe-s
(function).
cuda-tanh!
(function).
cuda-tanh!-d
(function).
cuda-tanh!-s
(function).
cuda-tanh-derivative!
(function).
cuda-tanh-derivative!-d
(function).
cuda-tanh-derivative!-s
(function).
do-sparse-targets
(macro).
dropout!
(function).
ensure-batch-mean
(function).
ensure-batch-variance
(function).
ensure-mask
(function).
ensure-population-mean
(function).
ensure-population-variance
(function).
ensure-randoms
(function).
find-activation-lump-for-weight
(function).
lisp-dropout
(function).
lisp-dropout-d
(function).
lisp-dropout-s
(function).
lisp-max
(function).
lisp-max-channel
(function).
lisp-max-channel-d
(function).
lisp-max-channel-derivative
(function).
lisp-max-channel-derivative-d
(function).
lisp-max-channel-derivative-s
(function).
lisp-max-channel-s
(function).
lisp-max-d
(function).
lisp-max-derivative
(function).
lisp-max-derivative-d
(function).
lisp-max-derivative-s
(function).
lisp-max-s
(function).
lisp-maybe-renormalize-columns
(function).
lisp-maybe-renormalize-columns-d
(function).
lisp-maybe-renormalize-columns-s
(function).
lisp-maybe-renormalize-rows
(function).
lisp-maybe-renormalize-rows-d
(function).
lisp-maybe-renormalize-rows-s
(function).
lisp-min
(function).
lisp-min-d
(function).
lisp-min-derivative
(function).
lisp-min-derivative-d
(function).
lisp-min-derivative-s
(function).
lisp-min-s
(function).
lisp-rectify
(function).
lisp-rectify-d
(function).
lisp-rectify-derivative
(function).
lisp-rectify-derivative-d
(function).
lisp-rectify-derivative-s
(function).
lisp-rectify-s
(function).
lisp-scaled-tanh!
(function).
lisp-scaled-tanh!-d
(function).
lisp-scaled-tanh!-s
(function).
lisp-scaled-tanh-derivative!
(function).
lisp-scaled-tanh-derivative!-d
(function).
lisp-scaled-tanh-derivative!-s
(function).
lisp-sigmoid!
(function).
lisp-sigmoid!-d
(function).
lisp-sigmoid!-s
(function).
lisp-sigmoid-derivative!
(function).
lisp-sigmoid-derivative!-d
(function).
lisp-sigmoid-derivative!-s
(function).
lisp-sin!
(function).
lisp-sin!-d
(function).
lisp-sin!-s
(function).
lisp-sin-derivative!
(function).
lisp-sin-derivative!-d
(function).
lisp-sin-derivative!-s
(function).
lisp-softmax-xe
(function).
lisp-softmax-xe-d
(function).
lisp-softmax-xe-derivative
(function).
lisp-softmax-xe-derivative-d
(function).
lisp-softmax-xe-derivative-s
(function).
lisp-softmax-xe-derivative/sparse
(function).
lisp-softmax-xe-derivative/sparse-d
(function).
lisp-softmax-xe-derivative/sparse-s
(function).
lisp-softmax-xe-s
(function).
lisp-tanh!
(function).
lisp-tanh!-d
(function).
lisp-tanh!-s
(function).
lisp-tanh-derivative!
(function).
lisp-tanh-derivative!-d
(function).
lisp-tanh-derivative!-s
(function).
mask
(reader method).
mat-and-row/column-sum-size
(function).
maybe-copy-weight
(function).
maybe-renormalize-columns
(function).
maybe-renormalize-rows
(function).
n-steps
(reader method).
(setf n-steps)
(writer method).
non-constant-mats
(method).
non-constant-mats
(method).
non-constant-mats
(method).
population-mean
(reader method).
(setf population-mean)
(writer method).
population-variance
(reader method).
(setf population-variance)
(writer method).
print-lump-parts
(generic function).
randoms
(reader method).
rectify!
(function).
renormalize-mats
(function).
scaled-tanh!
(function).
scaled-tanh-derivative!
(function).
shared-with-clump
(reader method).
sigmoid!
(function).
sigmoid-derivative!
(function).
sin!
(function).
sin-derivative!
(function).
tanh!
(function).
tanh-derivative!
(function).
x
(generic function).
y
(generic function).
mgl/src/unroll.lisp
lumps.lisp
(file).
src
(module).
backward
(method).
chunk-lump-name
(function).
clamping-cache
(reader method).
default-size
(method).
fnn-clamping-cache
(class).
forward
(method).
initialize-fnn-from-bm
(function).
populate-key
(reader method).
populate-map-cache
(function).
populate-map-cache-lazily-from-dbm
(reader method).
set-dropout-and-rescale-activation-weights
(function).
set-input
(method).
unroll-dbm
(function).
unroll-dbn
(function).
->constant
(function).
->constant
(class).
add-connection
(function).
chunk->fnn-definition
(generic function).
chunk-activation-lump-name
(function).
clamp-cached-entry-on-fnn
(function).
cloud-linear-lump-name
(function).
cloud-weight-lump-name
(function).
collect-map-chunks-and-lumps
(function).
copy-incoming
(function).
copy-lumpy
(function).
ensure-lumpy
(function).
find-lumpy
(function).
find-lumpy-by-name
(function).
find-mm-lumps-from
(function).
incoming
(structure).
incoming->fnn-defintion
(generic function).
incoming-cloud
(reader).
(setf incoming-cloud)
(writer).
incoming-from-lumpy
(reader).
(setf incoming-from-lumpy)
(writer).
incoming-list->fnn-definition
(function).
incoming-p
(function).
incoming-transposep
(reader).
(setf incoming-transposep)
(writer).
initialize-from-cloud
(generic function).
lumpies->fnn-definition
(function).
lumpy
(structure).
lumpy-activation-name
(function).
lumpy-chunk
(reader).
(setf lumpy-chunk)
(writer).
lumpy-depth
(reader).
(setf lumpy-depth)
(writer).
lumpy-incomings
(reader).
(setf lumpy-incomings)
(writer).
lumpy-kind
(reader).
(setf lumpy-kind)
(writer).
lumpy-name
(function).
lumpy-original
(reader).
(setf lumpy-original)
(writer).
lumpy-p
(function).
lumpy-symbol
(reader).
(setf lumpy-symbol)
(writer).
make-incoming
(function).
make-lumpy
(function).
populate-convert-to-dbm-sample-fn
(reader method).
populate-periodic-fn
(reader method).
mgl/src/gaussian-process.lisp
unroll.lisp
(file).
src
(module).
->gp
(function).
->gp
(class).
->periodic
(function).
->periodic
(class).
->ref
(function).
->ref
(class).
->rep
(function).
->rep
(class).
->rough-exponential
(function).
->rough-exponential
(class).
->stretch
(function).
->stretch
(class).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
backward
(method).
covariance-lump-name
(reader method).
default-size
(method).
default-size
(method).
default-size
(method).
default-size
(method).
default-size
(method).
default-size
(method).
fnn-gp
(class).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
forward
(method).
gp
(class).
gp-confidences-as-plot-data
(function).
gp-covariances
(function).
gp-means
(generic function).
gp-means-and-covariances
(function).
gp-means-and-covariances*
(generic function).
gp-samples-as-plot-data
(function).
index
(reader method).
instance-to-executor-parameters
(method).
mean-lump-name
(reader method).
posterior-gp
(class).
prior-gp
(reader method).
prior-gp
(class).
set-input
(method).
update-gp
(function).
centered-evidence-outputs
(reader method).
covariance-fn
(reader method).
covariances
(reader method).
derive-rough-exponential
(function).
drop-negative-index-p
(reader method).
evidence-inputs
(reader method).
evidence-outputs
(reader method).
extract-covariances
(function).
extract-means
(function).
find-gp-lump
(function).
gaussian-kernel
(function).
gp-covariances*
(generic function).
gp-data-matrix
(function).
gp-data-matrix-for-level
(function).
into
(reader method).
inverted-covariances
(reader method).
length-scale
(reader method).
make-matrix-from-lump-stripe
(function).
make-vector-from-lump-stripe
(function).
mean-fn
(reader method).
means
(reader method).
n
(reader method).
n
(reader method).
period
(reader method).
posterior-gp-means-and-covariances
(function).
posterior-gps
(reader method).
(setf posterior-gps)
(writer method).
rough-exponential
(function).
roughness
(reader method).
sample-gp
(function).
samples
(reader method).
(setf samples)
(writer method).
signal-variance
(reader method).
update-gp*
(function).
x
(reader method).
x
(reader method).
x
(reader method).
x
(reader method).
mgl/src/nlp.lisp
gaussian-process.lisp
(file).
src
(module).
bag-of-words-encoder
(class).
bag-of-words-kind
(reader method).
bleu
(function).
encode
(method).
encoded-feature-test
(reader method).
encoded-feature-type
(reader method).
feature-encoder
(reader method).
feature-mapper
(reader method).
make-n-gram-mappee
(function).
@mgl-nlp
(special variable).
@mgl-nlp-bag-of-words
(special variable).
compact-binary-feature-vector
(function).
encode/bag-of-words
(function).
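MAKE-N-GRAM-MAPPEE turns a function of one argument into one that sees successive n-grams of the values it is mapped over, which is how a BAG-OF-WORDS-ENCODER's FEATURE-MAPPER can count n-grams instead of single features. A sketch (PRINT stands in for a real feature consumer):

  ;; Map over the bigrams of a sequence.
  (map nil (mgl-nlp:make-n-gram-mappee #'print 2) '(a b c d))
  ;; prints (A B), (B C) and (C D)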
mgl/src/mgl.lisp
nlp.lisp
(file).
src
(module).
@mgl-bm
(special variable).
@mgl-code-organization
(special variable).
@mgl-dependencies
(special variable).
@mgl-glossary
(special variable).
@mgl-gp
(special variable).
@mgl-introduction
(special variable).
@mgl-links
(special variable).
@mgl-manual
(special variable).
@mgl-overview
(special variable).
mgl/src/doc.lisp
mgl.lisp
(file).
src
(module).
pax-pages
(function).
pax-sections
(function).
mgl-gnuplot/src/gnuplot.lisp
package.lisp
(file).
src
(module).
*command-stream*
(special variable).
command
(function).
command
(class).
data
(macro).
data*
(function).
data-mapping
(class).
end-session
(function).
file
(macro).
file*
(function).
file-mapping
(class).
fn
(macro).
fn*
(function).
function-mapping
(class).
mapping
(class).
plot
(macro).
plot
(class).
plot*
(function).
splot
(macro).
splot*
(function).
start-session
(function).
with-command-stream
(macro).
with-session
(macro).
write-command
(generic function).
write-data
(generic function).
write-mapping
(generic function).
*gnuplot-binary*
(special variable).
filename
(reader method).
function-expression
(reader method).
kind
(reader method).
kind-to-string
(function).
mapping-data
(reader method).
mappings
(reader method).
(setf mappings)
(writer method).
options
(reader method).
pipe-to-gnuplot
(function).
write-options
(function).
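mgl-gnuplot drives an external gnuplot process through a command stream. A sketch (assuming, from the readers above, that FN* takes a function expression plus an options string and that PLOT* takes a list of mappings; the option strings are ordinary gnuplot syntax):

  ;; Start gnuplot, set a title, and plot sin(x) with lines.
  (mgl-gnuplot:with-session ()
    (mgl-gnuplot:command "set title 'example'")
    (mgl-gnuplot:plot*
     (list (mgl-gnuplot:fn* "sin(x)" "with lines"))))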
Packages are listed by definition order.
mgl-gd
mgl-resample
mgl-dataset
mgl-opt
mgl-nlp
mgl-bp
mgl-cg
mgl-unroll
mgl-gnuplot
mgl-log
mgl-diffun
mgl-util
mgl-gp
mgl-common
mgl-bm
mgl
mgl-core
mgl-gd
See MGL-GD::@MGL-GD.
common-lisp.
editor-hints.named-readtables.
mgl-common.
mgl-core.
mgl-dataset.
mgl-mat.
mgl-opt.
mgl-pax.
mgl-util.
pythonic-string-reader.
adam-optimizer
(class).
after-update-hook
(generic reader).
(setf after-update-hook)
(generic writer).
arrange-for-clipping-gradients
(function).
batch-gd-optimizer
(class).
before-update-hook
(generic reader).
(setf before-update-hook)
(generic writer).
clip-l2-norm
(function).
learning-rate
(generic reader).
(setf learning-rate)
(generic writer).
mean-decay
(generic reader).
(setf mean-decay)
(generic writer).
mean-decay-decay
(generic reader).
(setf mean-decay-decay)
(generic writer).
momentum
(generic reader).
(setf momentum)
(generic writer).
momentum-type
(generic reader).
n-weight-uses-in-batch
(generic reader).
(setf n-weight-uses-in-batch)
(generic writer).
normalized-batch-gd-optimizer
(class).
per-weight-batch-gd-optimizer
(class).
segmented-gd-optimizer
(class).
segmenter
(generic reader).
sgd-optimizer
(class).
use-segment-derivatives-p
(generic reader).
variance-adjustment
(generic reader).
(setf variance-adjustment)
(generic writer).
variance-adjustment
(slot).
variance-decay
(generic reader).
(setf variance-decay)
(generic writer).
weight-decay
(generic reader).
(setf weight-decay)
(generic writer).
weight-penalty
(generic reader).
(setf weight-penalty)
(generic writer).
@mgl-gd
(special variable).
@mgl-gd-adam-optimizer
(special variable).
@mgl-gd-batch-gd-optimizer
(special variable).
@mgl-gd-normalized-batch-gd-optimizer
(special variable).
@mgl-gd-per-weight-optimization
(special variable).
@mgl-gd-segmented-gd-optimizer
(special variable).
@mgl-gd-sgd-optimizer
(special variable).
@mgl-gd-utilities
(special variable).
accumulator
(generic reader).
(setf accumulator)
(generic writer).
adam-time-step
(generic reader).
(setf adam-time-step)
(generic writer).
adam-update
(function).
base-gd-optimizer
(class).
cuda-adam-update
(function).
cuda-adam-update-d
(function).
cuda-adam-update-s
(function).
effective-mean-decay
(function).
ensure-mean-estimates
(function).
gd-optimizer
(class).
lisp-adam-update
(function).
lisp-adam-update-d
(function).
lisp-adam-update-s
(function).
maybe-update-weights
(generic function).
mean-estimates
(generic reader).
(setf mean-estimates)
(generic writer).
n-instances-in-batch
(generic reader).
(setf n-instances-in-batch)
(generic writer).
n-instances-until-update
(generic function).
optimizers
(generic reader).
set-up-n-weight-uses
(function).
update-all-weights/adam
(function).
update-all-weights/adam-use-segment-derivatives
(function).
update-all-weights/sgd-nesterov
(function).
update-all-weights/sgd-no-momentum
(function).
update-all-weights/sgd-normal
(function).
variance-estimates
(generic reader).
(setf variance-estimates)
(generic writer).
weight-deltas
(generic reader).
(setf weight-deltas)
(generic writer).
mgl-resample
See MGL-RESAMPLE::@MGL-RESAMPLE.
common-lisp.
mgl-pax.
mgl.
bag
(function).
bag-cv
(function).
cross-validate
(function).
fracture
(function).
fracture-stratified
(function).
sample-from
(function).
sample-stratified
(function).
shuffle
(function).
shuffle!
(function).
split-fold/cont
(function).
split-fold/mod
(function).
split-stratified
(function).
spread-strata
(function).
stratify
(function).
zip-evenly
(function).
@mgl-resample
(special variable).
@mgl-resample-bagging
(special variable).
@mgl-resample-cross-validation
(special variable).
@mgl-resample-cv-bagging
(special variable).
@mgl-resample-misc
(special variable).
@mgl-resample-partitions
(special variable).
@mgl-resample-shuffling
(special variable).
coerce-to-fresh-vector
(function).
collect-distinct
(function).
find-enough-weights
(function).
sample-with-replacement*
(function).
sample-without-replacement*
(function).
shuffle-vector!
(function).
split-by-index
(function).
mgl-dataset
See MGL-DATASET::@MGL-DATASET.
common-lisp.
mgl-common.
mgl-pax.
mgl-util.
*infinitely-empty-dataset*
(special variable).
finishedp
(generic function).
function-sampler
(class).
generator
(generic reader).
list-samples
(function).
make-random-sampler
(function).
make-sequence-sampler
(function).
map-dataset
(function).
map-datasets
(function).
max-n-samples
(generic reader).
(setf max-n-samples)
(generic writer).
n-samples
(generic reader).
sample
(generic function).
@mgl-dataset
(special variable).
@mgl-sampler
(special variable).
@mgl-sampler-function-sampler
(special variable).
ensure-sampler
(function).
mgl-opt
See MGL-OPT::@MGL-OPT.
common-lisp.
mgl-common.
mgl-core.
mgl-dataset.
mgl-log.
mgl-mat.
mgl-pax.
mgl-util.
accumulate-gradients*
(generic function).
do-gradient-sink
(macro).
do-segment-set
(macro).
initialize-gradient-source*
(generic function).
initialize-optimizer*
(generic function).
iterative-optimizer
(class).
list-segments
(function).
make-cost-monitors
(function).
make-cost-monitors*
(generic function).
map-gradient-sink
(generic function).
map-segment-runs
(generic function).
map-segments
(generic function).
minimize
(function).
minimize*
(generic function).
monitor-optimization-periodically
(function).
n-instances
(generic reader).
on-n-instances-changed
(generic reader).
(setf on-n-instances-changed)
(generic writer).
on-optimization-finished
(generic reader).
(setf on-optimization-finished)
(generic writer).
on-optimization-started
(generic reader).
(setf on-optimization-started)
(generic writer).
report-optimization-parameters
(generic function).
reset-optimization-monitors
(generic function).
segment-derivatives
(generic function).
segment-set
(slot).
segment-set
(generic reader).
segment-set
(class).
segment-set
(slot).
segment-set->mat
(function).
segment-set<-mat
(function).
segment-weights
(generic function).
segments
(slot).
segments
(generic function).
set-n-instances
(function).
terminate-optimization-p
(function).
termination
(generic reader).
(setf termination)
(generic writer).
@mgl-opt
(special variable).
@mgl-opt-cost
(special variable).
@mgl-opt-extension-api
(special variable).
@mgl-opt-gradient-sink
(special variable).
@mgl-opt-gradient-source
(special variable).
@mgl-opt-iterative-optimizer
(special variable).
@mgl-opt-optimizer
(special variable).
ensure-seq
(function).
monitor-optimization-periodically*
(function).
start-indices
(generic reader).
mgl-nlp
See MGL-NLP::@MGL-NLP.
mgl
.
bag-of-words-encoder
(class).
bag-of-words-kind
(generic reader).
bleu
(function).
encoded-feature-test
(generic reader).
encoded-feature-type
(generic reader).
feature-encoder
(generic reader).
feature-mapper
(generic reader).
make-n-gram-mappee
(function).
@mgl-nlp
(special variable).
@mgl-nlp-bag-of-words
(special variable).
compact-binary-feature-vector
(function).
encode/bag-of-words
(function).
mgl-bp
See MGL-BP::@MGL-BP.
cl-cuda
.
common-lisp
.
mgl-cg
.
mgl-common
.
mgl-core
.
mgl-dataset
.
mgl-gd
.
mgl-mat
.
mgl-opt
.
mgl-pax
.
mgl-util
.
mgl
.
mgl-gp
.
mgl-unroll
.
*cuda-window-start-time*
(special variable).
*warp-time*
(special variable).
->*
(function).
->*
(class).
->+
(function).
->+
(class).
->abs
(function).
->abs
(class).
->activation
(function).
->activation
(class).
->batch-normalization
(function).
->batch-normalization
(class).
->batch-normalized
(function).
->batch-normalized
(class).
->batch-normalized-activation
(function).
->dropout
(function).
->dropout
(class).
->embedding
(function).
->embedding
(class).
->exp
(function).
->exp
(class).
->gaussian-random
(function).
->gaussian-random
(class).
->input
(function).
->input
(class).
->loss
(function).
->loss
(class).
->lstm
(function).
->lstm
(class).
->max
(function).
->max
(class).
->max-channel
(function).
->max-channel
(class).
->min
(function).
->min
(class).
->normalized
(function).
->normalized
(class).
->relu
(function).
->relu
(class).
->sample-binary
(function).
->sample-binary
(class).
->scaled-tanh
(function).
->scaled-tanh
(class).
->seq-barrier
(function).
->seq-barrier
(class).
->sigmoid
(function).
->sigmoid
(class).
->sin
(function).
->sin
(class).
->softmax-xe-loss
(function).
->softmax-xe-loss
(class).
->squared-difference
(function).
->squared-difference
(class).
->sum
(function).
->sum
(class).
->tanh
(function).
->tanh
(class).
->v*m
(function).
->v*m
(class).
->weight
(function).
->weight
(class).
add-clump
(function).
arrange-for-renormalizing-activations
(function).
backward
(generic function).
batch-normalization
(generic reader).
bp-learner
(class).
bpn
(generic reader).
bpn
(class).
build-fnn
(macro).
build-rnn
(macro).
clump
(class).
clumps
(generic reader).
cuda-window-start-time
(generic reader).
(setf cuda-window-start-time)
(generic writer).
default-size
(generic function).
derivatives
(generic function).
dimensions
(generic reader).
dropout
(generic reader).
(setf dropout)
(generic writer).
ensure-softmax-target-matrix
(function).
find-clump
(function).
fnn
(class).
forward
(generic function).
importance
(generic reader).
(setf importance)
(generic writer).
input-row-indices
(generic reader).
(setf input-row-indices)
(generic writer).
lag
(function).
lump
(class).
make-step-monitor-monitor-counter
(generic function).
make-step-monitor-monitors
(function).
max-lag
(generic reader).
mean
(generic reader).
(setf mean)
(generic writer).
monitor-bpn-results
(function).
population-decay
(generic reader).
renormalize-activations
(function).
rnn
(class).
seq-elt-fn
(generic reader).
seq-indices
(generic reader).
(setf seq-indices)
(generic writer).
shift
(generic reader).
step-monitors
(generic reader).
(setf step-monitors)
(generic writer).
stripedp
(generic function).
time-step
(function).
transpose-weights-p
(generic reader).
unfolder
(generic reader).
variance
(generic reader).
(setf variance)
(generic writer).
variance-for-prediction
(generic reader).
(setf variance-for-prediction)
(generic writer).
warp-length
(generic reader).
warp-start
(generic reader).
warped-time
(function).
with-weights-copied
(macro).
*bpn-being-built*
(special variable).
*in-training-p*
(special variable).
*lumps-to-copy*
(special variable).
*n-softmax-threads*
(special variable).
*names-of-nested-bpns-in-rnn*
(special variable).
*rnn*
(special variable).
+default-population-decay+
(special variable).
+default-variance-adjustment+
(special variable).
@mgl-bp
(special variable).
@mgl-bp-*-lump
(special variable).
@mgl-bp-+-lump
(special variable).
@mgl-bp-abs-lump
(special variable).
@mgl-bp-activation-functions
(special variable).
@mgl-bp-activation-subnet
(special variable).
@mgl-bp-activations
(special variable).
@mgl-bp-arithmetic
(special variable).
@mgl-bp-batch-normalization
(special variable).
@mgl-bp-dropout-lump
(special variable).
@mgl-bp-embedding-lump
(special variable).
@mgl-bp-exp-lump
(special variable).
@mgl-bp-extension-api
(special variable).
@mgl-bp-gaussian-random-lump
(special variable).
@mgl-bp-input-lump
(special variable).
@mgl-bp-inputs
(special variable).
@mgl-bp-loss-lump
(special variable).
@mgl-bp-losses
(special variable).
@mgl-bp-lstm-subnet
(special variable).
@mgl-bp-lump
(special variable).
@mgl-bp-lumps
(special variable).
@mgl-bp-max-channel-lump
(special variable).
@mgl-bp-max-lump
(special variable).
@mgl-bp-min-lump
(special variable).
@mgl-bp-monitoring
(special variable).
@mgl-bp-normalized-lump
(special variable).
@mgl-bp-overview
(special variable).
@mgl-bp-relu-lump
(special variable).
@mgl-bp-rnn-operations
(special variable).
@mgl-bp-sample-binary-lump
(special variable).
@mgl-bp-scaled-tanh-lump
(special variable).
@mgl-bp-seq-barrier-lump
(special variable).
@mgl-bp-sigmoid-lump
(special variable).
@mgl-bp-sine-lump
(special variable).
@mgl-bp-softmax-xe-loss-lump
(special variable).
@mgl-bp-squared-difference-lump
(special variable).
@mgl-bp-stochasticity
(special variable).
@mgl-bp-sum-lump
(special variable).
@mgl-bp-tanh-lump
(special variable).
@mgl-bp-training
(special variable).
@mgl-bp-utilities
(special variable).
@mgl-bp-v*m-lump
(special variable).
@mgl-bp-weight-lump
(special variable).
@mgl-bpn
(special variable).
@mgl-fnn
(special variable).
@mgl-fnn-tutorial
(special variable).
@mgl-rnn
(special variable).
@mgl-rnn-time-warp
(special variable).
@mgl-rnn-tutorial
(special variable).
add-and-forget-derivatives
(function).
args
(generic function).
backward-bpn
(generic function).
batch-mean
(generic reader).
(setf batch-mean)
(generic writer).
batch-variance
(generic reader).
(setf batch-variance)
(generic writer).
call-with-weights-copied
(function).
check-rnn
(function).
check-size-and-default-size
(function).
check-weights-not-new
(function).
collect-non-constant-mats-for-cuda-sync
(function).
compute-derivatives
(function).
copy-lagged-clump
(function).
cuda-dropout-xorwow
(function).
cuda-dropout-xorwow-d
(function).
cuda-dropout-xorwow-s
(function).
cuda-max
(function).
cuda-max-channel
(function).
cuda-max-channel-d
(function).
cuda-max-channel-derivative
(function).
cuda-max-channel-derivative-d
(function).
cuda-max-channel-derivative-s
(function).
cuda-max-channel-s
(function).
cuda-max-d
(function).
cuda-max-derivative
(function).
cuda-max-derivative-d
(function).
cuda-max-derivative-s
(function).
cuda-max-s
(function).
cuda-maybe-renormalize-columns
(function).
cuda-maybe-renormalize-columns-d
(function).
cuda-maybe-renormalize-columns-s
(function).
cuda-maybe-renormalize-rows
(function).
cuda-maybe-renormalize-rows-d
(function).
cuda-maybe-renormalize-rows-s
(function).
cuda-min
(function).
cuda-min-d
(function).
cuda-min-derivative
(function).
cuda-min-derivative-d
(function).
cuda-min-derivative-s
(function).
cuda-min-s
(function).
cuda-rectify
(function).
cuda-rectify-d
(function).
cuda-rectify-derivative
(function).
cuda-rectify-derivative-d
(function).
cuda-rectify-derivative-s
(function).
cuda-rectify-s
(function).
cuda-scaled-tanh!
(function).
cuda-scaled-tanh!-d
(function).
cuda-scaled-tanh!-s
(function).
cuda-scaled-tanh-derivative!
(function).
cuda-scaled-tanh-derivative!-d
(function).
cuda-scaled-tanh-derivative!-s
(function).
cuda-sigmoid!
(function).
cuda-sigmoid!-d
(function).
cuda-sigmoid!-s
(function).
cuda-sigmoid-derivative!
(function).
cuda-sigmoid-derivative!-d
(function).
cuda-sigmoid-derivative!-s
(function).
cuda-sin!
(function).
cuda-sin!-d
(function).
cuda-sin!-s
(function).
cuda-sin-derivative!
(function).
cuda-sin-derivative!-d
(function).
cuda-sin-derivative!-s
(function).
cuda-softmax-xe
(function).
cuda-softmax-xe-d
(function).
cuda-softmax-xe-derivative
(function).
cuda-softmax-xe-derivative-d
(function).
cuda-softmax-xe-derivative-s
(function).
cuda-softmax-xe-derivative/sparse
(function).
cuda-softmax-xe-derivative/sparse-d
(function).
cuda-softmax-xe-derivative/sparse-s
(function).
cuda-softmax-xe-s
(function).
cuda-tanh!
(function).
cuda-tanh!-d
(function).
cuda-tanh!-s
(function).
cuda-tanh-derivative!
(function).
cuda-tanh-derivative!-d
(function).
cuda-tanh-derivative!-s
(function).
current-time
(generic reader).
(setf current-time)
(generic writer).
do-sparse-targets
(macro).
dropout!
(function).
ensure-batch-mean
(function).
ensure-batch-variance
(function).
ensure-mask
(function).
ensure-population-mean
(function).
ensure-population-variance
(function).
ensure-randoms
(function).
ensure-rnn-bpn
(function).
find-activation-lump-for-weight
(function).
find-nested-bpn
(function).
first-trained-clump
(generic reader).
forward-bpn
(generic function).
input-seqs
(generic reader).
(setf input-seqs)
(generic writer).
lagged-clump
(structure).
lagged-clump-lag
(reader).
(setf lagged-clump-lag)
(writer).
lagged-clump-name
(reader).
(setf lagged-clump-name)
(writer).
lagged-clump-p
(function).
lagged-clump-path
(reader).
(setf lagged-clump-path)
(writer).
last-cost
(generic reader).
(setf last-cost)
(generic writer).
lisp-dropout
(function).
lisp-dropout-d
(function).
lisp-dropout-s
(function).
lisp-max
(function).
lisp-max-channel
(function).
lisp-max-channel-d
(function).
lisp-max-channel-derivative
(function).
lisp-max-channel-derivative-d
(function).
lisp-max-channel-derivative-s
(function).
lisp-max-channel-s
(function).
lisp-max-d
(function).
lisp-max-derivative
(function).
lisp-max-derivative-d
(function).
lisp-max-derivative-s
(function).
lisp-max-s
(function).
lisp-maybe-renormalize-columns
(function).
lisp-maybe-renormalize-columns-d
(function).
lisp-maybe-renormalize-columns-s
(function).
lisp-maybe-renormalize-rows
(function).
lisp-maybe-renormalize-rows-d
(function).
lisp-maybe-renormalize-rows-s
(function).
lisp-min
(function).
lisp-min-d
(function).
lisp-min-derivative
(function).
lisp-min-derivative-d
(function).
lisp-min-derivative-s
(function).
lisp-min-s
(function).
lisp-rectify
(function).
lisp-rectify-d
(function).
lisp-rectify-derivative
(function).
lisp-rectify-derivative-d
(function).
lisp-rectify-derivative-s
(function).
lisp-rectify-s
(function).
lisp-scaled-tanh!
(function).
lisp-scaled-tanh!-d
(function).
lisp-scaled-tanh!-s
(function).
lisp-scaled-tanh-derivative!
(function).
lisp-scaled-tanh-derivative!-d
(function).
lisp-scaled-tanh-derivative!-s
(function).
lisp-sigmoid!
(function).
lisp-sigmoid!-d
(function).
lisp-sigmoid!-s
(function).
lisp-sigmoid-derivative!
(function).
lisp-sigmoid-derivative!-d
(function).
lisp-sigmoid-derivative!-s
(function).
lisp-sin!
(function).
lisp-sin!-d
(function).
lisp-sin!-s
(function).
lisp-sin-derivative!
(function).
lisp-sin-derivative!-d
(function).
lisp-sin-derivative!-s
(function).
lisp-softmax-xe
(function).
lisp-softmax-xe-d
(function).
lisp-softmax-xe-derivative
(function).
lisp-softmax-xe-derivative-d
(function).
lisp-softmax-xe-derivative-s
(function).
lisp-softmax-xe-derivative/sparse
(function).
lisp-softmax-xe-derivative/sparse-d
(function).
lisp-softmax-xe-derivative/sparse-s
(function).
lisp-softmax-xe-s
(function).
lisp-tanh!
(function).
lisp-tanh!-d
(function).
lisp-tanh!-s
(function).
lisp-tanh-derivative!
(function).
lisp-tanh-derivative!-d
(function).
lisp-tanh-derivative!-s
(function).
make-lagged-clump
(function).
map-clumps
(generic function).
mask
(generic reader).
mat-and-row/column-sum-size
(function).
max-time
(generic reader).
(setf max-time)
(generic writer).
maybe-add-name-to-init
(function).
maybe-copy-weight
(function).
maybe-renormalize-columns
(function).
maybe-renormalize-rows
(function).
n-steps
(generic reader).
(setf n-steps)
(generic writer).
non-constant-mats
(generic function).
population-mean
(generic reader).
(setf population-mean)
(generic writer).
population-variance
(generic reader).
(setf population-variance)
(generic writer).
print-lump-parts
(generic function).
randoms
(generic reader).
rectify!
(function).
remove-trailing-nil-instances
(generic reader).
(setf remove-trailing-nil-instances)
(generic writer).
remove-trailing-nils
(function).
renormalize-mats
(function).
resolve-clump
(generic function).
resolve-clumps
(function).
resolve-step-monitors
(function).
rnn-backward-cuda-syncs
(function).
rnn-forward-cuda-syncs
(function).
scaled-tanh!
(function).
scaled-tanh-derivative!
(function).
shared-with-clump
(generic reader).
sigmoid!
(function).
sigmoid-derivative!
(function).
sin!
(function).
sin-derivative!
(function).
tanh!
(function).
tanh-derivative!
(function).
weight-lumps
(generic reader).
(setf weight-lumps)
(generic writer).
x
(generic function).
y
(generic function).
mgl-cg
See MGL-CG::@MGL-CG.
common-lisp
.
mgl-common
.
mgl-core
.
mgl-dataset
.
mgl-log
.
mgl-mat
.
mgl-opt
.
mgl-pax
.
mgl-util
.
*default-ext*
(special variable).
*default-int*
(special variable).
*default-max-n-evaluations*
(special variable).
*default-max-n-evaluations-per-line-search*
(special variable).
*default-max-n-line-searches*
(special variable).
*default-ratio*
(special variable).
*default-rho*
(special variable).
*default-sig*
(special variable).
cg
(function).
cg-args
(generic reader).
(setf cg-args)
(generic writer).
cg-optimizer
(class).
log-cg-batch-done
(generic function).
on-cg-batch-done
(generic reader).
(setf on-cg-batch-done)
(generic writer).
segment-filter
(generic reader).
@mgl-cg
(special variable).
accumulator
(generic reader).
check-limit
(function).
decayed-cg-optimizer-mixin
(class).
limit
(type).
negate-vector
(function).
polack-ribiere
(function).
process-batch
(function).
segment-decay-fn
(generic reader).
(setf segment-decay-fn)
(generic writer).
spare-vectors
(generic reader).
(setf spare-vectors)
(generic writer).
train-batch
(function).
update-direction
(function).
v1=v2+c*v3
(function).
with-nil-on-arithmetic-error
(macro).
mgl-unroll
Translating Boltzmann Machines to backprop networks, a.k.a. ‘unrolling’.
common-lisp
.
mgl-bm
.
mgl-bp
.
mgl-common
.
mgl-core
.
mgl-dataset
.
mgl-mat
.
mgl-util
.
mgl
.
chunk-lump-name
(function).
clamping-cache
(generic reader).
fnn-clamping-cache
(class).
initialize-fnn-from-bm
(function).
populate-key
(generic reader).
populate-map-cache
(function).
populate-map-cache-lazily-from-dbm
(generic reader).
set-dropout-and-rescale-activation-weights
(function).
unroll-dbm
(function).
unroll-dbn
(function).
->constant
(function).
->constant
(class).
add-connection
(function).
chunk->fnn-definition
(generic function).
chunk-activation-lump-name
(function).
clamp-cached-entry-on-fnn
(function).
cloud-linear-lump-name
(function).
cloud-weight-lump-name
(function).
collect-map-chunks-and-lumps
(function).
copy-incoming
(function).
copy-lumpy
(function).
ensure-lumpy
(function).
find-lumpy
(function).
find-lumpy-by-name
(function).
find-mm-lumps-from
(function).
incoming
(structure).
incoming->fnn-defintion
(generic function).
incoming-cloud
(reader).
(setf incoming-cloud)
(writer).
incoming-from-lumpy
(reader).
(setf incoming-from-lumpy)
(writer).
incoming-list->fnn-definition
(function).
incoming-p
(function).
incoming-transposep
(reader).
(setf incoming-transposep)
(writer).
initialize-from-cloud
(generic function).
lumpies->fnn-definition
(function).
lumpy
(structure).
lumpy-activation-name
(function).
lumpy-chunk
(reader).
(setf lumpy-chunk)
(writer).
lumpy-depth
(reader).
(setf lumpy-depth)
(writer).
lumpy-incomings
(reader).
(setf lumpy-incomings)
(writer).
lumpy-kind
(reader).
(setf lumpy-kind)
(writer).
lumpy-name
(function).
lumpy-original
(reader).
(setf lumpy-original)
(writer).
lumpy-p
(function).
lumpy-symbol
(reader).
(setf lumpy-symbol)
(writer).
make-incoming
(function).
make-lumpy
(function).
populate-convert-to-dbm-sample-fn
(generic reader).
populate-periodic-fn
(generic reader).
mgl-gnuplot
Minimalistic, interactive or batch mode gnuplot interface that supports multiplots and inline data.
common-lisp
.
*command-stream*
(special variable).
command
(function).
command
(class).
data
(macro).
data*
(function).
data-mapping
(class).
end-session
(function).
file
(macro).
file*
(function).
file-mapping
(class).
fn
(macro).
fn*
(function).
function-mapping
(class).
mapping
(class).
plot
(macro).
plot
(class).
plot*
(function).
splot
(macro).
splot*
(function).
start-session
(function).
with-command-stream
(macro).
with-session
(macro).
write-command
(generic function).
write-data
(generic function).
write-mapping
(generic function).
*gnuplot-binary*
(special variable).
filename
(generic reader).
function-expression
(generic reader).
kind
(generic reader).
kind-to-string
(function).
mapping-data
(generic reader).
mappings
(generic reader).
(setf mappings)
(generic writer).
options
(generic reader).
pipe-to-gnuplot
(function).
write-options
(function).
mgl-log
See MGL-LOG::@MGL-LOG.
common-lisp
.
mgl-common
.
mgl-pax
.
mgl-util
.
*log-file*
(special variable).
*log-time*
(special variable).
log-mat-room
(function).
log-msg
(function).
with-logging-entry
(macro).
@mgl-log
(special variable).
time->string
(function).
mgl-diffun
See MGL-DIFFUN::@MGL-DIFFUN.
common-lisp
.
mgl-common
.
mgl-core
.
mgl-mat
.
mgl-opt
.
mgl-pax
.
mgl-util
.
mgl
.
diffun
(class).
parameter-indices
(generic reader).
weight-indices
(generic reader).
@mgl-diffun
(special variable).
add-diffun-gradients
(function).
differentiate-numerically
(function).
evaluate-diffun
(function).
merge-weights-and-arguments
(function).
mgl-util
Simple utilities, types.
common-lisp
.
mgl-common
.
mgl-mat
.
mgl
.
mgl-bm
.
mgl-bp
.
mgl-cg
.
mgl-core
.
mgl-dataset
.
mgl-diffun
.
mgl-gd
.
mgl-gp
.
mgl-log
.
mgl-nlp
.
mgl-opt
.
mgl-unroll
.
*experiment-random-seed*
(special variable).
*no-array-bounds-check*
(special variable).
add-to-running-stat
(function).
append1
(function).
applies-to-p
(function).
apply-key
(macro).
as-column-vector
(function).
binarize-randomly
(function).
binomial-log-likelihood-ratio
(function).
call-periodic-fn
(function).
call-periodic-fn!
(function).
call-repeatably
(function).
clear-running-stat
(function).
copy
(generic function).
copy-object-extra-initargs
(generic function).
copy-object-slot
(generic function).
decay-rate-to-half-life
(function).
decay-to-half-life
(function).
defclass-now
(macro).
define-descriptions
(macro).
define-slots-not-to-be-copied
(macro).
define-slots-to-be-shallow-copied
(macro).
defmaker
(macro).
flt
(function).
flt
(type).
flt-ctype
(constant).
flt-vector
(function).
flt-vector
(type).
gaussian-random-1
(function).
group
(function).
half-life-to-decay
(function).
half-life-to-decay-rate
(function).
hash-table->vector
(function).
index
(generic reader).
index
(type).
index
(slot).
index-vector
(type).
insert-into-sorted-vector
(function).
invert-permutation
(function).
last-eval
(generic reader).
(setf last-eval)
(generic writer).
last1
(function).
least-negative-flt
(constant).
least-positive-flt
(constant).
make-random-generator
(function).
make-sequence-generator
(function).
make-sorted-group-generator
(function).
max-position
(function).
max-row-positions
(function).
most-negative-flt
(constant).
most-positive-flt
(constant).
multinomial-log-likelihood-ratio
(function).
periodic-fn
(class).
permute
(function).
poisson-random
(function).
positive-flt
(type).
print-table
(function).
push-all
(macro).
repeatably
(macro).
repeatedly
(macro).
rows-to-arrays
(function).
running-stat
(class).
running-stat-mean
(function).
running-stat-variance
(function).
scaled-tanh
(function).
sech
(function).
select-random-element
(function).
shuffle-groups
(function).
sigmoid
(function).
sign
(function).
sorting-permutation
(function).
special-case
(macro).
split-plist
(function).
subseq*
(function).
the!
(macro).
try-chance
(function).
uninterned-symbol-p
(function).
while
(macro).
with-copying
(macro).
with-zero-on-underflow
(macro).
*objects-copied*
(special variable).
->description
(function).
cross-entropy-to-perplexity
(function).
define-slot-not-to-be-copied
(macro).
define-slot-to-be-shallow-copied
(macro).
format-description
(function).
make-instance-args
(function).
period
(generic reader).
perplexity-to-cross-entropy
(function).
pprint-descriptions
(function).
remove-unkeyword-args
(function).
sort-groups!
(function).
with-safe-printing
(macro).
mgl-gp
Gaussian processes with support for training with backpropagation.
common-lisp
.
mgl-bp
.
mgl-common
.
mgl-core
.
mgl-mat
.
mgl-util
.
mgl
.
->gp
(function).
->gp
(class).
->periodic
(function).
->periodic
(class).
->ref
(function).
->ref
(class).
->rep
(function).
->rep
(class).
->rough-exponential
(function).
->rough-exponential
(class).
->stretch
(function).
->stretch
(class).
covariance-lump-name
(generic reader).
fnn-gp
(class).
gp
(class).
gp-confidences-as-plot-data
(function).
gp-covariances
(function).
gp-means
(generic function).
gp-means-and-covariances
(function).
gp-means-and-covariances*
(generic function).
gp-samples-as-plot-data
(function).
mean-lump-name
(generic reader).
posterior-gp
(class).
prior-gp
(generic reader).
prior-gp
(class).
update-gp
(function).
centered-evidence-outputs
(generic reader).
covariance-fn
(generic reader).
covariances
(generic reader).
derive-rough-exponential
(function).
drop-negative-index-p
(generic reader).
evidence-inputs
(generic reader).
evidence-outputs
(generic reader).
extract-covariances
(function).
extract-means
(function).
find-gp-lump
(function).
gaussian-kernel
(function).
gp-covariances*
(generic function).
gp-data-matrix
(function).
gp-data-matrix-for-level
(function).
into
(generic reader).
inverted-covariances
(generic reader).
length-scale
(generic reader).
make-matrix-from-lump-stripe
(function).
make-vector-from-lump-stripe
(function).
mean-fn
(generic reader).
means
(generic reader).
n
(generic reader).
period
(generic reader).
posterior-gp-means-and-covariances
(function).
posterior-gps
(generic reader).
(setf posterior-gps)
(generic writer).
rough-exponential
(function).
roughness
(generic reader).
sample-gp
(function).
samples
(generic reader).
(setf samples)
(generic writer).
signal-variance
(generic reader).
update-gp*
(function).
x
(generic reader).
mgl-common
The only purpose of this package is to avoid conflicts between other packages.
common-lisp
.
mgl-pax
.
mgl
.
mgl-bm
.
mgl-bp
.
mgl-cg
.
mgl-core
.
mgl-dataset
.
mgl-diffun
.
mgl-gd
.
mgl-gp
.
mgl-log
.
mgl-opt
.
mgl-unroll
.
mgl-util
.
batch-size
(slot).
batch-size
(slot).
batch-size
(slot).
batch-size
(generic reader).
(setf batch-size)
(generic writer).
cost
(generic function).
cost
(slot).
default-value
(slot).
default-value
(slot).
default-value
(generic reader).
default-value
(slot).
fn
(slot).
fn
(slot).
fn
(generic reader).
group-size
(slot).
group-size
(slot).
group-size
(slot).
group-size
(slot).
group-size
(slot).
group-size
(generic reader).
group-size
(slot).
name
(slot).
name
(slot).
name
(slot).
name
(generic function).
name
(slot).
name
(slot).
name=
(function).
nodes
(slot).
nodes
(generic function).
nodes
(slot).
scale
(slot).
scale
(slot).
scale
(generic reader).
(setf scale)
(generic writer).
scale
(slot).
size
(slot).
size
(slot).
size
(generic function).
size
(slot).
target
(slot).
target
(generic reader).
(setf target)
(generic writer).
weights
(slot).
weights
(slot).
weights
(slot).
weights
(generic reader).
(setf weights)
(generic writer).
weights
(slot).
@mgl-common
(special variable).
mgl-bm
Fully General Boltzmann Machines, Restricted Boltzmann Machines and their stacks called Deep Belief Networks (DBN).
mgl-rbm
cl-cuda
.
common-lisp
.
mgl-common
.
mgl-core
.
mgl-gd
.
mgl-mat
.
mgl-opt
.
mgl-pax
.
mgl-util
.
bm
(generic reader).
bm
(class).
bm-learner
(class).
bm-pcd-learner
(class).
cheating-sparsity-gradient-source
(class).
chunk
(slot).
chunk
(generic reader).
chunk
(class).
chunk1
(generic reader).
chunk2
(generic reader).
chunks
(generic function).
cloud
(slot).
cloud
(generic reader).
cloud
(class).
cloud-a
(generic reader).
cloud-b
(generic reader).
cloud-chunk-among-chunks
(function).
clouds
(generic function).
clouds-up-to-layers
(generic reader).
conditioning-chunk
(class).
conditioning-chunks
(generic reader).
conditioning-cloud-p
(function).
constant-chunk
(class).
constrained-poisson-chunk
(class).
damping
(generic reader).
dbm
(class).
dbm->dbn
(function).
dbn
(generic reader).
dbn
(class).
default-mean-field-supervisor
(generic function).
do-clouds
(macro).
down-dbm
(function).
down-mean-field
(function).
exp-normalized-group-chunk
(class).
factored-cloud
(class).
find-chunk
(generic function).
find-cloud
(generic function).
full-cloud
(class).
gaussian-chunk
(class).
hidden-chunks
(generic function).
hidden-sampling
(generic reader).
(setf hidden-sampling)
(generic writer).
importances
(generic reader).
(setf importances)
(generic writer).
indices-present
(generic reader).
(setf indices-present)
(generic writer).
inputs
(generic reader).
inputs->nodes
(function).
layers
(generic reader).
make-reconstruction-monitors
(function).
make-reconstruction-monitors*
(generic function).
mark-everything-present
(function).
means
(generic reader).
merge-cloud-specs
(function).
monitor-bm-mean-field-bottom-up
(function).
monitor-bm-mean-field-reconstructions
(function).
monitor-dbn-mean-field-bottom-up
(function).
monitor-dbn-mean-field-reconstructions
(function).
n-gibbs
(generic reader).
(setf n-gibbs)
(generic writer).
n-particles
(generic reader).
(setf n-rbms)
(setf expander).
n-rbms
(function).
negative-phase
(generic function).
nodes->inputs
(function).
normal-sparsity-gradient-source
(class).
persistent-chains
(generic reader).
positive-phase
(generic function).
rank
(function).
rbm
(generic reader).
rbm
(class).
rbm-cd-learner
(class).
rbms
(generic reader).
reconstruction-error
(function).
reconstruction-rmse
(function).
relu-chunk
(class).
sample-chunk
(generic function).
sample-hidden
(function).
sample-visible
(function).
set-chunk-mean
(generic function).
set-hidden-mean
(generic function).
set-hidden-mean/1
(function).
set-visible-mean
(generic function).
set-visible-mean/1
(function).
settle-hidden-mean-field
(function).
settle-mean-field
(function).
settle-visible-mean-field
(function).
sigmoid-chunk
(class).
softmax-chunk
(class).
softmax-label-chunk
(class).
sparser
(generic reader).
sparsity-gradient-source
(class).
supervise-mean-field/default
(function).
temporal-chunk
(class).
up-dbm
(function).
visible-chunks
(generic function).
visible-sampling
(generic reader).
(setf visible-sampling)
(generic writer).
*current-stripe*
(special variable).
*versions*
(special variable).
->chunk
(function).
->cloud
(function).
->clouds
(function).
accumulate-cloud-statistics
(generic function).
accumulate-cloud-statistics*
(generic function).
accumulate-negative-phase-statistics
(generic function).
accumulate-positive-phase-statistics
(generic function).
accumulate-sparsity-statistics
(generic function).
accumulated-in-sink-p
(function).
activate-cloud
(function).
activate-cloud*
(generic function).
add-chunk-nodes
(function).
add-rbm
(function).
all-rbms
(function).
bm-mcmc-learner
(class).
bm-type-name
(function).
both-cloud-ends-in-p
(function).
cached-activations1
(generic reader).
cached-activations2
(generic reader).
cached-version1
(generic reader).
(setf cached-version1)
(generic writer).
cached-version2
(generic reader).
(setf cached-version2)
(generic writer).
call-with-sink-accumulator
(generic function).
check-dbm-clouds
(function).
check-no-name-clashes
(function).
check-no-self-connection
(function).
check-stripes
(function).
check-valuep
(function).
cloud-between-chunks-p
(function).
conditioning-chunk-p
(function).
conditioning-clouds-to
(function).
connects-to-p
(function).
copy-chunk-nodes
(function).
copy-dbm-chunk-to-dbn
(function).
copy-dbm-cloud-to-dbn
(function).
copy-nodes
(generic function).
cuda-exp-normalized
(function).
cuda-exp-normalized-d
(function).
cuda-exp-normalized-s
(function).
cuda-sample-softmax
(function).
cuda-sample-softmax-d
(function).
cuda-sample-softmax-s
(function).
do-chunk
(macro).
do-cloud-runs
(macro).
do-cloud/chunk1
(macro).
do-stripes
(macro).
ensure-mat-large-enough
(function).
ensure-random-numbers
(function).
ensure-scratch
(function).
factored-cloud-shared-chunk
(function).
factored-cloud-shared-chunk
(class).
fill-chunk
(function).
flush-sparsity
(generic function).
format-full-cloud-norm
(function).
full-cloud-norm
(function).
full-clouds-everywhere
(function).
full-clouds-everywhere-between-layers
(function).
has-hidden-to-hidden-p
(generic reader).
has-inputs-p
(generic reader).
has-visible-to-visible-p
(generic reader).
hidden-and-conditioning-chunks
(generic reader).
hidden-source-chunk
(generic reader).
hijack-means-to-activation
(function).
inactive-rbms
(generic reader).
make-do-cloud/chunk2
(function).
map-sparser
(function).
maybe-remember
(function).
maybe-use-remembered
(function).
means-or-samples
(function).
name-clashes
(function).
next-node-inputs
(generic reader).
node-change
(function).
nodes->means
(function).
norm
(function).
normalized-group-chunk
(class).
old-nodes
(generic reader).
old-products
(generic reader).
old-sum1
(generic reader).
other-chunk
(function).
previous-rbm
(function).
products
(generic reader).
random-numbers
(generic reader).
(setf random-numbers)
(generic writer).
remove-if*
(function).
resize-chunk
(generic function).
scale1
(generic reader).
scale2
(generic reader).
scratch
(generic reader).
(setf scratch)
(generic writer).
set-mean
(function).
set-mean*
(function).
set-n-rbms
(function).
sparse-bm-learner
(class).
sparsity-gradient-sources
(generic reader).
sparsity-target
(generic reader).
stable-set-difference
(function).
sum-chunk-nodes-and-old-nodes
(function).
sum-nodes-and-old-nodes
(function).
sum1
(generic reader).
sum2
(generic reader).
swap-nodes
(function).
swap-nodes*
(function).
use-blas-on-chunk-p
(function).
version
(function).
visible-and-conditioning-chunks
(generic reader).
visible-nodes->means
(function).
with-sink-accumulator
(macro).
with-versions
(macro).
zero-chunk
(function).
zero-weight-to-self
(generic function).
mgl
See MGL::@MGL-MANUAL. This package reexports
everything from other packages defined here plus MGL-MAT.
common-lisp
.
mgl-bm
.
mgl-bp
.
mgl-cg
.
mgl-common
.
mgl-core
.
mgl-dataset
.
mgl-diffun
.
mgl-gd
.
mgl-gp
.
mgl-log
.
mgl-mat
.
mgl-nlp
.
mgl-opt
.
mgl-pax
.
mgl-resample
.
mgl-unroll
.
mgl-util
.
@mgl-bm
(special variable).
@mgl-code-organization
(special variable).
@mgl-dependencies
(special variable).
@mgl-glossary
(special variable).
@mgl-gp
(special variable).
@mgl-introduction
(special variable).
@mgl-links
(special variable).
@mgl-manual
(special variable).
@mgl-overview
(special variable).
pax-pages
(function).
pax-sections
(function).
mgl-core
See MGL-CORE::@MGL-MODEL, MGL-CORE::@MGL-MONITOR, MGL-CORE::@MGL-CLASSIFICATION.
common-lisp
.
mgl-common
.
mgl-dataset
.
mgl-log
.
mgl-mat
.
mgl-pax
.
mgl-util
.
mgl
.
mgl-bm
.
mgl-bp
.
mgl-cg
.
mgl-diffun
.
mgl-gd
.
mgl-gp
.
mgl-nlp
.
mgl-opt
.
mgl-unroll
.
add-confusion-matrix
(function).
add-to-counter
(generic function).
apply-monitor
(generic function).
apply-monitors
(function).
attributed
(class).
attributes
(generic reader).
(setf attributes)
(generic writer).
basic-counter
(class).
classification-accuracy-counter
(class).
concat-counter
(class).
concatenation-type
(generic reader).
confusion-class-name
(generic function).
confusion-count
(generic function).
(setf confusion-count)
(generic function).
confusion-matrix
(class).
confusion-matrix-accuracy
(function).
confusion-matrix-classes
(generic function).
confusion-matrix-precision
(function).
confusion-matrix-recall
(function).
count-features
(function).
counter
(generic function).
counter-raw-values
(generic function).
counter-values
(generic function).
cross-entropy-counter
(class).
decode
(generic function).
do-batches-for-model
(macro).
do-executors
(macro).
encode
(generic function).
encoder/decoder
(class).
feature-disambiguities
(function).
feature-llrs
(function).
instance-to-executor-parameters
(generic function).
label-index
(generic function).
label-index-distribution
(generic function).
label-index-distributions
(generic function).
label-indices
(generic function).
load-state
(function).
log-padded
(function).
make-classification-accuracy-monitors
(function).
make-classification-accuracy-monitors*
(generic function).
make-confusion-matrix
(function).
make-cross-entropy-monitors
(function).
make-cross-entropy-monitors*
(generic function).
make-executor-with-parameters
(generic function).
make-indexer
(function).
make-label-monitors
(function).
map-batches-for-model
(function).
map-confusion-matrix
(generic function).
map-over-executors
(generic function).
max-n-stripes
(slot).
max-n-stripes
(slot).
max-n-stripes
(slot).
(setf max-n-stripes)
(setf expander).
max-n-stripes
(generic function).
measure-classification-accuracy
(function).
measure-confusion
(function).
measure-cross-entropy
(function).
measure-roc-auc
(function).
measurer
(generic reader).
monitor
(class).
monitor-model-results
(function).
monitors
(slot).
monitors
(slot).
monitors
(generic function).
(setf monitors)
(generic writer).
n-stripes
(slot).
n-stripes
(slot).
(setf n-stripes)
(setf expander).
n-stripes
(generic function).
parameterized-executor-cache-mixin
(class).
read-state
(function).
read-state*
(generic function).
reset-counter
(generic function).
rmse-counter
(class).
save-state
(function).
set-input
(generic function).
set-max-n-stripes
(generic function).
set-n-stripes
(generic function).
sort-confusion-classes
(generic function).
stripe-end
(generic function).
stripe-start
(generic function).
with-padded-attribute-printing
(macro).
with-stripes
(macro).
write-state
(function).
write-state*
(generic function).
*attribute-print-widths*
(special variable).
*print-label-counters*
(special variable).
@mgl-attributes
(special variable).
@mgl-classification
(special variable).
@mgl-classification-counter
(special variable).
@mgl-classification-measurer
(special variable).
@mgl-classification-monitor
(special variable).
@mgl-confusion-matrix
(special variable).
@mgl-core
(special variable).
@mgl-counter
(special variable).
@mgl-counter-classes
(special variable).
@mgl-executors
(special variable).
@mgl-feature-encoding
(special variable).
@mgl-feature-selection
(special variable).
@mgl-features
(special variable).
@mgl-measurer
(special variable).
@mgl-model-stripe
(special variable).
@mgl-monitor
(special variable).
@mgl-monitoring
(special variable).
@mgl-parameterized-executor-cache
(special variable).
@mgl-persistence
(special variable).
all-document-classes
(function).
attribute-print-width
(function).
concatenation
(generic reader).
(setf concatenation)
(generic writer).
confusion-matrix-test
(function).
counts
(generic reader).
decodings
(generic reader).
denominator*
(generic reader).
document-features
(function).
encodings
(generic reader).
executor-cache
(generic reader).
find-one-executor
(generic function).
get-all
(function).
index-scored-features
(function).
insert-into-executor-cache
(function).
lookup-executor-cache
(function).
maybe-print-unreadable-object
(macro).
numerator*
(generic reader).
pad-to-width
(function).
per-label-counters
(generic reader).
plists-to-column-widths
(function).
print-name
(function).
reverse-map
(function).
stripe-binding
(function).
trivially-map-over-executors
(function).
Definitions are sorted by export status, category, package, and then by lexicographic order.
The default stream to which commands and inline data are written by WRITE-COMMAND.
The default for CUDA-WINDOW-START-TIME.
Extrapolate a maximum of EXT times the current step-size.
Don’t reevaluate within INT of the limit of the current bracket.
Maximum allowed slope ratio.
RHO is the minimum allowed fraction of the expected improvement (computed from the slope at the initial point of the line search). The constants must satisfy 0 < RHO < SIG < 1.
SIG and RHO are the constants controlling the Wolfe-Powell conditions. SIG is the maximum allowed absolute ratio between previous and new slopes (derivatives in the search direction), thus setting SIG to low (positive) values forces higher precision in the line-searches.
This is the default dataset for MGL-OPT:MINIMIZE. It’s an infinite stream of NILs.
Controls whether warping is enabled (see @MGL-RNN-TIME-WARP). Don’t enable it for training, as it would make backprop impossible.
Syntactic sugar to assemble FNNs from CLUMPs. Like LET*, it is a
sequence of bindings (of symbols to CLUMPs). The names of the clumps
created default to the symbol of the binding. In case a clump is not
bound to a symbol (because it was created in a nested expression),
the local function CLUMP can be used to find the clump with the
given name in the fnn being built. Example:
(build-fnn ()
  (features (->input :size n-features))
  (biases (->weight :size n-features))
  (weights (->weight :size (* n-hiddens n-features)))
  (activations0 (->v*m :weights weights :x (clump 'features)))
  (activations (->+ :args (list biases activations0)))
  (output (->sigmoid :x activations)))
Create an RNN with MAX-N-STRIPES and MAX-LAG whose UNFOLDER is BODY wrapped in a lambda. Bind the symbol given as the RNN argument to the RNN object so that BODY can see it.
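For illustration, a minimal sketch of such an unfolder building an FNN with a single LSTM layer (adapted from the RNN examples referenced by @MGL-RNN-TUTORIAL; the class, names and sizes are illustrative):
```commonlisp
;; A minimal sketch; sizes and names are illustrative.
(build-rnn ()
  (build-fnn (:class 'fnn)
    (input (->input :size 1))
    (h (->lstm input :name 'h :size 10))
    (prediction (->softmax-xe-loss
                 (->activation h :name 'prediction :size 2)))))
```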
Convenience macro over MAP-BATCHES-FOR-MODEL.
Convenience macro on top of MAP-OVER-EXECUTORS.
A convenience macro on top of MAP-GRADIENT-SINK.
Iterate over SEGMENTS in SEGMENT-SET. If START is specified, it is bound to the start index of SEGMENT within SEGMENT-SET. The start index is the sum of the sizes of previous segments.
Like CONSTANTLY but evaluates BODY anew each time the returned function is called.
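A quick sketch of the difference from CONSTANTLY:
```commonlisp
;; CONSTANTLY evaluates its argument once; the function made by
;; REPEATEDLY reevaluates BODY on every call.
(let ((f (constantly (random 10)))
      (g (repeatedly (random 10))))
  (list (funcall f) (funcall f)    ; the same number twice
        (funcall g) (funcall g)))  ; two independently drawn numbers
```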
Let the compiler compile BODY for the case when TEST is true and also when it’s false. The purpose is to allow different constraints to propagate to the two branches allowing them to be more optimized.
Binds *COMMAND-STREAM* to STREAM routing all command output to STREAM by default.
Note the width of values for each attribute key, which is the number of characters in the value’s PRINC-TO-STRING’ed representation. In BODY, if attributes with the same key are printed, they are forced to be at least this wide. This allows for nice, table-like output:
(let ((attributeds
        (list (make-instance 'basic-counter
                             :attributes '(:a 1 :b 23 :c 456))
              (make-instance 'basic-counter
                             :attributes '(:a 123 :b 45 :c 6)))))
  (with-padded-attribute-printing (attributeds)
    (map nil (lambda (attributed)
               (format t "~A~%" attributed))
         attributeds)))
;; 1   23 456: 0.000e+0 (0)
;; 123 45 6  : 0.000e+0 (0)
Start gnuplot, bind STREAM and *COMMAND-STREAM* to its standard input. The stream is closed when BODY exits.
Bind start and optionally end indices belonging to stripes in
striped objects.
(WITH-STRIPES ((STRIPE1 OBJECT1 START1 END1)
               (STRIPE2 OBJECT2 START2)
               ...)
  ...)
This is how one’s supposed to find the index range corresponding to
the Nth input in an input lump of a bpn:
(with-stripes ((n input-lump start end))
  (loop for i upfrom start below end
        do (setf (mref (nodes input-lump) i) 0d0)))
Note how the input lump is striped, but the matrix into which we are indexing (NODES) is not known to WITH-STRIPES. In fact, for lumps the same stripe indices work with NODES and MGL-BP:DERIVATIVES.
In BODY ->WEIGHT will first look up if a weight lump of the same name exists in FROM-BPN and return that, or else create a weight lump normally. If FROM-BPN is NIL, then no weights are copied.
Create a subnetwork of class ->ACTIVATION that computes the overall activation from dense connections from lumps in INPUTS, and elementwise connections from lumps in PEEPHOLES. Create new ->WEIGHT lumps as necessary. INPUTS and PEEPHOLES can be a single lump or a list of lumps. Finally, if ADD-BIAS-P, then add an elementwise bias too. SIZE must be specified explicitly, because it is not possible to determine it unless there are peephole connections.
```cl-transcript
(->activation (->input :size 10 :name 'input) :name 'h1 :size 4)
==> #<->ACTIVATION (H1 :ACTIVATION) :STRIPES 1/1 :CLUMPS 4>
```
This is the basic workhorse of neural networks, taking care of the linear transformation whose results are then fed to some non-linearity (->SIGMOID, ->TANH, etc).
The name of the subnetwork clump is `(,NAME :ACTIVATION)`. The bias weight lump (if any) is named `(:BIAS ,NAME)`. Dense connection weight lumps are named after the input and NAME: `(,(NAME INPUT) ,NAME)`, while peephole weight lumps are named `(,(NAME INPUT) ,NAME :PEEPHOLE)`. This is useful to know if, for example, they are to be initialized differently.
A utility function that creates an ->ACTIVATION, wraps it in ->BATCH-NORMALIZED, and creates the two weight lumps for the scale and shift parameters of its BATCH-NORMALIZATION. `(->BATCH-NORMALIZED-ACTIVATION INPUTS :NAME 'H1 :SIZE 10)` is equivalent to:
```commonlisp
(->batch-normalized (->activation inputs :name 'h1 :size 10 :add-bias-p nil)
                    :name '(h1 :batch-normalized-activation))
```
Note how biases are turned off since normalization will cancel them anyway (but a shift is added which amounts to the same effect).
Create an LSTM layer consisting of input, forget and output gates with which input, cell state and output are scaled. Lots of lumps are created, and the final one, representing the output of the LSTM, has NAME. The rest of the lumps are named automatically based on NAME. This function returns only the output lump (`m`), but all created lumps are added automatically to the BPN being built.
There are many papers and tutorials on LSTMs. This version is well
described in "Long Short-Term Memory Recurrent Neural Network
Architectures for Large Scale Acoustic Modeling" (2014, Hasim Sak,
Andrew Senior, Francoise Beaufays). Using the notation from that
paper:
$$
i_t = s(W\_{ix} x\_t + W\_{im} m\_{t-1} + W\_{ic} \odot
c\_{t-1} + b\_i)
$$
$$
f\_t = s(W\_{fx} x\_t + W\_{fm} m\_{t-1} + W\_{fc} \odot
c\_{t-1} + b\_f)
$$
$$
c\_t = f\_t \odot c\_{t-1} + i\_t \odot g(W\_{cx} x\_t +
W\_{cm} m\_{t-1} + b\_c)
$$
$$
o\_t = s(W\_{ox} x\_t + W\_{om} m\_{t-1} + W\_{oc} \odot
c\_t + b\_o)
$$
$$
m\_t = o\_t \odot h(c\_t),
$$
where `i`, `f`, and `o` are the input, forget and output gates. `c` is the cell state and `m` is the actual output.
Weight matrices for connections from `c` (`W_ic`, `W_fc` and `W_oc`) are diagonal and represented by just the vector of diagonal values. These connections are only added if PEEPHOLES is true.
A notable difference from the paper is that in addition to being a single lump, `x_t` (INPUTS) can also be a list of lumps. Whenever some activation is to be calculated based on `x_t`, it is going to be the sum of individual activations. For example, `W_ix * x_t` is really `sum_j W_ijx * inputs_j`.
If CELL-INIT is non-NIL, then it must be a CLUMP of size SIZE that stands for the initial state of the value cell (`c_{-1}`). CELL-INIT being NIL is equivalent to a state of all zeros.
ACTIVATION-FN defaults to ->ACTIVATION, but it can be for example ->BATCH-NORMALIZED-ACTIVATION. In general, functions like the aforementioned two with signature like (INPUTS &KEY NAME SIZE PEEPHOLES) can be passed as ACTIVATION-FN.
Add CLUMP to BPN. MAX-N-STRIPES of CLUMP gets set to that of BPN. It is an error to add a clump with a name already used by one of the CLUMPS of BPN.
Add MATRIX into RESULT-MATRIX.
Call APPLY-MONITOR on each monitor in MONITORS and ARGUMENTS. This is how an event is fired.
Make it so that the norm of the batch normalized gradients accumulated by BATCH-GD-OPTIMIZER is clipped to L2-UPPER-BOUND before every update. See CLIP-L2-NORM.
By pushing a lambda onto AFTER-UPDATE-HOOK of OPTIMIZER, arrange for all weights being trained by OPTIMIZER to be renormalized (as in RENORMALIZE-ACTIVATIONS with L2-UPPER-BOUND). It is assumed that the weights either belong to an activation lump or are simply added to the activations (i.e. they are biases).
Sample from SEQ with SAMPLE-FROM (passing RATIO, WEIGHT, REPLACEMENT), or SAMPLE-STRATIFIED if KEY is not NIL. Call FN with the sample. If N is NIL then keep repeating this until FN performs a non-local exit. Else N must be a non-negative integer, N iterations will be performed, the primary values returned by FN collected into a list and returned. See SAMPLE-FROM and SAMPLE-STRATIFIED for examples.
Perform cross-validation on different shuffles of DATA N times and collect the results. Since CROSS-VALIDATE collects the return values of FN, the return value of this function is a list of lists of FN results. If N is NIL, don’t collect anything; just keep doing repeated CVs until FN performs a non-local exit.
The following example simply collects the test and training sets for
2-fold CV repeated 3 times with shuffled data:
```commonlisp
;;; This is non-deterministic.
(bag-cv '(0 1 2 3 4) #'list :n 3 :n-folds 2)
=> ((((2 3 4) (1 0))
     ((1 0) (2 3 4)))
    (((2 1 0) (4 3))
     ((4 3) (2 1 0)))
    (((1 0 3) (2 4))
     ((2 4) (1 0 3))))
```
CV bagging is useful when a single CV is not producing stable results. As an ensemble method, CV bagging has the advantage over bagging that each example will occur the same number of times and after the first CV is complete there is a complete but less reliable estimate for each example which gets refined by further CVs.
Return 1 with probability X and 0 otherwise.
See "Accurate Methods for the Statistics of Surprise and
Coincidence" by Ted Dunning
(http://citeseer.ist.psu.edu/29096.html).
All classes must have non-zero counts, that is, K1, N1-K1, K2, N2-K2 are positive integers. To ensure this - and also as kind of prior - add a small number such as 1 to K1, K2 and 2 to N1, N2 before calling.
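A hypothetical helper capturing that advice (the argument order K1 N1 K2 N2 is an assumption, not stated here):
```commonlisp
;; Add-one smoothing before computing the log likelihood ratio.
;; The argument order (k1 n1 k2 n2) is assumed for illustration.
(defun smoothed-binomial-llr (k1 n1 k2 n2)
  (binomial-log-likelihood-ratio (+ k1 1) (+ n1 2)
                                 (+ k2 1) (+ n2 2)))
```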
Compute the [BLEU score](http://en.wikipedia.org/wiki/BLEU) for
bilingual CORPUS. BLEU measures how good a translation is compared
to human reference translations.
CANDIDATES (keyed by CANDIDATE-KEY) and REFERENCES (keyed by
REFERENCE-KEY) are sequences of sentences. A sentence is a sequence
of words. Words are compared with EQUAL, and may be any kind of
object (not necessarily strings).
Currently there is no support for multiple reference translations. N
determines the largest n-grams to consider.
The first return value is the BLEU score (between 0 and 1, not as a
percentage). The second value is the sum of the lengths of
CANDIDATES divided by the sum of the lengths of REFERENCES (or NIL,
if the denominator is 0). The third is a list of n-gram
precisions (also between 0 and 1 or NIL), one for each element in
\[1..`N`].
This is basically a reimplementation of [multi-bleu.perl](https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl).
```cl-transcript
(bleu '((1 2 3 4) (a b))
      '((1 2 3 4) (1 2)))
=> 0.8408964
=> 1
=> (;; 1-gram precision: 4/6
    2/3
    ;; 2-gram precision: 3/4
    3/4
    ;; 3-gram precision: 2/2
    1
    ;; 4-gram precision: 1/1
    1)
```
CG-OPTIMIZER passes each batch of data to this function with its
CG-ARGS passed on.
Minimize a differentiable multivariate function with conjugate
gradient. The Polak-Ribiere flavour of conjugate gradients is used
to compute search directions, and a line search using quadratic and
cubic polynomial approximations and the Wolfe-Powell stopping
criteria is used together with the slope ratio method for guessing
initial step sizes. Additionally a bunch of checks are made to make
sure that exploration is taking place and that extrapolation will
not be unboundedly large.
FN is a function of two parameters: WEIGHTS and DERIVATIVES. WEIGHTS is a MAT of the same size as W; it is where the search starts from. DERIVATIVES is also a MAT of that size, and it is where FN shall place the partial derivatives. FN returns the value of the function that is being minimized.
CG performs a number of line searches and invokes FN at each step. A line search invokes FN at most MAX-N-EVALUATIONS-PER-LINE-SEARCH times and can either succeed in improving the minimum by a sufficient margin or fail. Note that even a failed line search may improve the minimum further and hence change the weights; it’s just that the improvement was deemed too small. CG stops when either:
- two line searches fail in a row
- MAX-N-LINE-SEARCHES is reached
- MAX-N-EVALUATIONS is reached
CG returns a MAT that contains the best weights, the minimum, the number of line searches performed, the number of successful line searches and the number of evaluations.
When using MAX-N-EVALUATIONS remember that there is an extra
evaluation of FN before the first line search.
SPARE-VECTORS is a list of preallocated MATs of the same size as W.
Passing 6 of them covers the current need of the algorithm and it
will not cons up vectors of size W at all.
NOTE: If the function terminates within a few iterations, it could be an indication that the function values and derivatives are not consistent (i.e., there may be a bug in the implementation of FN).
SIG and RHO are the constants controlling the Wolfe-Powell conditions. SIG is the maximum allowed absolute ratio between previous and new slopes (derivatives in the search direction), thus setting SIG to low (positive) values forces higher precision in the line-searches. RHO is the minimum allowed fraction of the expected (from the slope at the initial point in the linesearch). Constants must satisfy 0 < RHO < SIG < 1. Tuning of SIG (depending on the nature of the function to be optimized) may speed up the minimization; it is probably not worth playing much with RHO.
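As a hedged sketch of this calling convention (assuming CG is invoked as (CG FN W &KEY ...) as the description implies, and using MGL-MAT accessors; the objective and starting point are made up for illustration):
```commonlisp
;; Minimize f(w) = (w - 3)^2 over a single weight starting from 0.
(let ((w (mgl-mat:make-mat 1 :initial-element 0d0)))
  (cg (lambda (weights derivatives)
        (let ((x (mgl-mat:mref weights 0)))
          ;; place the partial derivative, return the cost
          (setf (mgl-mat:mref derivatives 0) (* 2 (- x 3)))
          (expt (- x 3) 2)))
      w))
```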
The name of the lump that represents CHUNK.
Scale MATS so that their $L_2$ norm does not exceed L2-UPPER-BOUND.
Compute the norm of MATS as if they were a single vector. If the norm is greater than L2-UPPER-BOUND, then scale each matrix destructively by L2-UPPER-BOUND divided by the norm and, if non-NIL, call the function CALLBACK with the scaling factor.
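A hedged sketch, assuming the signature (CLIP-L2-NORM MATS L2-UPPER-BOUND &KEY CALLBACK) implied by the description:
```commonlisp
;; The norm of #(3 4) is 5, so both elements are scaled by 1/5.
(let ((m (mgl-mat:make-mat 2 :initial-contents '(3 4))))
  (clip-l2-norm (list m) 1
                :callback (lambda (scale)
                            (format t "scaled by ~A~%" scale)))
  m) ; => a MAT holding approximately #(0.6 0.8)
```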
Return the overall accuracy of the results in MATRIX. It’s computed as the number of correctly classified cases (hits) divided by the number of cases. Return the number of hits and the number of cases as the second and third values. If the FILTER function is given, then call it with the target and the prediction of each cell, and disregard cells for which FILTER returns NIL.
Precision and recall can be easily computed by giving the right filter, although those are provided in separate convenience functions.
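A hedged sketch, assuming CONFUSION-COUNT is invoked with the matrix, the target and the prediction (a reading of the description above):
```commonlisp
(let ((matrix (make-confusion-matrix)))
  (setf (confusion-count matrix :cat :cat) 8   ; 8 cats classified correctly
        (confusion-count matrix :cat :dog) 2)  ; 2 cats misclassified as dogs
  (confusion-matrix-accuracy matrix))
;; => the accuracy 8/10, with 8 hits and 10 cases as further values
```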
Return the accuracy over the cases when the classifier said PREDICTION.
Return the accuracy over the cases when the correct class is TARGET.
Return scored features as an EQUAL hash table whose keys are
features of DOCUMENTS and values are counts of occurrences of
features. MAPPER takes a function and a document and calls function
with features of the document.
```cl-transcript
(sort (alexandria:hash-table-alist
       (count-features '(("hello" "world")
                         ("this" "is" "our" "world"))
                       (lambda (fn document)
                         (map nil fn document))))
      #'string< :key #'car)
=> (("hello" . 1) ("is" . 1) ("our" . 1) ("this" . 1) ("world" . 2))
```
Map FN over the FOLDS of DATA split with SPLIT-FN and collect the
results in a list. The simplest demonstration is:
```cl-transcript
(cross-validate '(0 1 2 3 4)
                (lambda (test training)
                  (list test training))
                :n-folds 5)
=> (((0) (1 2 3 4))
    ((1) (0 2 3 4))
    ((2) (0 1 3 4))
    ((3) (0 1 2 4))
    ((4) (0 1 2 3)))
```
Of course, in practice one would typically train a model and return the trained model and/or its score on TEST. Also, sometimes one may want to do only some of the folds and remember which ones they were:
```cl-transcript
(cross-validate '(0 1 2 3 4)
                (lambda (fold test training)
                  (list :fold fold test training))
                :folds '(2 3)
                :pass-fold t)
=> ((:fold 2 (2) (0 1 3 4))
    (:fold 3 (3) (0 1 2 4)))
```
Finally, the way the data is split can be customized. By default SPLIT-FOLD/MOD is called with the arguments DATA, the fold (from among FOLDS) and N-FOLDS. SPLIT-FOLD/MOD returns two values which are then passed on to FN. One can use SPLIT-FOLD/CONT or SPLIT-STRATIFIED or any other function that works with these arguments. The only real constraint is that FN has to take as many arguments (plus the fold argument if PASS-FOLD) as SPLIT-FN returns.
Convert DBM to a DBN by discarding intralayer connections and
doubling activations of clouds where necessary. If a chunk does not
have input from below then scale its input from above by 2;
similarly, if a chunk does not have input from above then scale its
input from below by 2. By default, weights are shared between clouds
and their copies.
For now, unrolling the resulting DBN to a BPN is not supported.
Do a single downward pass in DBM, propagating the mean-field much like performing approximate inference, but in the other direction. Disregard intralayer and upward connections, double activations to chunks having downward connections.
Propagate the means down from the means of DBN.
Set TARGET of SOFTMAX-XE-LOSS to a MAT capable of holding the dense target values for N stripes.
Return scored features as an EQUAL hash table whose keys are
features of DOCUMENTS and values are their _disambiguities_. MAPPER
takes a function and a document and calls function with features of
the document.
From the paper 'Using Ambiguity Measure Feature Selection Algorithm for Support Vector Machine Classifier'.
Return scored features as an EQUAL hash table whose keys are
features of DOCUMENTS and values are their log likelihood ratios.
MAPPER takes a function and a document and calls function with
features of the document.
```cl-transcript
(sort (alexandria:hash-table-alist
       (feature-llrs '((:a "hello" "world")
                       (:b "this" "is" "our" "world"))
                     (lambda (fn document)
                       (map nil fn (rest document)))
                     #'first))
      #'string< :key #'car)
=> (("hello" . 2.6032386) ("is" . 2.6032386) ("our" . 2.6032386)
    ("this" . 2.6032386) ("world" . 4.8428774e-8))
```
Find the clump with NAME among CLUMPS of BPN. As always, names are compared with EQUAL. If not found, then return NIL or signal an error depending on ERRORP.
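For instance (a minimal sketch; the ERRORP keyword is assumed from the description):
```commonlisp
(let ((fnn (build-fnn ()
             (input (->input :size 2))
             (output (->sigmoid :x input)))))
  (values (find-clump 'output fnn)
          (find-clump 'missing fnn :errorp nil))) ; => the clump and NIL
```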
Partition SEQ into a number of subsequences. FRACTIONS is either a
positive integer or a list of non-negative real numbers. WEIGHT is
NIL or a function that returns a non-negative real number when
called with an element from SEQ. If FRACTIONS is a positive integer
then return a list of that many subsequences with equal sum of
weights bar rounding errors, else partition SEQ into subsequences,
where the sum of weights of subsequence I is proportional to element
I of FRACTIONS. If WEIGHT is NIL, then every element is assumed to have the same weight.
To split into 5 sequences:
```cl-transcript
(fracture 5 '(0 1 2 3 4 5 6 7 8 9))
=> ((0 1) (2 3) (4 5) (6 7) (8 9))
```
To split into two sequences whose lengths are proportional to 2 and
3:
```cl-transcript
(fracture '(2 3) '(0 1 2 3 4 5 6 7 8 9))
=> ((0 1 2 3) (4 5 6 7 8 9))
```
Similar to FRACTURE, but also makes sure that keys are evenly
distributed among the partitions (see STRATIFY). It can be useful
for classification tasks to partition the data set while keeping the
distribution of classes the same.
Note that the sets returned are not in random order. In fact, they
are sorted internally by KEY.
For example, to make two splits with approximately the same number
of even and odd numbers:
```cl-transcript
(fracture-stratified 2 '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)
=> ((0 2 1 3) (4 6 8 5 7 9))
```
Return a list of MGL-GNUPLOT:DATA-MAPPINGs, one for each level in LEVELS-AND-OPTIONS (a list of (LEVEL OPTIONS)). Each mapping contains INPUTS in its first column, and MEANS + LEVEL*VARIANCES in the second.
Returns a matrix that contains INPUTS in its first column, and a sample taken with SAMPLE-GP in its second.
Initialize FNN from the weights of BM according to cloud INITS that was returned by UNROLL-DBN or UNROLL-DBM.
Copy the previously clamped INPUTS to NODES as if SET-INPUT were called with the same parameters.
Insert ITEM into VECTOR while keeping it sorted by PRED. Extend the vector if needed while respecting MAX-LENGTH.
In RNN, or, if that's NIL, in the RNN being extended with another
BPN (called _unfolding_), look up the CLUMP with NAME in the BPN
that's LAG number of time steps before the BPN being added. If this
function is called from the UNFOLDER of an RNN (which is what happens
behind the scenes in the body of BUILD-RNN), then it returns an
opaque object representing a lagged connection to a clump; else it
returns the CLUMP itself.
FIXDOC: PATH
Return a list of samples of length at most MAX-SIZE or less if SAMPLER runs out.
A utility function that returns the list of segments from MAP-SEGMENTS on GRADIENT-SOURCE.
Load weights of OBJECT from FILENAME. Return OBJECT.
Log (see LOG-MSG) ATTRIBUTEDS non-escaped (as in PRINC or ~A) with the output being as table-like as possible.
Return a list of MONITOR objects associated with
[CLASSIFICATION-ACCURACY-COUNTER][]s. LABEL-INDEX-FN is a function
like LABEL-INDEX. See that function for more.
Implemented in terms of MAKE-CLASSIFICATION-ACCURACY-MONITORS*.
Classes are compared with TEST.
Return a list of MONITOR objects, each associated with one BASIC-COUNTER with attribute :TYPE "cost". Implemented in terms of MAKE-COST-MONITORS*.
Return a list of MONITOR objects associated with [CROSS-ENTROPY-COUNTER][]s. LABEL-INDEX-DISTRIBUTION-FN is a function like LABEL-INDEX-DISTRIBUTION. See that function for more.
Implemented in terms of MAKE-CROSS-ENTROPY-MONITORS*.
Take the top N features from SCORED-FEATURES (see @MGL-FEATURE-SELECTION), assign indices to them starting from START. Return an ENCODER/DECODER (or another CLASS) that converts between objects and indices.
Return classification accuracy and cross-entropy monitors. See MAKE-CLASSIFICATION-ACCURACY-MONITORS and MAKE-CROSS-ENTROPY-MONITORS for a description of parameters.
Make a function of a single argument that's suitable as the
function argument to a mapper function. It calls FUNCTION with every
group of N consecutive elements (that is, with each n-gram).
```cl-transcript
(map nil (make-n-gram-mappee #'print 3) '(a b c d e))
..
.. (A B C)
.. (B C D)
.. (C D E)
```
Return a function that returns elements of VECTOR in random order without end. When there are no more elements, start over with a different random order.
Create a sampler that returns elements of SEQ in random order. If MAX-N-SAMPLES is non-nil, then at most MAX-N-SAMPLES are sampled. The first pass is over a shuffled copy of SEQ, and this copy is reshuffled whenever the sampler reaches the end of it. Shuffling is performed by calling the REORDER function.
Return a function that returns elements of SEQ in order without end. When there are no more elements, start over.
Create a sampler that returns elements of SEQ in their original order. If MAX-N-SAMPLES is non-nil, then at most MAX-N-SAMPLES are sampled.
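A hedged usage sketch of the sequence sampler together with LIST-SAMPLES (documented elsewhere in this manual); since the sampler stops after MAX-N-SAMPLES, asking for more samples simply stops there:
```commonlisp
(let ((sampler (make-sequence-sampler '(:a :b :c) :max-n-samples 5)))
  (list-samples sampler 10))
;; => (:A :B :C :A :B)
```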
Return a list of monitors, one for every monitor in STEP-MONITORS
of RNN. These monitors extract the results from their warp
counterparts with COUNTER-VALUES-FN and add them to their own
counter that's created by MAKE-COUNTER. The idea is that
one does something like this to monitor warped prediction:
```commonlisp
(let ((*warp-time* t))
  (setf (step-monitors rnn)
        (make-cost-monitors rnn :attributes '(:event "warped pred.")))
  (monitor-bpn-results dataset rnn
                       ;; Just collect and reset the warp
                       ;; monitors after each batch of
                       ;; instances.
                       (make-step-monitor-monitors rnn)))
```
Call FN with batches of instances from DATASET suitable for MODEL. The number of instances in a batch is MAX-N-STRIPES of MODEL or less if there are no more instances left.
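A hedged sketch, assuming this is MAP-BATCHES-FOR-MODEL with argument order FN, DATASET, MODEL (MODEL and DATASET are placeholders):
```commonlisp
;; Run a forward pass on each batch of at most MAX-N-STRIPES instances.
(map-batches-for-model (lambda (batch)
                         (set-input batch model)
                         (forward model))
                       dataset model)
```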
Call FN with each instance in DATASET. This is basically equivalent to iterating over the elements of a sequence or a sampler (see @MGL-SAMPLER).
Call FN with a list of instances, one from each dataset in
DATASETS. Return nothing. If IMPUTE is specified, then iterate until
the largest dataset is consumed, imputing IMPUTE for missing values.
If IMPUTE is not specified, then iterate until the smallest dataset
runs out.
```cl-transcript
(map-datasets #'prin1 '((0 1 2) (:a :b)))
.. (0 :A)(1 :B)
(map-datasets #'prin1 '((0 1 2) (:a :b)) :impute nil)
.. (0 :A)(1 :B)(2 NIL)
```
It is of course allowed to mix sequences with samplers:
```cl-transcript
(map-datasets #'prin1
              (list '(0 1 2)
                    (make-sequence-sampler '(:a :b) :max-n-samples 2)))
.. (0 :A)(1 :B)
```
Find the columns with the maximum in each row of the 2d MAT and return them as a list.
Return the number of correct classifications and as the second
value the number of instances (equal to the length of TRUTHS in the
non-weighted case). TRUTHS (keyed by TRUTH-KEY) is a sequence of
opaque class labels compared with TEST to another sequence of
class labels in PREDICTIONS (keyed by PREDICTION-KEY). If WEIGHT
is non-nil, then it is a function that returns the weight of an
element of TRUTHS. Weighted cases add their weight to both
counts (returned as the first and second values) instead of 1 as in
the non-weighted case.
Note how the returned values are suitable for MULTIPLE-VALUE-CALL with #'ADD-TO-COUNTER and a CLASSIFICATION-ACCURACY-COUNTER.
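A hedged sketch (labels compared with the default TEST):
```commonlisp
(measure-classification-accuracy '(:cat :dog :cat)
                                 '(:cat :dog :dog))
;; => 2 and 3 as two values: two correct classifications out of
;; three instances.
```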
Create a CONFUSION-MATRIX from TRUTHS and PREDICTIONS.
TRUTHS (keyed by TRUTH-KEY) is a sequence of class labels compared
with TEST to another sequence of class labels in PREDICTIONS (keyed
by PREDICTION-KEY). If WEIGHT is non-nil, then it is a function that
returns the weight of an element of TRUTHS. Weighted cases add their
weight to both counts (returned as the first and second values).
Note how the returned confusion matrix can be added to another with ADD-TO-COUNTER.
Return the sum of the cross-entropy between pairs of elements with
the same index of TRUTHS and PREDICTIONS. TRUTH-KEY is a function
that, when applied to an element of TRUTHS, returns a sequence
representing some kind of discrete target distribution (P in the
definition below). TRUTH-KEY may be NIL, which is equivalent to the
IDENTITY function. PREDICTION-KEY is the same kind of key for
PREDICTIONS, but the sequence it returns represents a distribution
that approximates (Q below) the true one.
Cross-entropy of the true and approximating distributions is defined
as:
cross-entropy(p,q) = - sum_i p(i) * log(q(i))
of which this function returns the sum over the pairs of elements of
TRUTHS and PREDICTIONS keyed by TRUTH-KEY and PREDICTION-KEY.
Due to the logarithm, if q(i) is close to zero, we run into numerical problems. To prevent this, all q(i) that are less than MIN-PREDICTION-PR are treated as if they were MIN-PREDICTION-PR.
The second value returned is the sum of p(i) over all TRUTHS and all
`I`. This is normally equal to `(LENGTH TRUTHS)`, since elements of
TRUTHS represent a probability distribution, but this is not
enforced, which allows the relative importance of elements to be
controlled.
The third value returned is a plist that maps each index occurring
in the distribution sequences to a list of two elements:
-sum_j p_j(i) * log(q_j(i))
and
sum_j p_j(i)
where `J` indexes into TRUTHS and PREDICTIONS.
    (measure-cross-entropy '((0 1 0)) '((0.1 0.7 0.2)))
    => 0.35667497
       1
       (2 (0.0 0)
        1 (0.35667497 1)
        0 (0.0 0))
Note how the returned values are suitable for MULTIPLE-VALUE-CALL with #'ADD-TO-COUNTER and a CROSS-ENTROPY-COUNTER.
Return the area under the ROC curve for PREDICTIONS representing
predictions for a binary classification problem. PRED is a predicate
function for deciding whether a prediction belongs to the so-called
positive class. KEY returns a number for each element which is the
predictor's idea of how likely that element is to belong to the
class, although it's not necessarily a probability.
If WEIGHT is NIL, then all elements of PREDICTIONS count as 1
towards the unnormalized sum within AUC. Else WEIGHT must be a
function like KEY, but it should return the importance (a positive
real number) of elements. If the weight of a prediction is 2, then
it's as if there were another identical copy of that prediction in
PREDICTIONS.
The algorithm is based on algorithm 2 in the paper 'An introduction
to ROC analysis' by Tom Fawcett.
ROC AUC is equal to the probability of a randomly chosen positive having higher KEY (score) than a randomly chosen negative element. With equal scores in mind, a more precise version is: AUC is the expectation of the above probability over all possible sequences sorted by scores.
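A hedged sketch, assuming this function is MEASURE-ROC-AUC and that predictions are (LABEL . SCORE) conses:
```commonlisp
;; PRED tests for the positive class, KEY extracts the score. Every
;; positive scores above every negative here, so the AUC should be 1.
(measure-roc-auc '((:pos . 0.9) (:neg . 0.2) (:pos . 0.6) (:neg . 0.4))
                 (lambda (prediction) (eq (car prediction) :pos))
                 :key #'cdr)
```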
Combine cloud SPECS and DEFAULT-SPECS. If the first element of
SPECS is :MERGE then merge them else return SPECS. Merging
concatenates them but removes those specs from DEFAULT-SPECS that
are between chunks that have a spec in SPECS. If a spec has CLASS
NIL then it is removed as well. A cloud spec at minimum specifies
the name of the chunks it connects:
(:chunk1 inputs :chunk2 features)
in which case it defaults to be a FULL-CLOUD. If that is not desired
then the class can be specified:
(:chunk1 inputs :chunk2 features :class factored-cloud)
To remove a cloud from DEFAULT-SPECS use :CLASS NIL:
(:chunk1 inputs :chunk2 features :class nil)
Other initargs are passed as is to MAKE-INSTANCE:
(:chunk1 inputs :chunk2 features :class factored-cloud :rank 10)
You may also pass a CLOUD object as a spec.
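For example, a hedged sketch of a :MERGE spec (chunk names hypothetical) that keeps DEFAULT-SPECS but swaps one cloud for a factored one:
```commonlisp
'(:merge
  (:chunk1 inputs :chunk2 features :class factored-cloud :rank 10))
```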
Minimize the value of the real valued function represented by
GRADIENT-SOURCE by updating some of its parameters in WEIGHTS (a MAT
or a sequence of MATs). Return WEIGHTS. DATASET (see
MGL-DATASET::@MGL-DATASET) is a set of unoptimized parameters of the
same function. For example, WEIGHTS may be the weights of a neural
network while DATASET is the training set consisting of inputs
suitable for SET-INPUT. The default DATASET,
(*INFINITELY-EMPTY-DATASET*), is suitable for when all parameters
are optimized, so there is nothing left to come from the
environment.
Optimization terminates if DATASET is a sampler and it runs out, or
when some other condition is met (see TERMINATION, for example). If
DATASET is a SEQUENCE, then it is reused over and over again.
Examples for various optimizers are provided in MGL-GD::@MGL-GD and MGL-CG::@MGL-CG.
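A hedged sketch of a typical call (MY-BPN and MY-SAMPLER are placeholders; SGD-OPTIMIZER and BP-LEARNER are the classes from MGL-GD and MGL-BP):
```commonlisp
;; WEIGHTS defaults to the segments of the gradient source.
(minimize (make-instance 'mgl-gd:sgd-optimizer :batch-size 100)
          (make-instance 'mgl-bp:bp-learner :bpn my-bpn)
          :dataset my-sampler)
```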
Like COLLECT-BM-MEAN-FIELD-ERRORS but reconstruct the labels even if they were missing.
For every batch (of size MAX-N-STRIPES of BPN) of instances in DATASET, set the batch as the next input with SET-INPUT, perform a FORWARD pass and apply MONITORS to the BPN (with APPLY-MONITORS). Finally, return the counters of MONITORS. This is built on top of MONITOR-MODEL-RESULTS.
Run the mean field up to RBM then down to the bottom and collect the errors with COLLECT-BATCH-ERRORS. By default, return the rmse at each level in the DBN.
Run the mean field up to RBM then down to the bottom and collect the errors with COLLECT-BATCH-ERRORS. By default, return the rmse at each level in the DBN.
Call FN with batches of instances from DATASET until it runs
out (as in DO-BATCHES-FOR-MODEL). FN is supposed to apply MODEL to
the batch and return some kind of result (for neural networks, the
result is the model state itself). Apply MONITORS to each batch and
the result returned by FN for that batch. Finally, return the list
of counters of MONITORS.
The purpose of this function is to collect various results and
statistics (such as error measures) efficiently by applying the
model only once, leaving extraction of quantities of interest from
the model’s results to MONITORS.
See the model specific versions of this function such as MGL-BP:MONITOR-BPN-RESULTS.
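For instance, a hedged sketch with the BPN-specific version (BPN and DATASET are placeholders):
```commonlisp
;; Collect cost counters over a dataset with a single forward pass
;; per batch.
(monitor-bpn-results dataset bpn (make-cost-monitors bpn))
```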
For each periodic function in the list of PERIODIC-FNS, add a
monitor to OPTIMIZER’s ON-OPTIMIZATION-STARTED,
ON-OPTIMIZATION-FINISHED and ON-N-INSTANCES-CHANGED hooks. The
monitors are simple functions that just call each periodic function
with the event parameters (OPTIMIZER GRADIENT-SOURCE N-INSTANCES).
Return OPTIMIZER.
To log and reset the monitors of the gradient source after every
1000 instances seen by OPTIMIZER:
    (monitor-optimization-periodically optimizer
                                       '((:fn log-my-test-error
                                          :period 2000)
                                         (:fn reset-optimization-monitors
                                          :period 1000
                                          :last-eval 0)))
Note that it's allowed to just pass the initargs for a PERIODIC-FN instead of a PERIODIC-FN instance. The :LAST-EVAL 0 bit prevents RESET-OPTIMIZATION-MONITORS from being called at the start of the optimization when the monitors are empty anyway.
See "Accurate Methods for the Statistics of Surprise and
Coincidence" by Ted Dunning
(http://citeseer.ist.psu.edu/29096.html).
K1 is the number of outcomes in each class. K2 is the same in a
possibly different process.
All elements in K1 and K2 are positive integers. To ensure this (and also as a kind of prior), add a small number such as 1 to each element in K1 and K2 before calling.
Return T if X and Y are EQL or if they are structured components whose elements are EQUAL. Strings and bit-vectors are EQUAL if they are the same length and have identical components. Other arrays must be EQ to be EQUAL.
Copy NODES to INPUTS.
Populate the CLAMPING-CACHE of the MAP lumps of FNN unrolled from DBM. The values for the MAP lumps are taken from the mean field of the corresponding chunk of the DBM. What happens when the cache already has an entry for a sample is determined by IF-EXISTS: if :SKIP, the default, the cache is unchanged; if :SUPERSEDE, the cache entry is replaced by the calculated contents; if :APPEND, the new (lump array) entries are appended to the existing ones; if :ERROR, an error is signalled.
Read the weights of OBJECT from the bivalent STREAM where weights mean the learnt parameters. There is currently no sanity checking of data which will most certainly change in the future together with the serialization format. Return OBJECT.
Return the squared norm of INPUTS - NODES not considering constant or conditioning chunks that aren't reconstructed in any case. The second value returned is the number of nodes that contributed to the error.
Return the squared norm of INPUTS - NODES not considering constant or conditioning chunks that aren't reconstructed in any case. The second value returned is the number of nodes that contributed to the error.
If the l2 norm of the incoming weight vector of a unit is
larger than L2-UPPER-BOUND, then renormalize it to L2-UPPER-BOUND.
The list of ->V*M-LUMPS is assumed to be eventually fed to the same
lump.
To use it, group the activation clumps into the same GD-OPTIMIZER
and hang this function on AFTER-UPDATE-HOOK, the latter of which is
done for you by ARRANGE-FOR-RENORMALIZING-ACTIVATIONS.
See "Improving neural networks by preventing co-adaptation of feature detectors (Hinton, 2012)", <http://arxiv.org/pdf/1207.0580.pdf>.
Return a sequence constructed by sampling with or without
REPLACEMENT from SEQ. The sum of weights in the result sequence will
approximately be the sum of weights of SEQ times RATIO. If WEIGHT is
NIL then elements are assumed to have equal weights, else WEIGHT
should return a non-negative real number when called with an element
of SEQ.
To randomly select half of the elements:
```common-lisp
(sample-from 1/2 '(0 1 2 3 4 5))
=> (5 3 2)
```
To randomly select some elements such that the sum of their weights
constitute about half of the sum of weights across the whole
sequence:
```common-lisp
(sample-from 1/2 '(0 1 2 3 4 5 6 7 8 9) :weight #'identity)
=> ;; sums to 28, which is near 45/2
   (9 4 1 6 8)
```
To sample with replacement (that is, allowing the element to be
sampled multiple times):
```common-lisp
(sample-from 1 '(0 1 2 3 4 5) :replacement t)
=> (1 1 5 1 4 4)
```
Generate samples from the probability distribution defined by the chunk type and the mean that resides in NODES.
Like SAMPLE-FROM but makes sure that the weighted proportion of classes in the result is approximately the same as the proportion in SEQ. See STRATIFY for the description of KEY and TEST.
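A hedged sketch; the exact elements chosen are random, but the even/odd proportion is preserved:
```commonlisp
(sample-stratified 1/2 '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)
;; => e.g. (2 6 8 1 7)
```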
Generate samples from the probability distribution defined by the chunk type and the mean that resides in NODES.
Save weights of OBJECT to FILENAME. If ENSURE, then ENSURE-DIRECTORIES-EXIST is called on FILENAME. IF-EXISTS is passed on to OPEN. Return OBJECT.
Copy the values of SEGMENT-SET to MAT as if they were concatenated into a single MAT.
Copy the values of MAT to the weight matrices of SEGMENT-SET as if they were concatenated into a single MAT.
Set the dropout of LUMP to DROPOUT. Find the activation lump to which LUMP is fed and rescale its weights to compensate. There must be exactly one such activation lump or this function will fail.
Set NODES of the chunks in the hidden layer to the means of their respective probability distributions.
Set [N-INSTANCES][(reader iterative-optimizer)] of OPTIMIZER and fire ON-N-INSTANCES-CHANGED. ITERATIVE-OPTIMIZER subclasses must call this to increment [N-INSTANCES][(reader iterative-optimizer)].
Set NODES of the chunks in the visible layer to the means of their respective probability distributions.
Do possibly damped mean field updates on CHUNKS until convergence.
Compute V'_{t+1}, what would normally be the means, but average it
with the previous value: V_{t+1} = k * V_t + (1 - k) * V'_{t+1}, where
K is the damping factor (an FLT between 0 and 1).
Call SUPERVISOR with CHUNKS BM and the iteration. Settling is finished when SUPERVISOR returns NIL. If SUPERVISOR returns a non-nil value then it’s taken to be a damping factor. For no damping return 0.
Convenience function on top of SETTLE-MEAN-FIELD.
Copy SEQ and shuffle the copy using the Fisher-Yates algorithm.
Shuffle SEQ using the Fisher-Yates algorithm.
Imagine dividing SEQ into N-FOLDS subsequences of the same
size (bar rounding). Return the subsequence of index FOLD as the
first value and all the other subsequences concatenated into one
as the second value. The order of elements remains stable. This
function is suitable as the SPLIT-FN argument of CROSS-VALIDATE.
Partition SEQ into two sequences: one with elements of SEQ with indices whose remainder is FOLD when divided with N-FOLDS, and a second one with the rest. The second one is the larger set. The order of elements remains stable. This function is suitable as the SPLIT-FN argument of CROSS-VALIDATE.
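A hedged sketch of the modular split (argument order DATA, FOLD, N-FOLDS, as described under CROSS-VALIDATE above):
```commonlisp
(split-fold/mod '(0 1 2 3 4 5 6 7 8 9) 1 5)
;; => (1 6) and (0 2 3 4 5 7 8 9) as two values
```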
Split SEQ into N-FOLDS partitions (as in FRACTURE-STRATIFIED). Return the partition of index FOLD as the first value, and the concatenation of the rest as the second value. This function is suitable as the SPLIT-FN argument of CROSS-VALIDATE (most likely as a closure with KEY, TEST, WEIGHT bound).
Return a sequence that's a reordering of SEQ such that elements
belonging to different strata (under KEY and TEST, see STRATIFY) are
distributed evenly. The order of elements belonging to the same
stratum is unchanged.
For example, to make sure that even and odd numbers are distributed
evenly:
```cl-transcript
(spread-strata '(0 2 4 6 8 1 3 5 7 9) :key #'evenp)
=> (0 1 2 3 4 5 6 7 8 9)
```
Same thing with unbalanced classes:
```cl-transcript
(spread-strata (vector 0 2 3 5 6 1 4)
               :key (lambda (x)
                      (if (member x '(1 4))
                          t
                          nil)))
=> #(0 1 2 3 4 5 6)
```
Return the list of strata of SEQ. SEQ is a sequence of elements for which the function KEY returns the class they belong to. Such classes are opaque objects compared for equality with TEST. A stratum is a sequence of elements with the same (under TEST) KEY.
```cl-transcript
(stratify '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)
=> ((0 2 4 6 8) (1 3 5 7 9))
```
A supervisor for SETTLE-MEAN-FIELD. Return NIL if the average of the absolute value of change in nodes is below NODE-CHANGE-LIMIT, else return 0 damping for N-UNDAMPED-ITERATIONS then DAMPING-FACTOR for another N-DAMPED-ITERATIONS, then NIL.
Utility function for subclasses of ITERATIVE-OPTIMIZER. It returns whether optimization is to be terminated based on N-INSTANCES and TERMINATION that are values of the respective accessors of ITERATIVE-OPTIMIZER.
Return the time step RNN is currently executing or being unfolded for. It is 0 when the RNN is being unfolded for the first time.
Unroll DBN recursively and turn it into a feed-forward backpropagation network. A single RBM in DBN of the form VISIBLE <-> HIDDEN is transformed into a VISIBLE -> HIDDEN -> RECONSTRUCTION-OF-VISIBLE network. While the undirected connection <-> has a common weight matrix for both directions, in the backprop network the weights pertaining to the ->'s are distinct, but are initialized from the same <-> (with one being the transpose of the other).
If BOTTOM-UP-ONLY then don’t generate the part of the network that represents the top-down flow, that is, skip the reconstructions.
Return backprop network lump definition forms, as the second value
‘inits’: initialization specifications suitable for
INITIALIZE-FNN-FROM-BM.
If there is no corresponding chunk in the layer below or there is no rbm below, then the chunk is translated into an INPUT lump. Desired outputs and error node are not added. The first element of RBMS is the topmost one (the last of the DBN), the one that goes into the middle of the backprop network.
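A hedged sketch of the whole unroll-and-initialize workflow (DBN is a placeholder; instantiating the returned definition forms with EVAL and BUILD-FNN is only one possibility):
```commonlisp
(multiple-value-bind (defs inits) (unroll-dbn dbn)
  (let ((fnn (eval `(build-fnn (:class 'fnn) ,@defs))))
    (initialize-fnn-from-bm fnn dbn inits)
    fnn))
```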
Do a single upward pass in DBM, performing approximate inference. Disregard intralayer and downward connections, double activations to chunks having upward connections.
Update GP with the evidence embodied by INPUTS and the corresponding OUTPUTS. Return a new POSTERIOR-GP. If MEANS and COVARIANCES are given, then GP-MEANS-AND-COVARIANCES is not called.
Return the index of the BPN in CLUMPS of RNN whose task it is to
execute computation at `(- (TIME-STEP RNN) LAG)`. This is normally
the same as TIME-STEP (disregarding LAG). That is, CLUMPS can be
indexed by TIME-STEP to get the BPN. However, when *WARP-TIME* is
true, execution proceeds in a cycle as the structure of the network
allows.
Suppose we have a typical RNN that only ever references the previous
time step so its MAX-LAG is 1. Its UNFOLDER returns `BPN`s of
identical structure bar a shift in their time lagged connections
except for the very first, so WARP-START and WARP-LENGTH are both 1.
If *WARP-TIME* is NIL, then the mapping from TIME-STEP to the BPN in
CLUMPS is straightforward:
    time:   | 0  | 1  | 2  | 3  | 4  | 5
    --------+----+----+----+----+----+----
    warped: | 0  | 1  | 2  | 3  | 4  | 5
    --------+----+----+----+----+----+----
    bpn:    | b0 | b1 | b2 | b3 | b4 | b5
When *WARP-TIME* is true, we reuse the `B1` - `B2` bpns in a loop:
    time:   | 0  | 1  | 2  | 3  | 4  | 5
    --------+----+----+----+----+----+----
    warped: | 0  | 1  | 2  | 1  | 2  | 1
    --------+----+----+----+----+----+----
    bpn:    | b0 | b1 | b2 | b1*| b2 | b1*
`B1*` is the same BPN as `B1`, but its connections created by LAG go
through warped time and end up referencing `B2`. This way, memory
consumption is independent of the number of time steps needed to
process a sequence or make predictions.
To be able to pull this trick off, WARP-START and WARP-LENGTH must be specified when the RNN is instantiated. In general, with *WARP-TIME*, `(+ WARP-START (MAX 2 WARP-LENGTH))` bpns are needed. The 2 comes from the fact that with cycle length 1 a bpn would need to take its input from itself, which is problematic because it has NODES for only one set of values.
Write the weights of OBJECT to the bivalent STREAM. Return OBJECT.
Make a single sequence out of the sequences in SEQS so that in the
returned sequence indices of elements belonging to the same source
sequence are spread evenly across the whole range. The result is a
list if RESULT-TYPE is LIST and a vector if RESULT-TYPE is VECTOR.
If RESULT-TYPE is NIL, then it's determined by the type of the first
sequence in SEQS.
```cl-transcript
(zip-evenly '((0 2 4) (1 3)))
=> (0 1 2 3 4)
```
Add MULTIPLIER times the sum of first-order
gradients to accumulators of SINK (normally accessed with
DO-GRADIENT-SINK) and, if VALUEP, return the sum of values of the
function being optimized for a BATCH of instances. GRADIENT-SOURCE
is the object representing the function being optimized, SINK is
the gradient sink.
Note that the number of instances in BATCH may be larger than what GRADIENT-SOURCE processes in one go (in the sense of, say, MAX-N-STRIPES), so DO-BATCHES-FOR-MODEL or something like (GROUP BATCH MAX-N-STRIPES) can be handy.
Methods are specialized on BP-LEARNER, BM-PCD-LEARNER, RBM-CD-LEARNER, SPARSE-BM-LEARNER and DECAYED-CG-OPTIMIZER-MIXIN.
Add ARGS to COUNTER in some way. See specialized
methods for type-specific documentation. The arguments to be
supported are whatever the measurer functions (see @MGL-MEASURER)
intended to be paired with the counter return as multiple values.
Methods are specialized on CROSS-ENTROPY-COUNTER, CONCAT-COUNTER and BASIC-COUNTER.
Accessor on GD-OPTIMIZER. A list of functions with no arguments called after each weight update.
Apply MONITOR to ARGUMENTS. This sounds fairly
generic, because it is. MONITOR can be anything, even a simple
function or symbol, in which case this is just CL:APPLY. See
@MGL-MONITOR for more.
Accessor on ATTRIBUTED. A plist of attribute keys and values.
Compute the partial derivatives of the function
represented by CLUMP and add them to DERIVATIVES of the
corresponding argument clumps. The DERIVATIVES of CLUMP contains the
sum of partial derivatives of all clumps by the corresponding
output. This function is intended to be called after a FORWARD pass.
Take the ->SIGMOID clump for example when the network is being
applied to a batch of two instances `x1` and `x2`. `x1` and `x2` are
set in the ->INPUT lump X. The sigmoid computes `1/(1+exp(-x))`
where `X` is its only argument clump.
f(x) = 1/(1+exp(-x))
When BACKWARD is called on the sigmoid lump, its DERIVATIVES is a
2x1 MAT object that contains the partial derivatives of the loss
function:
dL(x1)/df
dL(x2)/df
The BACKWARD method of the sigmoid needs to add `dL(x1)/dx1` and `dL(x2)/dx2` to the DERIVATIVES of `X`. Now, `dL(x1)/dx1 = dL(x1)/df * df(x1)/dx1`, and the first term is what we have in the DERIVATIVES of the sigmoid, so it only needs to calculate the second term.
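For the sigmoid, the second term has the familiar closed form df(x)/dx = f(x) * (1 - f(x)), so the quantity added to the DERIVATIVES of `X` for the first instance works out to:
    dL(x1)/dx1 = dL(x1)/df * f(x1) * (1 - f(x1))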
Methods are specialized on ->PERIODIC, ->ROUGH-EXPONENTIAL, ->CONSTANT, ->SEQ-BARRIER, ->NORMALIZED, ->SAMPLE-BINARY, ->GAUSSIAN-RANDOM, ->SOFTMAX-XE-LOSS, ->SQUARED-DIFFERENCE, ->MAX-CHANNEL, ->SCALED-TANH, ->EMBEDDING and ->BATCH-NORMALIZED.
Reader on BAG-OF-WORDS-ENCODER: automatically generated reader method.
Reader on ->BATCH-NORMALIZED. The ->BATCH-NORMALIZATION of this lump. May be
shared between multiple ->BATCH-NORMALIZED lumps.
Batch normalization is special in that it has state apart from the
computed results (NODES) and its derivatives (DERIVATIVES). This
state is the estimated mean and variance of its inputs and they
are encapsulated by ->BATCH-NORMALIZATION.
If NORMALIZATION is not given at instantiation, then a new
->BATCH-NORMALIZATION object will be created automatically,
passing :BATCH-SIZE, :VARIANCE-ADJUSTMENT, and :POPULATION-DECAY
arguments on to ->BATCH-NORMALIZATION. See [BATCH-SIZE][(reader
->batch-normalization)], [VARIANCE-ADJUSTMENT][(reader
->batch-normalization)] and [POPULATION-DECAY][(reader
->batch-normalization)]. New scale and shift weight lumps will be
created with names:
    `(,name :scale)
    `(,name :shift)
where `NAME` is the NAME of this lump.
This default behavior covers the use-case where the statistics kept by ->BATCH-NORMALIZATION are to be shared only between time steps of an RNN.
Reader on ->BATCH-NORMALIZATION. Normally all stripes participate in the batch.
Lowering the number of stripes may increase the regularization
effect, but it also makes the computation less efficient. By
setting BATCH-SIZE to a divisor of N-STRIPES one can decouple the
concern of efficiency from that of regularization. The default
value, NIL, is equivalent to N-STRIPES. BATCH-SIZE only affects
training.
With the special value :USE-POPULATION, instead of the mean and the variance of the current batch, use the population statistics for normalization. This effectively cancels the regularization effect, leaving only the faster learning.
Accessor on CG-OPTIMIZER. After having gone through BATCH-SIZE number of
instances, weights are updated. Normally, CG operates on all
available data, but it may be useful to introduce some noise into
the optimization to reduce overfitting by using smaller batch
sizes. If BATCH-SIZE is not set, it is initialized to the size of
the dataset at the start of optimization.
Accessor on GD-OPTIMIZER. After having gone through BATCH-SIZE number of
inputs, weights are updated. With BATCH-SIZE 1, one gets
Stochastic Gradient Descent. With BATCH-SIZE equal to the number
of instances in the dataset, one gets standard, 'batch' gradient
descent. With BATCH-SIZE between these two extremes, one gets the
most practical 'mini-batch' compromise.
Accessor on BATCH-GD-OPTIMIZER. A list of functions of no parameters. Each
function is called just before a weight update takes place (after
accumulated gradients have been divided by the length of the batch).
Convenient to hang some additional gradient accumulating code
on.
Reader on BM-LEARNER: automatically generated reader method.
Reader on BP-LEARNER. The BPN for which this BP-LEARNER provides the gradients.
Accessor on CG-OPTIMIZER: automatically generated reader and writer methods.
Reader on SPARSITY-GRADIENT-SOURCE: automatically generated reader method.
Reader on FNN-CLAMPING-CACHE: automatically generated reader method.
Reader on SPARSITY-GRADIENT-SOURCE: automatically generated reader method.
Reader on FACTORED-CLOUD. A full cloud whose visible chunk is the same as
the visible chunk of this cloud and whose hidden chunk is the same
as the visible chunk of CLOUD-B.
Reader on FACTORED-CLOUD. A full cloud whose hidden chunk is the same as the
hidden chunk of this cloud and whose visible chunk is the same as
the hidden chunk of CLOUD-A.
Reader on BPN. A topologically sorted adjustable array with a fill
pointer that holds the clumps that make up the network. Clumps are
added to it by ADD-CLUMP or, more often, automatically when within
a BUILD-FNN or BUILD-RNN. Rarely needed, FIND-CLUMP takes care of
most uses.
Reader on CONCAT-COUNTER. A type designator suitable as the RESULT-TYPE argument to CONCATENATE.
Name of CLASS for presentation purposes.
Methods are specialized on CONFUSION-MATRIX.
Reader on CONFUSION-MATRIX. A list of all classes. The default is to collect
classes from the counts. This can be overridden if, for instance,
some classes are not present in the results.
Methods are specialized on CONFUSION-MATRIX.
Make a deepish copy of OBJECT in CONTEXT.
Return a list of extra initargs to be used when copying OBJECT in CONTEXT.
Return the value of the slot in the copied object
and T, or NIL as the second value if the slot need not be
initialized.
Methods on context (EQL MGL-BM:PCD) are specialized on:
    (original rbm) (slot-name (eql mgl-bm:dbn)) value
    (original dbm) (slot-name (eql mgl-bm:hidden-chunks)) value
    (original dbm) (slot-name (eql mgl-bm:visible-chunks)) value
    (original bm) (slot-name (eql mgl-core:max-n-stripes)) value
    (original bm) (slot-name (eql mgl-bm:chunks)) value
    (original full-cloud) (slot-name (eql mgl-common:weights)) value
    (original cloud) (slot-name (eql mgl-bm::cached-activations2)) value
    (original cloud) (slot-name (eql mgl-bm::cached-activations1)) value
    (original cloud) (slot-name (eql mgl-bm::cached-version2)) value
    (original cloud) (slot-name (eql mgl-bm::cached-version1)) value
    (original temporal-chunk) (slot-name (eql mgl-bm::has-inputs-p)) value
    (original temporal-chunk) (slot-name (eql mgl-bm::next-node-inputs)) value
    (original chunk) (slot-name (eql mgl-bm:indices-present)) value
    (original chunk) (slot-name (eql mgl-bm:inputs)) value
    (original chunk) (slot-name (eql mgl-bm::old-nodes)) value
    (original chunk) (slot-name (eql mgl-bm:means)) value
    (original chunk) (slot-name (eql mgl-common:nodes)) value
Methods on context (EQL MGL-BM:DBM->DBN) are specialized on:
    (original bm) (slot-name (eql mgl-core:max-n-stripes)) value
    (original bm) (slot-name (eql mgl-bm:chunks)) value
    (original full-cloud) (slot-name (eql mgl-common:weights)) value
    (original cloud) (slot-name (eql mgl-bm::cached-activations2)) value
    (original cloud) (slot-name (eql mgl-bm::cached-activations1)) value
    (original cloud) (slot-name (eql mgl-bm::cached-version2)) value
    (original cloud) (slot-name (eql mgl-bm::cached-version1)) value
    (original temporal-chunk) (slot-name (eql mgl-bm::has-inputs-p)) value
    (original temporal-chunk) (slot-name (eql mgl-bm::next-node-inputs)) value
    (original chunk) (slot-name (eql mgl-bm:indices-present)) value
    (original chunk) (slot-name (eql mgl-bm:inputs)) value
    (original chunk) (slot-name (eql mgl-bm::old-nodes)) value
    (original chunk) (slot-name (eql mgl-bm:means)) value
    (original chunk) (slot-name (eql mgl-common:nodes)) value
Return the value of the cost function being
minimized. Calling this only makes sense in the context of an
ongoing optimization (see MINIMIZE). The cost is that of a batch of
instances.
Methods are specialized on ->SOFTMAX-XE-LOSS and SPARSITY-GRADIENT-SOURCE (automatically generated reader method).
Return an object representing the state of MONITOR
or NIL, if it doesn't have any (say because it's a simple logging
function). Most monitors have counters into which they accumulate
results until they are printed and reset. See @MGL-COUNTER for
more.
Return any number of values representing the state
of COUNTER in such a way that passing the returned values as
arguments ADD-TO-COUNTER on a fresh instance of the same type
recreates the original state.
Methods are specialized on CONCAT-COUNTER and BASIC-COUNTER.
Return any number of values representing the state of COUNTER. See specialized methods for type-specific documentation.
Methods are specialized on CONCAT-COUNTER, RMSE-COUNTER and BASIC-COUNTER.
Accessor on RNN. Due to unfolding, the memory footprint of an RNN
is almost linear in the number of time steps (i.e. the max
sequence length). For prediction, this is addressed by
@MGL-RNN-TIME-WARP. For training, we cannot discard results of
previous time steps because they are needed for backpropagation,
but we can at least move them out of GPU memory if they are not
going to be used for a while and copy them back before they are
needed. Obviously, this is only relevant if CUDA is being used.
If CUDA-WINDOW-START-TIME is NIL, then this feature is turned off.
Else, during training, at CUDA-WINDOW-START-TIME or later time
steps, matrices belonging to non-weight lumps may be forced out of
GPU memory and later brought back as needed.
This feature is implemented in terms of MGL-MAT:WITH-SYNCING-CUDA-FACETS, which uses CUDA host memory (also known as _page-locked_ or _pinned_ memory) to do asynchronous copies concurrently with normal computation. The consequence of this is that it is now main memory usage that's unbounded, which together with page-locking makes it a potent weapon for bringing a machine to a halt. You were warned.
Reader on SPARSITY-GRADIENT-SOURCE: automatically generated reader method.
Decode ENCODED with ENCODER. For a DECODER /
ENCODER pair, `(DECODE DECODER (ENCODE ENCODER OBJECT))` must be
equal in some sense to `OBJECT`.
If DECODER is a function designator, then it's simply `FUNCALL`ed with ENCODED.
Methods are specialized on ENCODER/DECODER, SYMBOL and FUNCTION.
Return a function suitable as the SUPERVISOR
argument for SETTLE-MEAN-FIELD. The default implementation
Return a default for the [SIZE][(reader lump)] of
LUMP if one is not supplied at instantiation. The value is often
computed based on the sizes of the inputs. This function is for
implementing new lump types.
Methods are specialized on ->PERIODIC, ->ROUGH-EXPONENTIAL, ->CONSTANT, ->SEQ-BARRIER, ->NORMALIZED, ->SAMPLE-BINARY, ->SOFTMAX-XE-LOSS, ->SQUARED-DIFFERENCE, ->MAX-CHANNEL, ->SCALED-TANH, ->EMBEDDING, ->BATCH-NORMALIZATION and ->BATCH-NORMALIZED.
Reader on LUMP. Upon creation or resize, the lump's nodes get filled with this value.
Reader on CONSTANT-CHUNK: automatically generated reader method.
Return the MAT object representing the partial
derivatives of the function CLUMP computes. The returned partial
derivatives were accumulated by previous BACKWARD calls.
This matrix is shaped like the matrix returned by NODES.
Accessor on ->SIGMOID. See [DROPOUT][(ACCESSOR ->DROPOUT)].
Encode DECODED with ENCODER. This interface is
generic enough to be almost meaningless. See ENCODER/DECODER for a
simple, MGL-NLP:BAG-OF-WORDS-ENCODER for a slightly more involved
example.
If ENCODER is a function designator, then it's simply `FUNCALL`ed with DECODED.
Methods are specialized on BAG-OF-WORDS-ENCODER, ENCODER/DECODER, SYMBOL and FUNCTION.
Readers on BAG-OF-WORDS-ENCODER: automatically generated reader methods.
Find the chunk in OBJECT whose name is EQUAL to NAME. Signal an error if not found and ERRORP.
Find the cloud in OBJECT whose name is EQUAL to NAME. Signal an error if not found and ERRORP.
See if SAMPLER has run out of examples.
Methods are specialized on FUNCTION-SAMPLER.
Reader on DIFFUN. A real valued lisp function. It may have any number of parameters.
Reader on PERIODIC-FN: automatically generated reader method.
Compute the values of the function represented by
CLUMP for all stripes and place the results into NODES of CLUMP.
Methods are specialized on ->PERIODIC, ->ROUGH-EXPONENTIAL, ->CONSTANT, ->SEQ-BARRIER, ->NORMALIZED, ->SAMPLE-BINARY, ->GAUSSIAN-RANDOM, ->SOFTMAX-XE-LOSS, ->SQUARED-DIFFERENCE, ->MAX-CHANNEL, ->SCALED-TANH, ->EMBEDDING and ->BATCH-NORMALIZED.
Reader on FUNCTION-SAMPLER. A generator function of no arguments that returns the next sample.
Returns the vector of means for the vector of inputs X. X is a vector of arbitrary objects.
Methods are specialized on POSTERIOR-GP.
Returns two values: the means and the covariances as matrices.
Methods are specialized on POSTERIOR-GP.
Reader on ->NORMALIZED: automatically generated reader method.
Reader on ->SOFTMAX-XE-LOSS. The number of elements in a softmax group. This is
the number of classes for classification. Often GROUP-SIZE is
equal to SIZE (it is the default), but in general the only
constraint is that SIZE is a multiple of GROUP-SIZE.
Reader on ->MAX-CHANNEL. The number of inputs in each group.
Reader on NORMALIZED-GROUP-CHUNK: automatically generated reader method.
Controls whether and how hidden nodes are sampled
during learning, or whether the mean field is used instead. :HALF-HEARTED,
the default value, samples the hiddens but uses the hidden means
to calculate the effect of the positive and negative phases on the
gradient. The default should almost always be preferable to T, as
it is a less noisy estimate.
Accessor on ->LOSS. This is to support weighted instances, that is,
when not all training instances are equally important. If non-NIL,
a 1d MAT with the importances of stripes of the batch. When
IMPORTANCE is given (typically in SET-INPUT), then instead of
adding 1 to the derivatives of all stripes, IMPORTANCE is added
elementwise.
Called automatically before MINIMIZE* is called,
this function may be specialized if GRADIENT-SOURCE needs some kind
of setup.
Methods are specialized on BP-LEARNER and SPARSE-BM-LEARNER. The default method does nothing.
Called automatically before training starts, this function sets up OPTIMIZER to be suitable for optimizing GRADIENT-SOURCE. It typically creates appropriately sized accumulators for the gradients.
Methods are specialized on CG-OPTIMIZER, SEGMENTED-GD-OPTIMIZER, PER-WEIGHT-BATCH-GD-OPTIMIZER, NORMALIZED-BATCH-GD-OPTIMIZER, ADAM-OPTIMIZER and GD-OPTIMIZER.
Accessor on ->EMBEDDING. A sequence of batch size length of row indices. To be set in SET-INPUT.
Return the parameters for an executor able to
handle INSTANCE. Called by MAP-OVER-EXECUTORS on CACHE (that’s a
PARAMETERIZED-EXECUTOR-CACHE-MIXIN). The returned parameters are
keys in an EQUAL parameters->executor hash table.
Return the label of INSTANCE as a non-negative integer.
Return a one dimensional array of probabilities
representing the distribution of labels. The probability of the
label with LABEL-INDEX `I` is the element at index `I` of the returned
array.
Return a sequence of label index distributions for
RESULTS produced by some model for a batch of instances. This is
akin to LABEL-INDEX-DISTRIBUTION.
Methods are specialized on ->SOFTMAX-XE-LOSS and SOFTMAX-LABEL-CHUNK.
)) ¶Return a sequence of label indices for RESULTS
produced by some model for a batch of instances. This is akin to
LABEL-INDEX.
Methods are specialized on ->SOFTMAX-XE-LOSS and SOFTMAX-LABEL-CHUNK.
Accessor on PERIODIC-FN: automatically generated reader and writer methods.
Reader on DBM. A list of layers from bottom up. A layer is a list
of chunks. The layers partition the set of all chunks in the BM.
Chunks with no connections to layers below are visible (including
constant and conditioning) chunks. The layered structure is used
in the single, bottom-up, approximate inference pass. When
instantiating a DBM, VISIBLE-CHUNKS and HIDDEN-CHUNKS are inferred
from LAYERS and CLOUDS.
Accessor on ADAM-OPTIMIZER. Same thing as [LEARNING-RATE][(ACCESSOR
GD-OPTIMIZER)] but with the default suggested by the Adam paper.
Accessor on GD-OPTIMIZER. This is the step size along the gradient. Decrease
it if optimization diverges, increase it if it doesn’t make
progress.
This is a function that can be added to
ON-CG-BATCH-DONE. The default implementation simply logs the event
arguments.
Identical to MAKE-CLASSIFICATION-ACCURACY-MONITORS
bar the keyword arguments. Specialize this to add support for
new model types. The default implementation also allows for some
extensibility: if LABEL-INDICES is defined on MODEL, then it will be
used to extract label indices from model results.
Methods are specialized on BPN, DBN and BM.
Identical to MAKE-COST-MONITORS bar the keyword
arguments. Specialize this to add support for new model types.
Identical to MAKE-CROSS-ENTROPY-MONITORS bar the
keyword arguments. Specialize this to add support for new model
types. The default implementation also allows for some
extensibility: if LABEL-INDEX-DISTRIBUTIONS is defined on MODEL,
then it will be used to extract label distributions from model
results.
Methods are specialized on BPN, DBN and BM.
Create a new executor for PARAMETERS. CACHE is a PARAMETERIZED-EXECUTOR-CACHE-MIXIN. In the BPN gaussian process example, PARAMETERS would be a list of input dimensions.
In an RNN, STEP-COUNTER aggregates results of all
the time steps during the processing of instances in the current
batch. Return a new counter into which results from STEP-COUNTER can
be accumulated when the processing of the batch is finished. The
default implementation creates a copy of STEP-COUNTER.
Call FN with [`TARGET`][dislocated], PREDICTION and
COUNT parameters for each cell in the confusion matrix. Cells with a
zero count may be omitted.
Methods are specialized on CONFUSION-MATRIX.
Call FN of lambda list (SEGMENT ACCUMULATOR) on
each segment and their corresponding accumulator MAT in SINK.
Methods are specialized on CG-OPTIMIZER, SEGMENTED-GD-OPTIMIZER and GD-OPTIMIZER.
Divide INSTANCES between executors that perform the
same function as PROTOTYPE-EXECUTOR and call FN with the instances
and the executor for which the instances are.
Some objects conflate function and call: the forward pass of a
[MGL-BP:BPN][class] computes output from inputs so it is like a
function but it also doubles as a function call in the sense that
the bpn (function) object changes state during the computation of
the output. Hence not even the forward pass of a bpn is thread safe.
There is also the restriction that all inputs must be of the same
size.
For example, if we have a function that builds a bpn for an input of
a certain size, then we can create a factory that creates bpns for a
particular call. The factory probably wants to keep the weights the
same though. In @MGL-PARAMETERIZED-EXECUTOR-CACHE,
MAKE-EXECUTOR-WITH-PARAMETERS is this factory.
Parallelization of execution is another possibility
MAP-OVER-EXECUTORS allows, but there is no prebuilt solution for it,
yet.
The default implementation simply calls FN with INSTANCES and PROTOTYPE-EXECUTOR.
Methods are specialized on PARAMETERIZED-EXECUTOR-CACHE-MIXIN.
Call FN with start and end of intervals of
consecutive indices that are not missing in SEGMENT. Called by
optimizers that support partial updates. The default implementation
assumes that all weights are present. This only needs to be
specialized if one plans to use an optimizer that knows how to deal
with unused/missing weights, such as MGL-GD:NORMALIZED-BATCH-GD-OPTIMIZER
and MGL-GD:PER-WEIGHT-BATCH-GD-OPTIMIZER.
Methods are specialized on FULL-CLOUD.
Apply FN to each segment of GRADIENT-SOURCE.
Methods are specialized on BP-LEARNER, BM-LEARNER, FACTORED-CLOUD, FULL-CLOUD and LIST.
Reader on RNN. The networks built by UNFOLDER may contain new
weights up to time step MAX-LAG. Beyond that point, all weight
lumps must be reappearances of weight lumps with the same name at
previous time steps. Most recurrent networks reference only the
state of lumps at the previous time step (with the function LAG),
hence the default of 1. But it is possible to have connections to
arbitrary time steps. The maximum connection lag must be specified
when creating the RNN.
Accessor on FUNCTION-SAMPLER: automatically generated reader and writer methods.
The number of stripes with which the OBJECT is capable of dealing simultaneously.
Reader on BPN. The maximum number of instances the network can
operate on in parallel. Within BUILD-FNN or BUILD-RNN, it defaults
to MAX-N-STRIPES of that parent network, else it defaults to 1.
When set, MAX-N-STRIPES of all CLUMPS gets set to the same value.
Accessor on ->GAUSSIAN-RANDOM. The mean of the normal distribution.
Accessor on ADAM-OPTIMIZER. A number between 0 and 1 that determines how fast
the estimated mean of derivatives is updated. 0 basically gives
you RMSPROP (if VARIANCE-DECAY is not too large) or AdaGrad (if
VARIANCE-DECAY is close to 1 and the learning rate is annealed).
This is $\beta_1$ in the paper.
Accessor on ADAM-OPTIMIZER. A value that should be close to 1. MEAN-DECAY is
multiplied by this value after each update. This is $\lambda$ in
the paper.
Called by MINIMIZE after INITIALIZE-OPTIMIZER* and INITIALIZE-GRADIENT-SOURCE*, this generic function is the main extension point for writing optimizers.
Methods are specialized on CG-OPTIMIZER, BASE-GD-OPTIMIZER and ITERATIVE-OPTIMIZER.
Accessor on GD-OPTIMIZER. A value in the [0, 1) interval. MOMENTUM times the
previous weight change is added to the gradient. 0 means no
momentum.
Reader on GD-OPTIMIZER. One of :NORMAL, :NESTEROV or :NONE. For pure
optimization Nesterov's momentum may be better, but it may also
increase the chances of overfitting. Using :NONE is equivalent to 0
momentum, but it also uses less memory. Note that with :NONE,
MOMENTUM is ignored even if it is non-zero.
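A hedged sketch (initarg names assumed to mirror the accessors documented here):
```commonlisp
(make-instance 'mgl-gd:batch-gd-optimizer
               :batch-size 100
               :learning-rate 0.1
               :momentum 0.9
               :momentum-type :nesterov)
```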
Return monitors associated with OBJECT. See various
methods such as [MONITORS][(accessor mgl-bp:bp-learner)] for more
documentation.
Accessor on BP-LEARNER. A list of `MONITOR`s.
Reader on BM-LEARNER: automatically generated reader method.
Methods are specialized on ITERATIVE-OPTIMIZER and BP-LEARNER.
Accessor on BM-MCMC-LEARNER. The number of steps of Gibbs sampling to perform.
This is how many full (HIDDEN -> VISIBLE -> HIDDEN) steps are
taken for CD learning, and how many times each chunk is sampled
for PCD.
Reader on ITERATIVE-OPTIMIZER. The number of instances this optimizer has seen so far. Incremented automatically during optimization.
Reader on BM-PCD-LEARNER. The number of persistent chains to run. Also known as the number of fantasy particles.
Methods are specialized on FUNCTION-SAMPLER.
The number of stripes currently present in OBJECT. This is at most MAX-N-STRIPES.
Reader on BPN. The current number of instances the network has.
This is automatically set to the number of instances passed to
SET-INPUT, so it rarely has to be manipulated directly although it
can be set. When set, N-STRIPES of all CLUMPS gets set to the same
value.
Accessor on PER-WEIGHT-BATCH-GD-OPTIMIZER. Number of uses of the weight in its current batch.
Accessor on NORMALIZED-BATCH-GD-OPTIMIZER. Number of uses of the weight in its current batch.
Readers on LAGGED-CLUMP, CLUMP, CLOUD and CHUNK: automatically generated reader methods.
Method specialized on ATTRIBUTED.
Return a string assembled from the values of the ATTRIBUTES of
ATTRIBUTED. If there are multiple entries with the same key, then
they are printed near together.
Values may be padded according to an enclosing
WITH-PADDED-ATTRIBUTE-PRINTING.
Reader on FUNCTION-SAMPLER. An arbitrary object naming the sampler. Only used for printing the sampler object.
Methods are specialized on BM-PCD-LEARNER and RBM-CD-LEARNER.
Returns a MAT object representing the state or
result of OBJECT. The first dimension of the returned matrix is
equal to the number of stripes.
Reader on LUMP. The values computed by the lump in the forward
pass are stored here. It is an ‘N-STRIPES * SIZE‘ matrix that has
storage allocated for ‘MAX-N-STRIPES * SIZE‘ elements for
non-weight lumps. ->WEIGHT lumps have no stripes nor restrictions
on their shape.
Reader on CHUNK. A value for each node in the chunk. First,
activations are put here (weights*inputs) then the mean of the
probability distribution is calculated from the activation and
finally (optionally) a sample is taken from the probability
distribution. All these values are stored in this vector. This is
also where SET-INPUT is supposed to clamp the values. Note that
not only the values in the matrix but also the matrix object
itself can change when the network is used.
Accessor on CG-OPTIMIZER. An event hook called when processing a conjugate
gradient batch is done. The handlers on the hook are called with 8
arguments:
    (optimizer gradient-source instances
     best-w best-f n-line-searches
     n-successful-line-searches n-evaluations)
The latter 5 of which are the return values of the CG function.
Accessor on ITERATIVE-OPTIMIZER. An event hook with parameters `(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)`. Called when optimization of a batch
of instances is done and N-INSTANCES is incremented.
Accessor on ITERATIVE-OPTIMIZER. An event hook with parameters `(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)`. Called when optimization has
finished.
Accessor on ITERATIVE-OPTIMIZER. An event hook with parameters `(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)`. Called after initializations are
performed (INITIALIZE-OPTIMIZER*, INITIALIZE-GRADIENT-SOURCE*) but
before optimization is started.
Reader on BM-PCD-LEARNER. A BM that keeps the states of the persistent
chains (each stripe is a chain), initialized from the BM being
trained by COPY with 'PCD as the context. Suitable for training BM
and RBM.
Readers on FNN-CLAMPING-CACHE: automatically generated reader methods.
Reader on ->BATCH-NORMALIZATION. While training, an exponential moving average of
batch means and standard deviations (termed _population
statistics_) is updated. When making predictions, normalization is
performed using these statistics. These population statistics are
persisted by SAVE-STATE.
Methods are specialized on BM-PCD-LEARNER and RBM-CD-LEARNER.
Readers on POSTERIOR-GP and RBM-CD-LEARNER: automatically generated reader methods.
This is the extension point for READ-STATE. It is
guaranteed that primary READ-STATE* methods will be called only once
for each OBJECT (under EQ). CONTEXT is an opaque object and must be
passed on to any recursive READ-STATE* calls.
Methods are specialized on ->BATCH-NORMALIZATION, FACTORED-CLOUD, FULL-CLOUD and ENCODER/DECODER.
A utility that's often called at the start of
optimization (from ON-OPTIMIZATION-STARTED). The default
implementation logs the description of GRADIENT-SOURCE (as in
DESCRIBE) and OPTIMIZER and calls LOG-MAT-ROOM.
Restore state of COUNTER to what it was just after creation.
Methods are specialized on CROSS-ENTROPY-COUNTER, CONCAT-COUNTER and BASIC-COUNTER.
Report the state of [MONITORS][generic-function] of
OPTIMIZER and GRADIENT-SOURCE and reset their counters. See
MONITOR-OPTIMIZATION-PERIODICALLY for an example of how this is
used.
Method specialized on ITERATIVE-OPTIMIZER: log the counters of the monitors of OPTIMIZER and GRADIENT-SOURCE and reset them.
If SAMPLER has not run out of data (see FINISHEDP),
SAMPLE returns an object that represents a sample from the world to
be experienced or, in other words, simply something that can be used
as input for training or prediction. It is not allowed to call
SAMPLE if SAMPLER is FINISHEDP.
Methods are specialized on FUNCTION-SAMPLER.
Sample from the probability distribution of CHUNK whose means are in NODES.
Methods are specialized on CONDITIONING-CHUNK, SIGMOID-CHUNK, GAUSSIAN-CHUNK, RELU-CHUNK, SOFTMAX-CHUNK and CONSTRAINED-POISSON-CHUNK.
Accessor on ->NORMALIZED. The sum of nodes after normalization. Can be
changed during training, for instance when clamping. If it is a
vector, then its length must be MAX-N-STRIPES, which is automatically
maintained.
Reader on ->BATCH-NORMALIZATION. A weight lump of the same size as SHIFT. This is $\gamma$ in the paper.
Accessor on NORMALIZED-GROUP-CHUNK. The sum of the means after normalization. Can be
changed during training, for instance when clamping. If it is a
vector, then its length must be MAX-N-STRIPES, which is
automatically maintained when changing the number of stripes.
Return the derivatives matrix of SEGMENT. A segment
doesn't need to be a MAT object itself. For example, it may be a
MGL-BM:CHUNK of a [MGL-BM:BM][CLASS] or a MGL-BP:LUMP of a
[MGL-BP:BPN][CLASS] whose DERIVATIVES slot holds the gradient.
Reader on CG-OPTIMIZER. A predicate function on segments that filters out uninteresting segments. Called from INITIALIZE-OPTIMIZER*.
Reader on CG-OPTIMIZER. Segments to train.
Reader on GD-OPTIMIZER. The set of segments that are to be trained. The
ACCUMULATOR, WEIGHT-DELTAS, etc vectors are indexed by SEGMENT-SET
indices.
Return the weight matrix of SEGMENT. A segment
doesn't need to be a MAT object itself. For example, it may be a
MGL-BM:CHUNK of a [MGL-BM:BM][CLASS] or a MGL-BP:LUMP of a
[MGL-BP:BPN][CLASS] whose NODES slot holds the weights.
full-cloud
))
mat
))
When the segment is really a MAT, then just return it.
segmented-gd-optimizer
))
When this optimizer is initialized, it loops over
the segments of the learner with MAP-SEGMENTS. SEGMENTER is a
function that is called with each segment and returns an optimizer
or NIL. Several segments may be mapped to the same optimizer.
After the segment->optimizer mappings are collected, each
optimizer is initialized by INITIALIZE-OPTIMIZER with the list of
segments mapped to it.
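For instance, a minimal sketch that maps every segment to a single shared optimizer (the hyperparameters are illustrative):
```commonlisp
;; CONSTANTLY returns the same optimizer for every segment, so all
;; segments end up trained by one SGD-OPTIMIZER.
(make-instance 'segmented-gd-optimizer
               :segmenter (constantly
                           (make-instance 'sgd-optimizer
                                          :learning-rate 0.01)))
```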
Several weight matrices known as *segments* can be
optimized by a single optimizer. This function returns them as a
list.
cg-optimizer
))
segmented-gd-optimizer
))
automatically generated reader method
gd-optimizer
))
segment-set
))
A list of weight matrices.
->seq-barrier
))
A function of an INDEX argument that returns the lump with that index in some sequence.
))
->seq-barrier
))
->seq-barrier
))
A sequence of batch size length of indices. The
element at index `I` is the index to be passed to SEQ-ELT-FN to
find the lump whose stripe `I` is copied to stripe `I` of this
lump.
Set NODES of CHUNK to the means of the probability distribution. When called, NODES contains the activations.
conditioning-chunk
))
sigmoid-chunk
))
gaussian-chunk
))
relu-chunk
))
normalized-group-chunk
))
exp-normalized-group-chunk
))
Like SET-HIDDEN-MEAN/1, but settle the mean field
if there are hidden-to-hidden connections. For an RBM it trivially
calls SET-HIDDEN-MEAN/1, for a DBM it calls UP-DBM before
settling.
Set INSTANCES as inputs in MODEL. INSTANCES is
always a SEQUENCE of instances even for models not capable of batch
operation. It sets N-STRIPES to (LENGTH INSTANCES) in a :BEFORE
method.
fnn-clamping-cache
))
rnn
))
RNNs operate on batches of instances just like FNNs. But the
instances here are like datasets: sequences or samplers and they are
turned into sequences of batches of instances with
MAP-DATASETS :IMPUTE NIL. The batch of instances at index 2 is
clamped onto the BPN at time step 2 with SET-INPUT.
When the input sequences in the batch are not of the same length, already exhausted sequences will produce NIL (due to :IMPUTE NIL) above. When such a NIL is clamped with SET-INPUT on a BPN of the RNN, SET-INPUT must set the IMPORTANCE of the ->ERROR lumps to 0 else training would operate on the noise left there by previous invocations.
Allocate the necessary stuff to allow for
MAX-N-STRIPES number of stripes to be worked with simultaneously in
OBJECT. This is called when MAX-N-STRIPES is SETF’ed.
->normalized
))
factored-cloud
))
Set the number of stripes (out of MAX-N-STRIPES)
that are in use in OBJECT. This is called when N-STRIPES is
SETF’ed.
factored-cloud
))
Like SET-VISIBLE-MEAN/1, but settle the mean field
if there are visible-to-visible connections. For an RBM it trivially
calls SET-VISIBLE-MEAN/1.
->batch-normalization
))
A weight lump of the same size as SCALE. This is $\beta$ in the paper.
lagged-clump
)) ¶lump
))
The number of values in a single stripe.
size
.
chunk
))
automatically generated reader method
size
.
segment-set
))
The sum of the sizes of the weight matrices of SEGMENTS.
size
.
Return a list of CLASSES sorted for presentation purposes.
confusion-matrix
) classes)
sparse-bm-learner
))
automatically generated reader method
rnn
))
rnn
))
During training, unfolded `BPN`s corresponding to
previous time steps may be expensive to get at because they are no
longer in GPU memory. This consideration also applies to making
predictions, with the additional caveat that with *WARP-TIME* true,
previous states are discarded, so it’s not possible to gather
statistics after FORWARD has finished.
Add monitor objects to this slot and they will be automatically applied to the RNN after each step when `FORWARD`ing the RNN during training or prediction. To be able to easily switch between sets of monitors, in addition to a list of monitors, this can also be a symbol or a function. If it’s a symbol, then it’s a designator for its SYMBOL-VALUE. If it’s a function, then it must have no arguments and it’s a designator for its return value.
Return the end index (exclusive) of STRIPE in some array or matrix of OBJECT.
Return the start index of STRIPE in some array or matrix of OBJECT.
For efficiency, forward and backprop phases do
their stuff in batch mode: passing a number of instances through the
network in batches. Thus clumps must be able to store values of and
gradients for each of these instances. However, some clumps produce
the same result for each instance in a batch. These clumps are the
weights, the parameters of the network. STRIPEDP returns true iff
CLUMP does not represent weights (i.e. it’s not a ->WEIGHT).
For striped clumps, their NODES and DERIVATIVES are MAT objects with a leading dimension (number of rows in the 2d case) equal to the number of instances in the batch. Non-striped clumps have no restriction on their shape apart from what their usage dictates.
->softmax-xe-loss
))
->softmax-xe-loss
))
Set in SET-INPUT, this is either a MAT of the same
size as the input lump `X` or, if the target is very sparse, this
can also be a sequence of batch size length that contains the
index value pairs of non-zero entries:
    (;; first instance in batch has two non-zero targets
     (;; class 10 has 30% expected probability
      (10 . 0.3)
      ;; class 2 has 70% expected probability
      (2 . 0.7))
     ;; second instance in batch puts 100% on class 7
     7
     ;; more instances in the batch follow
     ...)
Actually, in the rare case where [GROUP-SIZE][(reader
->softmax-xe-loss)] is not SIZE (i.e. there are several softmax
normalization groups for every example), the length of the above
target sequence is BATCH-SIZE * N-GROUPS. Indices are always
relative to the start of the group.
If [GROUP-SIZE][(reader ->softmax-xe-loss)] is large (for example,
in neural language models with a huge number of words), using
sparse targets can make things go much faster, because calculation
of the derivative is no longer quadratic.
Giving different weights to training instances is implicitly supported. While target values in a group should sum to 1, multiplying all target values with a weight ‘W‘ is equivalent to training that ‘W‘ times on the same example.
sparsity-gradient-source
))
automatically generated reader method
iterative-optimizer
))
iterative-optimizer
))
If a number, it’s the number of instances to train
on in the sense of N-INSTANCES. If N-INSTANCES is equal to or greater
than this value, optimization stops. If TERMINATION is NIL, then
optimization will continue. If it is T, then optimization will
stop. If it is a function of no arguments, then its return value
is processed as if it was returned by TERMINATION.
rnn
))
The UNFOLDER of an RNN is a function of no arguments
that builds and returns a BPN. The unfolder is allowed to create
networks with arbitrary topology, even different ones for different
[TIME-STEP][]s with the help of LAG, or nested RNNs. Weights of
the same name are shared between the folds. That is, if a ->WEIGHT
lump were to be created and a weight lump of the same name already
exists, then the existing lump will be added to the BPN created by
UNFOLDER.
gd-optimizer
))
Save memory if both the gradient source (the model
being optimized) and the optimizer support this feature. It works
like this: the accumulator into which the gradient source is asked
to place the derivatives of a segment will be SEGMENT-DERIVATIVES
of the segment. This allows the optimizer not to allocate an
accumulator matrix into which the derivatives are summed.
->gaussian-random
))
->gaussian-random
))
The variance of the normal distribution.
adam-optimizer
))
adam-optimizer
))
Within the bowels of adam, the estimated mean is
divided by the square root of the estimated variance (per weight)
which can lead to numerical problems if the denominator is near
zero. To avoid this, VARIANCE-ADJUSTMENT, which should be a small
positive number, is added to the denominator. This is ‘epsilon‘ in
the paper.
->batch-normalization
))
A small positive real number that’s added to the sample variance. This is $\epsilon$ in the paper.
adam-optimizer
))
adam-optimizer
))
A number between 0 and 1 that determines how fast
the estimated variance of derivatives is updated. This is
$\beta_2$ in the paper.
->gaussian-random
))
->gaussian-random
))
If not NIL, then this value overrides VARIANCE
when not in training (i.e. when making predictions).
bm-mcmc-learner
))
bm-mcmc-learner
))
Controls whether visible nodes are sampled during the learning or the mean field is used instead.
gd-optimizer
))
gd-optimizer
))
An L2 penalty. It discourages large weights, much
like a zero mean gaussian prior. WEIGHT-DECAY * WEIGHT is added to
the gradient to penalize large weights. It’s as if the function
whose minimum is sought had WEIGHT-DECAY*sum_i{0.5 * WEIGHT_i^2}
added to it.
gd-optimizer
))
gd-optimizer
))
An L1 penalty. It encourages sparsity.
SIGN(WEIGHT) * WEIGHT-PENALTY is added to the gradient, pushing the
weight towards zero. It’s as if the function whose
minimum is sought had WEIGHT-PENALTY*sum_i{abs(WEIGHT_i)} added to
it. Putting it on feature biases constitutes a sparsity constraint
on the features.
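Both penalties are initargs of the optimizer. A hedged sketch (the values are illustrative, not recommendations):
```commonlisp
;; SGD with an L2 weight decay and a small L1 weight penalty, added
;; to the gradients as described above.
(make-instance 'sgd-optimizer
               :learning-rate 0.1
               :weight-decay 1.0e-4
               :weight-penalty 1.0e-5)
```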
->embedding
))
A weight lump whose rows indexed by
INPUT-ROW-INDICES are copied to the output of this lump.
full-cloud
))
A chunk is represented as a row vector
(disregarding the multi-striped case). If the visible chunk is 1xN
and the hidden is 1xM then the weight matrix is NxM. Hidden =
hidden + weights * visible. Visible = visible + weights^T *
hidden.
cg-optimizer
))
automatically generated reader method
cg-optimizer
))
automatically generated writer method
Write the gnuplot COMMAND to STREAM. Commands are represented by the classes PLOT, SET-COMMAND, etc.
string
) stream)
Serialize DATA to STREAM in gnuplot format.
list
) stream)
array
) stream)
data-mapping
) stream)
file-mapping
) stream)
function-mapping
) stream)
Write the gnuplot MAPPING to STREAM.
list
) stream)
array
) stream)
data-mapping
) stream)
file-mapping
) stream)
function-mapping
) stream)
This is the extension point for WRITE-STATE. It is
guaranteed that primary WRITE-STATE* methods will be called only
once for each OBJECT (under EQ). CONTEXT is an opaque object and must
be passed on to any recursive WRITE-STATE* calls.
->batch-normalization
) stream context)
factored-cloud
) stream seen)
full-cloud
) stream seen)
encoder/decoder
) stream context)
batch-gd-optimizer
) g0)
segmented-gd-optimizer
) stream)
segmented-gd-optimizer
) g0)
adam-optimizer
) g0)
gd-optimizer
) g0)
bp-learner
) stream)
bp-learner
) g0)
cg-optimizer
) g0)
factored-cloud
) g0)
sparsity-gradient-source
) g0)
full-cloud
) g0)
bm-pcd-learner
) g0)
bm-learner
) stream)
conditioning-chunk
) g0)
normalized-group-chunk
) g0)
bm-mcmc-learner
) g0)
sparse-bm-learner
) stream)
sparse-bm-learner
) g0)
segment-set
) &key &allow-other-keys)
->max-channel
) &key size &allow-other-keys)
->sample-binary
) &key size &allow-other-keys)
->seq-barrier
) &key size &allow-other-keys)
->scaled-tanh
) &key size &allow-other-keys)
->weight
) &key dimensions size &allow-other-keys)
->squared-difference
) &key size &allow-other-keys)
->softmax-xe-loss
) &key &allow-other-keys)
->embedding
) &key size &allow-other-keys)
->batch-normalized
) &key size batch-size variance-adjustment population-decay &allow-other-keys)
factored-cloud
) &key rank &allow-other-keys)
bm
) &key &allow-other-keys)
Return a BM that consists of VISIBLE-CHUNKS, HIDDEN-CHUNKS and CLOUDS of weights, where CLOUDS is a list of cloud specifications. Names of chunks and clouds must be unique under EQUAL. CLOUDS is merged with DEFAULT-CLOUDS. DEFAULT-CLOUDS defaults to connecting all visible and hidden chunks with FULL-CLOUDS without any intralayer connection. See MERGE-CLOUD-SPECS on the semantics of merging.
cheating-sparsity-gradient-source
) &key &allow-other-keys)
chunk
) &key size max-n-stripes &allow-other-keys)
dbn
) &key layers clouds-up-to-layers rbm-class &allow-other-keys)
full-cloud
) &key &allow-other-keys)
normal-sparsity-gradient-source
) &key &allow-other-keys)
bm-pcd-learner
) &key &allow-other-keys)
attributed
) &key prepend-attributes &allow-other-keys)
gd-optimizer
) stream)
function-sampler
) stream)
segment-set
) stream)
running-stat
) stream)
factored-cloud
) stream)
sparsity-gradient-source
) stream)
full-cloud
) stream)
attributed
) stream)
cross-entropy-counter
) stream)
confusion-matrix
) stream)
classification-accuracy-counter
) stream)
basic-counter
) stream)
Performs elementwise multiplication on its two
input lumps. The SIZE of this lump is automatically determined from
the size of its inputs. Either input can be a ->WEIGHT lump.
```cl-transcript
(->* (->input :size 10) (->weight :size 10 :name 'scale)
     :name 'mult)
==> #<->* MULT :SIZE 10 1/1 :NORM 0.00000>
```
lump
.
Performs elementwise addition on its input lumps.
The SIZE of this lump is automatically determined from the size of
its inputs if there is at least one. If one of the inputs is a
->WEIGHT lump, then it is added to every stripe.
```cl-transcript
(->+ (list (->input :size 10) (->weight :size 10 :name 'bias))
     :name 'plus)
==> #<->+ PLUS :SIZE 10 1/1 :NORM 0.00000>
```
lump
.
lump
.
backward
.
default-size
.
forward
.
x
.
Activation subnetworks are built by the function
->ACTIVATION and they have a number of lumps hidden inside them.
Ultimately, this subnetwork computes a sum like `sum_i x_i * W_i +
sum_j y_j .* V_j + biases` where `x_i` are input lumps, `W_i` are
dense matrices representing connections, while `V_j` are peephole
connection vectors that are multiplied in an elementwise manner with
their corresponding input `y_j`.
bpn
.
The primary purpose of this class is to hold the
estimated mean and variance of the inputs to be normalized and allow
them to be shared between multiple ->BATCH-NORMALIZED lumps that
carry out the computation. These estimations are saved and loaded by
SAVE-STATE and LOAD-STATE.
```commonlisp
(->batch-normalization (->weight :name '(h1 :scale) :size 10)
                       (->weight :name '(h1 :shift) :size 10)
                       :name '(h1 :batch-normalization))
```
A weight lump of the same size as SHIFT. This is $\gamma$ in the paper.
:scale
This slot is read-only.
A weight lump of the same size as SCALE. This is $\beta$ in the paper.
:shift
This slot is read-only.
Normally all stripes participate in the batch.
Lowering the number of stripes may increase the regularization
effect, but it also makes the computation less efficient. By
setting BATCH-SIZE to a divisor of N-STRIPES one can decouple the
concern of efficiency from that of regularization. The default
value, NIL, is equivalent to N-STRIPES. BATCH-SIZE only affects
training.
With the special value :USE-POPULATION, instead of the mean and the variance of the current batch, use the population statistics for normalization. This effectively cancels the regularization effect, leaving only the faster learning.
:batch-size
This slot is read-only.
A small positive real number that’s added to the sample variance. This is $\epsilon$ in the paper.
1.0e-4
:variance-adjustment
This slot is read-only.
While training, an exponential moving average of
batch means and standard deviances (termed _population
statistics_) is updated. When making predictions, normalization is
performed using these statistics. These population statistics are
persisted by SAVE-STATE.
0.99
:population-decay
This slot is read-only.
0
This is an implementation of v3 of the [Batch
Normalization paper](http://arxiv.org/abs/1502.03167). The output of
->BATCH-NORMALIZED is its input normalized so that for all elements
the mean across stripes is zero and the variance is 1. That is, the
mean of the batch is subtracted from the inputs and they are
rescaled by their sample stddev. Actually, after the normalization
step the values are rescaled and shifted (but this time with learnt
parameters) in order to keep the representational power of the model
the same. The primary purpose of this lump is to speed up learning,
but it also acts as a regularizer. See the paper for the details.
To normalize the output of [LUMP][dislocated] with no additional
regularizer effect:
```commonlisp
(->batch-normalized lump :batch-size :use-population)
```
The above uses an exponential moving average to estimate the mean
and variance of batches and these estimations are used at both
training and test time. In contrast to this, the published version
uses the sample mean and variance of the current batch at training
time which injects noise into the process. The noise is higher for
lower batch sizes and has a regularizing effect. This is the default
behavior (equivalent to `:BATCH-SIZE NIL`):
```commonlisp
(->batch-normalized lump)
```
For performance reasons one may wish to process a higher number of
instances in a batch (in the sense of N-STRIPES) and get the
regularization effect associated with a lower batch size. This is
possible by setting :BATCH-SIZE to a divisor of the number of
stripes. Say, the number of stripes is 128, but we want as much
regularization as we would get with 32:
```commonlisp
(->batch-normalized lump :batch-size 32)
```
The primary input of ->BATCH-NORMALIZED is often an ->ACTIVATION and its output is fed into an activation function (see @MGL-BP-ACTIVATION-FUNCTIONS).
lump
.
The ->BATCH-NORMALIZATION of this lump. May be
shared between multiple ->BATCH-NORMALIZED lumps.
Batch normalization is special in that it has state apart from the
computed results (NODES) and its derivatives (DERIVATIVES). This
state is the estimated mean and variance of its inputs and they
are encapsulated by ->BATCH-NORMALIZATION.
If NORMALIZATION is not given at instantiation, then a new
->BATCH-NORMALIZATION object will be created automatically,
passing :BATCH-SIZE, :VARIANCE-ADJUSTMENT, and :POPULATION-DECAY
arguments on to ->BATCH-NORMALIZATION. See [BATCH-SIZE][(reader
->batch-normalization)], [VARIANCE-ADJUSTMENT][(reader
->batch-normalization)] and [POPULATION-DECAY][(reader
->batch-normalization)]. New scale and shift weight lumps will be
created with names:
    `(,name :scale)
    `(,name :shift)
where `NAME` is the NAME of this lump.
This default behavior covers the use-case where the statistics kept by ->BATCH-NORMALIZATION are to be shared only between time steps of an RNN.
:normalization
This slot is read-only.
The output of this lump is identical to its input,
except it randomly zeroes out some of the values during training,
which acts as a very strong regularizer. See Geoffrey Hinton’s
'Improving neural networks by preventing co-adaptation of feature
detectors'.
The SIZE of this lump is the size of its input which is determined automatically.
Select rows of WEIGHTS, one row for each index in
INPUT-ROW-INDICES. This lump is equivalent to adding an ->INPUT lump
with a one hot encoding scheme and a ->V*M lump on top of it, but it
is more efficient in execution and in memory usage, because it works
with a sparse representation of the input.
The SIZE of this lump is the number of columns of WEIGHTS which is
determined automatically.
```cl-transcript
(->embedding :weights (->weight :name 'embedding-weights
                                :dimensions '(3 5))
             :name 'embeddings)
==> #<->EMBEDDING EMBEDDINGS :SIZE 5 1/1 :NORM 0.00000>
```
lump
.
A weight lump whose rows indexed by
INPUT-ROW-INDICES are copied to the output of this lump.
:weights
This slot is read-only.
A sequence of batch size length of row indices. To be set in SET-INPUT.
:input-row-indices
lump
.
backward
.
default-size
.
forward
.
x
.
This lump has no input; it produces normally
distributed independent random numbers with MEAN and VARIANCE (or
VARIANCE-FOR-PREDICTION). This is a useful building block for
noise-based regularization methods.
```cl-transcript
(->gaussian-random :size 10 :name 'normal :mean 1 :variance 2)
==> #<->GAUSSIAN-RANDOM NORMAL :SIZE 10 1/1 :NORM 0.00000>
```
lump
.
The mean of the normal distribution.
real
0
:mean
mean
.
The variance of the normal distribution.
real
1
:variance
If not NIL, then this value overrides VARIANCE
when not in training (i.e. when making predictions).
0
:variance-for-prediction
A lump that has no input lumps, does not change its
values in the forward pass (except when [DROPOUT][(ACCESSOR
->INPUT)] is non-zero), and does not compute derivatives. _Clamp_
inputs on NODES of input lumps in SET-INPUT.
For convenience, ->INPUT can perform dropout itself although it
defaults to no dropout.
```cl-transcript
(->input :size 10 :name 'some-input)
==> #<->INPUT SOME-INPUT :SIZE 10 1/1 :NORM 0.00000>
```
See [DROPOUT][(ACCESSOR ->DROPOUT)].
Calculate the loss for the instances in the batch.
The main purpose of this lump is to provide a training signal.
An error lump is usually a leaf in the graph of lumps (i.e. there are no other lumps whose input is this one). The special thing about error lumps is that 1 (but see IMPORTANCE) is added automatically to their derivatives. Error lumps have exactly one node (per stripe) whose value is computed as the sum of nodes in their input lump.
This is to support weighted instances, that is,
when not all training instances are equally important. If non-NIL,
a 1d MAT with the importances of stripes of the batch. When
IMPORTANCE is given (typically in SET-INPUT), then instead of
adding 1 to the derivatives of all stripes, IMPORTANCE is added
elementwise.
:importance
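As a hedged sketch, a SET-INPUT method might clamp per-instance weights like this (MY-FNN, MY-ERROR and INSTANCE-WEIGHT are hypothetical names):
```commonlisp
;; Hypothetical: weight each stripe of the batch by setting the
;; IMPORTANCE of the error lump from within SET-INPUT.
(defmethod set-input (instances (fnn my-fnn))
  (setf (importance (find-clump 'my-error fnn))
        (mgl-mat:make-mat (length instances)
                          :initial-contents
                          (mapcar #'instance-weight instances))))
```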
Long Short-Term Memory subnetworks are built by the
function ->LSTM and they have many lumps hidden inside them. These
lumps are packaged into a subnetwork to reduce clutter.
bpn
.
This is basically maxout without dropout (see
http://arxiv.org/abs/1302.4389). It groups its inputs by
[GROUP-SIZE][(READER ->MAX)], and outputs the maximum of each group.
The SIZE of the output is automatically calculated, it is the size
of the input divided by [GROUP-SIZE][(READER ->MAX)].
```cl-transcript
(->max (->input :size 120) :group-size 3 :name 'my-max)
==> #<->MAX MY-MAX :SIZE 40 1/1 :NORM 0.00000 :GROUP-SIZE 3>
```
The advantage of ->MAX over ->RELU is that gradient flow is never stopped, so there is no problem of units getting stuck in the off state.
Called LWTA (Local Winner Take All) or
Channel-Out (see http://arxiv.org/abs/1312.1909) in the literature,
it is basically ->MAX, but instead of producing one output per
group, it just produces zeros for all units but the one with the
maximum value in the group. This allows the next layer to get some
information about the path along which information flowed. The SIZE
of this lump is the size of its input which is determined
automatically.
Same as ->MAX, but it computes the MIN of groups. Rarely useful.
lump
.
:group-size
This slot is read-only.
The sum of nodes after normalization. Can be
changed during training, for instance when clamping. If it is a
vector, then its length must be MAX-N-STRIPES, which is
automatically maintained.
(or real array)
1
:scale
`max(0,x)` activation function. Be careful: relu
units can get stuck in the off state. If they move too far into
negative territory, it can be very difficult for them to get out of
it. The SIZE of this lump is the size of its input which is
determined automatically.
lump
.
Treating values of its input as probabilities,
sample independent binomials. Turn true into 1 and false into 0. The
SIZE of this lump is determined automatically from the size of its
input.
```cl-transcript
(->sample-binary (->input :size 10) :name 'binarized-input)
==> #<->SAMPLE-BINARY BINARIZED-INPUT :SIZE 10 1/1 :NORM 0.00000>
```
lump
.
Pretty much like TANH, but its input and output are
scaled in such a way that the variance of its output is close to 1
if the variance of its input is close to 1, which is a nice property
to combat vanishing gradients. The actual function is `1.7159 *
tanh(2/3 * x)`. The SIZE of this lump is the size of its input which
is determined automatically.
lump
.
In an RNN, processing of stripes (instances in the
batch) may require different numbers of time steps, so the final state
for stripe 0 is in stripe 0 of some lump L at time step 7, while for
stripe 1 it is in stripe 1 of some lump L at time step 42.
This lump copies the per-stripe states from different lumps into a
single lump so that further processing can take place (typically
when the RNN is embedded in another network).
The SIZE of this lump is automatically set to the size of the lump returned by `(FUNCALL SEQ-ELT-FN 0)`.
lump
.
A function of an INDEX argument that returns the lump with that index in some sequence.
:seq-elt-fn
This slot is read-only.
A sequence of batch size length of indices. The
element at index `I` is the index to be passed to SEQ-ELT-FN to
find the lump whose stripe `I` is copied to stripe `I` of this
lump.
Applies the `1/(1 + e^{-x})` function elementwise
to its inputs. This is one of the classic non-linearities for neural
networks.
For convenience, ->SIGMOID can perform dropout itself although it
defaults to no dropout.
```cl-transcript
(->sigmoid (->activation (->input :size 10) :size 5) :name 'this)
==> #<->SIGMOID THIS :SIZE 5 1/1 :NORM 0.00000>
```
The SIZE of this lump is the size of its input which is determined automatically.
See [DROPOUT][(ACCESSOR ->DROPOUT)].
Applies the SIN function to its input in an
elementwise manner. The SIZE of this lump is the size of its input
which is determined automatically.
lump
.
A specialized lump that computes the softmax of its
input in the forward pass and backpropagates a cross-entropy loss.
The advantage of doing these together is numerical stability. The
total cross-entropy is the sum of cross-entropies per group of
[GROUP-SIZE][(reader ->softmax-xe-loss)] elements:
$$
XE(x) = - \sum_{i=1,g} t_i \ln(s_i),
$$
where `g` is the number of classes ([GROUP-SIZE][(reader
->softmax-xe-loss)]), `t_i` are the targets (i.e. the true
probabilities of the class, often all zero but one), and `s_i` is the
output of softmax calculated from input `X`:
$$
s_i = \mathrm{softmax}(x_1, x_2, \ldots, x_g) =
\frac{e^{x_i}}{\sum_{j=1,g} e^{x_j}}
$$
In other words, in the forward phase this lump takes input ‘X‘,
computes its elementwise EXP, normalizes each group of
[GROUP-SIZE][(reader ->softmax-xe-loss)] elements to sum to 1 to get
the softmax which is the result that goes into NODES. In the
backward phase, there are two sources of gradients: the lumps that
use the output of this lump as their input (currently not
implemented and would result in an error) and an implicit
cross-entropy loss.
One can get the cross-entropy calculated in the most recent forward
pass by calling COST on this lump.
This is the most common loss function for classification. In fact, it is nearly ubiquitous. See the @MGL-FNN-TUTORIAL and the @MGL-RNN-TUTORIAL for how this loss and SET-INPUT work together.
lump
.
The number of elements in a softmax group. This is
the number of classes for classification. Often GROUP-SIZE is
equal to SIZE (it is the default), but in general the only
constraint is that SIZE is a multiple of GROUP-SIZE.
:group-size
This slot is read-only.
Set in SET-INPUT, this is either a MAT of the same
size as the input lump `X` or, if the target is very sparse, this
can also be a sequence of batch size length that contains the
index value pairs of non-zero entries:
    (;; first instance in batch has two non-zero targets
     (;; class 10 has 30% expected probability
      (10 . 0.3)
      ;; class 2 has 70% expected probability
      (2 . 0.7))
     ;; second instance in batch puts 100% on class 7
     7
     ;; more instances in the batch follow
     ...)
Actually, in the rare case where [GROUP-SIZE][(reader
->softmax-xe-loss)] is not SIZE (i.e. there are several softmax
normalization groups for every example), the length of the above
target sequence is BATCH-SIZE * N-GROUPS. Indices are always
relative to the start of the group.
If [GROUP-SIZE][(reader ->softmax-xe-loss)] is large (for example,
in neural language models with a huge number of words), using
sparse targets can make things go much faster, because calculation
of the derivative is no longer quadratic.
Giving different weights to training instances is implicitly supported. While target values in a group should sum to 1, multiplying all target values with a weight ‘W‘ is equivalent to training that ‘W‘ times on the same example.
:target
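A hedged sketch of clamping such sparse targets (PREDICTIONS is a hypothetical lump name):
```commonlisp
;; Hypothetical: set sparse targets on a ->SOFTMAX-XE-LOSS lump named
;; PREDICTIONS for a batch of two instances.
(setf (target (find-clump 'predictions bpn))
      '(;; first instance: 30% on class 10, 70% on class 2
        ((10 . 0.3) (2 . 0.7))
        ;; second instance: all mass on class 7
        7))
```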
This lump takes two input lumps and calculates
their squared difference ‘(x - y)^2‘ in an elementwise manner. The
SIZE of this lump is automatically determined from the size of its
inputs. This lump is often fed into ->LOSS that sums the squared
differences and makes it part of the function to be minimized.
```cl-transcript
(->loss (->squared-difference (->activation (->input :size 100)
                                            :size 10)
                              (->input :name 'target :size 10))
        :name 'squared-error)
==> #<->LOSS SQUARED-ERROR :SIZE 1 1/1 :NORM 0.00000>
```
Currently this lump is not CUDAized, but it will copy data from the GPU if it needs to.
lump
.
Computes the sum of all nodes of its input per stripe. The SIZE of this lump is always 1.
lump
.
Applies the TANH function to its input in an
elementwise manner. The SIZE of this lump is the size of its input
which is determined automatically.
lump
.
Perform `X * WEIGHTS` where `X` (the input) is of
size `M` and WEIGHTS is a ->WEIGHT whose single stripe is taken to
be of dimensions `M x N` stored in row major order. `N` is the size
of this lump. If TRANSPOSE-WEIGHTS-P, then WEIGHTS is `N x M` and
`X * WEIGHTS'` is computed.
lump
.
A ->WEIGHT lump.
mgl-bp:->weight
:weights
This slot is read-only.
Determines whether the input is multiplied by WEIGHTS or its transpose.
:transpose-weights-p
This slot is read-only.
A set of optimizable parameters of some kind. When
a BPN is trained (see @MGL-BP-TRAINING), the NODES of weight lumps
will be changed. Weight lumps perform no computation.
Weights can be created by specifying the total size or the
dimensions:
```cl-transcript
(dimensions (->weight :size 10 :name 'w))
=> (1 10)
(dimensions (->weight :dimensions '(5 10) :name 'w))
=> (5 10)
```
Adam is a first-order stochastic gradient descent
optimizer. It maintains internal estimates of the mean and raw
variance of each derivative as exponential moving averages. The step
it takes is basically `M/(sqrt(V)+E)` where `M` is the estimated
mean, `V` is the estimated variance, and `E` is a small adjustment
factor to prevent the gradient from blowing up. See version 5 of the
[paper](http://arxiv.org/abs/1412.6980) for more.
Note that using momentum is not supported with Adam. In fact, an
error is signalled if MOMENTUM is not :NONE.
See @MGL-GD-BATCH-GD-OPTIMIZER for the description of the various options common to all batch based optimizers.
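A minimal sketch of instantiating Adam with the defaults documented below, spelled out only for illustration:
```commonlisp
(make-instance 'adam-optimizer
               :learning-rate 2.0e-4
               :mean-decay 0.9
               :variance-decay 0.999
               :variance-adjustment 1.0d-7)
```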
(setf adam-time-step)
.
adam-time-step
.
describe-object
.
initialize-optimizer*
.
(setf learning-rate)
.
learning-rate
.
maybe-update-weights
.
(setf mean-decay)
.
mean-decay
.
(setf mean-decay-decay)
.
mean-decay-decay
.
(setf mean-estimates)
.
mean-estimates
.
(setf variance-adjustment)
.
variance-adjustment
.
(setf variance-decay)
.
variance-decay
.
(setf variance-estimates)
.
variance-estimates
.
Same thing as [LEARNING-RATE][(ACCESSOR
GD-OPTIMIZER)] but with the default suggested by the Adam paper.
2.0e-4
:none
A number between 0 and 1 that determines how fast
the estimated mean of derivatives is updated. 0 basically gives
you RMSPROP (if VARIANCE-DECAY is not too large) or AdaGrad (if
VARIANCE-DECAY is close to 1 and the learning rate is annealed).
This is $\beta_1$ in the paper.
0.9
:mean-decay
A value that should be close to 1. MEAN-DECAY is
multiplied by this value after each update. This is $\lambda$ in
the paper.
(- 1 1.0d-7)
:mean-decay-decay
A number between 0 and 1 that determines how fast
the estimated variance of derivatives is updated. This is
$\beta_2$ in the paper.
0.999
:variance-decay
Within the bowels of adam, the estimated mean is
divided by the square root of the estimated variance (per weight)
which can lead to numerical problems if the denominator is near
zero. To avoid this, VARIANCE-ADJUSTMENT, which should be a small
positive number, is added to the denominator. This is ‘epsilon‘ in
the paper.
1.0d-7
:variance-adjustment
0
:none
This is a utility class that all counters subclass.
The ATTRIBUTES plist can hold basically anything. Currently the
attributes are only used when printing and they can be specified by
the user. The monitor maker functions such as those in
@MGL-CLASSIFICATION-MONITOR also add attributes of their own to the
counters they create.
With the :PREPEND-ATTRIBUTES initarg one can easily add new
attributes without clobbering those in the :INITFORM, (:TYPE
"rmse") in this case.
    (princ (make-instance 'rmse-counter
                          :prepend-attributes '(:event "pred."
                                                :dataset "test")))
    ;; pred. test rmse: 0.000e+0 (0)
    => #<RMSE-COUNTER pred. test rmse: 0.000e+0 (0)>
A plist of attribute keys and values.
:attributes
ENCODE all features of a document with a sparse
vector. Get the features of the document from MAPPER, encode each
feature with FEATURE-ENCODER. FEATURE-ENCODER may return NIL if the
feature is not used. The result is a vector of encoded-feature/value
conses. encoded-features are unique (under ENCODED-FEATURE-TEST)
within the vector but are in no particular order.
Depending on KIND, value is calculated in various ways:
- For :FREQUENCY it is the number of times the corresponding feature
was found in DOCUMENT.
- For :BINARY it is always 1.
- :NORMALIZED-FREQUENCY and :NORMALIZED-BINARY are like the
unnormalized counterparts except that as the final step values in
the assembled sparse vector are normalized to sum to 1.
- Finally, :COMPACTED-BINARY is like :BINARY but the return value
is not a vector of conses, but a vector of element-type
ENCODED-FEATURE-TYPE.
```cl-transcript
(let* ((feature-indexer
         (make-indexer
          (alexandria:alist-hash-table '(("I" . 3) ("me" . 2) ("mine" . 1)))
          2))
       (bag-of-words-encoder
         (make-instance 'bag-of-words-encoder
                        :feature-encoder feature-indexer
                        :feature-mapper (lambda (fn document)
                                          (map nil fn document))
                        :kind :frequency)))
  (encode bag-of-words-encoder '("All" "through" "day" "I" "me" "mine"
                                 "I" "me" "mine" "I" "me" "mine")))
=> #((0 . 3.0d0) (1 . 3.0d0))
```
:feature-encoder
This slot is read-only.
:feature-mapper
This slot is read-only.
(function eql)
:encoded-feature-test
This slot is read-only.
t
:encoded-feature-type
This slot is read-only.
(member :binary :frequency :normalized-binary :normalized-frequency)
:binary
:kind
This slot is read-only.
A simple counter whose ADD-TO-COUNTER takes two
additional parameters: an increment to each of the internal sums,
called the NUMERATOR and DENOMINATOR. COUNTER-VALUES returns two
values:
- NUMERATOR divided by DENOMINATOR (or 0 if DENOMINATOR is 0) and
- DENOMINATOR
Here is an example that computes the mean of 5 things received in two
batches:
    (let ((counter (make-instance 'basic-counter)))
      (add-to-counter counter 6.5 3)
      (add-to-counter counter 3.5 2)
      counter)
    => #<BASIC-COUNTER 2.00000e+0 (5)>
Another abstract base class for gradient based
optimizers that update all weights simultaneously after chewing
through BATCH-SIZE inputs. See subclasses SGD-OPTIMIZER,
ADAM-OPTIMIZER and NORMALIZED-BATCH-GD-OPTIMIZER.
PER-WEIGHT-BATCH-GD-OPTIMIZER may be a better choice when some weights can go unused, for instance due to missing input values.
0
:n-instances-in-batch
A list of functions of no parameters. Each
function is called just before a weight update takes place (after
accumulated gradients have been divided by the length of the batch).
Convenient to hang some additional gradient accumulating code
on.
list
:before-update-hook
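A hedged sketch, assuming BEFORE-UPDATE-HOOK is an accessor on the optimizer:
```commonlisp
;; Run some code just before each weight update; the logging body is
;; illustrative.
(let ((optimizer (make-instance 'batch-gd-optimizer :batch-size 100)))
  (push (lambda ()
          (format t "~&about to update weights~%"))
        (before-update-hook optimizer))
  optimizer)
```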
The network is assembled from CHUNKS (nodes of the
same behaviour) and CLOUDs (connections between two chunks). To
instantiate, arrange for VISIBLE-CHUNKS, HIDDEN-CHUNKS,
CLOUDS (either as initargs or initforms) to be set.
Usage of CLOUDS is slightly tricky: you may pass a list of CLOUD objects connected to chunks in this network. Alternatively, a cloud spec may stand for a cloud. Also, the initial value of CLOUDS is merged with the default cloud spec list before the final cloud spec list is instantiated. The default cloud spec list is what FULL-CLOUDS-EVERYWHERE returns for VISIBLE-CHUNKS and HIDDEN-CHUNKS. See MERGE-CLOUD-SPECS for the gory details. The initform, '(:MERGE), simply leaves the default cloud specs alone.
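As a minimal sketch (chunk names and sizes are illustrative), an RBM with the default full connectivity might be instantiated like this:
```commonlisp
;; Two sigmoid layers connected by the default full cloud, with no
;; intralayer connections.
(make-instance 'rbm
               :visible-chunks (list (make-instance 'sigmoid-chunk
                                                    :name 'inputs
                                                    :size 784))
               :hidden-chunks (list (make-instance 'sigmoid-chunk
                                                   :name 'features
                                                   :size 100)))
```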
chunks
.
clouds
.
conditioning-chunks
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
default-mean-field-supervisor
.
describe-object
.
find-chunk
.
find-cloud
.
has-hidden-to-hidden-p
.
has-visible-to-visible-p
.
hidden-and-conditioning-chunks
.
hidden-chunks
.
(setf importances)
.
importances
.
initialize-instance
.
make-classification-accuracy-monitors*
.
make-cross-entropy-monitors*
.
make-reconstruction-monitors*
.
map-segments
.
max-n-stripes
.
n-stripes
.
print-object
.
read-state*
.
set-hidden-mean
.
set-hidden-mean
.
set-input
.
set-input
.
set-max-n-stripes
.
set-n-stripes
.
set-visible-mean
.
set-visible-mean
.
visible-and-conditioning-chunks
.
visible-chunks
.
write-state*
.
A list of all the chunks in this BM. It’s VISIBLE-CHUNKS and HIDDEN-CHUNKS appended.
list
This slot is read-only.
A list of CHUNKs whose values come from the outside world: SET-INPUT sets them.
list
:visible-chunks
This slot is read-only.
A list of CHUNKs that are not directly observed. Disjoint from VISIBLE-CHUNKS.
list
:hidden-chunks
This slot is read-only.
list
This slot is read-only.
list
This slot is read-only.
Normally, a list of CLOUDS representing the
connections between chunks. During initialization cloud specs are
allowed in the list.
list
(quote (:merge))
:clouds
This slot is read-only.
This slot is read-only.
1
:max-n-stripes
This slot is read-only.
:importances
Persistent Contrastive Divergence trainer.
The number of persistent chains to run. Also known as the number of fantasy particles.
unsigned-byte
:n-particles
This slot is read-only.
A BM that keeps the states of the persistent
chains (each stripe is a chain), initialized from the BM being
trained by COPY with 'PCD as the context. Suitable for training BM
and RBM.
mgl-bm:bm
This slot is read-only.
The BPN for which this BP-LEARNER provides the gradients.
:bpn
bpn
.
This slot is read-only.
This slot is read-only.
A list of ‘MONITOR‘s.
:monitors
Abstract base class for FNN and RNN.
->activation
.
->lstm
.
fnn
.
rnn
.
backward
.
backward-bpn
.
clumps
.
cost
.
derivatives
.
describe-object
.
forward
.
forward-bpn
.
initialize-instance
.
(setf last-cost)
.
last-cost
.
make-classification-accuracy-monitors*
.
make-cross-entropy-monitors*
.
map-clumps
.
map-segments
.
max-n-stripes
.
n-stripes
.
nodes
.
non-constant-mats
.
print-object
.
read-state*
.
set-input
.
set-max-n-stripes
.
set-n-stripes
.
size
.
write-state*
.
A topologically sorted adjustable array with a fill
pointer that holds the clumps that make up the network. Clumps are
added to it by ADD-CLUMP or, more often, automatically when within
a BUILD-FNN or BUILD-RNN. Rarely needed, FIND-CLUMP takes care of
most uses.
(array mgl-bp:clump (*))
(make-array 0 :element-type (quote mgl-bp:clump) :adjustable t :fill-pointer t)
:clumps
This slot is read-only.
The current number of instances the network has.
This is automatically set to the number of instances passed to
SET-INPUT, so it rarely has to be manipulated directly although it
can be set. When set N-STRIPES of all CLUMPS get set to the same
value.
The maximum number of instances the network can
operate on in parallel. Within BUILD-FNN or BUILD-RNN, it defaults
to MAX-N-STRIPES of that parent network, else it defaults to 1.
When set MAX-N-STRIPES of all CLUMPS get set to the same value.
(or mgl-util:index null)
:max-n-stripes
This slot is read-only.
(list 0 0)
Updates all weights simultaneously after chewing through BATCH-SIZE inputs.
After having gone through BATCH-SIZE number of
instances, weights are updated. Normally, CG operates on all
available data, but it may be useful to introduce some noise into
the optimization to reduce overfitting by using smaller batch
sizes. If BATCH-SIZE is not set, it is initialized to the size of
the dataset at the start of optimization.
:batch-size
(quote nil)
:cg-args
A predicate function on segments that filters out uninteresting segments. Called from INITIALIZE-OPTIMIZER*.
(constantly t)
:segment-filter
This slot is read-only.
An event hook called when processing a conjugate
gradient batch is done. The handlers on the hook are called with 8
arguments:
    (optimizer gradient-source instances
     best-w best-f n-line-searches
     n-succesful-line-searches n-evaluations)
The latter 5 are the return values of the CG function.
:on-cg-batch-done
Segments to train.
This slot is read-only.
(or mgl-mat:mat null)
Pre-allocated vectors to make CG less consy.
list
This slot is read-only.
Like NORMAL-SPARSITY-GRADIENT-SOURCE, but it needs
less memory because it only tracks average activation levels of
nodes independently (as opposed to simultaneous activations), and
thus it may produce the wrong gradient. An example is when
two connected nodes are often on, but never at the same time.
Clearly, it makes little sense to change the weight in that case,
but this is exactly what happens.
A chunk is a set of nodes of the same type in a Boltzmann Machine. This is an abstract base class.
copy-nodes
.
copy-object-extra-initargs
.
copy-object-extra-initargs
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
describe-object
.
(setf indices-present)
.
indices-present
.
initialize-instance
.
inputs
.
make-reconstruction-monitors*
.
max-n-stripes
.
means
.
n-stripes
.
name
.
nodes
.
old-nodes
.
print-object
.
(setf random-numbers)
.
random-numbers
.
resize-chunk
.
(setf scratch)
.
scratch
.
set-chunk-mean
.
set-max-n-stripes
.
set-n-stripes
.
size
.
stripe-end
.
stripe-start
.
A value for each node in the chunk. First,
activations are put here (weights*inputs) then the mean of the
probability distribution is calculated from the activation and
finally (optionally) a sample is taken from the probability
distribution. All these values are stored in this vector. This is
also where SET-INPUT is supposed to clamp the values. Note that
not only the values in the matrix but also the matrix object
itself can change when the network is used.
This slot is read-only.
The previous value of each node. Used to provide parallel computation semantics when there are intralayer connections. Swapped with NODES or MEANS at times.
This slot is read-only.
Saved values of the means (see SET-MEAN) last computed.
This slot is read-only.
This is where the after method of SET-INPUT saves
the input for later use by RECONSTRUCTION-ERROR, INPUTS->NODES. It
is NIL in CONDITIONING-CHUNKS.
This slot is read-only.
Another matrix that parallels NODES. Used as a temporary.
NIL or a simple vector of array indices into the
layer’s NODES. Need not be ordered. SET-INPUT sets it. Note that
if it is non-NIL, then N-STRIPES must be 1.
(or null mgl-util:index-vector)
:indices-present
A BASIC-COUNTER with "acc." as its :TYPE
attribute and a PRINT-OBJECT method that prints percentages.
(quote (:type "acc."))
A set of connections between two chunks. The chunks
may be the same, or both visible, or both hidden, subject to
constraints imposed by the type of boltzmann machine the cloud is
part of.
cached-activations1
.
cached-activations2
.
(setf cached-version1)
.
cached-version1
.
(setf cached-version2)
.
cached-version2
.
chunk1
.
chunk2
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
copy-object-slot
.
describe-object
.
name
.
print-object
.
scale1
.
scale2
.
set-max-n-stripes
.
set-n-stripes
.
When CHUNK1 is being activated count activations coming from this cloud multiplied by SCALE1.
mgl-util:flt
1.0d0
:scale1
This slot is read-only.
When CHUNK2 is being activated count activations coming from this cloud multiplied by SCALE2.
mgl-util:flt
1.0d0
:scale2
This slot is read-only.
(gensym)
(gensym)
This slot is read-only.
This slot is read-only.
A CLUMP is a LUMP or a [BPN][class]. It represents
a differentiable function. Arguments of clumps are given during
instantiation. Some arguments are clumps themselves, so they get
permanently wired together like this:
```commonlisp
(->v*m (->input :size 10 :name 'input)
       (->weight :dimensions '(10 20) :name 'weight)
       :name 'activation)
```
The above creates three clumps: the vector-matrix multiplication
clump called `ACTIVATION`, which has a reference to its operands:
INPUT and WEIGHT. Note that the example just defines a function, no
actual computation has taken place, yet.
This wiring of `CLUMP`s is how one builds feed-forward nets (FNN) or recurrent neural networks (RNN) that are `CLUMP`s themselves, so one can build nets in a hierarchical style if desired. Non-composite `CLUMP`s are called LUMP (note the loss of `C` that stands for composite). The various LUMP subtypes correspond to different layer types (->SIGMOID, ->DROPOUT, ->RELU, ->TANH, etc).
A gnuplot command. There are separate classes for plot/splot, etc.
A counter that simply concatenates
sequences.
```cl-transcript
(let ((counter (make-instance 'concat-counter)))
  (add-to-counter counter '(1 2 3) #(4 5))
  (add-to-counter counter '(6 7))
  (counter-values counter))
=> (1 2 3 4 5 6 7)
```
:concatenation
A type designator suitable as the RESULT-TYPE argument to CONCATENATE.
(quote list)
:concatenation-type
This slot is read-only.
Nodes in CONDITIONING-CHUNK never change their
values on their own, so they are to be clamped. Including this chunk
in the visible layer allows 'conditional' RBMs.
A confusion matrix keeps count of classification
results. The correct class is called 'target' and the output of the
classifier is called 'prediction'.
A special kind of CONDITIONING-CHUNK whose NODES
are always DEFAULT-VALUE. This conveniently allows biases in the
opposing layer.
1.0d0
This slot is read-only.
Poisson units with normalized (EXP ACTIVATION) means.
A BASIC-COUNTER with "xent" as its :TYPE attribute.
(quote (:type "xent"))
A hash table mapping labels to the cross entropy counters for samples with that label.
(make-hash-table)
:per-label-counters
This slot is read-only.
Consider plot '-', sin(x). For gnuplot '-'
indicates inline data that will be read from the command stream
after all mappings are read. The DATA slot of this class holds an
object that can be serialized by WRITE-DATA.
:data
This slot is read-only.
A Deep Boltzmann Machine. See "Deep Boltzmann
Machines" by Ruslan Salakhutdinov and Geoffrey Hinton at
<http://www.cs.toronto.edu/~hinton/absps/dbm.pdf>.
To instantiate, set up LAYERS and CLOUDS but not VISIBLE-CHUNKS and
HIDDEN-CHUNKS, because contrary to how initialization works in the
superclass (BM), the values of these slots are inferred from LAYERS
and CLOUDS: chunks without a connection from below are visible while
the rest are hidden.
The default cloud spec list is computed by calling FULL-CLOUDS-EVERYWHERE-BETWEEN-LAYERS on LAYERS.
bm
.
A list of layers from bottom up. A layer is a list
of chunks. The layers partition the set of all chunks in the BM.
Chunks with no connections to layers below are visible (including
constant and conditioning) chunks. The layered structure is used
in the single, bottom-up, approximate inference pass. When
instantiating a DBM, VISIBLE-CHUNKS and HIDDEN-CHUNKS are inferred
from LAYERS and CLOUDS.
list
:layers
This slot is read-only.
Each element of this list is a list of clouds connected from below to the layer of the same index.
list
This slot is read-only.
Deep Belief Network: a stack of RBMs. DBNs with
multiple hidden layers are not Boltzmann Machines. The chunks in the
hidden layer of a constituent RBM and the chunk in the visible layer
of the RBM one on top of it must be EQ for the DBN to consider them
the same. Naming them the same is not enough, in fact, all chunks
must have unique names under EQUAL as usual.
Similarly to DBMs, DBNs can be constructed using the :LAYERS initarg. When using this feature, a number of RBMs are instantiated. Often one wants to create a DBN that consists of some RBM subclass, this is what the :RBM-CLASS initarg is for.
chunks
.
clouds
.
find-chunk
.
find-cloud
.
hidden-chunks
.
inactive-rbms
.
initialize-instance
.
initialize-instance
.
make-classification-accuracy-monitors*
.
make-cross-entropy-monitors*
.
make-reconstruction-monitors*
.
max-n-stripes
.
n-stripes
.
rbms
.
read-state*
.
set-input
.
set-max-n-stripes
.
set-n-stripes
.
visible-chunks
.
write-state*
.
list
This slot is read-only.
1
:max-n-stripes
This slot is read-only.
DIFFUN dresses a lisp function (in its FN slot) as
a gradient source (see MGL-OPT::@MGL-OPT-GRADIENT-SOURCE), which
allows it to be used in MINIMIZE. See the examples in
MGL-GD::@MGL-GD and MGL-CG::@MGL-CG.
A real valued lisp function. It may have any number of parameters.
:fn
fn
.
This slot is read-only.
The list of indices of parameters that we don’t
optimize. Values for these will come from the DATASET argument of
MINIMIZE.
:parameter-indices
This slot is read-only.
The list of indices of parameters to be optimized,
the values of which will come from the [WEIGHTS][dislocated]
argument of MINIMIZE.
:weight-indices
This slot is read-only.
Implements O(1) ENCODE and DECODE by having an
internal decoded-to-encoded and an encoded-to-decoded EQUAL hash
table. ENCODER/DECODER objects can be saved and loaded (see
@MGL-PERSISTENCE) as long as the elements in the hash tables have
read/write consistency.
```cl-transcript
(let ((indexer
        (make-indexer
         (alexandria:alist-hash-table '(("I" . 3) ("me" . 2) ("mine" . 1)))
         2)))
  (values (encode indexer "I")
          (encode indexer "me")
          (encode indexer "mine")
          (decode indexer 0)
          (decode indexer 1)
          (decode indexer 2)))
=> 0
=> 1
=> NIL
=> "I"
=> "me"
=> NIL
```
Means are normalized (EXP ACTIVATION).
Like FULL-CLOUD but the weight matrix is factored
into a product of two matrices: A*B. At activation time, HIDDEN +=
VISIBLE*A*B.
A full cloud whose visible chunk is the same as
the visible chunk of this cloud and whose hidden chunk is the same
as the visible chunk of CLOUD-B.
mgl-bm:full-cloud
:cloud-a
This slot is read-only.
When the data comes from a file. As in plot 'filename' title 'something'.
A feed-forward neural net (as opposed to a recurrent one, see RNN).
bpn
.
This slot is a sample -> (lump array)* list hash
table. Inherit from this, and SET-INPUT will clamp the arrays to the
respective lumps for the right sample.
(make-hash-table)
This slot is read-only.
(function identity)
:populate-key
This slot is read-only.
(function identity)
:populate-convert-to-dbm-sample-fn
This slot is read-only.
(or null mgl-bm:dbm)
:populate-map-cache-lazily-from-dbm
This slot is read-only.
:populate-periodic-fn
This slot is read-only.
accumulate-cloud-statistics
.
accumulate-cloud-statistics*
.
activate-cloud*
.
copy-object-slot
.
copy-object-slot
.
describe-object
.
incoming->fnn-defintion
.
initialize-from-cloud
.
initialize-instance
.
map-segment-runs
.
map-segments
.
print-object
.
read-state*
.
segment-weights
.
weights
.
write-state*
.
zero-weight-to-self
.
A chunk is represented as a row vector
(disregarding the multi-striped case). If the visible chunk is 1xN
and the hidden is 1xM then the weight matrix is NxM. Hidden =
hidden + weights * visible. Visible = visible + weights^T *
hidden.
:weights
This slot is read-only.
:expression
This slot is read-only.
A sampler with a function in its GENERATOR that
produces a stream of samples which may or may not be finite
depending on MAX-N-SAMPLES. FINISHEDP returns T iff MAX-N-SAMPLES is
non-nil, and it’s not greater than the number of samples
generated (N-SAMPLES).
    (list-samples (make-instance 'function-sampler
                                 :generator (lambda ()
                                              (random 10))
                                 :max-n-samples 5)
                  10)
    => (3 5 2 3 3)
A generator function of no arguments that returns the next sample.
:generator
This slot is read-only.
:max-n-samples
Nodes are real valued. The sample of a node is its activation plus gaussian noise of unit variance.
An abstract base class of MGL-GD::@MGL-GD and
MGL-CG::@MGL-CG based optimizers that iterate over instances until a
termination condition is met.
The number of instances this optimizer has seen so far. Incremented automatically during optimization.
0
:n-instances
This slot is read-only.
If a number, it’s the number of instances to train
on in the sense of N-INSTANCES. If N-INSTANCES is equal or greater
than this value optimization stops. If TERMINATION is NIL, then
optimization will continue. If it is T, then optimization will
stop. If it is a function of no arguments, then its return value
is processed as if it was returned by TERMINATION.
:termination
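A hedged sketch of bounding training with :TERMINATION (MY-BPN and MY-SAMPLER are hypothetical):
```commonlisp
;; Stop once 10000 training instances have been seen.
(minimize (make-instance 'sgd-optimizer :termination 10000)
          (make-instance 'bp-learner :bpn my-bpn)
          :dataset my-sampler)
```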
An event hook with parameters ‘(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)‘. Called after initializations are
performed (INITIALIZE-OPTIMIZER*, INITIALIZE-GRADIENT-SOURCE*) but
before optimization is started.
:on-optimization-started
An event hook with parameters ‘(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)‘. Called when optimization has
finished.
:on-optimization-finished
An event hook with parameters ‘(OPTIMIZER
GRADIENT-SOURCE N-INSTANCES)‘. Called when optimization of a batch
of instances is done and N-INSTANCES is incremented.
:on-n-instances-changed
A LUMP is a simple, layerlike component of a neural
network. There are many kinds of lumps, each of which performs a
specific operation or just stores inputs and weights. By convention,
the names of lumps start with the prefix ‘->‘. Defined as classes,
they also have a function of the same name as the class to create
them easily. These maker functions typically have keyword arguments
corresponding to initargs of the class, with some (mainly the input
lumps) turned into normal positional arguments. So instead of having
to do
    (make-instance '->tanh :x some-input :name 'my-tanh)
one can simply write
    (->tanh some-input :name 'my-tanh)
Lumps instantiated in any way within a BUILD-FNN or BUILD-RNN are
automatically added to the network being built.
A lump has its own NODES and DERIVATIVES matrices allocated for it
in which the results of the forward and backward passes are stored.
This is in contrast to a [BPN][class] whose NODES and DERIVATIVES
are those of its last constituent CLUMP.
Since lumps almost always live within a BPN, their [N-STRIPES][(reader bpn)] and [MAX-N-STRIPES][(reader bpn)] are handled automagically behind the scenes.
->*
.
->+
.
->abs
.
->batch-normalized
.
->constant
.
->dropout
.
->embedding
.
->exp
.
->gaussian-random
.
->gp
.
->max
.
->max-channel
.
->min
.
->normalized
.
->periodic
.
->ref
.
->relu
.
->rep
.
->rough-exponential
.
->sample-binary
.
->scaled-tanh
.
->seq-barrier
.
->sigmoid
.
->sin
.
->softmax-xe-loss
.
->squared-difference
.
->stretch
.
->sum
.
->tanh
.
->v*m
.
->weight
.
The number of values in a single stripe.
mgl-util:index
:size
size
.
This slot is read-only.
The values computed by the lump in the forward
pass are stored here. It is an ‘N-STRIPES * SIZE‘ matrix that has
storage allocated for ‘MAX-N-STRIPES * SIZE‘ elements for
non-weight lumps. ->WEIGHT lumps have no stripes nor restrictions
on their shape.
(or mgl-mat:mat null)
This slot is read-only.
The derivatives computed in the backward pass are
stored here. This matrix is very much like [NODES][(reader lump)]
in shape and size.
(or mgl-mat:mat null)
This slot is read-only.
Upon creation or resize the lump’s nodes get filled with this value.
real
0
:default-value
This slot is read-only.
A plot can contain multiple mappings. For example
in
    plot sin(x) title 'sine', "datafile" with lines
there is a function and a file mapping. Subclasses of MAPPING represent the function or the data and the options (e.g. 'title', 'with' above).
A monitor that has another monitor called MEASURER
embedded in it. When this monitor is applied, it applies the
measurer and passes the returned values to ADD-TO-COUNTER called on
its COUNTER slot. One may further specialize APPLY-MONITOR to change
that.
This class is useful when the same event monitor is applied repeatedly over a period and its results must be aggregated, such as when training statistics are being tracked or when predictions are being made. Note that the monitor must be compatible with the event it handles. That is, the embedded MEASURER must be prepared to take the arguments that are documented to come with the event.
This must be a monitor itself which only means
that APPLY-MONITOR is defined on it (but see @MGL-MONITORING). The
returned values are aggregated by [COUNTER][(READER MONITOR)]. See
@MGL-MEASURER for a library of measurers.
:measurer
This slot is read-only.
Keep track of how often pairs of nodes connected by
CLOUD are simultaneously active. If a node in CHUNK deviates from
the target sparsity, that is, its average activation is different
from the target, then decrease or increase the weights to the nodes
it’s connected to by CLOUD in such a way that it will be closer
to the target. Smooth the empirical estimates of simultaneous
activations in PRODUCTS by DAMPING.
:old-products
This slot is read-only.
Like BATCH-GD-OPTIMIZER but keeps count of how many
times each weight was used in the batch and divides the accumulated
gradient by this count instead of dividing by N-INSTANCES-IN-BATCH.
This only makes a difference if there are missing values in the
learner that’s being trained. The main feature that distinguishes
this class from PER-WEIGHT-BATCH-GD-OPTIMIZER is that batches end at
the same time for all weights.
Number of uses of the weight in its current batch.
Mix this into a model, implement
INSTANCE-TO-EXECUTOR-PARAMETERS and MAKE-EXECUTOR-WITH-PARAMETERS,
and DO-EXECUTORS will be able to build executors suitable for
different instances. The canonical example is using a BPN to compute
the means and covariances of a gaussian process. Since each
instance is made of a variable number of observations, the size of
the input is not constant, thus we have a bpn (an executor) for each
input dimension (the parameters).
(make-hash-table :test (function equal))
This slot is read-only.
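A minimal sketch of the protocol, under the assumption that
INSTANCE-TO-EXECUTOR-PARAMETERS takes the instance and the cache
object, and MAKE-EXECUTOR-WITH-PARAMETERS takes the parameters and
the cache object:
(defclass variable-input-model (parameterized-executor-cache-mixin)
  ())

;; Key the executor cache on the input dimension of the instance.
(defmethod instance-to-executor-parameters
    ((instance list) (model variable-input-model))
  (length instance))

;; Build a BPN sized for that dimension; DO-EXECUTORS caches and
;; reuses it for all instances with the same parameters.
(defmethod make-executor-with-parameters
    (n-inputs (model variable-input-model))
  (build-fnn ()
    (input (->input :size n-inputs))
    (output (->sum input :name 'output))))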
This is much like @MGL-GD-BATCH-GD-OPTIMIZER but it
is more clever about when to update weights. Basically, every weight
has its own batch, independent of the batches of other weights. This
has desirable properties: one can, for example, put two neural
networks together without adding any connections between them and
the learning will produce results equivalent to training them separately.
Also, adding inputs with only missing values does not change
anything.
Due to its very non-batch nature, there is no CUDA implementation of this optimizer.
Number of uses of the weight in its current batch.
gp
.
:inverted-covariances
This slot is read-only.
:evidence-inputs
This slot is read-only.
:evidence-outputs
This slot is read-only.
:centered-evidence-outputs
This slot is read-only.
A GP whose mean and covariance are defined by two lisp functions. Can be updated, but it’s not trainable.
An RBM is a BM with no intralayer connections. An
RBM trained with PCD behaves the same as a BM with the same
chunks and clouds, but it can also be trained by contrastive
divergence (see RBM-CD-TRAINER) and stacked in a DBN.
bm
.
A contrastive divergence based learner for RBMs.
A BASIC-COUNTER whose numerator accumulates
the square of some statistic. It has the attribute :TYPE "rmse".
COUNTER-VALUES returns the square root of what BASIC-COUNTER’s
COUNTER-VALUES would return.
(let ((counter (make-instance 'rmse-counter)))
  (add-to-counter counter (+ (* 3 3) (* 4 4)) 2)
  counter)
=> #<RMSE-COUNTER rmse: 3.53553e+0 (2)>
(quote (:type "rmse"))
A recurrent neural net (as opposed to a
feed-forward one). It is typically built with BUILD-RNN, which is no
more than a shallow convenience macro.
An RNN takes instances as inputs that are sequences of variable
length. At each time step, the next unprocessed elements of these
sequences are set as input until all input sequences in the batch
run out. To be able to perform backpropagation, all intermediate
‘LUMP‘s must be kept around, so the recursive connections are
transformed out by
[unfolding](http://en.wikipedia.org/wiki/Backpropagation_through_time)
the network. Just how many lumps this means depends on the length of
the sequences.
When an RNN is created, ‘MAX-LAG + 1‘ BPNs are instantiated so that all weights are present and one can start training it.
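For example (a sketch with illustrative sizes; ->LSTM is assumed to
be MGL’s helper that expands into several lumps):
(build-rnn ()
  (build-fnn (:class 'fnn)
    (input (->input :size 4))
    ;; ->LSTM references its own state at the previous time step
    ;; behind the scenes (via LAG).
    (h (->lstm input :name 'h :size 16))
    (prediction (->softmax-xe-loss
                 (->activation h :name 'prediction :size 2)))))
The body of BUILD-RNN becomes the RNN’s UNFOLDER (described below): a function of no arguments that is called to build the BPN for each time step.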
bpn, backward-bpn, cost, (setf cuda-window-start-time), cuda-window-start-time, (setf current-time), current-time, describe-object, forward-bpn, initialize-instance, (setf input-seqs), input-seqs, map-segments, max-lag, (setf max-time), max-time, print-object, (setf remove-trailing-nil-instances), remove-trailing-nil-instances, set-input, (setf step-monitors), step-monitors, unfolder, warp-length, warp-start, (setf weight-lumps), weight-lumps
The UNFOLDER of an RNN is a function of no arguments
that builds and returns a BPN. The unfolder is allowed to create
networks with arbitrary topology even different ones for different
[TIME-STEP][]s with the help of LAG, or nested RNNs. Weights of
the same name are shared between the folds. That is, if a ->WEIGHT
lump were to be created and a weight lump of the same name already
exists, then the existing lump will be added to the BPN created by
UNFOLDER.
:unfolder
This slot is read-only.
The networks built by UNFOLDER may contain new
weights up to time step MAX-LAG. Beyond that point, all weight
lumps must be reappearances of weight lumps with the same name at
previous time steps. Most recurrent networks reference only the
state of lumps at the previous time step (with the function LAG),
hence the default of 1. But it is possible to have connections to
arbitrary time steps. The maximum connection lag must be specified
when creating the RNN.
1
:max-lag
This slot is read-only.
0
0
The TIME-STEP from which UNFOLDER will create
‘BPN‘s that essentially repeat every WARP-LENGTH steps.
1
:warp-start
This slot is read-only.
An integer such that the BPN UNFOLDER creates at
time step ‘I‘ (where ‘(<= WARP-START I)‘) is identical to the BPN
created at time step ‘(+ WARP-START (MOD (- I WARP-START)
WARP-LENGTH))‘ except for a shift in its time lagged
connections.
1
:warp-length
This slot is read-only.
Due to unfolding, the memory footprint of an RNN
is almost linear in the number of time steps (i.e. the max
sequence length). For prediction, this is addressed by
@MGL-RNN-TIME-WARP. For training, we cannot discard results of
previous time steps because they are needed for backpropagation,
but we can at least move them out of GPU memory if they are not
going to be used for a while and copy them back before they are
needed. Obviously, this is only relevant if CUDA is being used.
If CUDA-WINDOW-START-TIME is NIL, then this feature is turned off.
Else, during training, at CUDA-WINDOW-START-TIME or later time
steps, matrices belonging to non-weight lumps may be forced out of
GPU memory and later brought back as needed.
This feature is implemented in terms of MGL-MAT:WITH-SYNCING-CUDA-FACETS that uses CUDA host memory (also known as _page-locked_ or _pinned memory_) to do asynchronous copies concurrently with normal computation. The consequence of this is that it is now main memory usage that’s unbounded, which together with page-locking makes it a potent weapon to bring a machine to a halt. You were warned.
mgl-bp:*cuda-window-start-time*
:cuda-window-start-time
During training, unfolded ‘BPN‘s corresponding to
previous time steps may be expensive to get at because they are no
longer in GPU memory. This consideration also applies to making
predictions, with the additional caveat that with *WARP-TIME* true,
previous states are discarded, so it’s not possible to gather
statistics after FORWARD has finished.
Add monitor objects to this slot and they will be automatically applied to the RNN after each step when ‘FORWARD‘ing the RNN during training or prediction. To be able to easily switch between sets of monitors, in addition to a list of monitors this can be a symbol or a function, too. If it’s a symbol, then it’s a designator for its SYMBOL-VALUE. If it’s a function, then it must have no arguments and it’s a designator for its return value.
:step-monitors
:remove-trailing-nil-instances
This is a utility class for optimizers that have a
list of SEGMENTS (the weights being optimized) and are able to copy
back and forth between those segments and a single MAT (the
accumulator).
A list of weight matrices.
:segments
This slot is read-only.
This slot is read-only.
An optimizer that delegates training of segments to
other optimizers. Useful to delegate training of different segments
to different optimizers (capable of working with segmentables) or
simply to not train all segments.
When this optimizer is initialized it loops over
the segments of the learner with MAP-SEGMENTS. SEGMENTER is a
function that is called with each segment and returns an optimizer
or NIL. Several segments may be mapped to the same optimizer.
After the segment->optimizer mappings are collected, each
optimizer is initialized by INITIALIZE-OPTIMIZER with the list of
segments mapped to it.
:segmenter
This slot is read-only.
list
This slot is read-only.
With BATCH-SIZE 1 this is Stochastic Gradient
Descent. With higher batch sizes, one gets mini-batch and Batch
Gradient Descent.
Assuming that ACCUMULATOR has the sum of gradients for a mini-batch,
the weight update looks like this:
$$
\Delta_w^{t+1} = \mathrm{momentum} \cdot \Delta_w^t
+ \frac{\mathrm{accumulator}}{\mathrm{batchsize}}
+ l_2 w + l_1 \mathrm{sign}(w)
$$
$$
w^{t+1} = w^t - \mathrm{learningrate} \cdot \Delta_w^{t+1},
$$
which is the same as the more traditional formulation:
$$
\Delta_w^{t+1} = \mathrm{momentum} \cdot \Delta_w^t
+ \mathrm{learningrate} \cdot
\left(\frac{\frac{df}{dw}}{\mathrm{batchsize}}
+ l_2 w + l_1 \mathrm{sign}(w)\right)
$$
$$
w^{t+1} = w^t - \Delta_w^{t+1},
$$
but the former works better when the batch size, momentum or
learning rate changes during the course of optimization. The above
is with normal momentum; Nesterov’s momentum (see MOMENTUM-TYPE) is
also available.
See @MGL-GD-BATCH-GD-OPTIMIZER for the description of the various options common to all batch based optimizers.
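As a concrete illustration, here is a minimal sketch of the
batch-end update above for weights stored in plain Lisp vectors
(SGD-UPDATE! is a hypothetical name; MGL’s actual implementation
works on MAT objects):
(defun sgd-update! (weights deltas accumulator batch-size
                    &key (learning-rate 0.1) (momentum 0)
                      (weight-decay 0) (weight-penalty 0))
  ;; WEIGHTS, DELTAS and ACCUMULATOR are vectors of numbers of the
  ;; same length; ACCUMULATOR holds the gradients summed over the
  ;; mini-batch.
  (dotimes (i (length weights))
    (let ((w (aref weights i)))
      (setf (aref deltas i)
            (+ (* momentum (aref deltas i))
               (/ (aref accumulator i) batch-size)
               (* weight-decay w)
               (* weight-penalty (signum w))))
      (decf (aref weights i)
            (* learning-rate (aref deltas i))))))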
Nodes in a sigmoid chunk have two possible samples:
0 and 1. The probability of a node being on is given by the sigmoid
of its activation.
Binary units with normalized (EXP ACTIVATION)
firing probabilities representing a multinomial distribution. That
is, samples have exactly one 1 in each group of GROUP-SIZE.
After a SET-HIDDEN-MEAN, the means of
HIDDEN-SOURCE-CHUNK are stored in NEXT-NODE-INPUTS and on the next
SET-INPUT copied onto NODES. If there are multiple SET-HIDDEN-MEAN
calls between two SET-INPUT calls then only the first set of values
are remembered.
Iterate over the indices of nodes of CHUNK skipping missing ones.
Iterate over consecutive runs of weights present in CLOUD.
Bind ACCUMULATOR to the accumulator MAT associated with SEGMENT of SOURCE in SINK. ACCUMULATOR is dynamic extent. This is a convenience macro on top of CALL-WITH-SINK-ACCUMULATOR.
See if SEGMENT of SOURCE has an accumulator associated with it in SINK.
From CHUNK1 calculate the activations of CHUNK2 and _add_ them to CHUNK2. If REVERSEP then swap the roles of the chunks. FROM-FN and TO-FN are the accessors to use to get the node value arrays (one of #'NODES, #'OLD-NODES, #'MEANS). In the simplest case it adds weights (of CLOUD) * OLD-NODES (of CHUNK1) to the nodes of the hidden chunk.
The name of the lump that computes the activations of CHUNK.
The name of the lump that represents part of the activation of a chunk. CLOUD comes from the rbm in the DBN. TRANSPOSEP determines from which direction the activation crosses the cloud.
The name of the lump that represents the weights of CLOUD or its transpose. CLOUD comes from the rbm in the DBN.
Return a list of chunk, lump sublists. Elements are MAP lumps in FNN and the corresponding chunk in DBM.
Return a list of cloud specifications suitable for instantiating a BM. Put a cloud between each pair of visible and hidden chunks unless they are both conditioning chunks. The names of the clouds are two element lists of the names of the visible and hidden chunks.
Set NODES of CHUNKS to the activations calculated from CLOUDS. Skip chunks that don’t need activations. If ADDP, don’t zero NODES first, but add to them.
Take scored features as a feature -> score hash table (returned by COUNT-FEATURES or COMPUTE-FEATURE-LLR, for instance) and return a feature -> index hash table that maps the first N (or fewer) features with the highest scores to distinct dense indices starting from START.
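A sketch of that indexing step (the name SCORES->INDEX-TABLE and its
behavior on ties are illustrative, not MGL’s implementation):
(defun scores->index-table (scored-features n &key (start 0))
  ;; SCORED-FEATURES is a feature -> score hash table. Return a
  ;; feature -> index hash table for the N highest scoring
  ;; features, with indices numbered densely from START.
  (let ((features ()))
    (maphash (lambda (feature score)
               (push (cons feature score) features))
             scored-features)
    (setf features (sort features #'> :key #'cdr))
    (let ((index-table (make-hash-table :test #'equal))
          (index start))
      (loop repeat n
            for (feature . nil) in features
            do (setf (gethash feature index-table) index)
               (incf index))
      index-table)))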
lag, name, path, kind
Return the average of the absolute values of NODES - OLD-NODES over CHUNKS. The second value returned is the number of nodes that contributed to the average.
Return a sample from the multivariate normal distribution defined by GP at INPUTS as a column vector.
Partition SEQ into two sequences: one with the elements whose indices satisfy PRED, one with the rest. The order of elements is stable within the two splits.
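A sketch of this behavior (SPLIT-ON-INDEX is a hypothetical name):
(defun split-on-index (pred seq)
  ;; Stable partition: elements keep their relative order within
  ;; each of the two returned lists.
  (let ((yes ()) (no ()) (i -1))
    (map nil (lambda (x)
               (if (funcall pred (incf i))
                   (push x yes)
                   (push x no)))
         seq)
    (values (nreverse yes) (nreverse no))))
For instance, (split-on-index #'evenp '(a b c d e)) returns (A C E) and (B D).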
Take the accumulator of TRAINER that corresponds to
CLOUD and add MULTIPLIER times the cloud statistics of [persistent]
contrastive divergence.
[Methods specialized on FACTORED-CLOUD, FULL-CLOUD, BM-PCD-LEARNER, RBM-CD-LEARNER, SPARSE-BM-LEARNER, NORMAL-SPARSITY-GRADIENT-SOURCE and CHEATING-SPARSITY-GRADIENT-SOURCE, plus automatically generated reader and writer methods on GD-OPTIMIZER and CG-OPTIMIZER.]
Like ACTIVATE-CLOUD but without keyword parameters.
[Methods specialized on FACTORED-CLOUD and FULL-CLOUD, plus automatically generated reader and writer methods on ADAM-OPTIMIZER.]
Accumulate derivatives of weights.
[Automatically generated reader and writer methods on ->BATCH-NORMALIZATION, a method specialized on BM-PCD-LEARNER, and an automatically generated reader method on POSTERIOR-GP.]
Return a fnn definition form (that is, a list of
lump definition forms) for CHUNK that takes a single activation
parameter given by the symbol ACTIVATION-SYMBOL with NAME and SIZE.
Only called for non-conditioning chunks. The second value is a list
of clamp inits, the third a list of inits.
[Methods specialized on SIGMOID-CHUNK, GAUSSIAN-CHUNK, RELU-CHUNK and EXP-NORMALIZED-GROUP-CHUNK, plus automatically generated reader and writer methods on CONCAT-COUNTER, FACTORED-CLOUD-SHARED-CHUNK, CONDITIONING-CHUNK, CONFUSION-MATRIX, ENCODER/DECODER, BASIC-COUNTER, POSTERIOR-GP, PARAMETERIZED-EXECUTOR-CACHE-MIXIN, FILE-MAPPING, BP-LEARNER, NORMAL-SPARSITY-GRADIENT-SOURCE and CHEATING-SPARSITY-GRADIENT-SOURCE.]
Propagate the values from the already clamped
inputs and return total cost of all inputs (i.e. all stripes) and
the sum of importances. These values are also returned by COST until
the next forward pass on BPN.
[Automatically generated reader method on FUNCTION-MAPPING.]
Returns the matrix of covariances between X1 and
X2. X1 and X2 are vectors of arbitrary objects. Noise is assumed to
be included in the covariance function.
[Method specialized on POSTERIOR-GP; automatically generated reader method on TEMPORAL-CHUNK.]
Return a list of four elements. The first is a
list of lump definitions that represent the flow from FROM-LUMPY
through CLOUD. The chunk of FROM-LUMPY may be either of the end
points of CLOUD. The third value is the cloud inits; the fourth is
the name of the ‘end’ lump.
[Methods specialized on FULL-CLOUD and FACTORED-CLOUD, plus automatically generated reader methods on POSTERIOR-GP, ->ROUGH-EXPONENTIAL and DATA-MAPPING.]
Update the weights being trained. N-NEW-INPUTS have been seen since the last time this was called.
[Methods specialized on SEGMENTED-GD-OPTIMIZER, PER-WEIGHT-BATCH-GD-OPTIMIZER, NORMALIZED-BATCH-GD-OPTIMIZER, ADAM-OPTIMIZER and SGD-OPTIMIZER, plus automatically generated reader and writer methods on ADAM-OPTIMIZER and BATCH-GD-OPTIMIZER.]
Return the largest number of inputs guaranteed not to cause a change in the learner being trained.
[Methods specialized on SEGMENTED-GD-OPTIMIZER, PER-WEIGHT-BATCH-GD-OPTIMIZER, NORMALIZED-BATCH-GD-OPTIMIZER and BATCH-GD-OPTIMIZER, plus automatically generated reader and writer methods on ->BATCH-NORMALIZATION, TEMPORAL-CHUNK, BASIC-COUNTER, NORMAL-SPARSITY-GRADIENT-SOURCE, CHEATING-SPARSITY-GRADIENT-SOURCE, SEGMENTED-GD-OPTIMIZER and CROSS-ENTROPY-COUNTER.]
A hash table mapping labels to the cross entropy counters for samples with that label.
[Automatically generated reader and writer methods on PERIODIC-FN, ->PERIODIC, FNN-CLAMPING-CACHE, ->BATCH-NORMALIZATION, NORMAL-SPARSITY-GRADIENT-SOURCE, ->SAMPLE-BINARY, LAGGED-CLUMP, ->ROUGH-EXPONENTIAL and DECAYED-CG-OPTIMIZER-MIXIN, plus methods specialized on ->SOFTMAX-XE-LOSS, ->MAX-CHANNEL, ->BATCH-NORMALIZED, TEMPORAL-CHUNK, NORMALIZED-GROUP-CHUNK and CONSTANT-CHUNK.]
If not NIL, it’s a designator for a function that
returns the decay for a given segment. For convenience, NIL is also
treated as 0 decay.
[An automatically generated reader method on ->ROUGH-EXPONENTIAL and reader methods on CG-OPTIMIZER.]
Pre-allocated vectors to make CG less consy.
[Automatically generated reader methods on SPARSE-BM-LEARNER, SPARSITY-GRADIENT-SOURCE, SEGMENT-SET and CHEATING-SPARSITY-GRADIENT-SOURCE (SUM1 and SUM2), reader and writer methods on ADAM-OPTIMIZER and GD-OPTIMIZER, and readers for the X slot (‘Input comes from here.’) on ->NORMALIZED, ->SAMPLE-BINARY, ->SOFTMAX-XE-LOSS, ->SQUARED-DIFFERENCE, ->MAX-CHANNEL, ->SCALED-TANH, ->BATCH-NORMALIZED, ->PERIODIC, ->ROUGH-EXPONENTIAL and ->STRETCH, plus a reader for the Y slot on ->SQUARED-DIFFERENCE.]
In a BM W_{i,i} is always zero.
[Methods specialized on FACTORED-CLOUD and FULL-CLOUD; a STRUCTURE-OBJECT subclass; a slot of type (member nil :reconstruction :map) with initform (quote nil); and an initform of (gensym).]
Parameters for Markov Chain Monte Carlo based trainers for BMs.
Controls whether visible nodes are sampled during learning or the mean field is used instead.
:visible-sampling
Controls whether and how hidden nodes are sampled
during learning or the mean field is used instead. :HALF-HEARTED,
the default value, samples the hiddens but uses the hidden means
to calculate the effect of the positive and negative phases on the
gradient. The default should almost always be preferable to T, as
it is a less noisy estimate.
(member nil :half-hearted t)
:half-hearted
:hidden-sampling
The number of steps of Gibbs sampling to perform.
This is how many full (HIDDEN -> VISIBLE -> HIDDEN) steps are
taken for CD learning, and how many times each chunk is sampled
for PCD.
(integer 1)
1
:n-gibbs
Mix this in before a CG-based optimizer to conveniently add decay on a per-segment basis.
If not NIL, it’s a designator for a function that
returns the decay for a given segment. For convenience NIL is also
treated as 0 decay.
:segment-decay-fn
Gradient descent optimizer with momentum, weight
decay and weight penalty. Batch size and all other parameters can be
changed during training. One may even want to subclass this
optimizer and define a method on BATCH-SIZE to make it a function of
N-INSTANCES.
Depending on BATCH-SIZE, this may be stochastic or non-stochastic gradient descent.
(setf accumulator), accumulator, (setf after-update-hook), after-update-hook, (setf batch-size), batch-size, describe-object, initialize-optimizer*, (setf learning-rate), learning-rate, map-gradient-sink, (setf momentum), momentum, momentum-type, print-object, segment-set, segments, use-segment-derivatives-p, (setf weight-decay), weight-decay, (setf weight-deltas), weight-deltas, (setf weight-penalty), weight-penalty
The set of segments that are to be trained. The
ACCUMULATOR, WEIGHT-DELTAS, etc vectors are indexed by SEGMENT-SET
indices.
This slot is read-only.
mgl-mat:mat
Save memory if both the gradient source (the model
being optimized) and the optimizer support this feature. It works
like this: the accumulator into which the gradient source is asked
to place the derivatives of a segment will be SEGMENT-DERIVATIVES
of the segment. This allows the optimizer not to allocate an
accumulator matrix into which the derivatives are summed.
:use-segment-derivatives-p
This slot is read-only.
mgl-mat:mat
After having gone through BATCH-SIZE number of
inputs, the weights are updated. With BATCH-SIZE 1, one gets
Stochastic Gradient Descent. With BATCH-SIZE equal to the number
of instances in the dataset, one gets standard, ’batch’ gradient
descent. With BATCH-SIZE between these two extremes, one gets the
most practical ’mini-batch’ compromise.
1
:batch-size
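For example (a sketch; the gradient source and the call to MINIMIZE
are omitted, and the values are illustrative):
(make-instance 'batch-gd-optimizer
               :batch-size 100     ; mini-batch of 100 instances
               :learning-rate 0.1
               :momentum 0.9)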
This is the step size along the gradient. Decrease
it if optimization diverges, increase it if it doesn’t make
progress.
0.1
:learning-rate
A value in the [0, 1) interval. MOMENTUM times the
previous weight change is added to the gradient. 0 means no
momentum.
0
:momentum
One of :NORMAL, :NESTEROV or :NONE. For pure
optimization Nesterov’s momentum may be better, but it may also
increase the chances of overfitting. Using :NONE is equivalent to 0
momentum, but it also uses less memory. Note that with :NONE,
MOMENTUM is ignored even if it is non-zero.
(member :none :normal :nesterov)
:normal
:momentum-type
This slot is read-only.
An L2 penalty. It discourages large weights, much
like a zero mean gaussian prior. WEIGHT-DECAY * WEIGHT is added to
the gradient to penalize large weights. It’s as if the function
whose minimum is sought had WEIGHT-DECAY*sum_i{0.5 * WEIGHT_i^2}
added to it.
0
:weight-decay
An L1 penalty. It encourages sparsity.
SIGN(WEIGHT) * WEIGHT-PENALTY is added to the gradient pushing the
weight towards zero. It’s as if the function whose
minimum is sought had WEIGHT-PENALTY*sum_i{abs(WEIGHT_i)} added to
it. Putting it on feature biases constitutes a sparsity constraint
on the features.
0
:weight-penalty
A list of functions with no arguments called after each weight update.
list
:after-update-hook
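For example, to run some code after every weight update (a sketch;
the logging is illustrative):
(push (lambda ()
        (format t "~&weights updated~%"))
      (after-update-hook optimizer))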
Means are normalized to SCALE within node groups of GROUP-SIZE.
The sum of the means after normalization. Can be
changed during training, for instance when clamping. If it is a
vector then its length must be MAX-N-STRIPES which is
automatically maintained when changing the number of stripes.
(or mgl-util:flt mgl-mat:mat)
1.0d0
:scale
(error "group-size must be specified.")
:group-size
This slot is read-only.
For the chunks with [...]: collect the average means
over samples in a batch and adjust the weights in each cloud
connected to the chunk so that the average is closer to
SPARSITY-TARGET. This is implemented by keeping track of the average
means of the chunks connected to it. The derivative is
(M* (MATLISP:TRANSPOSE (M.- C1-MEANS TARGET)) C2-MEANS) and this is
added to the derivative at the end of the batch. The batch size
comes from the superclass.
list
This slot is read-only.