This is the micmac Reference Manual, version 0.0.2, generated automatically by Declt version 4.0 beta 2 "William Riker" on Wed May 15 06:15:44 2024 GMT+0.
The main system appears first, followed by any subsystem dependency.
micmac
Micmac is mainly a library of graph search algorithms
such as alpha-beta, UCT and beam search, but it also has some MCMC
and other slightly unrelated stuff.
Gábor Melis <mega@retes.hu>
(GIT https://github.com/melisgl/mgl-gpr.git)
MIT, see COPYING.
0.0.2
alexandria
(system).
mgl-pax
(system).
src
(module).
Modules are listed depth-first from the system components tree.
micmac/src
micmac
(system).
package.lisp
(file).
uct.lisp
(file).
graph-search.lisp
(file).
metropolis-hastings.lisp
(file).
game-theory.lisp
(file).
micmac.lisp
(file).
doc.lisp
(file).
Files are sorted by type and then listed depth-first from the system's component trees.
micmac/micmac.asd
micmac/src/package.lisp
micmac/src/uct.lisp
micmac/src/graph-search.lisp
micmac/src/metropolis-hastings.lisp
micmac/src/game-theory.lisp
micmac/src/micmac.lisp
micmac/src/doc.lisp
micmac/src/uct.lisp
package.lisp
(file).
src
(module).
action
(reader method).
average-reward
(method).
average-reward
(reader method).
(setf average-reward)
(writer method).
depth
(reader method).
edge-score
(generic function).
edges
(reader method).
(setf edges)
(writer method).
from-node
(reader method).
(setf from-node)
(writer method).
list-edges
(generic function).
make-uct-node
(generic function).
outcome->reward
(generic function).
play-out
(generic function).
print-object
(method).
print-object
(method).
select-edge
(generic function).
state
(generic function).
to-node
(reader method).
(setf to-node)
(writer method).
uct
(function).
uct-edge
(class).
uct-node
(class).
unvisited-edges
(function).
update-uct-statistics
(generic function).
visited-edges
(function).
@micmac-uct
(special variable).
extremum
(function).
n-visits
(reader method).
n-visits
(reader method).
(setf n-visits)
(writer method).
(setf n-visits)
(writer method).
random-element
(function).
micmac/src/graph-search.lisp
uct.lisp
(file).
src
(module).
alpha-beta
(function).
beam-search
(function).
parallel-beam-search
(function).
@micmac-graph-search
(special variable).
apply-key
(macro).
dup
(macro).
insert-into-sorted-vector
(function).
micmac/src/metropolis-hastings.lisp
graph-search.lisp
(file).
src
(module).
accept-jump
(generic function).
accept-swap-chain-states
(generic function).
acceptance-probability
(generic function).
enumerating-chain
(class).
initialize-instance
(method).
jump
(generic function).
jump-between-chains
(generic function).
jump-to-sample
(function).
jump-to-sample*
(generic function).
log-jump-probability-ratio
(generic function).
log-probability-ratio
(generic function).
log-probability-ratio-to-jump-target
(generic function).
maybe-jump
(generic function).
maybe-swap-chain-states
(generic function).
mc-chain
(class).
mc3-chain
(class).
prepare-jump-distribution
(generic function).
random-jump
(generic function).
reject-jump
(generic function).
reject-swap-chain-states
(generic function).
(setf state)
(setf expander).
state
(reader method).
temperature
(reader method).
(setf temperature)
(writer method).
tracing-chain
(class).
@micmac-metropolis-hastings
(special variable).
candidate
(reader method).
(setf candidate)
(writer method).
hot-chains
(reader method).
jump-distribution-prepared-p
(reader method).
(setf jump-distribution-prepared-p)
(writer method).
log->acceptance-probability
(function).
p-jumps
(reader method).
random-element
(function).
random-until-different
(function).
set-state
(generic function).
sum-seq
(function).
micmac/src/game-theory.lisp
metropolis-hastings.lisp
(file).
src
(module).
find-nash-equilibrium
(function).
@micmac-game-theory
(special variable).
extremum
(function).
list-to-2d-matrix
(function).
micmac/src/micmac.lisp
game-theory.lisp
(file).
src
(module).
@micmac-introduction
(special variable).
@micmac-links
(special variable).
@micmac-manual
(special variable).
@micmac-overview
(special variable).
micmac/src/doc.lisp
micmac.lisp
(file).
src
(module).
pax-pages
(function).
pax-sections
(function).
Packages are listed by definition order.
micmac
See MICMAC::@MICMAC-MANUAL.
common-lisp
.
mgl-pax
.
alpha-beta
(function).
beam-search
(function).
parallel-beam-search
(function).
@micmac-graph-search
(special variable).
@micmac-introduction
(special variable).
@micmac-links
(special variable).
@micmac-manual
(special variable).
@micmac-overview
(special variable).
apply-key
(macro).
dup
(macro).
insert-into-sorted-vector
(function).
pax-pages
(function).
pax-sections
(function).
micmac.metropolis-hastings
See MICMAC.METROPOLIS-HASTINGS::@MICMAC-METROPOLIS-HASTINGS.
micmac.mh
common-lisp
.
mgl-pax
.
accept-jump
(generic function).
accept-swap-chain-states
(generic function).
acceptance-probability
(generic function).
enumerating-chain
(class).
jump
(generic function).
jump-between-chains
(generic function).
jump-to-sample
(function).
jump-to-sample*
(generic function).
log-jump-probability-ratio
(generic function).
log-probability-ratio
(generic function).
log-probability-ratio-to-jump-target
(generic function).
maybe-jump
(generic function).
maybe-swap-chain-states
(generic function).
mc-chain
(class).
mc3-chain
(class).
prepare-jump-distribution
(generic function).
random-jump
(generic function).
reject-jump
(generic function).
reject-swap-chain-states
(generic function).
(setf state)
(setf expander).
state
(generic reader).
temperature
(generic reader).
(setf temperature)
(generic writer).
tracing-chain
(class).
@micmac-metropolis-hastings
(special variable).
candidate
(generic reader).
(setf candidate)
(generic writer).
hot-chains
(generic reader).
jump-distribution-prepared-p
(generic reader).
(setf jump-distribution-prepared-p)
(generic writer).
log->acceptance-probability
(function).
p-jumps
(generic reader).
random-element
(function).
random-until-different
(function).
set-state
(generic function).
sum-seq
(function).
micmac.game-theory
See MICMAC.GAME-THEORY::@MICMAC-GAME-THEORY.
common-lisp
.
mgl-pax
.
find-nash-equilibrium
(function).
@micmac-game-theory
(special variable).
extremum
(function).
list-to-2d-matrix
(function).
micmac.uct
See MICMAC.UCT::@MICMAC-UCT.
common-lisp
.
mgl-pax
.
action
(generic reader).
average-reward
(generic function).
(setf average-reward)
(generic writer).
depth
(generic reader).
edge-score
(generic function).
edges
(generic reader).
(setf edges)
(generic writer).
from-node
(generic reader).
(setf from-node)
(generic writer).
list-edges
(generic function).
make-uct-node
(generic function).
outcome->reward
(generic function).
play-out
(generic function).
select-edge
(generic function).
state
(generic function).
to-node
(generic reader).
(setf to-node)
(generic writer).
uct
(function).
uct-edge
(class).
uct-node
(class).
unvisited-edges
(function).
update-uct-statistics
(generic function).
visited-edges
(function).
@micmac-uct
(special variable).
extremum
(function).
n-visits
(generic reader).
(setf n-visits)
(generic writer).
random-element
(function).
Definitions are sorted by export status, category, package, and then by lexicographic order.
Alpha-beta pruning for two-player, zero-sum maximax (like minimax
but both players maximize and the score is negated when passed
between depths). Return the score of the game STATE from the point
of view of the player to move at DEPTH and, as the second value, the
list of actions of the principal variation.
CALL-WITH-ACTION is a function of (STATE DEPTH ACTION FN). It carries out ACTION (returned by LIST-ACTIONS or NIL) to get the state corresponding to DEPTH and calls FN with that state. It may destructively modify STATE provided it undoes the damage after FN returns. CALL-WITH-ACTION is called with NIL as ACTION for the root of the tree, in this case STATE need not be changed. FN returns the same kinds of values as ALPHA-BETA. They may be useful for logging.
MAYBE-EVALUATE-STATE is a function of (STATE DEPTH). If STATE at
DEPTH is a terminal node then it returns the score from the point of
view of the player to move and as the second value a list of actions
that lead from STATE to the position that was evaluated. The list of
actions is typically empty. If we are not at a terminal node then
MAYBE-EVALUATE-STATE returns NIL.
LIST-ACTIONS is a function of (STATE DEPTH) and returns a non-empty
list of legal candidate moves for non-terminal nodes. Actions are
tried in the order LIST-ACTIONS returns them: trying stronger moves
first produces more cutoffs and a faster search.
CALL-WITH-ACTION, MAYBE-EVALUATE-STATE and LIST-ACTIONS are
mandatory.
RECORD-BEST, if non-NIL, is a function of (DEPTH SCORE ACTIONS). It
is called when at DEPTH a new best action is found. ACTIONS is a
list of all the actions in the principal variation corresponding to the
newly found best score. RECORD-BEST is useful for graceful
degradation in case of timeout.
ALPHA and BETA are typically NIL (equivalent to -infinity and
+infinity), but any real number is allowed if the range of scores can
be bounded.
See `test/test-alpha-beta.lisp` for an example.
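The contract above can be sketched on a toy two-ply game. This is an illustrative sketch, not taken from the manual: it assumes STATE is the first argument of ALPHA-BETA and that the callbacks are passed as keyword arguments named after themselves; verify this against the actual lambda list before relying on it.

```lisp
;; Toy game tree: leaves hold scores from the point of view of the
;; player to move at the leaf. A state is the list of actions taken
;; so far; NODE-AT walks the tree along those actions.
(defparameter *tree* '((3 5) (2 9)))

(defun node-at (actions)
  (reduce (lambda (node i) (elt node i)) actions :initial-value *tree*))

;; Assumed keyword names; check ALPHA-BETA's actual lambda list.
(micmac:alpha-beta
 '()                                    ; STATE at the root: no actions yet
 :list-actions (lambda (state depth)
                 (declare (ignore depth))
                 (unless (numberp (node-at state)) '(0 1)))
 :call-with-action (lambda (state depth action fn)
                     (declare (ignore depth))
                     ;; Non-destructive here, so there is no damage to undo.
                     (funcall fn (if action
                                     (append state (list action))
                                     state)))
 :maybe-evaluate-state (lambda (state depth)
                         (declare (ignore depth))
                         (let ((node (node-at state)))
                           (when (numberp node) node))))
```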
In a graph, search for the nodes with the best scores with [beam
search](http://en.wikipedia.org/wiki/Beam_search). That is, starting
from START-NODES, perform a breadth-first search, but at each depth
keep only the BEAM-WIDTH nodes with the best scores. Keep the
best N-SOLUTIONS (at most) complete solutions. Discard nodes known
to be unable to get into the best N-SOLUTIONS (due to
UPPER-BOUND-FN). Finally, return the solutions and the active
nodes (the _beam_) as adjustable arrays sorted by score in
descending order.
START-NODES is a sequence of elements of arbitrary type. SCORE-FN,
UPPER-BOUND-FN, SOLUTIONP-FN and FINISHEDP-FN are all functions of one
argument: the node. SOLUTIONP-FN checks whether a node represents a
complete solution (i.e. some goal is reached). SCORE-FN returns a
real number that’s to be maximized; it’s only called for nodes for
which SOLUTIONP-FN returned true. UPPER-BOUND-FN (if not NIL)
returns a real number that is equal to or greater than the score of
all solutions reachable from that node. FINISHEDP-FN returns true iff
there is nowhere to go from the node.
EXPAND-NODE-FN is also a function of a single node argument. It returns a sequence of nodes ’one step away’ from its argument node. EXPAND-BEAM-FN is similar, but it takes a vector of nodes and returns all nodes one step away from any of them. It’s enough to provide either EXPAND-NODE-FN or EXPAND-BEAM-FN. The purpose of EXPAND-BEAM-FN is to allow more efficient, batch-like operations.
See `test/test-beam-search.lisp` for an example.
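A minimal usage sketch of the callbacks just described. The keyword argument names (:EXPAND-NODE-FN, :SCORE-FN, and so on) are assumed to match the function names in the text; check BEAM-SEARCH's actual lambda list.

```lisp
;; Sketch: nodes are lists of digits, grown one digit per depth; a
;; solution is any three-digit list, scored by the sum of its digits.
;; Keyword names are assumptions based on the documentation above.
(micmac:beam-search
 '((1) (2))                            ; START-NODES
 :beam-width 2
 :n-solutions 3
 :expand-node-fn (lambda (node)
                   ;; Nodes one step away: prepend a digit 0..2.
                   (unless (= (length node) 3)
                     (loop for d below 3 collect (cons d node))))
 :finishedp-fn (lambda (node) (= (length node) 3))
 :solutionp-fn (lambda (node) (= (length node) 3))
 :score-fn (lambda (node) (reduce #'+ node)))
```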
Find a Nash equilibrium of a zero-sum game represented by the PAYOFF matrix (a 2d matrix or a nested list). PAYOFF is from the point of view of the row player: the player who chooses a column wants to minimize it, while the row player wants to maximize it. The first value returned is a vector of unnormalized probabilities assigned to each action of the row player, the second value is the same for the column player, and the third is the expected payoff of the row player in the Nash equilibrium represented by the oddment vectors.
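A usage sketch for matching pennies, with the payoff given from the row player's point of view as described above. Any additional keyword arguments (such as an iteration count) are not documented here and so are not shown.

```lisp
;; Matching pennies: the row player wins 1 when the choices match and
;; loses 1 otherwise. The unique Nash equilibrium mixes both actions
;; equally, so both returned oddment vectors should be (close to)
;; uniform, and the expected payoff should be close to 0.
(micmac.game-theory:find-nash-equilibrium '((1 -1) (-1 1)))
```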
From the current state of CHAIN make JUMP (from the current distribution of CHAIN) and return the sample where we landed. Reuse RESULT-SAMPLE when possible.
This is very much like BEAM-SEARCH, except it solves a number of
instances of the same search problem starting from different sets of
nodes. The sole purpose of PARALLEL-BEAM-SEARCH is to amortize the
cost of EXPAND-BEAMS-FN if possible.
EXPAND-BEAMS-FN is called with a sequence of beams (i.e. a
sequence of sequences of nodes) and it must return another sequence
of sequences of nodes. Each element of the returned sequence holds
the nodes reachable from the nodes in the corresponding element of
its argument sequence.
PARALLEL-BEAM-SEARCH returns a sequence of solution sequences and
a sequence of active node sequences.
See `test/test-beam-search.lisp` for an example.
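The shape of EXPAND-BEAMS-FN can be sketched with a self-contained stand-in: it maps a sequence of beams to a sequence of per-beam successor sequences. The expansion rule here (each node n has successors n+1 and n+2) is purely illustrative.

```lisp
;; A stand-in with the calling convention described above: one input
;; sequence of beams in, one sequence of per-beam successor sequences out.
(defun expand-beams (beams)
  (map 'list
       (lambda (beam)
         (loop for n across beam append (list (+ n 1) (+ n 2))))
       beams))

(expand-beams (list #(0 10) #(5)))   ; => ((1 2 11 12) (6 7))
```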
Starting from the ROOT node, search the tree, expanding it by one node for each playout. Finally, return the mutated ROOT. ROOT may be the root node of any tree; it need not be a single node with no edges. FRESH-ROOT-STATE is a function that returns a fresh state corresponding to ROOT. This state will be destroyed unless special care is taken in STATE.
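A skeleton, not a runnable example: UCT requires specializing the generic functions documented in this section (LIST-EDGES, PLAY-OUT, STATE, and friends) on your own node class. The keyword arguments of UCT shown below (:ROOT, :FRESH-ROOT-STATE, :EXPLORATION-BIAS, :MAX-N-PLAYOUTS) are assumptions; check UCT's actual lambda list.

```lisp
;; Subclass UCT-NODE so that MAKE-UCT-NODE's default implementation
;; creates nodes of this class throughout the tree.
(defclass my-node (micmac.uct:uct-node) ())

(defmethod micmac.uct:list-edges ((node my-node) state)
  ;; Return a list of UCT-EDGE objects, one per legal action in STATE.
  (error "Not implemented in this sketch."))

;; Assumed keyword names; PLAY-OUT, OUTCOME->REWARD and STATE must also
;; be specialized before this call can work.
(micmac.uct:uct :root (make-instance 'my-node)
                :fresh-root-state (lambda () (list :my-game-state))
                :exploration-bias 1.0
                :max-n-playouts 1000)
```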
Called when CHAIN accepts JUMP to CANDIDATE.
(accept-jump (chain tracing-chain) jump candidate)
Swap the states of CHAIN1 and CHAIN2 of MC3.
(accept-swap-chain-states (mc3 tracing-chain) chain1 chain2)
Calculate the acceptance probability of CANDIDATE to which JUMP leads from the current STATE of CHAIN.
Take a step on the Markov chain. Return a boolean indicating whether the proposed jump was accepted.
Choose two chains randomly and swap their states with MC3 acceptance probability.
This function is called by JUMP-TO-SAMPLE. It is where the behaviour of JUMP-TO-SAMPLE can be customized.
Return a list of edges representing the possible
actions from NODE with STATE. This function must be customized.
Return Q(TARGET->STATE) / Q(STATE->TARGET) where Q
is the jump distribution and JUMP is from the current STATE of CHAIN
to TARGET sample.
Return P(SAMPLE1)/P(SAMPLE2). It’s in the log
domain to avoid overflows, and it is a ratio because that may
allow computational shortcuts as opposed to calculating unnormalized
probabilities separately.
Return P(TARGET)/P(STATE) where JUMP is from the
current state of CHAIN to TARGET sample. This can be specialized for
speed. The default implementation just falls back on
LOG-PROBABILITY-RATIO.
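The two log ratios above feed the standard Metropolis-Hastings acceptance rule. The following is a self-contained sketch of that textbook formula, not the library's internal LOG->ACCEPTANCE-PROBABILITY:

```lisp
;; Standard Metropolis-Hastings acceptance probability in the log domain:
;; A = min(1, exp(log P(target)/P(state) + log Q(target->state)/Q(state->target))).
;; The generic functions documented above supply the two terms; micmac's
;; internal implementation may differ in detail.
(defun mh-acceptance (log-probability-ratio log-jump-probability-ratio)
  (min 1d0 (exp (+ log-probability-ratio log-jump-probability-ratio))))

(mh-acceptance 0d0 0d0)    ; => 1.0d0: equal probabilities, always accept
(mh-acceptance -1d0 0d0)   ; ~0.37: downhill jumps accepted with probability e^-1
```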
Create a node representing the state that EDGE
leads to (from PARENT). Specialize this if you want to keep track of
the state, which is not done by default as it can be expensive,
especially in light of TAKE-ACTION mutating it. The default
implementation simply creates an instance of the class of PARENT so
that one can start from a subclass of UCT-NODE and be sure that that
class is going to be used for nodes below it.
Randomly accept or reject JUMP to CANDIDATE from
the current state of CHAIN with ACCEPTANCE-PROBABILITY.
(maybe-jump (chain tracing-chain) jump candidate acceptance-probability)
Swap the states of CHAIN1 and CHAIN2 of MC3 with ACCEPTANCE-PROBABILITY.
(maybe-swap-chain-states (mc3 tracing-chain) chain1 chain2 acceptance-probability)
Compute the reward for a node in the tree from
OUTCOME that is the result of a playout. This is called by the
default implementation of UPDATE-UCT-STATISTICS. This is where one
typically negates depending on the parity of DEPTH in two player
games.
Play a random game from NODE with STATE and return
the outcome that’s fed into UPDATE-UCT-STATISTICS. The way the
random game is played is referred to as ‘default policy’ and that’s
what makes or breaks UCT search. This function must be
customized.
Prepare for sampling from the F(X) = Q(SAMPLE->X)
distribution. Called by RANDOM-JUMP. The around method ensures that
nothing is done unless there was a state change.
Sample a jump from the current distribution of jumps that was computed by PREPARE-JUMP-DISTRIBUTION.
(random-jump (chain enumerating-chain))
Called when CHAIN rejects JUMP to CANDIDATE. It
does nothing by default, it’s just a convenience for debugging.
(reject-jump (chain tracing-chain) jump candidate)
Called when the swap of states of CHAIN1 and CHAIN2
is rejected. It does nothing by default, it’s just a convenience for
debugging.
(reject-swap-chain-states (mc3 tracing-chain) chain1 chain2)
Choose an action to take from a state, in other
words an edge to follow from NODE in the tree. The default
implementation chooses randomly from the yet unvisited edges or, if
there are none, moves down the edge with the maximum EDGE-SCORE. If
you are thinking of customizing this, for example to make it choose
the minimum at odd depths, then you may want to consider specializing
REWARD or UPDATE-UCT-STATISTICS instead.
Return the state that corresponds to NODE. This is
not a straightforward accessor unless NODE is customized to store
it. The rest of the parameters are provided so that one can
reconstruct the state by taking the action of EDGE in the
PARENT-STATE of PARENT. It’s allowed to mutate PARENT-STATE and
return it. This function must be specialized.
(state (chain mc-chain))
(temperature (chain mc-chain))
The PROBABILITY-RATIO of samples is raised to the
power of 1 / TEMPERATURE before calculating the acceptance
probability. This effectively flattens the peaks if TEMPERATURE >
1 which makes it easier for the chain to traverse deep valleys.
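The effect of TEMPERATURE can be sketched numerically: raising a probability ratio to the power 1/TEMPERATURE pulls it toward 1.

```lisp
;; Flattening a probability ratio of 100 at different temperatures:
(expt 100.0 (/ 1.0 1.0))   ; => 100.0 (TEMPERATURE = 1: unchanged)
(expt 100.0 (/ 1.0 4.0))   ; => ~3.16 (TEMPERATURE = 4: much flatter)
```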
Increment the number of visits and update the
average reward in nodes and edges of PATH. By default, edges simply
get their visit counter incremented while nodes also get an update
to AVERAGE-REWARD based on what OUTCOME->REWARD returns.
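The default node update described above amounts to an incremental running average. A sketch of that arithmetic (micmac's exact code may differ):

```lisp
;; Incremental running-average update of AVERAGE-REWARD after one more
;; playout, of the kind UPDATE-UCT-STATISTICS's default performs.
(defun updated-average-reward (average n-visits reward)
  (/ (+ (* average n-visits) reward)
     (1+ n-visits)))

(updated-average-reward 0.5 4 1.0)   ; => 0.6: (0.5*4 + 1)/5
```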
(initialize-instance (chain enumerating-chain) &key n-jumps &allow-other-keys)
A simple abstract chain subclass that explicitly enumerates the probabilities of the distribution.
A simple Markov chain for Metropolis-Hastings. With temperature, it is suitable for MC3.
The PROBABILITY-RATIO of samples is raised to the
power of 1 / TEMPERATURE before calculating the acceptance
probability. This effectively flattens the peaks if TEMPERATURE >
1 which makes it easier for the chain to traverse deep valleys.
1.0d0
:temperature
This is the current sample where the chain is.
:state
This slot is read-only.
High-probability islands separated by low-probability valleys
make the chain mix poorly. MC3-CHAIN has a number of HOT-CHAINS
that have state probabilities similar to that of the main chain but
less jagged. Often it suffices to set the temperatures of the
HOT-CHAINS higher and use the very same base probability
distribution.
list
:hot-chains
This slot is read-only.
Mix this in with your chain to have it print a trace of acceptances/rejections.
An edge in the UCT tree. Represents an action taken
from a state. The value of an action is the value of its target
state which is not quite as generic as it could be; feel free to
specialize AVERAGE-REWARD for the edges if that’s not the case.
The action represented by the edge.
:action
This slot is read-only.
The node this edge starts from.
:from-node
The node this edge points to if the edge has been visited or NIL.
The number of times this action was taken from the parent state.
0
A node in the UCT tree. Roughly translates to a
state in the search space. Note that the state itself is not stored
explicitly, but it can be recovered by ‘replaying’ the actions from
the starting state or by customizing MAKE-UCT-NODE.
The number of times this node was visited.
0
Outgoing edges.
Average reward over random playouts started from
below this node. See UPDATE-UCT-STATISTICS and REWARD.
0
:average-reward
Find the first extreme value of the [START,END) subsequence of VECTOR and return it along with its index. The extremum is what would be the first element of VECTOR if it were sorted with SORT using PRED.
Insert ITEM into VECTOR while keeping it sorted by PRED. Extend the vector if needed while respecting MAX-LENGTH.
Choose an element randomly from the [START,END) subsequence of SEQ with the given probabilities. KEY returns the unnormalized probability of an element, and SUM is the sum of these unnormalized probabilities. Return the element chosen and its index.
Return the sum of elements in the [START,END) subsequence of SEQ.
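What these internal utilities compute can be sketched with only standard Common Lisp; the behaviour shown is inferred from the descriptions above, not lifted from micmac's source.

```lisp
;; EXTREMUM with #'< is the minimum: the element that would come first
;; if the vector were sorted with that predicate.
(reduce (lambda (a b) (if (< b a) b a)) #(3 1 2))   ; => 1

;; SUM-SEQ over the [START,END) subsequence:
(reduce #'+ #(1 2 3 4) :start 1 :end 3)             ; => 5
```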
(p-jumps (chain enumerating-chain))
automatically generated reader method