I think that's ok: the prediction systems are already used to dealing with
a huge number of positions during training; it's just a matter of
changing the quality of those positions. Say instead of training on 100%
good answers to good moves from games, we could take half as many and
train on 50% p
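Roughly the kind of mixing I mean, as a sketch only; the Position type, the pool
names, and the exact 50/50 split are placeholders, and the second pool is whatever
we decide to fill it with:

    #include <algorithm>
    #include <cstddef>
    #include <random>
    #include <string>
    #include <vector>

    // Sketch only: assemble a training set that is half "good answers to good
    // moves from games" and half positions from some other pool (whatever we
    // decide that should be). Both pools are assumed non-empty.
    struct Position { std::string sgf; };

    std::vector<Position> mix_training_set(const std::vector<Position>& game_pool,
                                           const std::vector<Position>& other_pool,
                                           std::size_t total, std::mt19937& rng) {
        std::uniform_int_distribution<std::size_t> from_games(0, game_pool.size() - 1);
        std::uniform_int_distribution<std::size_t> from_other(0, other_pool.size() - 1);
        std::vector<Position> out;
        out.reserve(total);
        for (std::size_t i = 0; i < total; ++i) {
            // Even slots from game positions, odd slots from the other pool: a 50/50 mix.
            if (i % 2 == 0) out.push_back(game_pool[from_games(rng)]);
            else            out.push_back(other_pool[from_other(rng)]);
        }
        std::shuffle(out.begin(), out.end(), rng);
        return out;
    }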
My comment was addressed to the original question, which mentioned more
traditional pattern-based work, such as Remi’s.
Let’s think about how you might build an NN using a large pattern base as inputs.
An NN has K features per point on the board, and you don’t want K to be a large
number. I
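To make that concrete, one cheap way to keep K small while still drawing on a big
pattern base is to hash each matched pattern's ID into one of K planes. Just a
sketch; the matcher stub, the board size, and K = 16 are assumptions, not any
particular engine's code:

    #include <array>
    #include <cstdint>
    #include <vector>

    constexpr int kBoardSize = 19;
    constexpr int kNumPoints = kBoardSize * kBoardSize;
    constexpr int K = 16;  // per-point feature count fed to the network

    // Hypothetical stand-in for a fast matcher over a large pattern base: it
    // would return the IDs of every pattern matching at this point. The body
    // here is a dummy so the sketch is self-contained.
    std::vector<uint32_t> matched_pattern_ids(int point) {
        return { static_cast<uint32_t>(point) * 2654435761u };
    }

    // K planes of kNumPoints floats each, plane-major. Hashing pattern IDs
    // down to K buckets accepts collisions in exchange for a small fixed K.
    std::array<float, K * kNumPoints> build_pattern_planes() {
        std::array<float, K * kNumPoints> planes{};  // zero-initialised
        for (int p = 0; p < kNumPoints; ++p)
            for (uint32_t id : matched_pattern_ids(p))
                planes[(id % K) * kNumPoints + p] = 1.0f;
        return planes;
    }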
On 17-04-17 15:04, David Wu wrote:
> If you want an example of this actually mattering, here's an example where
> Leela makes a big mistake in a game that I think is due to this kind of
> issue.
Ladders have specific treatment in the engine (which also has both known
limitations and actual bugs in 0
Now, I love this idea. A super-fast, cheap pattern matcher can feed the
neural network's input layer as a sort of "pay additional attention
here and here and..." signal.
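The crudest version of that would just be one extra input plane flagging the points
where the cheap matcher fired; a sketch, with the matcher stubbed out and the
threshold invented:

    #include <array>

    constexpr int kBoardSize = 19;
    constexpr int kNumPoints = kBoardSize * kBoardSize;

    // Hypothetical: the fast matcher reports the strength of its best match at
    // a point, 0 if nothing matched. Dummy body so the sketch stands alone.
    float best_pattern_strength(int point) {
        return (point % 37 == 0) ? 1.0f : 0.0f;
    }

    // One extra input plane meaning "pay additional attention here and here":
    // 1.0 where the matcher fired strongly, 0.0 elsewhere. The network learns
    // what, if anything, to do with the hint.
    std::array<float, kNumPoints> attention_plane(float threshold = 0.5f) {
        std::array<float, kNumPoints> plane{};
        for (int p = 0; p < kNumPoints; ++p)
            plane[p] = best_pattern_strength(p) >= threshold ? 1.0f : 0.0f;
        return plane;
    }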
On Apr 18, 2017 6:31 AM, "Brian Sheppard via Computer-go" <
computer-go@computer-go.org> wrote:
Adding patterns is very cheap: encode the patterns as an if/else tree, and it
is O(log n) to match.
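A toy version of what I mean, assuming the 3x3 neighbourhood is packed two bits per
point: each internal node tests one point's colour, so a balanced tree over n
patterns matches in O(log n) tests. The encoding and node layout are illustrative,
not Pebbles' actual code.

    #include <cstdint>
    #include <vector>

    struct Node {
        int point = -1;       // which of the 9 neighbourhood points to test; -1 marks a leaf
        uint8_t colour = 0;   // colour compared against at that point
        int if_equal = -1;    // child index when the colour matches
        int if_not = -1;      // child index when it does not
        int pattern_id = -1;  // valid only at leaves; -1 means "no pattern"
    };

    // Packed neighbourhood: two bits per point (0 empty, 1 black, 2 white, 3 off-board).
    inline uint8_t colour_at(uint32_t packed, int point) {
        return (packed >> (2 * point)) & 3u;
    }

    // One walk down the tree; the cost is the tree depth, not the pattern count.
    int match(const std::vector<Node>& tree, uint32_t packed) {
        int i = 0;
        while (tree[i].point >= 0)
            i = (colour_at(packed, tree[i].point) == tree[i].colour) ? tree[i].if_equal
                                                                     : tree[i].if_not;
        return tree[i].pattern_id;
    }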
Pattern matching as such did not show up as a significant component of Pebbles.
But that is mostly because all of the machinery that makes pattern-matching
cheap (incremental updating of 3x3 n
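For what it's worth, the incremental part is roughly this: cache a packed 3x3 code
per point, and when a stone goes down only the eight neighbouring caches change, so
the playouts always see up-to-date codes for next to nothing. A sketch under the
same toy two-bits-per-point encoding; not Pebbles' real data structure, and
captures are ignored:

    #include <cstdint>
    #include <vector>

    constexpr int N = 19;

    struct Board {
        std::vector<uint8_t>  colour  = std::vector<uint8_t>(N * N, 0);   // 0 empty, 1 black, 2 white
        std::vector<uint32_t> code3x3 = std::vector<uint32_t>(N * N, 0);  // cached packed neighbourhoods

        // Slot of offset (dx, dy) inside a 3x3 neighbourhood, 0..8 (4 is the centre).
        static int slot(int dx, int dy) { return (dy + 1) * 3 + (dx + 1); }

        void play(int x, int y, uint8_t c) {
            colour[y * N + x] = c;
            // Only the eight neighbours' caches change: in each neighbour's 3x3
            // view, the played point sits at the mirrored offset (-dx, -dy).
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= N || ny < 0 || ny >= N) continue;
                    int s = slot(-dx, -dy);
                    uint32_t& code = code3x3[ny * N + nx];
                    code = (code & ~(3u << (2 * s))) | (uint32_t(c) << (2 * s));
                }
        }
    };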
Many Faces likes L6 for about 600 playouts, then switches to L9 and stays there.
This is because the old engine does local tactical searches and uses the
results to bias the playout policy for nodes with more than about 100 visits.
This is for the new unreleased version with a policy network.
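Purely as an illustration of the gating (none of this is Many Faces' code, and the
helper is a stub): once a node has enough visits, run the local tactical search
once and fold its verdict into that node's move priors.

    #include <cstddef>
    #include <vector>

    struct SearchNode {
        int visits = 0;
        bool tactics_done = false;
        std::vector<float> prior;  // one prior per candidate move
    };

    // Stub: a real version would run local life-and-death / capture searches
    // and return a bonus per move (e.g. extra weight on the winning tactic).
    std::vector<float> local_tactical_bonus(const SearchNode& n) {
        return std::vector<float>(n.prior.size(), 0.0f);
    }

    void maybe_bias_with_tactics(SearchNode& n, int visit_threshold = 100) {
        if (n.tactics_done || n.visits < visit_threshold) return;
        const std::vector<float> bonus = local_tactical_bonus(n);
        float sum = 0.0f;
        for (std::size_t i = 0; i < n.prior.size(); ++i) {
            n.prior[i] += bonus[i];
            sum += n.prior[i];
        }
        if (sum > 0.0f)
            for (float& p : n.prior) p /= sum;  // renormalise the biased priors
        n.tactics_done = true;
    }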
David