sorry to self-reply, but:
> alternatively, it does sphere packing over the direct product of open
> or closed (but bounded) intervals and discrete sets, so you can get a
> set of points that is slightly better than a random set of experiments
> (i.e. guaranteed to cover the space well).
arguably
> That doesn't seem to directly support deriving information from random
> trials. For computer go tuning, would you play multiple games with each
> parameter set in order to get a meaningful figure? That seems likely to
> be less efficient than treating it as a bandit problem.
you'd decide how many [...]
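To make the quoted sphere-packing description concrete, here is a rough Python
stand-in: a greedy maximin spread over the product of an interval and a small
discrete set. This only illustrates the kind of well-spread point set meant; it
is not Gosset's algorithm, and the candidate pool size and distance scaling are
arbitrary choices.

import random

def greedy_maximin(n_points, n_candidates=2000, seed=0):
    # Pick n_points from [0, 1] x {0, 1, 2, 3} that are spread out:
    # start anywhere, then repeatedly add the candidate whose minimum
    # distance to the points already chosen is largest.
    rng = random.Random(seed)
    candidates = [(rng.uniform(0.0, 1.0), rng.choice([0, 1, 2, 3]))
                  for _ in range(n_candidates)]

    def dist(a, b):
        # Scale the discrete coordinate so both dimensions count comparably.
        return ((a[0] - b[0]) ** 2 + ((a[1] - b[1]) / 3.0) ** 2) ** 0.5

    chosen = [candidates.pop()]
    while len(chosen) < n_points:
        best = max(candidates, key=lambda c: min(dist(c, p) for p in chosen))
        candidates.remove(best)
        chosen.append(best)
    return chosen

print(greedy_maximin(8))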
On Wed, Nov 25, 2009 at 5:01 PM, Matthew Woodcraft wrote:
> Don Dailey wrote:
> > Matthew Woodcraft wrote:
>
> >> That doesn't seem to directly support deriving information from
> >> random trials. For computer go tuning, would you play multiple games
> >> with each parameter set in order to get a meaningful figure? That
> >> seems likely to be less efficient than treating it as a bandit problem.
Don Dailey wrote:
> Matthew Woodcraft wrote:
>> That doesn't seem to directly support deriving information from
>> random trials. For computer go tuning, would you play multiple games
>> with each parameter set in order to get a meaningful figure? That
>> seems likely to be less efficient than treating it as a bandit problem.
On Wed, Nov 25, 2009 at 2:00 PM, Matthew Woodcraft wrote:
> steve uurtamo wrote:
> > the way to do all of this exactly is with experimental design.
> >
> > to design experiments correctly that handle inter-term interactions of
> > moderate degree, this tool is quite useful:
> >
> > http://www2.research.att.com/~njas/gosset/index.html
steve uurtamo wrote:
> the way to do all of this exactly is with experimental design.
>
> to design experiments correctly that handle inter-term interactions of
> moderate degree, this tool is quite useful:
>
> http://www2.research.att.com/~njas/gosset/index.html
That doesn't seem to directly support deriving information from random
trials. For computer go tuning, would you play multiple games with each
parameter set in order to get a meaningful figure? That seems likely to
be less efficient than treating it as a bandit problem.
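For what it's worth, a bandit treatment could look roughly like the sketch
below: UCB1 over a fixed list of candidate parameter sets. play_game() is a
hypothetical hook that plays one game with the given parameters and returns 1
for a win, 0 for a loss; the candidate sets and numbers are made up for
illustration.

import math
import random

def ucb1_tune(parameter_sets, n_games, play_game):
    # Treat each candidate parameter set as one bandit arm (UCB1).
    # Assumes n_games is at least len(parameter_sets).
    wins = [0.0] * len(parameter_sets)
    plays = [0] * len(parameter_sets)

    for t in range(n_games):
        if t < len(parameter_sets):
            arm = t                      # play every arm once first
        else:
            arm = max(range(len(parameter_sets)),
                      key=lambda i: wins[i] / plays[i]
                      + math.sqrt(2.0 * math.log(t) / plays[i]))
        wins[arm] += play_game(parameter_sets[arm])
        plays[arm] += 1

    best = max(range(len(parameter_sets)),
               key=lambda i: wins[i] / max(plays[i], 1))
    return parameter_sets[best]

# Hypothetical usage with a fake game, just to show the shape of the interface:
candidates = [{"capture_bonus": b} for b in (0, 5, 10)]
fake_game = lambda ps: 1 if random.random() < 0.4 + 0.01 * ps["capture_bonus"] else 0
print(ucb1_tune(candidates, 1000, fake_game))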
On Wed, Nov 25, 2009 at 10:44 AM, Heikki Levanto wrote:
> On Wed, Nov 25, 2009 at 09:01:22AM -0500, Don Dailey wrote:
> > You could of course just play games where you choose each player
> > randomly. If you have 256 features you have a ridiculous number of
> > combinations, more than you could possibly test, but before each test
> > game you just pick a combination
On Wed, Nov 25, 2009 at 09:01:22AM -0500, Don Dailey wrote:
> You could of course just play games where you choose each player randomly.
> If you have 256 features you have a ridiculous number of combinations, more
> than you could possibly test, but before each test game you just pick a
> combination
I know there are heuristics for trying to understand the interactions, and
without looking too hard I assume this package is just a more comprehensive
version of those heuristics.
On Wed, Nov 25, 2009 at 9:11 AM, steve uurtamo wrote:
> the way to do all of this exactly is with experimental design.
>
> to design experiments correctly that handle inter-term interactions of
> moderate degree, this tool is quite useful:
>
> http://www2.research.att.com/~njas/gosset/index.html
the way to do all of this exactly is with experimental design.
to design experiments correctly that handle inter-term interactions of
moderate degree, this tool is quite useful:
http://www2.research.att.com/~njas/gosset/index.html
s.
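Gosset goes far beyond this, but as a toy illustration of a designed experiment
that handles interactions, here is a textbook 2^(4-1) half fraction in +/-1
coding: eight runs for four two-level factors, with no main effect aliased
against any two-factor interaction. This is generic design-of-experiments
material, not Gosset output.

from itertools import product

# Half fraction of a 2^4 factorial: choose A, B, C freely and set D = A*B*C
# (defining relation I = ABCD, resolution IV).  8 runs instead of 16.
runs = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]
for run in runs:
    print(run)

With I = ABCD the main effects are aliased only with three-factor interactions,
which is roughly the sort of protection a designed experiment buys at this size.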
A few months ago there was a post in the computer chess forums about
optimizing combinations of features. It was called orthogonal
multi-testing.
Did I mention that on this forum already? If not, here is a brief description
of how it works:
Suppose you have 1 feature you want to test - you might normally [...]
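As I understand orthogonal multi-testing (and this is only my reading of it),
each feature is toggled on or off independently at random in every game, and
each game's result is credited to every feature's on-or-off tally, so one pool
of games measures all the features at once. A rough sketch, with play_game() a
hypothetical hook returning 1 for a win and 0 for a loss:

import random

def orthogonal_multi_test(n_features, n_games, play_game):
    # wins[f][0]/games[f][0] track games with feature f off,
    # wins[f][1]/games[f][1] track games with feature f on.
    wins = [[0, 0] for _ in range(n_features)]
    games = [[0, 0] for _ in range(n_features)]

    for _ in range(n_games):
        combo = [random.random() < 0.5 for _ in range(n_features)]
        result = play_game(combo)
        for f, on in enumerate(combo):
            wins[f][on] += result
            games[f][on] += 1

    # Estimated effect of each feature: win rate with it on minus with it off.
    return [wins[f][1] / max(games[f][1], 1) - wins[f][0] / max(games[f][0], 1)
            for f in range(n_features)]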
>What do you do when you add a new parameter? Do you retain your existing
>'history', considering each game to have been played with the value of
>the new parameter set to zero?
Yes, exactly.
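A minimal sketch of that retention step, with the history represented as
(parameter_vector, result) pairs; the representation here is only illustrative:

def add_parameter(history):
    # Every past game is kept, treated as if the new parameter had been 0.
    return [(params + [0], result) for params, result in history]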
>If you have 50 parameters already, doesn't adding a new parameter create
>a rather large number of new [...]
Brian Sheppard wrote:
> I think that I am assuming only that the objective function is convex. The
> parameters in Go programs are always inter-dependent.
What do you do when you add a new parameter? Do you retain your existing
'history', considering each game to have been played with the value of
the new parameter set to zero?
>Your system seems very interesting, but it seems to me that you assume
>that the parameters are independent.
>What happens if, for example, two parameters work well when only one of
>them is active and badly if both are active at the same time?
I think that I am assuming only that the objective function is convex. The
parameters in Go programs are always inter-dependent.
Your system seems very interesting, but it seems to me that you assume
that the parameters are independent.
What happens if, for example, two parameters work well when only one of
them is active and badly if both are active at the same time?
Tom
>From what I understand, for each parameter you take with some high
>probability the best so far, and with some lower probability the least
>tried one. This requires (manually) enumerating all parameters on some
>integer scale, if I got it correctly.
Yes, for each parameter you make a range of values [...]
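A rough sketch of that selection rule as I read it: each parameter gets an
integer range of candidate values, usually reuses its best value so far, and
occasionally tries its least-tried value. The example ranges, the exploration
probability, and the win-rate bookkeeping below are my own guesses at
reasonable choices, not the actual scheme being described.

import random

class ParameterTuner:
    def __init__(self, ranges, explore_prob=0.2):
        # ranges: e.g. {"capture_bonus": range(0, 11), "atari_weight": range(-5, 6)}
        self.ranges = ranges
        self.explore_prob = explore_prob
        self.wins = {p: {v: 0 for v in r} for p, r in ranges.items()}
        self.games = {p: {v: 0 for v in r} for p, r in ranges.items()}

    def choose(self):
        # Each parameter independently picks its best value so far (usually)
        # or its least-tried value (occasionally).
        setting = {}
        for p, r in self.ranges.items():
            if random.random() < self.explore_prob:
                setting[p] = min(r, key=lambda v: self.games[p][v])
            else:
                setting[p] = max(r, key=lambda v: self.wins[p][v]
                                 / max(self.games[p][v], 1))
        return setting

    def record(self, setting, won):
        # The game's result updates every parameter's tally for the value used.
        for p, v in setting.items():
            self.games[p][v] += 1
            self.wins[p][v] += int(won)

Usage would be: call choose() before a game, play the game with that setting,
then call record(setting, won).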