Sounds similar to adversarial networks
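
The training signal Marc describes can be sketched concretely. In this toy sketch (names and setup are my own, not from the thread), net 1 is frozen and net 2 is trained on -log(max(p1, p2)) for the expert move: where net 1 already rates the expert move highly, the max saturates and net 2 gets no gradient, so it is pushed toward the positions net 1 misses.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over move logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def joint_loss(p1_expert, logits2, expert_idx):
    """-log(max(p1, p2)) on the expert move; p1_expert is the frozen
    first net's probability for that move (a plain scalar here)."""
    p2_expert = softmax(logits2)[expert_idx]
    return -np.log(max(p1_expert, p2_expert))

# Net 2 is untrained and uniform over 3 moves: p2_expert = 1/3.
logits2 = np.array([0.0, 0.0, 0.0])

# Position net 1 already solves: the max is dominated by p1 = 0.9,
# so the loss saturates at -log(0.9) and net 2 sees no pressure.
easy = joint_loss(0.9, logits2, expert_idx=0)   # -log(0.9) ~ 0.105

# Position net 1 misses (p1 = 0.01): the loss falls back to net 2's
# own probability, so the full gradient lands on net 2.
hard = joint_loss(0.01, logits2, expert_idx=0)  # -log(1/3) ~ 1.099
```

Since max is only subdifferentiable, an actual implementation would rely on the usual convention that the gradient flows through the larger argument only.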

On Thu, Feb 4, 2016, 04:50 Huazuo Gao <gaohua...@gmail.com> wrote:

> Sounds like some kind of boosting, I suppose?
>
> On Thu, Feb 4, 2016 at 7:52 PM Marc Landgraf <mahrgel...@gmail.com> wrote:
>
>> Hi,
>>
>> lately a friend and I were wondering about the following idea.
>>
>> Let's assume you have a reasonably strong move-prediction DCNN. What
>> happens if you now train a second net on the same database?
>> When training the first net, you tried to maximize its judgement of
>> the expert move. For the second net, you instead maximize the
>> maximum of the two nets' judgements. This means the second net
>> gains nothing from finding moves the first net already finds easily,
>> and will instead try to fill in the first net's weaknesses.
>> A simple static way to use this in practice: first expand the top
>> two candidates of the first net, then mix in the top candidate of
>> the second net, then the next two candidates of the first net, and
>> so on.
>>
>> What do you guys think about that?
>>
>> Cheers, Marc
>> _______________________________________________
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>