I have very little experience with TensorFlow, but I'm hoping to eventually build a simple
version of the [Karpathy
game](http://cs.stanford.edu/people/karpathy/reinforcejs/waterworld.html).
However, already on my first attempt I get stuck: the kernel dies on me at the
last line (Julia 0.5-rc3).
```julia
using TensorFlow
using Distributions

situationDim = 10
actionDim = 10
n_hidden_1 = 100
n_hidden_2 = 100

# Weight matrices initialised with small Gaussian noise
function weight_variable(shape)
    initial = map(Float64, rand(Normal(0, .001), shape...))
    return Variable(initial)
end

# Biases initialised to a small constant
function bias_variable(shape)
    initial = fill(Float64(.1), shape...)
    return Variable(initial)
end

session = Session(Graph())

x = placeholder(Float64, shape=[-1, situationDim])   # input situations
y = placeholder(Float64, shape=[-1, actionDim])      # target action distributions

# Two tanh hidden layers followed by a softmax output layer
layer1 = nn.tanh(x * weight_variable([situationDim, n_hidden_1]) + bias_variable(n_hidden_1))
layer2 = nn.tanh(layer1 * weight_variable([n_hidden_1, n_hidden_2]) + bias_variable(n_hidden_2))
y_out  = nn.softmax(layer2 * weight_variable([n_hidden_2, actionDim]) + bias_variable(actionDim))

cross_entropy = -reduce_sum(y .* log(y_out))

# The kernel dies on this line
train_step = train.minimize(train.AdamOptimizer(1e-4), cross_entropy)
```
The error is:
```
train_step = train.minimize(train.AdamOptimizer(1e-4), cross_entropy)
F ./tensorflow/core/lib/gtl/inlined_vector.h:155] Check failed: i < size() (0 vs. 0)
signal (6): Abort trap: 6
while loading no file, in expression starting on line 0
__pthread_kill at /usr/lib/system/libsystem_kernel.dylib (unknown line)
Allocations: 10484772 (Pool: 10481619; Big: 3153); GC: 20
Abort trap: 6
```
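For completeness, here is roughly how I was planning to drive the graph once `minimize` works. I haven't been able to test any of this because of the crash above; the batch data below is made up, and I'm going by the feed-dict style and `initialize_all_variables` call I saw in TensorFlow.jl examples, so the exact names may be off:

```julia
# Untested sketch: initialize variables, then run train_step on made-up data.
# initialize_all_variables() is the name I've seen in TensorFlow.jl examples;
# it may differ between versions.
run(session, initialize_all_variables())

batch_x = rand(50, situationDim)           # 50 fake situations
batch_y = rand(50, actionDim)
batch_y = batch_y ./ sum(batch_y, 2)       # normalize rows into fake action distributions

for i in 1:1000
    run(session, train_step, Dict(x => batch_x, y => batch_y))
end
```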
Any suggestions would be much appreciated. If anyone has a simple MLP
reinforcement learning example implemented and shareable, I'd appreciate learning from it.
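To be concrete about what I'm aiming for: the loop I'm imagining (loosely following the Karpathy waterworld demo) is something like the sketch below. `get_situation` and `take_action!` are hypothetical environment functions I haven't written yet, and the reward handling is only a placeholder; I mainly want to see how people wire the softmax output into action selection.

```julia
# Rough sketch only -- get_situation() and take_action!() are hypothetical
# stand-ins for the game environment, not real functions.
using Distributions

function play_episode(session, x, y_out; steps=100)
    total_reward = 0.0
    for t in 1:steps
        situation = get_situation()                          # hypothetical: 1 x situationDim row vector
        probs = run(session, y_out, Dict(x => situation))    # softmax action probabilities
        action = rand(Categorical(vec(probs)))               # sample an action index
        total_reward += take_action!(action)                 # hypothetical: apply action, get reward
    end
    return total_reward
end
```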