I put together a little test app, but can't reproduce the problem.
However, in researching it, I've tracked the error down to the following:
Traceback (most recent call last):
  File
Using MySQL. I'll put something together to see if I can reproduce the
issue.
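Something like this minimal model should be enough to exercise record
versioning against MySQL (connection string, table, and field names are
just placeholders, not the real app):

# models/db.py -- minimal sketch; credentials and table name are placeholders
from gluon import DAL, Field  # already injected in a real web2py model

db = DAL('mysql://user:password@localhost/testdb')

db.define_table('thing',
                Field('name'),
                Field('quantity', 'integer'))

# adds an is_active flag plus a thing_archive table; every update or
# delete first copies the current row into the archive
db.thing._enable_record_versioning()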
-Jim
On Wednesday, April 8, 2015 at 8:24:02 AM UTC-5, Paolo Valleri wrote:
Hi Jim,
Which database back-end are you using? Have you tried a different one?
Can you pack a simple app that reproduces the issue?
Paolo
On Tuesday, April 7, 2015 at 9:40:29 PM UTC+2, Jim S wrote:
Just wondering if this is posted in the right place. Should I be reporting
this issue elsewhere?
-Jim
On Monday, April 6, 2015 at 9:53:46 AM UTC-5, Jim S wrote:
Upgraded to
2.10.3-stable+timestamp.2015.04.02.21.42.07
(Running on nginx/1.4.6, Python 2.7.6)
...still having the same problem with record versioning.
-Jim
On Thursday, April 2, 2015 at 4:37:03 PM UTC-5, Jim S wrote:
Version 2.10.1-stable+timestamp.2015.04.01.03.27.40
I have one table with record versioning turned on. I get the following in
my nginx setup...
nginx 502 Bad Gateway
Looking in the nginx log I see:
2015/04/02 15:37:13 [error] 1040#0: *67 upstream prematurely closed
connection while reading response header from upstream
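If it helps, with versioning enabled an ordinary update also inserts the
previous version of the row into the archive table, so even a request as
simple as this goes through that code path (action, table, and field
names below are made up):

# controllers/default.py -- hypothetical action; any update or delete on
# a versioned table also copies the old row into its _archive table
def bump():
    # db and request come from the web2py execution environment
    row = db.thing(request.args(0, cast=int))
    if row:
        row.update_record(quantity=(row.quantity or 0) + 1)
    return dict(updated=row is not None)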