Hello all,

I now have a new problem with places. This time, it has to do with the
interaction of places, sandboxes, threads and PostgreSQL.

I promise I’m not actively looking for ways to break Racket, and this is
the simplest reproducible form of the error that I have been able to
reach (although I’m also trying a version that just spins PostgreSQL
queries in multiple places). I /am/ genuinely trying to use Racket in
this pattern. (The threads are “serve/servlet”s in my case.)

---------------------------------------------------------------------------
#lang racket/base
(require racket/place
         racket/match
         racket/sandbox
         db)

(provide server-place)

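;; Each server place opens its own PostgreSQL connection, optionally
;; builds a sandbox evaluator (which is never used afterwards), and then
;; answers broadcast requests in a loop.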
(define (server-place pl)
  (define pgc (postgresql-connect #:database "my_db"
                                  #:user "tim"
                                  #:password "letmein"))
  (define bcast-read-ch (place-channel-get pl))

  ;; The sandbox is either necessary or merely aggravating
  ;; [with NO_SANDBOX, 6 places survived 2 runs of 10,000,000 without a crash]
  (define do-sbx? (not (getenv "NO_SANDBOX")))
  (unless do-sbx? (displayln "NO SANDBOX!"))
  (define sbx (when do-sbx?
                (parameterize ([sandbox-eval-limits #f])
                  (make-evaluator 'racket/base))))

  (place-channel-put pl "standing by")
  (let loop ()
    (match-define reply-to-pch (place-channel-get bcast-read-ch))
    (define n 1)
    (define rsp-val
      ;; This causes: SIGSEGV MAPERR si_code 1 fault on addr ...
      (query-value pgc "SELECT $1 + 1" n)
     ;;(query-value pgc "SELECT 1") ; This causes a spin+grow
     )

    (place-channel-put reply-to-pch 1)
    (loop)))

(module main racket/base
  (require racket/place syntax/location)

  (define (env->int e) (let ((v (getenv e))) (and v (string->number v))))
  (define max-clients   (or (env->int "CLIENT_THREADS") 20))
  (define report-period (or (env->int "REPORT_PERIOD") 10000))
  (define n-tests (or (env->int "TESTS") 10000000))
  (define n-servers (or (env->int "SERVER_PLACES") 3))

  (define-values (bcast-to-ch bcast-from-ch) (place-channel))

  (for/list ((n (in-range n-servers)))
    (define srv-pl (dynamic-place (quote-module-path "..") 'server-place))
    ;; Tell server where to get its broadcast requests
    (place-channel-put srv-pl bcast-from-ch)
    (place-channel-get srv-pl)) ; Synchronise

  (define thread-limiter-sem (make-semaphore max-clients))

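  ;; Spawn up to max-clients concurrent client threads; each one sends a
  ;; fresh reply channel to whichever server place picks it up, waits for
  ;; the answer, and then releases its semaphore slot.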
  (define (consumer n)
    (unless (zero? n)
      (semaphore-wait thread-limiter-sem)
      (thread
       (lambda ()
         (define-values (reply-read-ch reply-write-ch) (place-channel))
         (when (zero? (modulo n report-period)) (displayln (- n-tests n)))
         (place-channel-put bcast-to-ch reply-write-ch)
         (place-channel-get reply-read-ch)
         (semaphore-post thread-limiter-sem)))
      (consumer (sub1 n))))
  (consumer n-tests))
---------------------------------------------------------------------------

-----------------------------------------------------------------------
$ racket --version
Welcome to Racket v6.2.900.10.
# racket is vanilla ./configure and make
$ uname -a
Linux XXX 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u1 x86_64 GNU/Linux
$ grep 'model name' /proc/cpuinfo
model name      : Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
model name      : Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
model name      : Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
model name      : Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
-----------------------------------------------------------------------

I run the code with 10,000,000 tests, and with more than 2 server places
the program crashes. If I use the parameterized query (“SELECT $1 + 1”)
under a normal build, the crash is of the form:
  SIGSEGV MAPERR si_code 1 fault on addr (nil)

Under a debug build:
  SIGSEGV MAPERR si_code 1 fault on addr 0x7ffd8e99cfb8
AFAICT, that address maps to a GC call (but don’t quote me on that).

If the query is simply “SELECT 1”, then Racket spins and grows instead
of crashing.
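
For reference, the runs were launched along these lines (the file name
“server-place.rkt” is just a placeholder for wherever the module above
lives; the environment variables are the ones the program reads):
-----------------------------------------------------------------------
$ SERVER_PLACES=3 CLIENT_THREADS=20 TESTS=10000000 racket server-place.rkt
$ SERVER_PLACES=6 NO_SANDBOX=1 racket server-place.rkt  # skip make-evaluator
-----------------------------------------------------------------------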

The sandbox “make-evaluator” seems to be necessary: without it, I cannot
get a crash in a 10M run even when there are 6 server places. (Although
it could be that the sandbox is simply aggravating the problem and I
don’t have the patience to wait it out without one.)

Here is a table of when the crashes were observed (it’s similar to what
I’ve been seeing throughout my experimentation, except these runs were
properly recorded).

| SERVER |  CLIENT | SAND | Crash Point(s)           | Debug? |
| PLACES | THREADS | BOX  |                          | racket |
|--------+---------+------+--------------------------+--------|
|      3 |      20 | Y    | 3,290,000 3,710,000      | N      |
|      2 |      20 | Y    | [Survived 10M]           | N      |
|      6 |      20 | Y    | 5,810,000 620,000        | N      |
|      3 |      10 | Y    | 200,000 1,810,000        | N      |
|      3 |       6 | Y    | 6,800,000 530,000        | N      |
|      3 |       3 | Y    | 4,860,000 [Survived 10M] | N      |
|      3 |       4 | Y    | 6,030,000 2,710,000      | N      |
|      6 |      20 | N    | [Survived 10M]           | N      |

Each run was over 10,000,000 tests. Configurations that survived the
full run took too long to be worth repeating -- and I have never
reproduced the crash with 2 server places.

I’m currently trying to boil the problem down to just places, PostgreSQL
and the sandbox (each place drives itself; no client threads), but I’m
not having much (bad) luck!
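
In case it helps, the reduction I’m attempting looks roughly like the
sketch below (the credentials and hard-coded counts are copied from the
program above; this is a sketch, not the exact file I’m running):

---------------------------------------------------------------------------
#lang racket/base
(require racket/place
         racket/sandbox
         db)

(provide driver-place)

;; Each place connects to PostgreSQL, builds a sandbox evaluator (again
;; never used afterwards), and drives its own query loop -- no client
;; threads, no broadcast channel.
(define (driver-place pch)
  (define pgc (postgresql-connect #:database "my_db"
                                  #:user "tim"
                                  #:password "letmein"))
  (define sbx (parameterize ([sandbox-eval-limits #f])
                (make-evaluator 'racket/base)))
  (for ([i (in-range 10000000)])
    (query-value pgc "SELECT $1 + 1" 1))
  (place-channel-put pch 'done))

(module main racket/base
  (require racket/place syntax/location)
  (define driver-places
    (for/list ([n (in-range 3)])
      (dynamic-place (quote-module-path "..") 'driver-place)))
  ;; A place descriptor is itself a place channel; wait for each 'done
  (for ([pl (in-list driver-places)])
    (place-channel-get pl)))
---------------------------------------------------------------------------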

Regards,

Tim

-- 
Tim Brown CEng MBCS <tim.br...@cityc.co.uk>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
                City Computing Limited · www.cityc.co.uk
      City House · Sutton Park Rd · Sutton · Surrey · SM1 2AE · GB
                T:+44 20 8770 2110 · F:+44 20 8770 2130
────────────────────────────────────────────────────────────────────────
City Computing Limited registered in London No:1767817.
Registered Office: City House, Sutton Park Road, Sutton, Surrey, SM1 2AE
VAT No: GB 918 4680 96

-- 
You received this message because you are subscribed to the Google Groups 
"Racket Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to racket-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Reply via email to