Clay, thanks for a very comprehensive answer!

I'll have to delve a bit more and see what I can achieve using the puppet module. I guess back-patching the change in when it arrives is a possible option (I'm using the Icehouse branch of puppet-swift - our OpenStack setup is running Icehouse).

With respect to Apache wsgi integration, we have recently moved to running a number of other OpenStack services (e.g. Keystone) this way, and I was hoping to "leverage" some of our existing puppet code to do likewise for Swift (ahem - so probably a bit of "We have this hammer here...use it to hit everything")!
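
For what it's worth, the shape I have in mind follows what we did for Keystone: a small wsgi script that loads the proxy app from its paste config, plus an Apache vhost fragment pointing at it. Something like this (the paths, process counts and names below are just placeholders cribbed from the upstream Apache deployment notes, not a tested setup):

# /var/www/swift/proxy-server.wsgi - load the proxy app from the paste config
from swift.common.wsgi import init_request_processor
application, conf, logger, log_name = \
    init_request_processor('/etc/swift/proxy-server.conf', 'proxy-server')

# Apache vhost fragment (mod_wsgi)
WSGIDaemonProcess swift-proxy processes=5 threads=1 user=swift
WSGIProcessGroup swift-proxy
WSGIScriptAlias / /var/www/swift/proxy-server.wsgi

The account, container and object servers would presumably get the same treatment, each with its own conf file and paste section.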

Cheers

Mark

On 12/06/15 03:55, Clay Gerrard wrote:
What a well-timed question!

A Swift core maintainer recently did some analysis on this very question,
and the results strongly favored running multiple workers on different
ports, each handling only a single physical filesystem device.

To make it easier to achieve that configuration, there's a patch to
enable the swift-object-server wsgi worker handler to lay out processes
like this automatically, based on the ports in the ring:

https://review.openstack.org/#/c/184189/
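
If you want to lay the groundwork for that now, you can already give each device its own port in the object ring - that's what the patch keys off. Roughly (IPs, ports and weights here are made up for illustration):

$ swift-ring-builder object.builder add r1z1-192.168.5.181:6010/sdb1 100
$ swift-ring-builder object.builder add r1z1-192.168.5.181:6020/sdc1 100
$ swift-ring-builder object.builder rebalance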

However, that isn't in (yet), so it's not available to you in Swift 1.13
- but the referenced benchmarks, graphs and I/O isolation results
should indicate that even in Swift 1.13 you'll want to run multiple
workers per disk - and, if possible, have those workers handling only one
device for isolation (which, until that change lands, means a config file
per disk).
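
Concretely, the interim layout looks much like what devstack gives you below: one object-server conf per device, each binding the port that device has in the ring and pointing at a devices directory that holds only that one disk. A rough sketch for two disks (ports and paths are illustrative only):

# /etc/swift/object-server/1.conf
[DEFAULT]
bind_port = 6010
user = swift
# /srv/node-1 holds (a mount or symlink for) sdb1 only
devices = /srv/node-1

# /etc/swift/object-server/2.conf
[DEFAULT]
bind_port = 6020
user = swift
# /srv/node-2 holds sdc1 only
devices = /srv/node-2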

Unrelated, but I wonder why you think apache/mod_wsgi is better than
having the swift-proxy-server process back right up to a simple SSL
terminator (e.g. stud).

-Clay

On Wed, Jun 10, 2015 at 5:08 PM, Mark Kirkwood
<mark.kirkw...@catalyst.net.nz> wrote:

    Hi,

    I'm looking at setting up a Swift cluster and am wondering if there
    is any strong preference for one vs many config files in this case.

    I note that devstack will create one config per device, e.g. for a
    2-device install:

    $ ls -l /opt/stack/data/swift
    total 16
    lrwxrwxrwx  1 root  root    35 Jun 11 11:50 1 ->
    /opt/stack/data/swift/drives/sdb1/1
    lrwxrwxrwx  1 root  root    35 Jun 11 11:50 2 ->
    /opt/stack/data/swift/drives/sdb1/2

    $ ls -l /etc/swift/object-server/
    total 16
    -rw-r--r-- 1 stack stack 8148 Jun 11 11:49 1.conf
    -rw-r--r-- 1 stack stack 8148 Jun 11 11:49 2.conf

    $ head /etc/swift/object-server/1.conf
    [DEFAULT]
    # bind_ip = 0.0.0.0
    bind_port = 6013
    # bind_timeout = 30
    # backlog = 4096
    user = stack
    swift_dir = /etc/swift
    devices = /opt/stack/data/swift/1
    mount_check = false
    disable_fallocate = true


    Whereas the puppet-swift module seems to create just one, e.g.:

    $ ls -l /srv/node
    total 0
    drwxr-xr-x 5 swift swift 47 Jun 10 04:21 1
    drwxr-xr-x 6 swift swift 62 Jun 10 04:21 2

    $ head /etc/swift/object-server.conf
    [DEFAULT]
    devices = /srv/node
    bind_ip = 192.168.5.181
    bind_port = 6000
    mount_check = false
    user = swift
    log_facility = LOG_LOCAL2
    workers = 1


    (both of these are Swift 1.13). Is there a scalability advantage to
    each device having its own port? Or any other reason to prefer one
    or the other?

    I'm hoping to use Puppet + puppet-swift to actually deploy Swift,
    and to run the proxy, account, container and object servers under
    Apache mod_wsgi (which is my next struggle with Puppet no doubt...).

    Cheers

    Mark
