On 23.09.2013 21:56:56, Alfredo Deza wrote:
>
> On Mon, Sep 23, 2013 at 11:23 AM, Bernhard Glomm <
> bernhard.gl...@ecologic.eu> wrote:
> > Hi all,
> >
> > something with ceph-deploy doesn't work at all anymore.
> > After an upgrade ceph-deploy failed to roll out a new monitor
> >
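For orientation, rolling out a monitor with ceph-deploy typically involves
commands along these lines (a sketch only; the hostname mon1 is a placeholder,
not taken from this thread):

# run from the admin node, inside the directory holding the cluster's ceph.conf
ceph-deploy install mon1        # install the ceph packages on the target host
ceph-deploy mon create mon1     # create and start the monitor daemon
ceph-deploy gatherkeys mon1     # fetch the bootstrap keys once the mon is up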
Hi ceph-users,
I deployed a Ceph cluster (including RadosGW) using ceph-deploy on RHEL 6.4.
During the deployment I ran into a couple of questions which need your help.
1. I followed the steps at http://ceph.com/docs/master/install/rpm/ to deploy
the RadosGW node; however, after the deployment,
John Wilkins wrote:
> Clients use the public network. The cluster network is principally for
> OSD-to-OSD communication--heartbeats, replication, backfill, etc.
Hmm, well, I'm aware of this, but the question is whether it is nevertheless
possible, i.e. is it actively prohibited or "just" not recommended
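For context, the two networks are normally declared in the [global] section of
ceph.conf; the subnets below are placeholders, not values from this thread:

[global]
public network = 192.168.1.0/24    # clients and monitors talk to OSDs here
cluster network = 10.0.0.0/24      # OSD-to-OSD replication, heartbeats, backfill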
Hi there,
I want to set the flag hashpspool on an existing pool. "ceph osd pool set
{pool-name} {field} {value}" does not seem to work. So I wonder how I can
set/unset flags on pools?
Corin
On 09/24/2013 10:22 AM, Corin Langosch wrote:
> Hi there,
> I want to set the flag hashpspool on an existing pool. "ceph osd pool
> set {pool-name} {field} {value}" does not seem to work. So I wonder how
> I can set/unset flags on pools?
I believe that at the moment you'll only be able to have that flag set on a
pool at creation time, if 'osd pool default flag hashpspool = true' is in
your conf.
On 09/20/2013 10:27 AM, Maciej Gałkiewicz wrote:
> Hi guys
> Do you have any list of companies that use Ceph in production?
> regards
Inktank has a list of customers up on the site:
http://www.inktank.com/customers/
-Joao
--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ce
On 09/23/2013 10:10 AM, Fuchs, Andreas (SwissTXT) wrote:
> I'm following different threads here, mainly the poor radosgw performance one.
> What I see there are often recommendations to put certain config settings
> into ceph.conf, but it's often unclear to me where exactly to put them
> - does it matter if
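For orientation, ceph.conf is organized into sections, and the section an
option sits in determines which daemons pick it up; a sketch with placeholder
values, not recommendations from this thread:

[global]
auth cluster required = cephx      # applies to every daemon and client
[osd]
osd journal size = 1024            # applies to all OSDs
[osd.0]
host = ceph-node1                  # applies only to osd.0
[client.radosgw.gateway]
rgw print continue = false         # applies to the radosgw instance "gateway"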
On Tue, Sep 24, 2013 at 3:27 AM, Bernhard Glomm wrote:
> On 23.09.2013 21:56:56, Alfredo Deza wrote:
>
> On Mon, Sep 23, 2013 at 11:23 AM, Bernhard Glomm <
> bernhard.gl...@ecologic.eu> wrote:
>
>> Hi all,
>>
>> something with ceph-deploy doesn't work at all anymore.
>> After an upgrade c
On Tue, Sep 24, 2013 at 6:44 AM, bernhard glomm wrote:
>
> From: bernhard glomm
> Subject: Re: [ceph-users] ceph-deploy again
> Date: September 24, 2013 11:47:00 AM GMT+02:00
> To: "Fuchs, Andreas (SwissTXT)"
>
> Andi, thanks,
>
> but as I said, ssh is not the problem.
> since the first
Authentication works. I was interested in trying it without authentication. I
didn't see the upstart link earlier.
Is the plan to only use upstart and not service for Dumpling and beyond?
Tim
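For what it's worth, on Ubuntu the upstart jobs can be driven roughly like this
(a sketch, assuming upstart-managed daemons; the ids are placeholders):

sudo initctl list | grep ceph      # list the ceph upstart jobs on this node
sudo start ceph-osd id=0           # start a single OSD by id
sudo stop ceph-mon id=myhost       # stop a single monitor by id
sudo start ceph-all                # or act on every daemon on the node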
From: Gary Mazzaferro [mailto:ga...@oedata.com]
Sent: Tuesday, September 24, 2013 1:16 AM
To: John Wilkin
On Tue, Sep 24, 2013 at 12:46 AM, Guang wrote:
> Hi ceph-users,
> I deployed a Ceph cluster (including RadosGW) using ceph-deploy on RHEL 6.4.
> During the deployment I ran into a couple of questions which need your help.
>
> 1. I followed the steps at http://ceph.com/docs/master/install/rpm/ to
I did the same thing, restarted with upstart, and I still need to use
authentication. Not sure why yet. Maybe I didn't change the /etc/ceph
configs on all the nodes.
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Snider, Tim
Sent: Tuesday,
On 24.09.2013 12:24, Joao Eduardo Luis wrote:
> I believe that at the moment you'll only be able to have that flag set on a
> pool at creation time, if 'osd pool default flag hashpspool = true' is in
> your conf.
I just updated my config like this:
[osd]
osd journal size = 100
filestore xattr use
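If the flag really can only be applied at pool creation time, the conf change
would look something like the sketch below; putting it in [global] rather than
[osd] is an assumption here, since it is a pool-creation default rather than a
per-OSD setting:

[global]
osd pool default flag hashpspool = true

A pool created after that (e.g. "ceph osd pool create mypool 128") should then
carry the flag, which "ceph osd dump" should show in that pool's flags.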
Hi there,
Do snapshots have an impact on write performance? I assume that on each write
all snapshots have to be updated (COW), so the more snapshots exist, the worse
write performance will get?
Is there any way to see how much disk space a snapshot occupies? I assume
because of cow snapshots star
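One rough way to estimate that (a sketch, not something suggested in this
thread; pool, image and snapshot names are placeholders) is to sum the extents
reported by rbd diff:

# approximate data referenced up to a given snapshot
rbd diff rbd/myimage --snap mysnap | awk '{sum += $2} END {print sum/1024/1024 " MB"}'
# approximate data added between two snapshots
rbd diff rbd/myimage --from-snap snap1 --snap snap2 | awk '{sum += $2} END {print sum/1024/1024 " MB"}'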
From your pastie details, it looks like you are using "auth supported
= none". That's pre 0.51, as noted in the documentation. Perhaps I
should note the old usage as deprecated, or omit it entirely.
It should look like this:
auth cluster required = none
auth service required = none
auth client required = none
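Those three lines go into the [global] section of ceph.conf on every node; to
re-enable authentication later, the same keys are set to cephx instead:

[global]
auth cluster required = none
auth service required = none
auth client required = none
# with authentication enabled these would read:
# auth cluster required = cephx
# auth service required = cephx
# auth client required = cephx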
On Tue, Sep 24, 2013 at 1:14 AM, Kurt Bauer wrote:
>
> John Wilkins wrote:
>
> Clients use the public network. The cluster network is principally for
> OSD-to-OSD communication--heartbeats, replication, backfill, etc.
>
> Hmm, well, I'm aware of this, but the question is whether it is nevertheless
On Sun, Sep 22, 2013 at 10:00 AM, Serge Slipchenko wrote:
> On Fri, Sep 20, 2013 at 11:44 PM, Gregory Farnum wrote:
>>
>> [ Re-added the list — please keep emails on there so everybody can
>> benefit! ]
>>
>> On Fri, Sep 20, 2013 at 12:24 PM, Serge Slipchenko wrote:
>> >
>> > On Fri
On Sun, Sep 22, 2013 at 5:25 AM, Gaylord Holder wrote:
>
> On 09/22/2013 02:12 AM, yy-nm wrote:
>>
>> On 2013/9/10 6:38, Gaylord Holder wrote:
>>>
>>> Indeed, that pool was created with the default pg_num of 8.
>>>
>>> 8 PGs * 2 TB/OSD / 2 replicas ~ 8 TB, which is about how far I got.
>>>
>>> I bumped up
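For reference, raising the placement group count on an existing pool is done
along these lines (a sketch; the pool name and target count are placeholders,
and pgp_num must be raised as well before data actually rebalances):

ceph osd pool set mypool pg_num 256
ceph osd pool set mypool pgp_num 256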
On Sat, Sep 21, 2013 at 11:05 PM, yy-nm wrote:
> On 2013/9/10 4:57, Samuel Just wrote:
>>
>> That's normal, each osd listens on a few different ports for different
>> reasons.
>> -Sam
>>
>> On Mon, Sep 9, 2013 at 12:27 AM, Timofey Koolin wrote:
>>>
>>> I use ceph 0.67.2.
>>> When I start ceph
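A quick way to see those per-OSD listeners on an OSD node (a sketch, not a
command from this thread):

# each ceph-osd process binds several TCP ports, one per messenger, from 6800 up
sudo netstat -tlnp | grep ceph-osd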
Is the correct form "auth cluster required = none" or "auth_cluster_required =
none" (with underscores as word separators)?
-----Original Message-----
From: John Wilkins [mailto:john.wilk...@inktank.com]
Sent: Tuesday, September 24, 2013 11:43 AM
To: Aronesty, Erik
Cc: Snider, Tim; Gary Mazzaferro; ceph-users@lists.c
Either one should work. For RHEL, CentOS, etc., use sysvinit.
I rewrote the ops doc, but it's in a wip branch right now. Here:
http://ceph.com/docs/wip-doc-quickstart/rados/operations/operating/
I still may make some edits to it, but follow the sysvinit section.
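In other words, ceph.conf accepts either spelling, and on sysvinit platforms
the daemons are driven through the init script; a quick sketch (the daemon id
is a placeholder):

# both lines are parsed the same way in ceph.conf
auth cluster required = none
auth_cluster_required = none

# sysvinit (RHEL/CentOS): restart one daemon, or everything on the node
sudo /etc/init.d/ceph restart osd.0
sudo /etc/init.d/ceph restart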
On Tue, Sep 24, 2013 at 10:08 AM
Hi,
I want to use Ceph and KVM with RBD to host MySQL and Oracle.
I have already used KVM with iSCSI, but with database workloads it suffers
from I/O limitations.
Are there people here who have good or bad experiences hosting databases this
way?
Thanks
This "noshare" option may have just helped me a ton -- I sure wish I would
have asked similar questions sooner, because I have seen the same failure
to scale. =)
One question -- when using the "noshare" option (or really, even without
it) are there any practical limits on the number of RBDs that
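For anyone else trying this, the option in question is passed at map time to
the kernel RBD client; a sketch (pool and image names are placeholders, and
whether noshare is appropriate depends on the workload):

# map each image with its own client instance instead of a shared one
sudo rbd map rbd/vol01 -o noshare
sudo rbd map rbd/vol02 -o noshare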
On Tue, 24 Sep 2013, Travis Rhoden wrote:
> This "noshare" option may have just helped me a ton -- I sure wish I would
> have asked similar questions sooner, because I have seen the same failure to
> scale. =)
>
> One question -- when using the "noshare" option (or really, even without it)
> are
On Tue, Sep 24, 2013 at 5:16 PM, Sage Weil wrote:
> On Tue, 24 Sep 2013, Travis Rhoden wrote:
>> This "noshare" option may have just helped me a ton -- I sure wish I would
>> have asked similar questions sooner, because I have seen the same failure to
>> scale. =)
>>
>> One question -- when using
Hi Sage,
We did quite a few experiments to see how Ceph read performance can scale up.
Here is the summary.
1. First we tried to see how far a single-node cluster with one OSD can scale
up. We started with the Cuttlefish release and the entire OSD file system was
on an SSD. What we saw with 4K s
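For reference, a small-block read test of this kind can be driven with stock
tools roughly as follows (a sketch; pool name, runtime and queue depth are
placeholders, not the parameters used in these experiments):

# write 4 KB objects for 60 seconds with 32 concurrent ops
rados bench -p testpool 60 write -b 4096 -t 32
# then read them back sequentially (depending on the ceph version, the write
# pass may need --no-cleanup so the objects are still there to read)
rados bench -p testpool 60 seq -t 32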
Hi Somnath!
On Tue, 24 Sep 2013, Somnath Roy wrote:
>
> Hi Sage,
>
> We did quite a few experiments to see how Ceph read performance can scale up.
> Here is the summary.
>
> 1. First we tried to see how far a single-node cluster with one OSD can scale
> up. We started with Cuttlefish
Hi Sage,
Thanks for your input. I will try those. Please see my response inline.
Thanks & Regards
Somnath
-----Original Message-----
From: Sage Weil [mailto:s...@inktank.com]
Sent: Tuesday, September 24, 2013 3:47 PM
To: Somnath Roy
Cc: Travis Rhoden; Josh Durgin; ceph-de...@vger.kernel.org; Anir
Hi,
I'm exploring a configuration with multiple Ceph block devices used with
LVM. The goal is to provide a way to grow and shrink my file systems
while they are online.
I've created three block devices:
$ sudo ./ceph-ls | grep home
jpr-home-lvm-p01: 102400 MB
jpr-home-lvm-p02: 102400 MB
jpr-h
You need to add a line to /etc/lvm/lvm.conf:
types = [ "rbd", 1024 ]
It should be in the "devices" section of the file.
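In context, that means something like the following sketch (assuming the images
are mapped through the kernel RBD client so they show up as /dev/rbd*):

# /etc/lvm/lvm.conf
devices {
    # existing settings unchanged; add rbd to the accepted device types
    types = [ "rbd", 1024 ]
}

After that, pvcreate on a mapped device (e.g. /dev/rbd1 or
/dev/rbd/<pool>/<image>) should be accepted by LVM.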
On Tue, Sep 24, 2013 at 5:00 PM, John-Paul Robinson wrote:
> Hi,
>
> I'm exploring a configuration with multiple Ceph block devices used with
> LVM. The goal is to provide a
On 09/25/2013 02:00 AM, John-Paul Robinson wrote:
> Hi,
> I'm exploring a configuration with multiple Ceph block devices used with
> LVM. The goal is to provide a way to grow and shrink my file systems
> while they are online.
> I've created three block devices:
> $ sudo ./ceph-ls | grep home
> jpr-home-