On 18.04.2013 10:49, Wolfgang Hennerbichler wrote:
Ceph doesn't support data stripes, and you probably don't need them either.
Ceph distributes reads anyway, because large objects are spread
automatically across the OSDs and reads happen concurrently; this is somewhat
like striping, but better :)
Well
Can you explain more?
Because I found here :
http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html
It says: "Shared storage: NOVA-INST-DIR/instances/ (eg
/var/lib/nova/instances) has to be mounted by shared storage."
And from here:
http://www.mail-arc
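As a side note on the striping point quoted above, one way to see that distribution in practice is to look at how an RBD image is chunked into RADOS objects; a rough sketch, with pool and image names as placeholders:

  rbd info rbd/myimage      # "order 22 (4096 kB objects)" means 4 MB chunks
  rados -p rbd ls | head    # the individual backing objects, spread over the OSDs

Each chunk lands wherever CRUSH places it, so large reads fan out across the cluster much like striped reads would.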
Dear ceph users,
I just set up a small cluster with two OSDs and three monitors
(0.56.4-1~bpo70+1).
OSDs are XFS (default mkfs options, mounted with defaults,noatime) over LVM over
hwraid.
dd if=/dev/zero of=... bs=1M count=1 conv=fdatasync on each mounted ceph-*
OSD partition shows 120MB/s on one ser
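If it helps to compare the raw disk numbers with what RADOS itself can sustain, a quick benchmark sketch (the path, pool name and duration are placeholders):

  # raw device throughput on one OSD host
  dd if=/dev/zero of=/path/to/osd/mount/testfile bs=1M count=1024 conv=fdatasync
  # aggregate throughput through the object store
  rados -p rbd bench 30 write

rados bench writes 4 MB objects by default, so it exercises the journals and replication as well, which the plain dd does not.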
Thanks, it's just the thing I was searching for.
On 04/17/2013 05:29 PM, Patrick McGarry wrote:
> Hey Wolfgang,
>
> There are several slide decks with associated imagery floating around
> out there. I'd be happy to get you images that correspond to what you
> want to focus on. A good place to s
On 04/18/2013 05:19 AM, Emmanuel Lacour wrote:
Dear ceph users,
I just set up a small cluster with two OSDs and three monitors
(0.56.4-1~bpo70+1).
OSDs are XFS (default mkfs options, mounted with defaults,noatime) over LVM over
hwraid.
dd if=/dev/zero of=... bs=1M count=1 conv=fdatasync on each ceph
I've not been following the list for long, so forgive me if this has been
covered, but is there a plan for format 2 image support in the kernel RBD driver? I
assume that with Linux 3.9 in the RC phase, it's not likely to appear there?
Thanks!
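(For reference, once a new enough userspace is in place, a format 2 image is created roughly like this; pool/image names are placeholders and the exact flag spelling has changed between releases:

  rbd create --format 2 --size 10240 rbd/myimage
  rbd info rbd/myimage        # should report "format: 2"

Support in the kernel driver is the separate question.)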
If I understand the roadmap correctly
( http://tracker.ceph.com/projects/ceph/roadmap ), it's planned for Ceph
v0.62B:
On Thursday, April 18, 2013 at 09:28 -0400, Whelan, Ryan wrote:
> I've not been following the list for long, so forgive me if this has been
> covered, but is there a plan for image 2 su
Does this mean it's in linux-next? (released in 3.10?)
- Original Message -
From: "Olivier B."
To: "Ryan Whelan"
Cc: ceph-users@lists.ceph.com
Sent: Thursday, April 18, 2013 9:36:22 AM
Subject: Re: [ceph-users] Format 2 Image support in the RBD driver
If I understand the roadmap correctly
( h
On Thu, Apr 18, 2013 at 08:25:50AM -0500, Mark Nelson wrote:
>
thanks for your answer!
> It makes me a bit nervous that you are seeing such a discrepancy
> between the drives. Were you expecting that one server would be so
> much faster than the other? If a drive is starting to fail your
>
Hi,
tl;dr: something deleted the objects from the .rgw.gc and then the pgs
went inconsistent. Is this normal??!!
Just now we had scrub errors and resulting inconsistencies on many of
the pgs belonging to our .rgw.gc pool.
HEALTH_ERR 119 pgs inconsistent; 119 scrub errors
pg 11.1f0 is active+clea
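The commands I normally reach for in this situation, using one of the pgs above as an example, are roughly:

  ceph health detail        # lists the inconsistent pgs and the scrub errors
  ceph pg 11.1f0 query      # shows which osds hold the pg and its state
  ceph pg repair 11.1f0     # asks the primary osd to repair the pg

That doesn't explain why the .rgw.gc objects disappeared in the first place, of course.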
On 04/18/2013 08:42 AM, Emmanuel Lacour wrote:
On Thu, Apr 18, 2013 at 08:25:50AM -0500, Mark Nelson wrote:
thanks for your answer!
It makes me a bit nervous that you are seeing such a discrepancy
between the drives. Were you expecting that one server would be so
much faster than the other
Replying to myself...
I just noticed this:
[root@ceph-radosgw01 ceph]# ls -lh /var/log/ceph/
total 27G
-rw-r--r--. 1 root root 27G Apr 18 16:08 radosgw.log
-rw-r--r--. 1 root root 20 Apr 5 03:13 radosgw.log-20130405.gz
-rw-r--r--. 1 root root 20 Apr 6 03:14 radosgw.log-20130406.gz
-rw-r--r--.
Hello, is it possible to configure the RAM usage for each daemon instance in ceph.conf?
Sorry for the noise.. we now have a better idea what happened here.
For those that might care, basically we had one client looping while
trying to list the / bucket with an incorrect key. rgw was handling
this at 1kHz, so congratulations on that. I will now go and read how
to either decrease the l
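For anyone searching later: the bulk of that logging can usually be turned down in ceph.conf; a sketch, assuming the gateway runs under a client section named like the one below:

  [client.radosgw.gateway]
      debug rgw = 0
      rgw enable ops log = false

followed by a restart of the radosgw process.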
On Thu, Apr 18, 2013 at 09:05:12AM -0500, Mark Nelson wrote:
>
> So Ceph pseudo-randomly distributes data to different OSDs, which
> means that you are more or less limited by the slowest OSD in your
> system. IE if one node can only process X objects per second,
> outstanding operations will slo
On Thu, Apr 18, 2013 at 04:19:09PM +0200, Emmanuel Lacour wrote:
> > 1) If you put your journals on the same devices, you are doing 2
> > writes for every incoming write since we do full data journalling.
> > Assuming that's the case we are down to 25MB/s.
> >
>
to reduce this double write over
On 04/18/2013 10:12 AM, Emmanuel Lacour wrote:
On Thu, Apr 18, 2013 at 04:19:09PM +0200, Emmanuel Lacour wrote:
1) If you put your journals on the same devices, you are doing 2
writes for every incoming write since we do full data journalling.
Assuming that's the case we are down to 25MB/s.
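A common way around the double write hitting the same spindle is to give each OSD a journal on a separate (ideally SSD) partition; a ceph.conf sketch with example device names:

  [osd.0]
      osd journal = /dev/sdb1    ; dedicated journal partition (example)
  [osd.1]
      osd journal = /dev/sdb2

With a raw partition as the journal there is no filesystem in the way, and the data disk only sees the final filestore write.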
Hi,
I configured a test 'cluster' and played with it (moving OSD
folders around, i.e. the journal file) and broke something. Now I think
that this can occur again when we go to production, so I would like to know how
I can fix it.. I don't care about losing my files..
Can anyone help? Here's the
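If it's only the journal that was moved or lost and the data in it can be sacrificed, one usual sequence is the following (a sketch; replace the id, adapt to your init system, and expect to lose whatever was only in the old journal):

  service ceph stop osd.0
  ceph-osd -i 0 --mkjournal
  service ceph start osd.0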
On Thu, Apr 18, 2013 at 10:18:29AM -0500, Mark Nelson wrote:
>
> SSD journals definitely help, especially when doing large writes and
> targeting high throughput.
>
The clusters I will build will be used mainly for KVM server images ;)
> If you get a chance, it still may be worth giving 0.60 a try
I believe Alex just merged format 2 reading into our testing branch, and is
working on writes now.
-Greg
On Thursday, April 18, 2013, Whelan, Ryan wrote:
> Does this mean its in linux-next? (released in 3.10?)
>
> - Original Message -
> From: "Olivier B."
> To: "Ryan Whelan"
> Cc: cep
On 04/18/2013 10:29 AM, Emmanuel Lacour wrote:
On Thu, Apr 18, 2013 at 10:18:29AM -0500, Mark Nelson wrote:
SSD journals definitely help, especially when doing large writes and
targeting high throughput.
The clusters I will build will be used mainly for KVM server images ;)
If you get a chanc
What version was this on?
-Greg
On Thursday, April 18, 2013, Dan van der Ster wrote:
> Sorry for the noise.. we now have a better idea what happened here.
>
> For those that might care, basically we had one client looping while
> trying to list the / bucket with an incorrect key. rgw was handling
Hi,
Has anyone successfully installed Ceph using the ceph-barclamp with Crowbar?
If yes, what version are you using, how did you create the barclamp,
and did you integrate it with OpenStack Folsom/Grizzly?
GreetZ
Ramonskie
This is 0.56.4 on a RHEL6 derivative.
Cheers,
Arne
From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com] on
behalf of Gregory Farnum [g...@inktank.com]
Sent: 18 April 2013 17:34
To: Dan van der Ster
Cc: ceph-users@lists.ceph.com
Subject: R
The barclamps were written against the crowbar "Betty" release, OpenStack
Essex (which is the last one supported by Crowbar), and Ceph "argonaut". JJ
has updated them to use "Bobtail", but I don't think anybody's run them
against newer versions of Openstack. :(
You should be able to find built vers
Well, I tried to build the barclamp from https://github.com/ceph/barclamp-ceph
and package it with https://github.com/ceph/package-ceph-barclamp,
but the install fails.
So I also found a barclamp that installs Argonaut,
and it installs Ceph,
but when I manually try to add an image in the volumes
On Thu, Apr 18, 2013 at 7:57 AM, Dan van der Ster wrote:
>
> Sorry for the noise.. we now have a better idea what happened here.
>
> For those that might care, basically we had one client looping while
> trying to list the / bucket with an incorrect key. rgw was handling
> this at 1kHz, so congrat
Keep me posted on this, and I'll update the docs when we have a resolution.
On Thu, Apr 18, 2013 at 8:55 AM, Makkelie, R - SPLXL wrote:
> **
> Well, I tried to build the barclamp from
> https://github.com/ceph/barclamp-ceph
> and pacakge it with https://github.com/ceph/package-ceph-barclamp
>
>
Oh, yeah. Bobtail isn't going to play nicely without some
modifications, but I'll have to wait for JJ to speak about those.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Thu, Apr 18, 2013 at 8:55 AM, Makkelie, R - SPLXL
wrote:
> Well, I tried to build the barclamp from
> h
On Wed, Apr 17, 2013 at 7:40 AM, Guido Winkelmann
wrote:
> Hi,
>
> I just tried upgrading parts of our experimental ceph cluster from 0.56.1 to
> 0.60, and it looks like the new mon-daemon from 0.60 cannot talk to those from
> 0.56.1 at all.
>
> Long story short, we had to move some hardware aroun
On 04/18/2013 05:28 PM, Gregory Farnum wrote:
On Wed, Apr 17, 2013 at 7:40 AM, Guido Winkelmann
wrote:
Hi,
I just tried upgrading parts of our experimental ceph cluster from 0.56.1 to
0.60, and it looks like the new mon-daemon from 0.60 cannot talk to those from
0.56.1 at all.
Long story shor
We're making sure that the modified barclamps are successfully going
through the Tempest tests; once they do, I'll be sending a pull request
with all the changes for a bobtail-enabled barclamp to the repo.
The main problem with using bobtail is actually with the Nova package;
it currently includes
Isn't the new leveldb tuning part of Cuttlefish?
Stefan
On 18.04.2013 at 19:40, Joao Eduardo Luis wrote:
> On 04/18/2013 05:28 PM, Gregory Farnum wrote:
>> On Wed, Apr 17, 2013 at 7:40 AM, Guido Winkelmann
>> wrote:
>>> Hi,
>>>
>>> I just tried upgrading parts of our experimental ceph cluste
On 04/18/2013 04:23 PM, konradwro wrote:
Hello, is it possible to configure the RAM usage for each daemon instance in ceph.conf?
No, the daemons will use as much memory as they need and as is available. You can
put the daemons in a cgroup to limit their memory usage, but that comes
with the problem that they could
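For completeness, a bare-bones sketch of the cgroup approach (cgroup v1 memory controller; the 4 GB limit and the pid are placeholders):

  mkdir /sys/fs/cgroup/memory/ceph-osd0
  echo $((4 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/ceph-osd0/memory.limit_in_bytes
  echo 12345 > /sys/fs/cgroup/memory/ceph-osd0/tasks   # pid of the ceph-osd process

but if the daemon genuinely needs more than the limit it will be reclaimed against or OOM-killed within the cgroup rather than politely shrink.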
Stephane,
The monitoring section of operations explains what's happening, but I think
I probably need to do a better job of explaining unfound objects.
http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/
http://ceph.com/docs/master/rados/operations/troubleshooting-osd/#unfound-objects
Bryan,
It seems you got crickets with this question. Did you get any further? I'd
like to add it to my upcoming CRUSH troubleshooting section.
On Wed, Apr 3, 2013 at 9:27 AM, Bryan Stillwell
wrote:
> I have two test clusters running Bobtail (0.56.4) and Ubuntu Precise
> (12.04.2). The problem
Seeing this go by again, it's simple enough to provide a quick
answer/hint — by setting the tunables you of course get a better
distribution of data, but the reason they're optional to begin with is
that older clients won't support them. In this case, it's the kernel client
being run; so it returns
John,
Thanks for your response. I haven't spent a lot of time on this issue
since then, so I'm still in the same situation. I do remember seeing an
error message about an unsupported feature at one point after setting the
tunables to bobtail.
Bryan
On Thu, Apr 18, 2013 at 1:51 PM, John Wilkin
What's the fix for people running precise (12.04)? I believe I see the
same issue with quantal (12.10) as well.
On Thu, Apr 18, 2013 at 1:56 PM, Gregory Farnum wrote:
> Seeing this go by again it's simple enough to provide a quick
> answer/hint — by setting the tunables it's of course getting
There's not really a fix — either update all your clients so they support
the tunables (I'm not sure how new a kernel you need), or else run without
the tunables. In setups where your branching factors aren't very close to
your replication counts they aren't normally needed. If you want to reshape
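For the archives, the way I'd flip the tunables on and off is to edit the decompiled crush map and inject it back; a sketch using the standard tools:

  ceph osd getcrushmap -o cm.bin
  crushtool -d cm.bin -o cm.txt       # edit the "tunable ..." lines at the top
  crushtool -c cm.txt -o cm.new
  ceph osd setcrushmap -i cm.new

Reverting is the same round trip with the old values put back.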
I'm new to Ceph, and considering using it to store a bunch of static
files in the RADOS Gateway. My files are all versioned, so we never
modify files. We only add new files, and delete unused files.
I'm trying to figure out how to back everything up, to protect against
administrative and ap
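One simple approach, assuming the data is reachable over the S3 API with a key that can read every bucket, is to mirror the buckets with an ordinary S3 client; a sketch with placeholder names:

  s3cmd ls                                  # list buckets
  s3cmd sync s3://my-bucket /backup/my-bucket/

It's not a point-in-time snapshot, but for write-once versioned files it protects against accidental deletes on the cluster side.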
Hi,
will RDMA be supported in the short term?
I'm planning an infrastructure and I don't know whether to start with IB
QDR or 10GbE.
IB is much cheaper than 10GbE and with RDMA it should be 4x faster, but
with IPoIB as a workaround I've read that it is very heavy on the CPU and
very slow (15 Gbit, more or less).
On 04/18/2013 03:40 PM, Gandalf Corvotempesta wrote:
Hi,
will RDMA be supported in the short term?
I'm planning an infrastructure and I don't know whether to start with IB
QDR or 10GbE.
Depends on your definition of RDMA, supported, and short term. ;)
We like the idea of using rsockets as it would b
Ahh, I think I have a better understanding now. I had my crush map set up
like this:
default
  basement
    rack1
      server1
        osd.0
        osd.1
        osd.2
        osd.3
        osd.4
      server2
        osd.5
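For comparison, the part of the decompiled crush map that decides where replicas land looks roughly like this; the rule below spreads copies across hosts instead of across the OSDs of a single host (names are illustrative):

  rule data {
      ruleset 0
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type host
      step emit
  }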
2013/4/18 Mark Nelson :
> 10GbE is fully supported and widely used with Ceph while IB is a bit more
> complicated with fewer users. Having said that, IPoIB seems to work just
> fine, and there is potential in the future for even better performance.
> Which one is right for you probably depends on
On 04/18/2013 04:15 PM, Gandalf Corvotempesta wrote:
2013/4/18 Mark Nelson :
10GbE is fully supported and widely used with Ceph while IB is a bit more
complicated with fewer users. Having said that, IPoIB seems to work just
fine, and there is potential in the future for even better performance.
Hey guys,
I finally had enough time to coordinate with a few other people and
figure out what's going on with the ceph-create-keys access denied
messages and create a ticket: http://tracker.ceph.com/issues/4752.
(I believe your monitor crash is something else, Matthew; if that
hasn't been dealt wit
On 04/18/2013 10:36 PM, Gregory Farnum wrote:
(I believe your monitor crash is something else, Matthew; if that
hasn't been dealt with yet. Unfortunately all that log has is
messages, so it probably needs a bit more. Can you check it out, Joao?
The stack trace below is #3495, and Matthew is alr
2013/4/18 Mark Nelson :
> SDP is deprecated:
>
> http://comments.gmane.org/gmane.network.openfabrics.enterprise/5371
>
> rsockets is the future I think.
I don't know rsockets. Any plans about support for this or are they
"transparent" like SDP?
2013/4/18 Sage Weil :
> I'm no expert, but I've heard SDP is not likely to be supported/maintained
> by anyone in the long-term. (Please, anyone, correct me if that is not
> true!) That said, one user has tested it successfully (with kernel and
> userland ceph) and it does seem to work..
Do you
On 04/18/2013 04:46 PM, Gandalf Corvotempesta wrote:
2013/4/18 Mark Nelson :
SDP is deprecated:
http://comments.gmane.org/gmane.network.openfabrics.enterprise/5371
rsockets is the future I think.
I don't know rsockets. Any plans about support for this or are they
"transparent" like SDP?
I
On Thu, Apr 18, 2013 at 2:46 PM, Joao Eduardo Luis
wrote:
> On 04/18/2013 10:36 PM, Gregory Farnum wrote:
>>
>> (I believe your monitor crash is something else, Matthew; if that
>> hasn't been dealt with yet. Unfortunately all that log has is
>> messages, so it probably needs a bit more. Can you c
On Thu, 18 Apr 2013, Gandalf Corvotempesta wrote:
> 2013/4/18 Sage Weil :
> > I'm no expert, but I've heard SDP is not likely to be supported/maintained
> > by anyone in the long-term. (Please, anyone, correct me if that is not
> > true!) That said, one user has tested it successfully (with kerne
On 04/18/2013 10:49 PM, Gregory Farnum wrote:
On Thu, Apr 18, 2013 at 2:46 PM, Joao Eduardo Luis
wrote:
On 04/18/2013 10:36 PM, Gregory Farnum wrote:
(I believe your monitor crash is something else, Matthew; if that
hasn't been dealt with yet. Unfortunately all that log has is
messages, so it
On 04/18/2013 06:03 PM, Joao Eduardo Luis wrote:
>
> There are definitely some command messages being forwarded, but AFAICT
> they're being forwarded to the monitor, not by the monitor, which by
> itself is a good omen towards the monitor being the leader :-)
>
> In any case, nothing in the trace's
There's a little bit of python called ceph-create-keys, which is
invoked by the upstart scripts. You can kill the running processes,
and edit them out of the scripts, without direct harm. (Their purpose
is to create some standard keys which the newer deployment tools rely
on to do things like creat
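If you do remove it, keys of that sort can be created by hand later with ceph auth; a sketch, with caps written the way I'd expect them on recent releases:

  ceph auth get-or-create client.admin \
      mon 'allow *' osd 'allow *' mds 'allow'
  ceph auth get-or-create client.bootstrap-osd \
      mon 'allow profile bootstrap-osd'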
Isn't the userland preloader library, like SDP, enough?
Is the kernel version needed just for librbd?
On 18 Apr 2013 at 23:48, "Mark Nelson" wrote:
> On 04/18/2013 04:46 PM, Gandalf Corvotempesta wrote:
>
>> 2013/4/18 Mark Nelson :
>>
>>> SDP is deprecated:
>>>
>>> http://comments.gmane.o
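For reference, the userland SDP path is normally just a preload, something like the line below (the library path varies by distribution, and as noted above SDP itself is deprecated):

  LD_PRELOAD=/usr/lib/libsdp.so ceph-osd -i 0 -c /etc/ceph/ceph.conf

rsockets ships a similar preload shim, but I haven't tried it with Ceph.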
Greg,
Looks like Sage has a fix for this problem. In case it matters, I have
seen a few cases that conflict with your notes in this thread and the
bug report.
I have seen the bug exclusively on new Ceph installs (without upgrading
from bobtail), so it is not isolated to upgrades.
Further,