Is it safe to delete all default pools?
Stefan
I know it is 100*numofosds/replfactor. But I also read somewhere that it should
be a value of 2^X. Is this still correct? So for 24 osds and repl 3 100*24/3 =>
800 => to be 2^X => 1024?
Greets
Stefan
Hi,
I have a backup script which, every night:
* creates a snapshot of each RBD image
* then deletes all snapshots that are more than 15 days old
The problem is that "rbd snap rm XXX" will overload my cluster for hours
(6 hours today...).
Here I see several problems :
#1 "rbd snap rm XXX" is not bloc
A little bit more.
I have tried to deploy RGW via http://ceph.com/docs/master/radosgw/ and then
connect the S3 Browser, CrossFTP and CloudBerry Explorer clients, but all
of them unsuccessfully.
Again my question, does anybody use S3 desktop clients with RGW?
On Fri, Apr 19, 2013 at 10:54 PM, Igor Laskovy wrote:
Hi,
is there a way to copy an RBD disk image, incl. snapshots, from one pool to another?
Stefan
Hi all,
Any comments about 1) or 2)?
Thanks!!!
Hi Mark,
Sorry for the late reply; I didn't receive this mail, so I missed this
message for several days...
http://www.mail-archive.com/ceph-users@lists.ceph.com/msg00624.html
Your advice was very, very helpful to me!!! Thanks :)
I
So, I've restarted as many of the new osds as possible and the cluster
started to move data to the 2 new nodes overnight.
This morning there was no network traffic and the health was
HEALTH_ERR 1323 pgs backfill; 150 pgs backfill_toofull; 100 pgs
backfilling; 114 pgs degraded; 3374 pgs peering; 36 pg
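One common mitigation when backfill swamps a cluster is to throttle it per
OSD; a sketch, assuming the bobtail-era option names apply to this release
(verify them against your version's configuration reference):

    # Throttle recovery/backfill on one OSD; repeat for each OSD id.
    ceph tell osd.0 injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'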
On Sun, Apr 21, 2013 at 12:35 AM, Stefan Priebe - Profihost AG
wrote:
> Is it safe to delete all default pools?
As long as you don't have any data you need in there; the system won't
break without them or anything like that. They're favored only in that
tools default to using them (e.g. the rbd tool
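For anyone who does want to remove them, a sketch; the repeated pool name
and override flag are the confirmation syntax of newer releases, older
ones take just the name:

    # Remove the stock pools -- only safe if nothing is using them.
    for pool in data metadata rbd; do
        ceph osd pool delete "$pool" "$pool" --yes-i-really-really-mean-it
    done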
Which version of Ceph are you running right now and seeing this with
(Sam reworked it a bit for Cuttlefish and it was in some of the dev
releases)? Snapshot deletes are a little more expensive than we'd
like, but I'm surprised they're doing this badly for you. :/
-Greg
Software Engineer #42 @ http:
On Sun, Apr 21, 2013 at 5:01 AM, Stefan Priebe - Profihost AG
wrote:
> Hi,
>
> is there a way to copy an RBD disk image, incl. snapshots, from one pool to
> another?
Not directly and not right now, sorry. What are you trying to do?
Would it suffice for instance to manually create all the snapshots y
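One manual workaround along those lines, sketched under the assumption that
export-diff/import-diff are available (they arrived in releases newer than
the one discussed here); image and snapshot names are illustrative:

    # Rebuild pool2/img with the same snapshot history as pool1/img.
    # Seed the new image from the oldest snapshot...
    rbd export pool1/img@s1 - | rbd import - pool2/img
    rbd snap create pool2/img@s1
    # ...then replay each later snapshot as an incremental diff;
    # import-diff recreates the end snapshot on the destination itself.
    rbd export-diff --from-snap s1 pool1/img@s2 - | rbd import-diff - pool2/img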
On Sun, Apr 21, 2013 at 12:39 AM, Stefan Priebe - Profihost AG
wrote:
> I know it is 100*numofosds/replfactor. But I also read somewhere that it
>> should be a value of 2^X. Is this still correct? So for 24 osds and repl 3
> 100*24/3 => 800 => to be 2^X => 1024?
PG counts of 2^x ensure that each
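To make the arithmetic concrete, a sketch of the rule of thumb from this
thread, rounding up to the next power of two:

    # 100 * OSDs / replicas, rounded up to a power of two.
    osds=24; repl=3
    pgs=$(( 100 * osds / repl ))                       # 800
    p=1; while (( p < pgs )); do p=$(( p * 2 )); done
    echo "suggested pg_num: $p"                        # 1024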
On Sun, Apr 21, 2013 at 3:02 AM, Igor Laskovy wrote:
> A little bit more.
>
> I have tried to deploy RGW via http://ceph.com/docs/master/radosgw/ and then
> connect the S3 Browser, CrossFTP and CloudBerry Explorer clients, but all
> of them unsuccessfully.
>
> Again my question, does anybody use S3 desktop clien
On 21.04.2013 at 17:41, Gregory Farnum wrote:
> On Sun, Apr 21, 2013 at 12:35 AM, Stefan Priebe - Profihost AG
> wrote:
>> Is it safe to delete all default pools?
>
> As long as you don't have any data you need in there; the system won't
> break without them or anything like that. They're favo
On 21.04.2013 at 17:47, Gregory Farnum wrote:
> On Sun, Apr 21, 2013 at 5:01 AM, Stefan Priebe - Profihost AG
> wrote:
>> Hi,
>>
>> is there a way to copy an RBD disk image, incl. snapshots, from one pool to
>> another?
>
> Not directly and not right now, sorry. What are you trying to do?
> Woul
Well, in each case something specific. For CrossFTP, for example, it says
that when asking the server it receives text data instead of XML.
In the logs on the server side I didn't find anything interesting.
I did everything shown at http://ceph.com/docs/master/radosgw/ and only
that, excluding the Swift-compatible
On Sun, Apr 21, 2013 at 9:39 AM, Igor Laskovy wrote:
> Well, in each case something specific. For CrossFTP, for example, it says
> that when asking the server it receives text data instead of XML.
When doing what? Are you able to do anything?
> In the logs on the server side I didn't find anything interesting
What can I try to do/delete to regain access?
Those osds are going crazy, flapping up and down. I think the situation
is out of control
HEALTH_WARN 2735 pgs backfill; 13 pgs backfill_toofull; 157 pgs
backfilling; 188 pgs degraded; 251 pgs peering; 13 pgs recovering;
1159 pgs recovery_wait; 159 pgs
Just initial connect to rgw server, nothing further.
Please see below behavior for CrossFTP and S3Browser cases.
On CrossFTP side:
[R1] Connect to rgw.labspace
[R1] Current path: /
[R1] Current path: /
[R1] LIST /
[R1] Expected XML document response from S3 but received content type
text/html
[R1]
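A quick way to see what the gateway actually returns (rgw.labspace is the
hostname from the log above):

    # An S3 endpoint should answer even an anonymous request with an XML
    # document (e.g. an AccessDenied error), never text/html.
    curl -i http://rgw.labspace/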
Greg, your supposition about the small amount of data to be written is
right, but the rebalance is writing an insane amount of data to the new
nodes and the mount is still not working
this is node S203 (the OS is on /dev/sdl, not listed)
/dev/sda1 1.9T 467G 1.4T 26% /var/lib/ceph/osd/cep
On 21.04.2013 at 17:50, Gregory Farnum wrote:
> On Sun, Apr 21, 2013 at 12:39 AM, Stefan Priebe - Profihost AG
> wrote:
>> I know it is 100*numofosds/replfactor. But I also read somewhere that it
>> should be a value of 2^X. Is this still correct? So for 24 osds and repl 3
>> 100*24/3 => 800
I use Ceph 0.56.4; and to be fair, a lot of things are «doing badly» on
my cluster, so maybe I have a general OSD problem.
On Sunday, April 21, 2013 at 08:44 -0700, Gregory Farnum wrote:
> Which version of Ceph are you running right now and seeing this with
> (Sam reworked it a bit for Cuttlefi
I like s3cmd, but it only allows you to manipulate buckets with at least one
capital letter
On Sun, Apr 21, 2013 at 2:05 PM, Igor Laskovy wrote:
> Just initial connect to rgw server, nothing further.
> Please see below behavior for CrossFTP and S3Browser cases.
>
> On CrossFTP side:
> [R1] Connec
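For comparison, a minimal s3cmd setup against RGW; the hostname and bucket
name are placeholders, and the keys come from radosgw-admin:

    # ~/.s3cfg essentials for an RGW endpoint (values are placeholders):
    #   host_base   = rgw.labspace
    #   host_bucket = %(bucket)s.rgw.labspace
    #   access_key / secret_key from `radosgw-admin user create`
    s3cmd mb s3://Test-Bucket
    s3cmd put somefile s3://Test-Bucket/
    s3cmd ls s3://Test-Bucket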
On Sun, Apr 21, 2013 at 10:05 AM, Igor Laskovy wrote:
>
> Just initial connect to rgw server, nothing further.
> Please see below behavior for CrossFTP and S3Browser cases.
>
> On CrossFTP side:
> [R1] Connect to rgw.labspace
> [R1] Current path: /
> [R1] Current path: /
> [R1] LIST /
> [R1] Expec