We have had two situations today where I/O just seems to be indefinitely
blocked on our production cluster (0.94.3). In the case this morning, it
was just normal I/O traffic, no recovery or backfill. In the case this
evening, we were backfilling to some
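A typical first pass for tracking down blocked requests on hammer (0.94.x)
looks something like the following; osd.3 is just a placeholder id here:

    # show which requests are stuck and which osds they are stuck on
    ceph health detail
    # on the node hosting a slow osd, dump its in-flight and recent slow ops
    ceph daemon osd.3 dump_ops_in_flight
    ceph daemon osd.3 dump_historic_ops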
Hi Paul,
I hit the same problem here (see last post):
https://groups.google.com/forum/#!topic/bareos-users/mEzJ7IbDxvA
If I ever get to the bottom of it, I will let you know. Sorry I can't be of
any more help.
Nick
Hello,
I have built a Ceph cluster for testing. After I performed some recovery
testing, some OSDs went down because there was no available disk space.
When I checked the OSD data folder, I found many huge objects with the
prefix obj-xvrzfdsafd, and I would like to know how those objects were
generated and what I
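A rough way to see where the space went (the pool name "rbd" below is just
a placeholder) is:

    # per-pool object counts and space usage
    rados df
    # list objects in the suspect pool, then stat one of the big ones
    rados -p rbd ls | head
    rados -p rbd stat <object-name>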
I know another way: mark the 1TB OSD out, then bring a 2TB OSD up as osd.X
without data, and RADOS will backfill the data to the 2TB disks.
For now I am using rsync to move data from the 1TB disk to the 2TB disk,
but the new OSD coredumps.
What's the problem?
ceph version: 0.80.1
osd.X
host1 with 1TB disks
host2 with 2TB disks
on host1:
On 18/09/15 17:28, Sage Weil wrote:
> Make that download.ceph.com .. the packages url was temporary while we got
> the new site ready and will go away shortly!
> (Also, HTTPS is enabled now.)
But still no jessie packages available... :(
On 09/19/2015 10:30 AM, wsnote wrote:
> I know another way: mark the 1TB OSD out, then bring a 2TB OSD up as
> osd.X without data, and RADOS will backfill the data to the 2TB disks.
> For now I am using rsync to move data from the 1TB disk to the 2TB disk,
> but the new OSD coredumps.
> What's the problem?
>
Did you use rsync with the -X flag? The OSDs keep object metadata in
extended attributes (xattrs), and rsync does not copy them unless you pass
-X, which would explain the new OSD crashing on startup.
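Something along these lines (the paths are illustrative) carries the
xattrs across:

    # -a preserves permissions/ownership/times; -X preserves xattrs
    rsync -aX /var/lib/ceph/osd/ceph-12/ /mnt/new-2tb/ceph-12/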
Just use the built-in Ceph recovery to move data to the new disk. By
changing disk sizes, you also change the mapping across the cluster, so
you are going to be moving more data than necessary.
My recommendation: bring the new disk in as a new OSD. T
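Roughly, that flow would be the following; the osd id 12 and /dev/sdb are
placeholders:

    # add the 2TB disk as a brand-new osd (it gets its own id)
    ceph-disk prepare /dev/sdb
    ceph-disk activate /dev/sdb1
    # drain the old 1TB osd and let normal recovery move the data
    ceph osd out 12
    # once the cluster is back to HEALTH_OK, remove the old osd for good
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12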
Ok, so if I understand correctly, for replication level 3 or 4 I would have
to use the rule:

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take root
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}
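You can sanity-check what the rule actually selects before injecting it,
e.g. with the compiled map in ./crushmap:

    # show which osds the rule picks for 4 replicas
    crushtool -i crushmap --test --rule 0 --num-rep 4 --show-mappings

Since the rule picks 2 hosts in each of 2 datacenters, it emits 4
placements; with size 3 only the first 3 are kept, giving 2 copies in one
DC and 1 in the other.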
You will want size=4 min_size=2 if you want to keep I/O going when a DC
fails and still ensure some data integrity. Data checksumming (which I
think is being added) would provide much stronger data integrity checking
in a two-copy situation, as you would be able to
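Setting that on an existing pool (the pool name "data" is just an
example):

    ceph osd pool set data size 4
    ceph osd pool set data min_size 2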
Just to be clear, there's no longer going to be a generic
http://download.ceph.com/debian (sans -{ceph-release-name}) path? In
other words, we'll have to monitor something else to determine what's
considered stable for our {distro-release} and then update the sources to
point at a new debian-{ceph-release-name} path?
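That is, pinned sources lines like the following (hammer on trusty, purely
as an example):

    deb http://download.ceph.com/debian-hammer/ trusty main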
On Sat, 19 Sep 2015, Brian Kroth wrote:
> Just to be clear, there's no longer going to be a generic
> http://download.ceph.com/debian (sans -{ceph-release-name}) path? In other
> words, we'll have to monitor something else to determine what's considered
> stable for our {distro-release} and then update the sources to point at a
> new debian-{ceph-release-name} path?
On 19 September 2015 at 01:55, Ken Dreyer wrote:
> To avoid confusion here, I've deleted packages.ceph.com from DNS
> today, and the change will propagate soon.
>
> Please use download.ceph.com (it's the same IP address and server,
> 173.236.248.54)
>
I'm getting:
W: GPG error: http://downlo
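If that is the usual missing-archive-key warning, re-importing the Ceph
release key should clear it; the key URL below is the one from the ceph.com
install docs:

    wget -q -O- https://download.ceph.com/keys/release.asc | sudo apt-key add -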