Hi Prabu,
We generally use SCSI-PR (persistent reservation) capable drives (the drive
firmware has to support it) for HA/CFS. RBD does not support this feature because
it is not a physical drive.
But, as you did, you can map the same RBD image across multiple clients after
putting a cluster-aware filesystem on it (here OCFS2).
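For reference, a minimal sketch of that kind of setup (pool, image and mount point
names are made up, and the o2cb/OCFS2 cluster stack is assumed to be configured on
both nodes already):

  # on every client node: map the shared image (names are examples)
  rbd map rbd/shared-img          # appears as e.g. /dev/rbd0
  # format it once, from a single node only
  mkfs.ocfs2 -L shared /dev/rbd0
  # then mount it on each node that mapped the image
  mount -t ocfs2 /dev/rbd0 /mnt/shared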
Hi all,
We experienced some serious trouble with our cluster: a running cluster
started failing and set off a chain reaction until the Ceph cluster was
down, with about half of the OSDs down (in an EC pool).
Each host has 8 OSDs of 8 TB each (i.e. a RAID 0 of two 4 TB disks) for an EC pool
(10+3, 14 hosts
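For context, a 10+3 pool like that is typically set up along these lines; the
profile/pool names, PG count and failure domain below are only illustrative:

  ceph osd erasure-code-profile set ec-10-3 k=10 m=3 ruleset-failure-domain=host
  ceph osd pool create ecpool 2048 2048 erasure ec-10-3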
On Tue, 5 Jan 2016, Guang Yang wrote:
> On Mon, Jan 4, 2016 at 7:21 PM, Sage Weil wrote:
> > On Mon, 4 Jan 2016, Guang Yang wrote:
> >> Hi Cephers,
> >> Happy New Year! I have a question regarding the long PG peering.
> >>
> >> Over the last several days I have been looking into the *long peering*
Heya,
we are using a Ceph cluster (6 nodes, each with 10x 4 TB HDDs + 2x SSDs for
journals) in combination with KVM virtualization. All our virtual machine hard
disks are stored on the Ceph cluster. The cluster was recently updated to the
'infernalis' release.
We are experiencing proble
I think you are running out of memory(?), or at least out of the kind of memory
that the type of allocation krbd tries to use requires.
I'm not going to decode all the logs, but you can try increasing min_free_kbytes
as the first step. I assume this is amd64, so there's no HIGHMEM trouble (I
don't remember how to sol
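A minimal sketch of that first step; the value below is only an example, scale it
to the machine's RAM:

  # check the current reserve
  cat /proc/sys/vm/min_free_kbytes
  # raise it, e.g. to 256 MB (example value only)
  sysctl -w vm.min_free_kbytes=262144
  # make it persistent across reboots
  echo 'vm.min_free_kbytes = 262144' >> /etc/sysctl.conf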
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
There has been a lot of "discussion" about osd_backfill_scan_[min,max]
lately. My experience with hammer has been the opposite of what
people have said before: increasing those values has, for us, reduced
the load of recovery and has prevented a lot of
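If anyone wants to try the same thing, a hedged example of bumping those values at
runtime (the numbers are purely illustrative, not a recommendation):

  # defaults are 64 / 512; these higher values are just an example
  ceph tell osd.* injectargs '--osd_backfill_scan_min 512 --osd_backfill_scan_max 8192'
  # or make it permanent in ceph.conf under [osd]:
  #   osd backfill scan min = 512
  #   osd backfill scan max = 8192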
Hi,
Also make sure that you tune the debug log config. There's a lot on the
ML about setting them all to low values (0/0).
Not sure how much it matters in infernalis, but it made a big difference in
previous versions.
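For example, something like this in ceph.conf; the exact set of subsystems and the
values are only an illustration:

  [global]
  debug lockdep = 0/0
  debug ms = 0/0
  debug osd = 0/0
  debug filestore = 0/0
  debug journal = 0/0
  debug auth = 0/0
  # or inject at runtime on the OSDs:
  # ceph tell osd.* injectargs '--debug-osd 0/0 --debug-ms 0/0 --debug-filestore 0/0'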
Regards,
Josef
On 6 Jan 2016 18:16, "Robert LeBlanc" wrote:
> -BEGIN PGP SIGNED MESSAGE-
Hi guys,
Should I create a partition table on an RBD image, or is it enough to create the
filesystem only?
Every time I map an RBD image I get the message "unknown partition table", but I
was still able to create the filesystem.
Thank
Dan
Hi guys,
Is there a way to replicate the RBD images of a pool to another cluster, other
than clone/snap?
--
Dan
Hi all,
I am curious what practices other people follow when removing OSDs from a
cluster. According to the docs, you are supposed to:
1. ceph osd out
2. stop daemon
3. ceph osd crush remove
4. ceph auth del
5. ceph osd rm
What value does ceph osd out (1) add to the removal process and why is it
I followed these steps and it worked just fine:
http://www.sebastien-han.fr/blog/2015/12/11/ceph-properly-remove-an-osd/
--
Dan
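For reference, a rough sketch of that sequence (osd.12 is a made-up id); the point
of marking the OSD out first, while the daemon is still running, is that the
cluster can migrate its PGs away before you kill it:

  ceph osd out 12                    # start draining data off the OSD while it is still up
  ceph -w                            # wait until the cluster is back to active+clean
  systemctl stop ceph-osd@12         # or 'service ceph stop osd.12' on older init systems
  ceph osd crush remove osd.12       # remove it from the CRUSH map
  ceph auth del osd.12               # delete its cephx key
  ceph osd rm 12                     # finally remove the OSD id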
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Rafael
Lopez
Sent: Thursday, January 7, 2016 1:53 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-u
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
It is just a block device; you can use it with or without a partition
table. I should be careful with that statement, as bcache also looks like a block
device, but you cannot partition it directly.
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC
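To make that concrete, a small sketch of both options; the device, image and mount
point names are just examples:

  # without a partition table: put the filesystem straight on the device
  rbd map rbd/myimage                # shows up as e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /mnt/data

  # with a partition table, if you prefer one
  parted /dev/rbd0 mklabel gpt
  parted /dev/rbd0 mkpart primary 0% 100%
  mkfs.xfs /dev/rbd0p1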
I'm not sure which is *better* performance-wise in terms of RBD, but I've never
created any partition tables on top of it.
Thank you,
Shinobu
- Original Message -
From: "Dan Nica"
To: ceph-users@lists.ceph.com
Sent: Thursday, January 7, 2016 8:27:54 AM
Subject: [ceph-users] rbd partition table
Hi,
You can export RBD images and import them into another cluster.
Note: you have to purge an image's snapshots first if it has any.
AFAIK, there is no way to export images while keeping their clones and snapshots.
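A hedged example of that, piping the export straight into the other cluster; the
pool/image names and the remote host are made up:

  # one-off copy of a single image to another cluster
  rbd export rbd/myimage - | ssh remote-host 'rbd import - rbd/myimage'
  # snapshots have to be dealt with on the source first, e.g.
  rbd snap purge rbd/myimage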
On Jan 7, 2016 6:36 AM, "Dan Nica" wrote:
> Hi guys,
>
>
>
> Is there a way to replicate the RBD images of a
Hello Cephers,
A very happy new year to you all!
I wanted to enable LTTng tracepoints for a few tests with infernalis and
configured Ceph with the --with-lttng option. Seeing a recent post on conf file
options for tracing, I added these lines:
osd_tracing = true
osd_objectstore_tracing = true
r
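For anyone trying the same thing, a rough sketch of a userspace LTTng session
around a test run; the 'osd:*' provider wildcard is an assumption, check what
'lttng list -u' actually reports:

  lttng create ceph-osd-trace
  lttng enable-event --userspace 'osd:*'   # provider name is a guess, verify with: lttng list -u
  lttng start
  # ... run the test workload ...
  lttng stop
  lttng view | head
  lttng destroy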
Hi Cephers,
We have a Ceph cluster running 0.80.9, which consists of 36 OSDs with 3
replicas. Recently, some OSDs keep reporting slow requests and the
cluster's performance has degraded.
From the log of one OSD, I observe that all the slow requests
result from waiting for the replicas
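One way to see what such requests are actually waiting on is the OSD admin socket
(osd.3 is just an example id):

  # requests currently in flight on a given OSD
  ceph daemon osd.3 dump_ops_in_flight
  # recently completed (including slow) requests with per-stage timestamps
  ceph daemon osd.3 dump_historic_ops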
Hi All...
If I can step in on this issue, I would just like to report that I am
experiencing the same problem.
1./ I am installing my infernalis OSDs on CentOS 7.2.1511, and 'ceph-disk
prepare' fails with the following message
# ceph-disk prepare --cluster ceph --cluster-uuid
a9431b