* We are trying to assess whether we will see data loss if an SSD that is
hosting journals for a few OSDs crashes. In our configuration, each SSD is
partitioned into 5 chunks and each chunk is mapped as the journal drive for one
OSD. What I understand from the Ceph documentation: "Consisten
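One quick way to see which OSDs share a given journal SSD is to look at the
journal links on each host (a sketch, assuming the default FileStore layout;
device names in the sample output are illustrative):
ls -l /var/lib/ceph/osd/ceph-*/journal
# e.g. /var/lib/ceph/osd/ceph-12/journal -> /dev/sdb3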
Hi,
I am contemplating using a NVRAM card for OSD journals in place of SSD drives
in our ceph cluster.
Configuration:
* 4 Ceph servers
* Each server has 24 OSDs (each OSD is a 1TB SAS drive)
* 1 PCIe NVRAM card of 16GB capacity per ceph server
* Both Client &
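A rough sizing sketch for the journal partitions (assuming the 16GB card is
split evenly across the 24 OSDs; the throughput and sync-interval numbers are
only illustrative):
# 16 GB NVRAM card / 24 OSDs ~= 680 MB per journal partition
# Ceph docs guideline: osd journal size = 2 * (expected throughput * filestore max sync interval)
# e.g. 2 * (100 MB/s * 5 s) = 1000 MB, so ~680 MB per OSD may be on the small side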
So, which is correct: must all replicas be written, or only min_size, before the ack?
But for me the takeaway is that writes are protected - even if the journal
drive crashes, I am covered.
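One way to check what the pool is actually configured with (a sketch; 'rbd'
below stands in for the actual pool name):
ceph osd pool get rbd size       # replica count for the pool
ceph osd pool get rbd min_size   # minimum replicas that must be up for I/O to proceed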
- epk
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
A
Hi,
I am looking for a way to monitor the utilization of OSD journals - by
observing the utilization pattern over time, I can determine whether I have
over-provisioned them or not. Is there a way to do this?
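One option is to poll the OSD admin socket and watch the journal-related perf
counters over time (a sketch, assuming the admin socket is at its default path
and osd.0 runs on the local host):
sudo ceph daemon osd.0 perf dump | grep -i journal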
When I googled on this topic, I saw one similar request about 4 years back. I
am wondering
Hi All,
Have a question on the performance of sequential writes at a 4K block size.
Here is my configuration:
Ceph Cluster: 6 nodes. Each node with:
20x HDDs (OSDs) - 10K RPM 1.2 TB SAS disks
4x SSDs - Intel S3710, 400GB; for OSD journals shared across the 20 HDDs (i.e.,
an SSD-to-OSD journal ratio of 1:5)
Ne
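For reference, a 4K sequential write test with fio would look roughly like this
(a sketch; the target device, size and runtime are illustrative):
fio --name=seqwrite-4k --rw=write --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=16 --size=10G --runtime=300 --filename=/dev/rbd0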
Hi,
I am seeing an issue. I created 5 images, testvol11-15, and mapped them to
/dev/rbd0-4. When I execute the command 'rbd showmapped', it correctly shows
the images and the mappings, as shown below:
[root@ep-compute-2-16 run1]# rbd showmapped
id pool image snap device
0 testpool test
Thanks. It works.
From: c.y. lee [mailto:c...@inwinstack.com]
Sent: Wednesday, July 13, 2016 6:17 PM
To: EP Komarla
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] rbd command anomaly
Hi,
You need to specify pool name.
rbd -p testpool info testvol11
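Equivalently, the pool can be given as part of the image spec:
rbd info testpool/testvol11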
On Thu, Jul 14, 2016 at 8:55
. Can someone help me bring
these OSDs back? I know I am making some mistake, but I can't figure out what.
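A first thing to try would be restarting the OSD daemons and watching them
rejoin (a sketch, assuming a systemd-based install; osd.12 is only an example id):
sudo systemctl status ceph-osd@12   # see why the daemon stopped
sudo systemctl start ceph-osd@12    # restart it
ceph osd tree | grep down           # confirm the OSDs come back up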
Thanks in advance,
- epk
EP KOMARLA,
Email: ep.koma...@flextronics.com
Address: 677 Gibraltor Ct, Building #2, Milpitas, CA 94035, USA
Phone: 408-674-6090 (m
The first question I have is why some disks/OSDs showed a status of
'down' - there was no activity on the cluster. Last night all the OSDs were
up. What can cause OSDs to go down?
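A couple of checks that usually narrow this down (a sketch, assuming the
default cluster name and log path; osd.12 is only an example id):
ceph health detail                              # lists which OSDs are down
sudo tail -n 100 /var/log/ceph/ceph-osd.12.log  # why that daemon stopped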
- epk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of EP
Ko
Team,
Have a performance-related question on Ceph.
I know the performance of a Ceph cluster depends on many factors: the type of
storage servers, the processors (number of processors and their raw
performance), memory, network links, type of disks, journal disks, etc. On top
of the hardware feature
Hi,
I am showing below fio results for sequential reads on my Ceph cluster. I am
trying to understand this pattern:
- why is there a dip in performance for block sizes 32k-256k?
- is this an expected performance graph?
- have you seen this kind of pattern before
[inline image: fio sequential read results graph]
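A block-size sweep of roughly this shape reproduces the kind of curve in
question (a sketch; the target device, queue depth and runtime are
illustrative, not necessarily what was used here):
for bs in 4k 16k 32k 64k 128k 256k 1m; do
  fio --name=seqread-$bs --rw=read --bs=$bs --ioengine=libaio --direct=1 \
      --iodepth=16 --runtime=60 --time_based --filename=/dev/rbd0
done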
Thanks Somnath.
I am running CentOS 7.2. Have you seen this pattern before?
- epk
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: Tuesday, July 26, 2016 4:44 PM
To: EP Komarla ; ceph-users@lists.ceph.com
Subject: RE: Ceph performance pattern
Which OS/kernel are you running with
orks. That can cause havoc with RBD sequential reads in general.
Mark
On 07/26/2016 06:38 PM, EP Komarla wrote:
> Hi,
>
>
>
> I am showing below fio results for Sequential Read on my Ceph cluster.
> I am trying to understand this pattern:
>
>
>
> - why there is a di
I am using O_DIRECT=1
-Original Message-
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: Wednesday, July 27, 2016 8:33 AM
To: EP Komarla ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph performance pattern
Ok. Are you using O_DIRECT? That will disable readahead on the
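The client-side readahead can be checked and adjusted roughly like this (a
sketch, assuming the image is mapped as rbd0; 4096 is only an example value):
blockdev --getra /dev/rbd0                 # readahead in 512-byte sectors
cat /sys/block/rbd0/queue/read_ahead_kb    # same setting in KB
echo 4096 | sudo tee /sys/block/rbd0/queue/read_ahead_kb
# note: readahead only applies to buffered (non-O_DIRECT) reads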
[ep-c2-mon-01][DEBUG ] You could try running: rpm -Va --nofiles --nodigest
[ep-c2-mon-01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y install
ceph ceph-radosgw
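Running the failing command by hand on that node usually exposes the underlying
yum error (a sketch, assuming the repos that ceph-deploy configured are in place):
sudo yum repolist                      # confirm the Ceph repo is present and reachable
sudo yum -y install ceph ceph-radosgw  # re-run the failing install to see the real error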
EP KOMARLA,
Email: ep.koma
,
- epk
EP KOMARLA,
Email: ep.koma...@flextronics.com
Address: 677 Gibraltor Ct, Building #2, Milpitas, CA 94035, USA
Phone: 408-674-6090 (mobile)
missing required protocol features
[1198606.813825] libceph: mon1 172.20.60.52:6789 feature set mismatch, my 102b84a842a42 < server's 40102b84a842a42, missing 400
[1198606.820929] libceph: mon1 172.20.60.52:6789 missing required protocol features
[test@ep-c2-client-01 ~]$ sudo rbd
00
[1204476.810578] libceph: mon0 172.20.60.51:6789 missing required protocol features
[1204486.821279] libceph: mon0 172.20.60.51:6789 feature set mismatch, my 102b84a842a42 < server's 40102b84a842a42, missing 400
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: Tuesday,
: Somnath Roy
Cc: EP Komarla ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Rbd map command doesn't work
EP,
Try setting the crush map to use legacy tunables. I've had the same issue with
the "feature mismatch" errors when using krbd that didn't support format 2 and
ru
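For the archives, the command would be along these lines (note that changing
tunables is a cluster-wide setting and can trigger data movement):
ceph osd crush tunables legacy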