Just wondering if this was ever resolved – I am seeing the exact same issue
when I moved from CentOS 6.5 (Firefly) to CentOS 7 on the Giant release: using
“ceph-deploy osd prepare . . . ” the script fails to umount and then posts a
“device is busy” message. Details are in yang bin18’s posting below.
Did you set permissions with "sudo chmod +r /etc/ceph/ceph.client.admin.keyring"?
Thx
Alan
From: ceph-users on behalf of SUNDAY A.
OLUTAYO
Sent: Tuesday, February 17, 2015 4:59 PM
To: Jacob Weeks (RIS-BCT)
Cc: ceph-de...@lists.ceph.com; ceph-users@lists.ceph.c
Try sudo chmod +r /etc/ceph/ceph.client.admin.keyring for the error below?
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg,
Pankaj
Sent: Wednesday, February 25, 2015 4:04 PM
To: Travis Rhoden
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph
-Original Message-
From: Garg, Pankaj [mailto:pankaj.g...@caviumnetworks.com]
Sent: Wednesday, February 25, 2015 4:26 PM
To: Alan Johnson; Travis Rhoden
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Ceph-deploy issues
Hi Alan,
Thanks. Worked like magic.
Why did this happen though? I have
And also this needs the correct permissions set, as otherwise it will give this
error.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of B,
Naga Venkata
Sent: Thursday, June 18, 2015 10:07 AM
To: Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco);
ceph-users@lists.ceph
For the permissions use sudo chmod +r /etc/ceph/ceph.client.admin.keyring
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Teclus
Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco)
Sent: Thursday, June 18, 2015 10:21 AM
To: B, Naga Venkata; ceph-users@lists.ceph.com
Subject
I use sudo visudo and then add a line under
Defaults requiretty
-->
Defaults:<username> !requiretty
Where <username> is the username of the account running ceph-deploy.
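For illustration, assuming the deploy account is called cephdeploy (a
hypothetical name; substitute your own), the relevant part of the sudoers file
would end up looking something like this:
    Defaults requiretty
    Defaults:cephdeploy !requiretty
The second line exempts just that one user from the requiretty setting.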
Hope this helps?
Alan
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Vida
Ahmadi
Sent: Monday, June 22, 2015 6:31 AM
To: ceph-users@lists.ceph.com
Subj
Yes, I am also getting this error.
Thx
Alan
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Iban
Cabrillo
Sent: Saturday, September 26, 2015 6:58 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Debian repo down?
Hi cephers,
I am getting a download error from the Debian repo
Quorum can be achieved with one monitor node (for testing purposes this would
be OK, but of course it is a single point of failure). However, the default for
the OSD nodes is three-way replication (this can be changed, see the example
below), so it is easier to set up three OSD nodes to start with and one monitor
node. For your c
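If you do want to run with fewer OSDs for testing, the replica count can be
lowered per pool. A rough sketch (assuming a pool named rbd; substitute your
own pool name, and only do this on a test cluster):
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1
Alternatively, "osd pool default size = 2" can be set in ceph.conf before the
pools are created.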
Are the journals on the same device? It might be better to use the SSDs for
journaling, since you are not getting better performance with the SSDs.
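One way to check where the journals actually live (assuming the ceph-disk
tooling of that era is in use) is to run this on each OSD host:
    ceph-disk list
which prints each data partition together with the journal device it points at.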
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek
Dohojda
Sent: Monday, November 23, 2015 10:24 PM
To: Haomai Wang
Cc: ceph
much better may point to a bottleneck elsewhere – network perhaps?
From: Marek Dohojda [mailto:mdoho...@altitudedigital.com]
Sent: Tuesday, November 24, 2015 10:37 AM
To: Alan Johnson
Cc: Haomai Wang; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Performance question
Yeah they are, that is
Or separate the journals, as this will bring the workload down on the spinners
to 3X rather than 6X (with co-located journals each write hits the spinner
twice, once for the journal and once for the data, so three-way replication
turns one client write into six spinner writes; moving the journals off the
spinners halves that).
From: Marek Dohojda [mailto:mdoho...@altitudedigital.com]
Sent: Tuesday, November 24, 2015 1:24 PM
To: Nick Fisk
Cc: Alan Johnson; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Performance
Try with --release
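For example (hostnames are placeholders), something along these lines should
pull the Hammer packages rather than the current default release:
    ceph-deploy install --release hammer node1 node2 node3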
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Roozbeh Shafiee
Sent: Friday, May 06, 2016 2:54 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Installing Ceph Hammer
Hi,
I need to install Ceph Hammer because of some
We have found that we can place 18 journals on the Intel 3700 PCI-e devices
comfortably. We also tried it with fio, adding more jobs to ensure that
performance did not drop off (via Sebastien Han’s tests described at
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-sui
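The test in that post is roughly along these lines (a sketch; the device name
is a placeholder, and writing directly to a raw device is destructive, so only
run it against an empty disk):
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test
Re-running with numjobs increased (2, 4, 8, ...) shows whether the sync write
performance holds up as more journals are emulated.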
I am trying to compare FileStore performance against BlueStore. With Luminous
12.2.0, BlueStore is working fine, but if I try to create a FileStore volume
with a separate journal using Jewel-like syntax, "ceph-deploy osd create
:sdb:nvme0n1", device nvme0n1 is ignored and it sets up two partitions
If using defaults, try
chmod +r /etc/ceph/ceph.client.admin.keyring
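If that does not help, it is worth checking what the permissions actually are
and then re-testing as the non-root user, e.g.:
    ls -l /etc/ceph/ceph.client.admin.keyring
    ceph -s
Once the keyring is readable, ceph -s should run without the permission error.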
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
GiangCoi Mr
Sent: Thursday, October 26, 2017 11:09 AM
To: ceph-us...@ceph.com
Subject: [ceph-users] Install Ceph on Fedora 26
H
I did have some similar issues and resolved them by installing parted 3.2 (I
can't say if this was definitive), but it worked for me. I also only used
create (after disk zap) rather than prepare/activate.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steve
Taylor
Sent: W
We use the 800GB version as journal devices with up to a 1:18 ratio and have
had good experiences, with no bottleneck on the journal side. These also
feature good endurance characteristics. I would think that higher capacities
are hard to justify as journals.
-Original Message-
From: ceph-use
Could we infer from this that, if the usage model is large object sizes rather
than small I/Os, the benefit of offloading the WAL/DB is questionable, given
that the failure of the SSD (assuming it is shared amongst HDDs) could take
down a number of OSDs, and that in this case a best practice would be to
collocate?
--
I would also add that the journal activity is write intensive so a small part
of the drive would get excessive writes if the journal and data are co-located
on an SSD. This would also be the case where an SSD has multiple journals
associated with many HDDs.
-Original Message-
From: ceph
Can you check the value of kernel.pid_max? This may have to be increased for
larger OSD counts; it may have some bearing.
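For reference, this is the sort of thing I mean (the value is illustrative; it
is a commonly used large setting on 64-bit systems):
    sysctl kernel.pid_max
    sudo sysctl -w kernel.pid_max=4194303
To make it persistent, add "kernel.pid_max = 4194303" to /etc/sysctl.conf or a
file under /etc/sysctl.d/.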
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of John
Hogenmiller (yt)
Sent: Friday, February 12, 2016 8:52 AM
To: ceph-users@lists.ceph.com
Subject:
I would strongly consider your journaling setup (you do mention that you will
revisit this), but we have found that co-locating journals does impact
performance, and usually separating them onto flash is a good idea. Also, I am
not sure of your networking setup, which can also have a significant impact.
Fr
number of good discussions relating to
endurance, and suitability as a journal device.
From: Sergio A. de Carvalho Jr. [mailto:scarvalh...@gmail.com]
Sent: Thursday, April 07, 2016 11:18 AM
To: Alan Johnson
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph performance expectations
Thanks
Confirm that no pools are created by default with Mimic.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
solarflow99
Sent: Friday, February 1, 2019 2:28 PM
To: Ceph Users
Subject: [ceph-users] RBD default pool
I thought a new cluster would have the 'rbd' pool already created
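If the application does need an rbd pool it has to be created by hand on these
releases; a rough sketch (the PG count of 128 is only a placeholder; size it
for your cluster):
    ceph osd pool create rbd 128
    rbd pool init rbd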
If this is Skylake, the 6-channel memory architecture lends itself better to
configs such as 192GB (6 x 32GB), so yes, even though 128GB is most likely
sufficient capacity-wise, the 6-channel-friendly alternative of 6 x 16GB (96GB)
might be too small.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Martin
Verges
Sent: Saturday
Just to add that a more general formula is that the number of nodes should be
greater than or equal to k+m+m, i.e. N >= k+m+m, for full recovery. For
example, with k=4 and m=2 this gives N >= 8: after losing m=2 nodes there are
still k+m = 6 nodes left, which is enough to rebuild the missing chunks and
restore full redundancy.
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Eugen
Block
Sent: Thursday, February 7, 2019 8:47 AM
To