Hi,
I would recommend marking the OSD as out in the ceph cluster. It will move
the data off of it even if you don't stop the process itself, and after
starting it will not be marked as in.
Now that you have run purge on the osd itself I would recommend
disabling the systemd unit from starting, or removing it.
If you just destroy the OSD, it won't change the crush weight. Once the
drive is replaced you can recreate the OSD with the same OSD ID.
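A rough sketch of that sequence, assuming the OSD id 208 from the post below
(illustrative only; exact flags can differ between releases):

    ceph osd out 208                              # drain data off the OSD while the daemon keeps running
    systemctl stop ceph-osd@208                   # once drained, stop the daemon
    systemctl disable ceph-osd@208                # keep the unit from starting again on boot
    ceph osd destroy 208 --yes-i-really-mean-it   # keeps the id and crush weight for the replacement drive
    # or, to remove the id, crush entry and auth key entirely:
    ceph osd purge 208 --yes-i-really-mean-it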
On Tue, Aug 27, 2019, 8:53 PM Cory Hawkless wrote:
> I have an OSD that is throwing sense errors – It’s at its end of life and
> needs to be replaced.
>
> The se
I have an OSD that is throwing sense errors - It's at its end of life and
needs to be replaced.
The server is in the datacentre and I won't get there for a few weeks so I've
stopped the service (systemctl stop ceph-osd@208) and let the cluster
rebalance, all is well.
My thinking is that if for
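A hedged aside for the archives: before the drive is pulled it may be worth
confirming the cluster no longer needs anything on that OSD. Something along
these lines (208 is the id from the message above; safe-to-destroy exists on
Luminous and later):

    ceph -s                          # recovery should be finished and health OK
    ceph osd safe-to-destroy 208     # reports whether the OSD can be removed without data loss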
We've run into a problem on our test cluster this afternoon which is running
Nautilus (14.2.2). It seems that any time PGs move on the cluster (from
marking an OSD down, setting the primary-affinity to 0, or by using the
balancer), a large number of the OSDs in the cluster peg the CPU cores the
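For context, the three triggers mentioned above correspond to commands along
these lines (osd.0 is a placeholder, not the poster's actual setup):

    ceph osd down 0                   # mark an OSD down; its PGs re-peer and may remap
    ceph osd primary-affinity 0 0.0   # make osd.0 avoid being primary for its PGs
    ceph balancer status              # the mgr balancer module, which also moves PGs when active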
Solved it. Even if you're subscribed to a list you still have to
create an account at https://lists.ceph.io/accounts/signup/ before you
can admin your membership.
--
dan
On Tue, Aug 27, 2019 at 5:05 PM Dan van der Ster wrote:
>
> Sorry to post this to the list, but does this lists.ceph.io passw
Sorry to post this to the list, but does this lists.ceph.io password
reset work for anyone?
https://lists.ceph.io/accounts/password/reset/
For my accounts which are getting mail I have "The e-mail address is
not assigned to any user account".
Best Regards, Dan
Thank you for reply.
dd read is 410KB/s; fio read is 991.23MB/s.
Even dd*30 (410KB*30/1024 = 12MB/s) is still hugely different from fio's 991.23MB/s.
At 2019-08-27 22:31:51, jes...@krogh.cc wrote:
concurrency is widely different 1:30
Jesper
Sent from myMail for iOS
Tuesday, 27 Au
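As a rough illustration of that 1:30 point, something like the loop below
approximates fio's 30-way concurrency with dd. The path, block size and
offsets are placeholders, and the test file would need to be at least 30GB
for the offsets to make sense:

    for i in $(seq 0 29); do
        dd if=/mnt/cephfs/testfile of=/dev/null bs=4M count=256 skip=$((i*256)) iflag=direct &
    done
    wait    # the aggregate throughput of the 30 streams is what compares with the fio number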
Hi Dominic,
I just created a feature ticket in the Ceph tracker to keep track of
this issue.
Here's the ticket: https://tracker.ceph.com/issues/41537
Cheers,
Ricardo Dias
On 17/07/19 20:06, dhils...@performair.com wrote:
> All;
>
> I'm trying to firm up my understanding of how Ceph works, and
concurrency is widely different 1:30
Jesper
Sent from myMail for iOS
Tuesday, 27 August 2019, 16.25 +0200 from linghucongs...@163.com
:
>The performance difference between dd and fio is so huge?
>
>I have 25 OSDs with 8TB HDDs. With dd I only get 410KB/s read performance, but
>with fio I ge
The performance difference between dd and fio is so huge?
I have 25 OSDs with 8TB HDDs. With dd I only get 410KB/s read performance, but
with fio I get 991.23MB/s read performance.
like below:
Thanks in advance!
root@Server-d5754749-cded-4964-8129-ba1accbe86b3:~# time dd of=/dev/zero
if=/mnt
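For comparison, a hedged sketch of a single-stream dd next to a fio job with
30 I/Os in flight; the filename, block size and size are placeholders, not
taken from the output above:

    dd if=/mnt/cephfs/testfile of=/dev/null bs=4M iflag=direct          # one read in flight at a time
    fio --name=read --filename=/mnt/cephfs/testfile --rw=read --bs=4M \
        --ioengine=libaio --direct=1 --iodepth=30 --size=10G            # up to 30 reads in flight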
On Tue, Aug 27, 2019 at 12:26 PM Wido den Hollander wrote:
>
>
>
> > On 27 Aug 2019 at 11:38, Max Krasilnikov wrote
> > the following:
> >
> > Hello!
> >
> > Sat, Aug 24, 2019 at 10:47:55PM +0200, wido wrote:
> >
> >>> On 24 Aug 2019 at 16:36, Darren Soothill wrote
> >>> the following
Hi,
First, do not panic :)
Secondly, verify that the number of PGs per pool is adapted to the needs and
size of the cluster.
Third, if I understood correctly, there was too small a number of PGs in the
pool and the data essentially sits on one PG => wait for the data to be distributed
correctly.
If possible,
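A quick way to check the PG side of this, sketched assuming a recent release
with the pg_autoscaler module enabled and a placeholder pool name:

    ceph osd pool autoscale-status            # per-pool PG counts and what the autoscaler would suggest
    ceph osd pool get <pool> pg_num           # current pg_num for one pool
    ceph osd pool set <pool> pg_num 128       # raise it if clearly too low; PGs then split and rebalance gradually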
> On 27 Aug 2019 at 11:38, Max Krasilnikov wrote
> the following:
>
> Hello!
>
> Sat, Aug 24, 2019 at 10:47:55PM +0200, wido wrote:
>
>>> On 24 Aug 2019 at 16:36, Darren Soothill wrote
>>> the following:
>>>
>>> So can you do it.
>>>
>>> Yes you can.
>>>
>>> Should
If you have decent CPU and RAM on the OSD nodes, you can try Erasure Coding,
even just 4:2 should keep the cost per GB/TB lower than 2:1 replica (4:2 is
basically 1.5:1 for cost) and is much safer (same protection as 3:1 replica). We
use that on our biggest production SSD pool.
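If anyone wants to try that, a minimal sketch of a 4+2 pool; the profile and
pool names are made up for the example:

    ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec-4-2
    ceph osd pool set ecpool allow_ec_overwrites true   # needed if RBD or CephFS data will live on the pool

4 data chunks plus 2 coding chunks means 1.5x raw space per byte stored, and
any two failure domains can be lost, which is where the comparison with 3x
replication comes from.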
Hello!
Sat, Aug 24, 2019 at 10:47:55PM +0200, wido wrote:
> > On 24 Aug 2019 at 16:36, Darren Soothill wrote
> > the following:
> >
> > So can you do it.
> >
> > Yes you can.
> >
> > Should you do it is the bigger question.
> >
> > So my first question would be what type of driv