Hello Mark,
Ceph itself does it incrementally. Just set the value you want to end up
with, and wait for Ceph to get there.
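For example, a rough sketch (the pool name and the target of 512 are just
placeholders for your own values):

    # check where the pool currently stands
    ceph osd pool get <pool> pg_num
    ceph osd pool get <pool> pgp_num
    # set the final target; Ceph splits the PGs gradually in the background
    ceph osd pool set <pool> pg_num 512
    ceph osd pool set <pool> pgp_num 512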
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Mar
Thanks for your reply!
Yes, it is an NVMe, and one node has two NVMes as db/wal: one for the SSDs
(0-2) and another for the HDDs (3-6).
I have no spare to try.
It's very strange; the load was not very high at that time, and both the SSDs
and the NVMe seem healthy.
If I cannot fix it, I am afraid I need to set up more nodes a
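Just to have it written down, the drive-side health counters can be read with
something like the following (/dev/nvme0 is a placeholder for the affected
controller):

    nvme smart-log /dev/nvme0     # critical warnings, media errors, temperature
    nvme error-log /dev/nvme0     # controller error-log entries
    smartctl -a /dev/nvme0        # same information via smartmontools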
> One NVMe suddenly crashed again. Could anyone please help shed some light here?
It looks like a flaky NVMe drive. Do you have a spare to try?
On Mon, Feb 22, 2021 at 1:56 AM zxcs wrote:
>
> One NVMe suddenly crashed again. Could anyone please help shed some light here?
> Thanks a ton!!!
> Below ar
One NVMe suddenly crashed again. Could anyone please help shed some light here?
Thanks a ton!!!
Below are the syslog and Ceph log excerpts.
From /var/log/syslog
Feb 21 19:38:33 ip kernel: [232562.847916] nvme :03:00.0: I/O 943 QID 7 timeout, aborting
Feb 21 19:38:34 ip kernel: [232563.847946] nvme :03:0
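Messages like these can usually be traced back to a concrete device and
cross-checked, roughly like this (nvme0 stands in for the controller in
question):

    nvme list                              # map controllers/namespaces to /dev/nvmeXnY
    readlink /sys/class/nvme/nvme0/device  # PCI address behind /dev/nvme0
    dmesg -T | grep -i nvme                # full timeout/reset history from the kernel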
Hi,
Probably a basic/stupid question, but I'm asking anyway. Through a lack of
knowledge and experience at the time we set up our pools, the pool that
holds the majority of our data was created with a PG/PGP num of 64. As the
amount of data has grown, this has started causing issues with b
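As a rough sketch, the current values and the autoscaler's suggestion (on
Nautilus or later) can be checked like this, with the pool name as a
placeholder:

    ceph osd pool ls detail            # pg_num / pgp_num per pool
    ceph osd pool get <pool> pg_num
    ceph osd pool autoscale-status     # suggested PG counts (Nautilus and later)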
Hi,
We are using Arista with MLAG on our storage layer. I would be fine with
sharing questions and answers on this list; I guess more people can benefit
from them. But as it's a bit off-topic, also feel free to reach out to me
directly.
Best,
Mart
From mobile
> On Feb 21, 2021, at 18:39, Mart
On Sun, Feb 21, 2021 at 1:04 PM Gaël THEROND wrote:
>
> Hi Ilya,
>
> Sorry for the late reply, I've been sick all week long :-/ and then really
> busy at work once I got back.
>
> I've tried to wipe the image by zeroing it (I even tried to fully wipe it),
> but I still see the same error message.
Hi Stefan,
thanks for the additional info. Dell will put me in touch with their deployment
team soonish and then I can ask about matching abilities.
It turns out that the problem I observed might have a much more mundane cause.
I saw really long periods with slow ping times yesterday and finall
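A sketch of how the heartbeat ping times can be inspected on recent releases,
with osd.0 as a placeholder; the daemon command has to run on the host
carrying that OSD:

    ceph health detail                  # reports slow OSD heartbeat warnings
    ceph daemon osd.0 dump_osd_network  # per-peer ping times seen by that OSD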
Hi Ilya,
Sorry for the late reply, I've been sick all week long :-/ and then really
busy at work once I got back.
I've tried to wipe the image by zeroing it (I even tried to fully wipe it),
but I still see the same error message.
The thing is, isn't the newly created image supposed to be empty?
Regardi
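One possible way to zero or discard the image from a client, sketched with
pool/image and /dev/rbd0 as placeholders:

    rbd map pool/image                                # map the image on a client node
    blkdiscard /dev/rbd0                              # discard all data, freeing the backing objects
    dd if=/dev/zero of=/dev/rbd0 bs=4M oflag=direct   # or overwrite with zeros instead
    rbd unmap /dev/rbd0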
Hello,
For the record, here is a summary of the key takeaways from this conversation
(so far):
* Ambry[0] is a perfect match and I'll keep exploring it[1].
* To keep billions of small objects manageable, they must be packed together.
* Immutable & never deleted objects can be grouped together
Hello MJ,
Arista has good documentation available, for example at
https://www.arista.com/en/um-eos/eos-multi-chassis-link-aggregation or
https://eos.arista.com/mlag-basic-configuration/. Don't worry: once you
know exactly what you want to configure, it's just a few lines of
config in the end.
It