Hi Nigel,
In Nautilus you can decrease pg_num, but it takes weeks; for example, going
from 4096 to 2048 took us more than two weeks.
First of all, PG autoscaling can be enabled per pool. You're going to get a lot
of warnings, but it works.
Normally it is recommended to upgrade a cluster with
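A minimal sketch of the commands referred to above; the pool name is just an
example:

  # Nautilus allows reducing pg_num; the PG merge proceeds slowly in the background
  ceph osd pool set mypool pg_num 2048

  # enable the PG autoscaler for just this pool (warn first, then on)
  ceph osd pool set mypool pg_autoscale_mode warn
  ceph osd pool set mypool pg_autoscale_mode on

  # watch progress
  ceph osd pool get mypool pg_num
  ceph -s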
hi Zheng,
On 8/21/19 4:32 AM, Yan, Zheng wrote:
> Please enable debug mds (debug_mds=10), and try reproducing it again.
we will get back with the logs on Monday.
thank you & with kind regards,
t.
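For reference, a hedged sketch of raising the MDS debug level at runtime and
reverting it afterwards:

  # raise MDS logging to 10 on all MDS daemons
  ceph tell mds.* injectargs '--debug_mds 10'

  # ...reproduce the problem and collect the MDS logs...

  # set it back to the default afterwards
  ceph tell mds.* injectargs '--debug_mds 1/5'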
Hi Team,
One of our old customers has Kraken and they are going to upgrade to
Luminous. In the process they are also requesting a downgrade procedure.
Kraken used LevelDB for ceph-mon data; from Luminous it changed to RocksDB.
The upgrade works without any issues.
When we downgrade, the ceph-mon doe
You can't downgrade from Luminous to Kraken, at least not officially.
It might somehow work, but you'd need to re-create all the services.
For the mon, for example: delete a mon, create a new one running the old
version, let it sync, etc.
Still a bad idea.
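For completeness, the mon re-creation described above would look roughly like
the sketch below (mon name and paths are examples, and whether a Kraken mon can
actually resync from a Luminous quorum is exactly the unsupported part):

  # remove one mon from the quorum
  ceph mon remove mon-a

  # wipe its store and rebuild it with the old (Kraken) ceph-mon binary
  rm -rf /var/lib/ceph/mon/ceph-mon-a
  ceph mon getmap -o /tmp/monmap
  ceph auth get mon. -o /tmp/mon.keyring
  ceph-mon --mkfs -i mon-a --monmap /tmp/monmap --keyring /tmp/mon.keyring

  # start it and let it sync before touching the next one
  systemctl start ceph-mon@mon-a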
Paul
--
Paul Emmerich
Looking for help with y
Hi,
I use Qemu/KVM (OpenNebula) with Ceph/RBD for running VMs, and I am having
problems with slowness in applications that often are not consuming much
CPU or RAM. This problem affects mostly Windows. Apparently the problem is
that normally the applications load many small files (e.g. DLLs) and these
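One client-side knob often looked at for this many-small-reads pattern is the
librbd cache and readahead. Purely as a hedged illustration (the option names
are real librbd settings, the values are examples, and this is not advice from
this thread):

  # enable the librbd client cache for librbd clients
  # (equivalent settings can also go into the [client] section of ceph.conf)
  ceph config set client rbd_cache true
  ceph config set client rbd_cache_writethrough_until_flush true

  # readahead helps bursts of small sequential reads, e.g. DLL loading at start-up
  ceph config set client rbd_readahead_max_bytes 4194304
  ceph config set client rbd_readahead_trigger_requests 10

Note that the librbd cache is typically only effective when the QEMU disk is
configured with cache=writeback.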
Hi
on 2019/8/21 20:25, Gesiel Galvão Bernardes wrote:
I use Qemu/KVM (OpenNebula) with Ceph/RBD for running VMs, and I am
having problems with slowness in applications that often are not
consuming much CPU or RAM. This problem affects mostly Windows. Apparently
the problem is that normally the ap
Hi Eliza,
On Wed, 21 Aug 2019 at 09:30, Eliza wrote:
> Hi
>
> on 2019/8/21 20:25, Gesiel Galvão Bernardes wrote:
> > I use Qemu/KVM (OpenNebula) with Ceph/RBD for running VMs, and I am
> > having problems with slowness in applications that often are not
> > consuming much CPU or RAM. Th
Use a 100% flash setup and avoid rotational disks to get reasonable performance
with Windows.
Windows is very sensitive to disk latency, and a laggy interface sometimes gives
customers a bad impression.
You can check your avg read/write for Ceph in your Grafana when in Windows it goes
up 50
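If Grafana is not wired up, the same latency numbers can be eyeballed from the
CLI, e.g.:

  # per-OSD commit/apply latency in milliseconds
  ceph osd perf

  # live per-image throughput and latency (Nautilus and later)
  rbd perf image iostat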
Hello
I am running a Ceph 14.2.1 cluster with 3 rados gateways. Periodically,
the radosgw process on those machines starts consuming 100% of 5 CPU cores
for days at a time, even though the machine is not being used for data
transfers (nothing in the radosgw logs, a couple of KB/s of network traffic).
This sit
Hi Vladimir,
On 8/21/19 8:54 AM, Vladimir Brik wrote:
Hello
I am running a Ceph 14.2.1 cluster with 3 rados gateways.
Periodically, radosgw process on those machines starts consuming 100%
of 5 CPU cores for days at a time, even though the machine is not
being used for data transfers (nothin
On Wed, Aug 21, 2019 at 3:55 PM Vladimir Brik
wrote:
>
> Hello
>
> I am running a Ceph 14.2.1 cluster with 3 rados gateways. Periodically,
> radosgw process on those machines starts consuming 100% of 5 CPU cores
> for days at a time, even though the machine is not being used for data
> transfers (
Correction: the number of threads stuck using 100% of a CPU core varies
from 1 to 5 (it's not always 5)
Vlad
On 8/21/19 8:54 AM, Vladimir Brik wrote:
Hello
I am running a Ceph 14.2.1 cluster with 3 rados gateways. Periodically,
radosgw process on those machines starts consuming 100% of 5 CPU
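One way to see which radosgw threads are spinning (the PID lookup and field
list are just examples):

  # per-thread CPU usage for the radosgw process
  top -H -p $(pidof radosgw)

  # or list the busiest thread IDs non-interactively
  ps -L -o tid,comm,pcpu -p $(pidof radosgw) | sort -k3 -rn | head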
On 8/21/19 10:22 AM, Mark Nelson wrote:
> Hi Vladimir,
>
>
> On 8/21/19 8:54 AM, Vladimir Brik wrote:
>> Hello
>>
[much elided]
> You might want to try grabbing a callgraph from perf instead of just
> running perf top or using my wallclock profiler to see if you can drill
> down and find out
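A hedged example of grabbing such a call graph (the duration and PID lookup are
just examples):

  # record call graphs for the radosgw process for 30 seconds
  perf record -g -p $(pidof radosgw) -- sleep 30

  # then drill down into where the busy threads spend their time
  perf report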
Hello
After increasing the number of PGs in a pool, ceph status is reporting
"Degraded data redundancy (low space): 1 pg backfill_toofull", but I
don't understand why, because all OSDs seem to have enough space.
ceph health detail says:
pg 40.155 is active+remapped+backfill_toofull, acting [20,57
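The per-OSD usage and the thresholds that feed into backfill_toofull can be
cross-checked with:

  # per-OSD utilization and variance
  ceph osd df tree

  # the full / backfillfull / nearfull ratios currently in effect
  ceph osd dump | grep ratio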
> Are you running multisite?
No
> Do you have dynamic bucket resharding turned on?
Yes. "radosgw-admin reshard list" prints "[]"
> Are you using lifecycle?
I am not sure. How can I check? "radosgw-admin lc list" says "[]"
> And just to be clear -- sometimes all 3 of your rados gateways are
> si
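For reference, a sketch of the commands behind those checks (all standard
radosgw-admin subcommands):

  # multisite: shows the realm/zonegroup/zone and sync state, if any
  radosgw-admin sync status

  # pending dynamic reshard jobs and configured lifecycle policies
  radosgw-admin reshard list
  radosgw-admin lc list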
All;
How experimental is the multiple CephFS filesystems per cluster feature? We
plan to use different sets of pools (meta / data) per filesystem.
Are there any known issues?
While we're on the subject, is it possible to assign a different active MDS to
each filesystem?
Thank you,
Dominic L
On Wed, Aug 21, 2019 at 2:02 PM wrote:
> How experimental is the multiple CephFS filesystems per cluster feature? We
> plan to use different sets of pools (meta / data) per filesystem.
>
> Are there any known issues?
No. It will likely work fine but some things may change in a future
version th
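For reference, a hedged sketch of enabling a second filesystem with its own
pools (pool names and PG counts are examples):

  # allow more than one filesystem in the cluster
  ceph fs flag set enable_multiple true --yes-i-really-mean-it

  # dedicated metadata/data pools for the second filesystem
  ceph osd pool create cephfs2_metadata 32
  ceph osd pool create cephfs2_data 128
  ceph fs new cephfs2 cephfs2_metadata cephfs2_data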
Just chiming in to say that I too had some issues with backfill_toofull PGs,
despite no OSDs being in a backfillfull state, although there were some
nearfull OSDs.
I was able to get through it by reweighting down the OSD that was the target
reported by ceph pg dump | grep 'backfill_toofull'.
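Roughly, assuming OSD 20 from the health output above (the weight is an example):

  # find the PGs stuck in backfill_toofull and their acting OSDs
  ceph pg dump | grep backfill_toofull

  # temporarily lower the reweight of the offending target OSD
  ceph osd reweight 20 0.9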
Thank you Paul.
On Wed, Aug 21, 2019 at 5:36 PM Paul Emmerich
wrote:
> You can't downgrade from Luminous to Kraken, at least not officially.
>
> It might somehow work, but you'd need to re-create all the services.
> For the mon, for example: delete a mon, create a new one running the old
> version, let
Hi all,
we are using Ceph version 14.2.2 from
https://mirror.croit.io/debian-nautilus/ on Debian Buster and are experiencing
problems with CephFS.
The mounted file system produces hanging processes due to PGs stuck inactive.
This often happens after I mark single OSDs out manually.
A typical r
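When this happens, a hedged first step is to list the stuck PGs and see which
OSDs they are waiting for (the PG id below is just a placeholder):

  # PGs that have been inactive longer than the default threshold
  ceph pg dump_stuck inactive

  # query one of them to see why it cannot go active
  ceph pg 1.2f query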