Alright, I didn't realize that the MDS was affected by this as well.
In that case there's probably no other way than running the 'ceph fs
new ...' command as Yan, Zheng suggested.
Do you have backups of your cephfs contents in case that goes wrong?
I'm not sure if a pool copy would help in any
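For reference, a minimal sketch of the step being suggested, with placeholder filesystem and pool names; the --force flag and 'ceph fs reset' belong to the disaster-recovery procedure, so treat this as an illustration and check the docs first:

    # Recreate the filesystem on top of the existing pools (names are placeholders)
    ceph fs new cephfs cephfs_metadata cephfs_data --force
    # Reset the filesystem's MDS map state (disaster recovery only)
    ceph fs reset cephfs --yes-i-really-mean-it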
One of the failed OSDs with 3G RAM started, and dump_mempools shows total RAM
usage of 18G, with buffer_anon using 17G!
On Mon, Aug 31, 2020 at 6:24 PM Vahideh Alinouri
wrote:
> The osd_memory_target of the failed OSD on one ceph-osd node was changed to 6G, but
> the other osd_memory_targets are 3G; starting the failed osd w
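A hedged example of how to inspect this on the OSD host; the OSD id is a placeholder:

    # Dump memory-pool accounting for a running OSD daemon
    ceph daemon osd.12 dump_mempools
    # buffer_anon appears under mempool -> by_pool in the JSON output,
    # with per-pool "items" and "bytes" counters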
The mons get their bind address from the monmap, I believe. So this means
changing the monitors' IP addresses in the monmap with the
monmaptool.
Regards
Marcel
> Hello again
>
> So I have changed the network configuration.
> Now my Ceph is reachable from outside, this also means all OSDs
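A rough sketch of that monmap edit, with a placeholder mon name and address; the extract/inject steps require the mon to be stopped:

    # Grab the current monmap and inspect it
    ceph mon getmap -o /tmp/monmap
    monmaptool --print /tmp/monmap
    # Remove the mon and re-add it with its new address
    monmaptool --rm mon1 /tmp/monmap
    monmaptool --add mon1 192.168.1.10:6789 /tmp/monmap
    # Inject the edited map back into the (stopped) mon
    ceph-mon -i mon1 --inject-monmap /tmp/monmap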
Hi,
did you apply that setting in the client's (e.g. controller, compute
nodes) ceph.conf? You can find a description in [1].
Regards,
Eugen
[1] https://docs.ceph.com/docs/master/rbd/rbd-openstack/
Quoting Gabriel Medve:
Hi,
I have Ceph 15.2.4 running in Docker. How to configure
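Whatever the exact option was, client-side RBD settings go into a [client] section of ceph.conf on the consuming nodes; the values below are just the examples from the linked documentation:

    [client]
        rbd cache = true
        rbd cache writethrough until flush = true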
On 01/09/2020 08:15, Simon Sutter wrote:
Hello again
So I have changed the network configuration.
Now my Ceph is reachable from outside, this also means all OSDs of all nodes
are reachable.
I still have the same behaviour, which is a timeout.
The client can resolve all nodes with their hostn
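A quick, hedged way to test basic reachability from the client; the hostname is a placeholder, and 3300/6789 are the default msgr2/msgr1 mon ports:

    # Can the client open TCP connections to the mons?
    nc -vz node1.example.com 3300
    nc -vz node1.example.com 6789
    # Does the ceph CLI get through before timing out?
    ceph -s --connect-timeout 10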
On 8/24/20 11:20 PM, Jean-Sebastien Landry wrote:
Hi everyone, a bucket was over quota (default quota of 300k objects per
bucket), so I enabled the object quota for this bucket and set a quota of 600k
objects.
We are on Luminous (12.2.12) and dynamic resharding is disabled, so I manually do
the res
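For context, the quota change and a manual reshard are usually done along these lines; the bucket name and shard count are placeholders:

    # Set and enable a per-bucket object quota
    radosgw-admin quota set --bucket=mybucket --quota-scope=bucket --max-objects=600000
    radosgw-admin quota enable --bucket=mybucket --quota-scope=bucket
    # Manually reshard the bucket index
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=12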
As a matter of fact we did. We doubled the storage nodes from 25 to 50;
total OSDs are now 460.
Do you want to share your thoughts on that?
Regards
Marcel
> On 2020-08-31 14:16, Marcel Kuiper wrote:
>> The compaction of the bluestore-kv's helped indeed. The response is back
>> to acceptable levels
>
Hi Igor
To bring this thread to a conclusion: We managed to stop the random crashes by
restarting each of the OSDs manually.
After upgrading the cluster we reshuffled a lot of our data by changing PG
counts. It seems like the memory reserved during that time was never released
back to the OS.
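If the OSDs use tcmalloc, it can sometimes be possible to hand such memory back without a restart; a sketch, with a placeholder OSD id:

    # Inspect and release the allocator's free-but-unreturned memory
    ceph tell osd.16 heap stats
    ceph tell osd.16 heap release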
On 2020-08-31 14:16, Marcel Kuiper wrote:
> The compaction of the bluestore-kv's helped indeed. The response is back to
> acceptable levels
Just curious: did you do any cluster expansion and/or PG expansion
before the slowness occurred?
Gr. Stefan
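The compaction referred to here is typically run offline, per OSD; the OSD id and data path below are the usual defaults, shown as an illustration:

    systemctl stop ceph-osd@16
    # Compact the RocksDB key-value store of a BlueStore OSD
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-16 compact
    systemctl start ceph-osd@16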
Just ignore rgw.none; as far as I investigated, it is an old bug, just a
representation bug.
Newer versions and newer buckets don't have rgw.none anymore, and right now
there's no way to remove the rgw.none section.
I'm on Nautilus 14.2.11; rgw.none has not been present for several versions...
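The section in question is visible in the per-bucket stats; the bucket name is a placeholder:

    radosgw-admin bucket stats --bucket=mybucket
    # when present, rgw.none appears under "usage" next to rgw.main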
Hello,
During the night osd.16 crashed after hitting a suicide timeout, so
this morning I did a ceph-kvstore-tool compact and restarted the OSD.
I then compared the results of 'ceph daemon osd.16 perf dump' from before
(i.e. yesterday) and after the compaction. I noticed an interesting
d
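One simple way to make that before/after comparison; the file paths are arbitrary:

    ceph daemon osd.16 perf dump > /tmp/osd16_before.json
    # ... compact and restart the OSD ...
    ceph daemon osd.16 perf dump > /tmp/osd16_after.json
    diff /tmp/osd16_before.json /tmp/osd16_after.json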
Is there any solution or advice?
On Tue, Sep 1, 2020, 11:53 AM Vahideh Alinouri
wrote:
> One of the failed OSDs with 3G RAM started, and dump_mempools shows total RAM
> usage of 18G, with buffer_anon using 17G!
>
> On Mon, Aug 31, 2020 at 6:24 PM Vahideh Alinouri <
> vahideh.alino...@gmail.com> wrote:
>
Hi All,
Does anyone know how to get the actual block size used by an OSD? I’m trying
to evaluate 4k vs 64k min_alloc_size_hdd and want to verify that the newly
created OSDs are actually using the expected block size.
Thanks,
-TJ Ragan
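A hedged starting point: the value an OSD would use is visible in its runtime config, though the size actually in effect was fixed when the OSD was created and is logged at OSD startup. The OSD id is a placeholder:

    ceph daemon osd.3 config get bluestore_min_alloc_size
    ceph daemon osd.3 config get bluestore_min_alloc_size_hdd
    # bluestore_min_alloc_size = 0 means "use the hdd/ssd-specific default"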
A service is a logical entity which may have multiple instances/daemons
for scaling/LB purposes. For example, one ceph-monitor service may
have 3 daemons running on 3 nodes to provide HA.
Tony
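In cephadm terms, for example (hostnames are placeholders):

    # One "mon" service, three daemons placed on three hosts
    ceph orch apply mon --placement="host1 host2 host3"
    ceph orch ls   # one line per service
    ceph orch ps   # one line per daemon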
> -Original Message-
> From: John Zachary Dover
> Sent: Monday, August 31, 2020 9:47 PM
> To: ceph-u
Hi, thanks for the reply.
I don't use OpenStack, I use CloudStack.
Where is the ceph.conf file to edit? I edited /etc/ceph/ceph.conf and
/var/lib/ceph/container/mon.Storage01/config, but the config is not working.
--
On 1/9/20 at 04:47, Eugen Block wrote:
Hi,
did you apply that setting
If using storcli/perccli for manipulating the LSI controller, you can disable
the on-disk write cache with:
storcli /cx/vx set pdcache=off
You can also ensure that you turn off write caching at the controller level
with
storcli /cx/vx set iopolicy=direct
storcli /cx/vx set wrcache=wt
You can a
Thank you. I was working in this direction. The situation is a lot better, but
I think it can still get far better.
I could set the controller to writethrough, direct, and no read-ahead for the
SSDs.
But I cannot disable the pdcache ☹ There is an option set in the controller
"Block SSD Write Disk Cache Change = Yes" which does not permit deactivating the SSD cache.
Sorry, I am not fully aware of what has already been discussed in this
thread, but can't you flash these LSI cards to JBOD? I have done
this with my 9207 with sas2flash; a typical sequence is sketched below.
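A hedged sketch of that crossflash; the firmware/BIOS file names are placeholders and must match your exact card:

    sas2flash -listall
    # -o = advanced mode, -f = firmware image, -b = option ROM
    sas2flash -o -f 9207-8i-IT.bin -b mptsas2.rom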
I have attached my fio test of the Micron 5100 Pro/5200 SSDs
MTFDDAK1T9TCC. They perform similarly to my Samsung SM863a 1
write-4k-seq: (groupid=0, jobs=1): err= 0: pid=11017: Tue Sep 1
20:58:43 2020
write: IOPS=34.4k, BW=134MiB/s (141MB/s)(23.6GiB/180001msec)
slat (nsec): min=3964, max=124499, avg=4432.71, stdev=911.13
clat (nsec): min=470, max=435529, avg=23528.70, stdev=2553.67
lat (usec): min=
> there is an option set in the controller "Block SSD Write Disk Cache Change =
> Yes" which does not permit deactivating the SSD cache. I could not find any
> solution on Google for this controller (LSI MegaRAID SAS 9271-8i) to change
> this setting.
I assume you are referencing this paramet
Hi,
I have set up a 3-host cluster with 30 OSDs total. The cluster has health OK and no
warnings whatsoever. I set up an RBD pool and 14 images which were all
rbd-mirrored to a second cluster (disconnected since the problems began)
and also an iSCSI interface. Then I connected a Windows 2019 Server
Hi,
Any news on this error? I'm facing the same issue, I guess. A Windows Server was
copying data to some RBD images through iSCSI; the server got stuck and had to
be reset, and now the images that had data are blocking all I/O operations,
including editing their config, creating snapshots, etc.