On 1/14/20 5:42 PM, Francois Legrand wrote:
I don't want to remove the cephfs_metapool but the cephfs_datapool.
To be clear:
I currently have a CephFS consisting of a cephfs_metapool and a cephfs_datapool.
I want to add a new data pool cephfs_datapool2, migrate all data from
cephfs_datapool to cephfs_datapool2, and then remove cephfs_datapool.
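In case it's useful, a minimal sketch of the pool-switch part (assuming the filesystem is called "cephfs" and is mounted at /mnt/cephfs; the pool name and PG count are only examples, not a tested recipe):
$ ceph osd pool create cephfs_datapool2 128
$ ceph fs add_data_pool cephfs cephfs_datapool2
$ setfattr -n ceph.dir.layout.pool -v cephfs_datapool2 /mnt/cephfs
The layout change only affects files created afterwards, so existing files stay in cephfs_datapool until they are rewritten or copied, and the old pool can only be detached with "ceph fs rm_data_pool" once nothing references it anymore (and, if I remember correctly, the original default data pool of a filesystem cannot be removed at all).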
Hi,
I just looked through the rbd driver of OpenStack Cinder. It seems there is no
additional clear_volume step implemented for the rbd driver. In my case, the objects
of this rbd image were only partially deleted, so I suspect it is related to Ceph
rather than to the Cinder driver.
br,
Xu Yun
> On Jan 15, 2020, at 7:36 PM, EDH -
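For what it's worth, one way to see what is left of an image in the pool is to grep the object listing for the image's prefix (pool and image names below are placeholders; the prefix comes from "rbd info" before deletion, or from the Cinder logs):
$ rbd info volumes/volume-XYZ | grep block_name_prefix   # e.g. rbd_data.ab12cd34ef56
$ rados -p volumes ls | grep ab12cd34ef56                # remaining data objects
$ rados -p volumes ls | grep -e rbd_id -e rbd_header     # remaining metadata objects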
Hi,
we ran some benchmarks with a few samples of Seagate's new HDDs that some
of you might find interesting:
Blog post:
https://croit.io/2020/01/06/2020-01-06-benchmark-mach2
GitHub repo with scripts and raw data:
https://github.com/croit/benchmarks/tree/master/mach2-disks
Tl;dr: way faster for
I think there is something wrong with the cephfs_data pool.
I created a new pool "cephfs_data2" and copied the data from
"cephfs_data" to "cephfs_data2" using this command:
$ rados cppool cephfs_data cephfs_data2
$ ceph df detail
RAW STORAGE:
CLASS SIZE AVAIL USED
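As a quick sanity check of the copy (same pool names as above), comparing the object counts of the two pools might help; note that listing every object can take a while on large pools:
$ rados df | grep cephfs_data
$ rados -p cephfs_data ls | wc -l
$ rados -p cephfs_data2 ls | wc -l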
The situation is:
health: HEALTH_WARN
1 pools have many more objects per pg than average
$ ceph health detail
MANY_OBJECTS_PER_PG 1 pools have many more objects per pg than average
pool cephfs_data objects per pg (315399) is more than 1227.23 times
cluster average (257)
$ ceph df
RAW STORA
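The warning itself just means cephfs_data has far more objects per PG than the cluster average; the usual way to address it is to raise the pool's pg_num, roughly like this (the target of 512 is purely illustrative):
$ ceph osd pool get cephfs_data pg_num
$ ceph osd pool set cephfs_data pg_num 512
$ ceph osd pool set cephfs_data pgp_num 512   # on Nautilus and later pgp_num follows pg_num automatically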
Hi
For huge volumes in OpenStack and Ceph, set this parameter in your Cinder configuration:
volume_clear_size = 50
That will wipe only the first 50 MB of the volume and then ask Ceph to delete it,
instead of overwriting the whole disk with zeros, which on huge volumes sometimes
causes timeouts.
In our deployment that was the solution.
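If it helps, this is roughly what the suggestion looks like in cinder.conf (the backend section name "ceph" is just an example; volume_clear and volume_clear_size are standard Cinder options, though whether the RBD driver honors them is exactly what is being discussed above):
# cinder.conf
[ceph]
volume_clear = zero
volume_clear_size = 50   # wipe only the first 50 MB before asking for deletion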
Hi,
since we upgraded to Luminous we have had an issue with snapshot
deletion that could be related: when a largish (a few TB) snapshot gets
deleted, we see a spike in the load of the OSD daemons followed by a brief
flap of the daemons themselves.
It seems that while the snapshot would have been del
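Not from this thread, but on Luminous the snap-trim load can usually be throttled with the snap trim options, e.g. (values purely illustrative):
$ ceph tell 'osd.*' injectargs '--osd_snap_trim_sleep 0.5'
$ ceph tell 'osd.*' injectargs '--osd_snap_trim_priority 1'
That only slows trimming down rather than fixing the underlying cause, of course.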
Not every volume. It seems that volumes with high capacity are more
likely to trigger this problem.
> On Jan 15, 2020, at 4:28 PM, Eugen Block wrote:
>
> Then it's probably something different. Does that happen with every
> volume/image or just this one time?
>
>
> Quoting 徐蕴:
>
>> Hi Eugen,
>>
Then it's probably something different. Does that happen with every
volume/image or just this one time?
Quoting 徐蕴:
Hi Eugen,
Thank you for sharing your experience. I will dig into OpenStack
cinder logs to check if something happened. The strange thing is the
volume I deleted is not
Hi Eugen,
Thank you for sharing your experience. I will dig into the OpenStack Cinder logs to
check if something happened. The strange thing is that the volume I deleted was not
created from a snapshot and doesn't have any snapshots. And the rbd_id.xxx,
rbd_header.xxx and rbd_object_map.xxx were deleted,