I recently enabled this, and now my rsync jobs are taking hours and hours
longer.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
I have removed_snaps listed on pools that I am not using. They are
mostly used for performance testing, so I cannot imagine ever having
created snapshots in them.
pool 33 'fs_data.ssd' replicated size 3 min_size 1 crush_rule 5
object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode warn
la
Hi!
We have a Ceph cluster with 42 OSDs in production, serving mainly
users' home directories. Ceph is 14.2.4 Nautilus.
We have three pools: one 'images' pool (for RBD images), a
cephfs_metadata pool, and a cephfs_data pool.
Our raw data is about 5.6T. All pools have replica size 3 and there are
onl
On 6.12.19 13:29, Jochen Schulz wrote:
Hi!
We have a Ceph cluster with 42 OSDs in production, serving mainly
users' home directories. Ceph is 14.2.4 Nautilus.
We have three pools: one 'images' pool (for RBD images), a
cephfs_metadata pool, and a cephfs_data pool.
Our raw data is about 5.6T. All po
Hi!
Thank you!
The output of both commands are below.
I still don't understand why there is 21T of used data (because 5.5T*3 =
16.5T != 21T), and why there seems to be only 4.5T MAX AVAIL, while the
osd output says we have 25T of free space.
$ sudo ceph df
RAW STORAGE:
CLASS SIZE AVAIL
Home directories probably mean lots of small objects. The default minimum
allocation size of BlueStore on HDD is 64 KiB, so there's a lot of overhead
for everything smaller.
Details: search for "bluestore min alloc size"; it can only be changed
during OSD creation.
Paul
--
Paul Emmerich
Looking for help wi
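Paul's point about the 64 KiB minimum allocation size can be sketched with a little arithmetic. The helper below is illustrative only (the function name and example sizes are made up, not anything from Ceph itself); it just rounds each logical object size up to the allocation unit, which is how small files end up inflating raw usage:

```python
# Hedged sketch: estimate BlueStore on-disk usage when every object is
# rounded up to the 64 KiB minimum allocation unit (the HDD default in
# Nautilus). The function and sample sizes are hypothetical.
MIN_ALLOC = 64 * 1024  # bluestore_min_alloc_size_hdd default, in bytes

def allocated(size_bytes: int, min_alloc: int = MIN_ALLOC) -> int:
    """Round a logical object size up to the next allocation unit."""
    units = max(1, -(-size_bytes // min_alloc))  # ceiling division
    return units * min_alloc

# A 4 KiB file (common in home directories) still consumes 64 KiB:
print(allocated(4 * 1024))    # 65536
# A 100 KiB file consumes two units, i.e. 128 KiB:
print(allocated(100 * 1024))  # 131072
```

With many small files, this rounding alone can plausibly account for raw usage well above `data * replicas`.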
On Fri, Dec 6, 2019 at 12:12 AM Dongsheng Yang wrote:
>
>
>
> On 12/06/2019 12:50 PM, yang...@cmss.chinamobile.com wrote:
>
> Hi Jason, dongsheng
> I found a problem using rbd_open_by_id when the connection times out (errno = 110,
> ceph version 12.2.8; there is no change to rbd_open_by_id in master
On 6.12.19 14:57, Jochen Schulz wrote:
Hi!
Thank you!
The output of both commands are below.
I still don't understand why there is 21T of used data (because 5.5T*3 =
16.5T != 21T), and why there seems to be only 4.5T MAX AVAIL, while the
osd output says we have 25T of free space.
As far as I know, MAX AVAIL
On 6.12.19 17:01, Aleksey Gutikov wrote:
On 6.12.19 14:57, Jochen Schulz wrote:
Hi!
Thank you!
The output of both commands are below.
I still don't understand why there is 21T of used data (because 5.5T*3 =
16.5T != 21T), and why there seems to be only 4.5T MAX AVAIL, but the
osd output tells we h
On Fri, Dec 6, 2019 at 9:51 AM Dongsheng Yang wrote:
>
>
> On 12/6/2019 9:46 PM, Jason Dillaman wrote:
> > On Fri, Dec 6, 2019 at 12:12 AM Dongsheng Yang wrote:
> >>
> >>
> >> On 12/06/2019 12:50 PM, yang...@cmss.chinamobile.com wrote:
> >>
> >> Hi Jason, dongsheng
> >> I found a problem using rb
Anyone else have any insight on this? I'd also be interested to know about
this behavior.
Thanks,
On Mon, Dec 2, 2019 at 6:54 AM Tobias Urdin wrote:
> Hello,
>
> I'm trying to wrap my head around how having a multi-site (two zones in
> one zonegroup) with multiple placement
> targets but only w
Placement targets aren't meant to control whether replication happens.
All zones need to provide mappings for any placement targets/storage
classes named in the zonegroup. Zones will fail to replicate
buckets/objects that they don't have placement rules for, and those
failures will be retried u
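Concretely, "providing mappings" means every zone's configuration needs a `placement_pools` entry for each placement target named in the zonegroup. Roughly, in recent releases, the relevant fragments of `radosgw-admin zonegroup get` and `zone get` look like the sketch below (pool names and the `ssd-placement` target are hypothetical examples, not required values):

```json
{
  "zonegroup_fragment": {
    "placement_targets": [
      { "name": "default-placement", "tags": [], "storage_classes": ["STANDARD"] },
      { "name": "ssd-placement", "tags": [], "storage_classes": ["STANDARD"] }
    ],
    "default_placement": "default-placement"
  },
  "zone_fragment": {
    "placement_pools": [
      {
        "key": "ssd-placement",
        "val": {
          "index_pool": "zone1.rgw.buckets.index",
          "storage_classes": {
            "STANDARD": { "data_pool": "zone1.rgw.buckets.ssd.data" }
          }
        }
      }
    ]
  }
}
```

If a secondary zone lacks the `ssd-placement` key in its `placement_pools`, replication of buckets created with that target fails and is retried, as described above.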
On 12/06/2019 01:11 AM, Thomas Schneider wrote:
> Hi Mike,
>
> actually you point to the right log; I can find relevant information in
> this logfile /var/log/rbd-target-api/rbd-target-api.log:
> root@ld5505:~# tail -f /var/log/rbd-target-api/rbd-target-api.log
> 2019-12-04 12:09:52,986 ERROR [
On 12/06/2019 12:10 PM, Mike Christie wrote:
> On 12/06/2019 01:11 AM, Thomas Schneider wrote:
>> Hi Mike,
>>
>> actually you point to the right log; I can find relevant information in
>> this logfile /var/log/rbd-target-api/rbd-target-api.log:
>> root@ld5505:~# tail -f /var/log/rbd-target-api/rbd-
Thank you Paul, great hint!
> On December 6, 2019 at 9:23 AM, Paul Emmerich wrote:
>
> You should definitely migrate to BlueStore, that'll also take care of the
> leveldb/rocksdb upgrade :)
> For the mons: as it's super easy to delete and re-create a mon, that's usually
> the best/simplest way to go.
>
> Also,