[ceph-users] ceph nautilus repository index is incomplete

2020-07-09 Thread Francois Legrand

Hello,
It seems that the index of the
https://download.ceph.com/rpm-nautilus/el7/x86_64/ repository is wrong.
Only the 14.2.10-0.el7 version is available (all previous versions are
missing, even though the RPMs are present in the repository).

It thus seems that the index needs to be corrected.
Whom can I contact about that?

Thanks.
F.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph multisite secondary zone not sync new changes

2020-07-09 Thread Amit Ghadge
Hello All,

In our test environment we set up Ceph multisite in active/passive mode.
Cluster A was migrated to become the master zone without deleting any data,
and a fresh secondary zone was set up.
At first we stopped pushing data to the master zone, and the secondary zone
synced all existing buckets and objects. About an hour later we started
uploading one million objects into newly created buckets, but these new
changes are not being synced by the secondary zone. It still shows:


#  radosgw-admin sync status
  realm 2a7b2a08-3404-40ea-81ed-2d036ee6e54f (masifd)
  zonegroup 039c426b-4032-4621-9034-d3ee724967a0 (us)
   zone 213b25f9-e726-43c8-a82e-4f9f0d4ed55c (us-east)
  metadata sync syncing
full sync: 0/64 shards
incremental sync: 64/64 shards
metadata is caught up with master
  data sync source: 0c6643f3-0289-4811-ab3e-dff2952e31c6 (us-west)
syncing
full sync: 0/128 shards
incremental sync: 128/128 shards
data is caught up with source
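
For completeness, a few per-bucket checks that can narrow down where sync is
stuck (a sketch; the bucket name is a placeholder):

# sync state of one of the newly created buckets, as seen from the secondary
radosgw-admin bucket sync status --bucket=<new-bucket>

# per-shard data sync progress against the master zone
radosgw-admin data sync status --source-zone=us-west

# any replication errors that were recorded
radosgw-admin sync error list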
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: bluestore: osd bluestore_allocated is much larger than bluestore_stored

2020-07-09 Thread Jerry Pu
Hi Igor

We are curious why blob garbage collection is not backported to mimic or
luminous?
https://github.com/ceph/ceph/pull/28229

Thanks
Jerry
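
For context, the allocation overhead discussed in the quoted thread below can
be checked against the OSD's allocation unit; a quick sketch (osd.0 is just an
example id):

# BlueStore allocation unit in effect on an HDD OSD (64K by default before Pacific)
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd

# compare allocated vs stored bytes on the same OSD
ceph daemon osd.0 perf dump | grep -E '"bluestore_(allocated|stored)"'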

Jerry Pu  wrote on Wed, Jul 8, 2020 at 6:04 PM:

> OK. Thanks for your reminder. We will think about how to make the
> adjustment to our cluster.
>
> Best
> Jerry Pu
>
> Igor Fedotov  wrote on Wed, Jul 8, 2020 at 5:40 PM:
>
>> Please note that simple min_alloc_size downsizing might negatively impact
>> OSD performance. That's why this modification has  been postponed till
>> Pacific - we've made a bunch of additional changes to eliminate the drop.
>>
>>
>> Regards,
>>
>> Igor
>> On 7/8/2020 12:32 PM, Jerry Pu wrote:
>>
>> Thanks for your reply. It's helpful! We may consider to adjust
>> min_alloc_size to a lower value or take other actions based on
>> your analysis for space overhead with EC pools. Thanks.
>>
>> Best
>> Jerry Pu
>>
>> Igor Fedotov  wrote on Tue, Jul 7, 2020 at 4:10 PM:
>>
>>> I think you're facing the issue covered by the following ticket:
>>>
>>> https://tracker.ceph.com/issues/44213
>>>
>>>
>>> Unfortunately the only known solution is migrating to 4K min alloc size
>>> which to be available since Pacific.
>>>
>>>
>>> Thanks,
>>>
>>> Igor
>>>
>>> On 7/7/2020 6:38 AM, Jerry Pu wrote:
>>> > Hi:
>>> >
>>> >   We have a cluster (v13.2.4), and we do some tests on a EC
>>> k=2, m=1
>>> > pool "VMPool0". We deploy some VMs (Windows, CentOS7) on the pool and
>>> then
>>> > use IOMeter to write data to these VMs. After a period of time, we
>>> observe
>>> > a strange thing that pool actual usage is much larger than stored data
>>> *
>>> > 1.5 (stored_raw).
>>> >
>>> > [root@Sim-str-R6-4 ~]# ceph df
>>> > GLOBAL:
>>> >     CLASS     SIZE        AVAIL       USED        RAW USED    %RAW USED
>>> >       hdd     3.5 TiB     1.8 TiB     1.7 TiB     1.7 TiB         48.74
>>> >     TOTAL     3.5 TiB     1.8 TiB     1.7 TiB     1.7 TiB         48.74
>>> > POOLS:
>>> >     NAME                 ID     USED        %USED      MAX AVAIL     OBJECTS
>>> >     cephfs_data          1      29 GiB      100.00     0 B              2597
>>> >     cephfs_md            2      831 MiB     100.00     0 B               133
>>> >     erasure_meta_hdd     3      22 MiB      100.00     0 B               170
>>> >     VMPool0              4      1.2 TiB     56.77      644 GiB        116011
>>> >     stresspool           5      2.6 MiB     100.00     0 B                32
>>> >
>>> > [root@Sim-str-R6-4 ~]# ceph df detail -f json-pretty
>>> > -snippet-
>>> >  {
>>> >  "name": "VMPool0",
>>> >  "id": 4,
>>> >  "stats": {
>>> >  "kb_used": 132832,
>>> >  "bytes_used": 1360782163968,<
>>> >  "percent_used": 0.567110,
>>> >  "max_avail": 692481687552,
>>> >  "objects": 116011,
>>> >  "quota_objects": 0,
>>> >  "quota_bytes": 0,
>>> >  "dirty": 116011,
>>> >  "rd": 27449034,
>>> >  "rd_bytes": 126572760064,
>>> >  "wr": 20675381,
>>> >  "wr_bytes": 1006460652544,
>>> >  "comp_ratio": 1.00,
>>> >  "stored": 497657610240,
>>> >  "stored_raw": 746486431744,   <
>>> >  }
>>> >  },
>>> >
>>> >  The perf counters of all osds (all hdd) used by VMPool0 also
>>> show
>>> > that bluestore_allocated is much larger than bluestore_stored.
>>> >
>>> > [root@Sim-str-R6-4 ~]# for i in {0..3}; do echo $i; ceph daemon
>>> osd.$i perf
>>> > dump | grep bluestore | head -6; done
>>> > 0
>>> >  "bluestore": {
>>> >  "bluestore_allocated": 175032369152,<
>>> >  "bluestore_stored": 83557936482,<
>>> >  "bluestore_compressed": 958795770,
>>> >  "bluestore_compressed_allocated": 6431965184,
>>> >  "bluestore_compressed_original": 18576584704,
>>> > 1
>>> >  "bluestore": {
>>> >  "bluestore_allocated": 119943593984,<
>>> >  "bluestore_stored": 53325238866,<
>>> >  "bluestore_compressed": 670158436,
>>> >  "bluestore_compressed_allocated": 4751818752,
>>> >  "bluestore_compressed_original": 13752328192,
>>> > 2
>>> >  "bluestore": {
>>> >  "bluestore_allocated": 155444707328,<
>>> >  "bluestore_stored": 69067116553,<
>>> >  "bluestore_compressed": 565170876,
>>> >  "bluestore_compressed_allocated": 4614324224,
>>> >  "bluestore_compressed_original": 13469696000,
>>> > 3
>>> >  "bluestore": {
>>> >  "bluestore_allocated": 128179240960,<
>>> >  "bluestore_stored": 60884752114,<
>>> >  "bluestore_compressed": 1653455847,
>>> >  "bluestore_compressed_allocated": 97


[ceph-users] Re: bluestore: osd bluestore_allocated is much larger than bluestore_stored

2020-07-09 Thread Igor Fedotov

Hi Jerry,

we haven't heard about frequent occurrences of this issue and the 
backport didn't look trivial hence we decided to omit it for M and L.



Thanks,

Igor

On 7/9/2020 1:50 PM, Jerry Pu wrote:

Hi Igor

We are curious why blob garbage collection is not backported to mimic 
or luminous?

https://github.com/ceph/ceph/pull/28229

Thanks
Jerry

Jerry Pu mailto:yician1000c...@gmail.com>> wrote on Wed, Jul 8, 2020 at 6:04 PM:


OK. Thanks for your reminder. We will think about how to make the
adjustment to our cluster.

Best
Jerry Pu

Igor Fedotov mailto:ifedo...@suse.de>> wrote on Wed, Jul 8, 2020 at 5:40 PM:

Please note that simple min_alloc_size downsizing might
negatively impact OSD performance. That's why this
modification has  been postponed till Pacific - we've made a
bunch of additional changes to eliminate the drop.


Regards,

Igor

On 7/8/2020 12:32 PM, Jerry Pu wrote:

Thanks for your reply. It's helpful! We may consider to
adjust min_alloc_size to a lower value or take other actions
based on your analysis for space overhead with EC pools. Thanks.

Best
Jerry Pu

Igor Fedotov mailto:ifedo...@suse.de>> wrote on Tue, Jul 7, 2020 at 4:10 PM:

I think you're facing the issue covered by the following
ticket:

https://tracker.ceph.com/issues/44213


Unfortunately the only known solution is migrating to 4K
min alloc size
which to be available since Pacific.


Thanks,

Igor

On 7/7/2020 6:38 AM, Jerry Pu wrote:
> Hi:
>
>           We have a cluster (v13.2.4), and we do some
tests on a EC k=2, m=1
> pool "VMPool0". We deploy some VMs (Windows, CentOS7)
on the pool and then
> use IOMeter to write data to these VMs. After a period
of time, we observe
> a strange thing that pool actual usage is much larger
than stored data *
> 1.5 (stored_raw).
>
> [root@Sim-str-R6-4 ~]# ceph df
> GLOBAL:
>     CLASS     SIZE        AVAIL       USED        RAW USED    %RAW USED
>       hdd     3.5 TiB     1.8 TiB     1.7 TiB     1.7 TiB         48.74
>     TOTAL     3.5 TiB     1.8 TiB     1.7 TiB     1.7 TiB         48.74
> POOLS:
>     NAME                 ID     USED        %USED      MAX AVAIL     OBJECTS
>     cephfs_data          1      29 GiB      100.00     0 B              2597
>     cephfs_md            2      831 MiB     100.00     0 B               133
>     erasure_meta_hdd     3      22 MiB      100.00     0 B               170
>     VMPool0              4      1.2 TiB     56.77      644 GiB        116011
>     stresspool           5      2.6 MiB     100.00     0 B                32
>
> [root@Sim-str-R6-4 ~]# ceph df detail -f json-pretty
> -snippet-
>          {
>              "name": "VMPool0",
>              "id": 4,
>              "stats": {
>                  "kb_used": 132832,
>                  "bytes_used": 1360782163968,   
<
>                  "percent_used": 0.567110,
>                  "max_avail": 692481687552,
>                  "objects": 116011,
>                  "quota_objects": 0,
>                  "quota_bytes": 0,
>                  "dirty": 116011,
>                  "rd": 27449034,
>                  "rd_bytes": 126572760064,
>                  "wr": 20675381,
>                  "wr_bytes": 1006460652544,
>                  "comp_ratio": 1.00,
>                  "stored": 497657610240,
>                  "stored_raw": 746486431744, 
 <
>              }
>          },
>
>          The perf counters of all osds (all hdd) used
by VMPool0 also show
> that bluestore_allocated is much larger than
bluestore_stored.
>
> [root@Sim-str-R6-4 ~]# for i in {0..3}; do echo $i;
ceph daemon osd.$i perf
> dump | grep bluestore | head -6; done
> 0
>      "bluestore": {
>          "bluestore_allocated": 175032369152,   
<
>          "bluestore_stored": 83557936482,  
<
>          "bluestore_compressed": 958795770,
>          "bluestore_c

[ceph-users] Re: bluestore: osd bluestore_allocated is much larger than bluestore_stored

2020-07-09 Thread Jerry Pu
Understood. Thank you!

Best
Jerry

Igor Fedotov  wrote on Thu, Jul 9, 2020 at 18:56:

> Hi Jerry,
>
> we haven't heard about frequent occurrences of this issue and the backport
> didn't look trivial hence we decided to omit it for M and L.
>
>
> Thanks,
>
> Igor
> On 7/9/2020 1:50 PM, Jerry Pu wrote:
>
> Hi Igor
>
> We are curious why blob garbage collection is not backported to mimic or
> luminous?
> https://github.com/ceph/ceph/pull/28229
>
> Thanks
> Jerry
>
> Jerry Pu  wrote on Wed, Jul 8, 2020 at 6:04 PM:
>
>> OK. Thanks for your reminder. We will think about how to make the
>> adjustment to our cluster.
>>
>> Best
>> Jerry Pu
>>
>> Igor Fedotov  wrote on Wed, Jul 8, 2020 at 5:40 PM:
>>
>>> Please note that simple min_alloc_size downsizing might negatively
>>> impact OSD performance. That's why this modification has  been postponed
>>> till Pacific - we've made a bunch of additional changes to eliminate the
>>> drop.
>>>
>>>
>>> Regards,
>>>
>>> Igor
>>> On 7/8/2020 12:32 PM, Jerry Pu wrote:
>>>
>>> Thanks for your reply. It's helpful! We may consider to adjust
>>> min_alloc_size to a lower value or take other actions based on
>>> your analysis for space overhead with EC pools. Thanks.
>>>
>>> Best
>>> Jerry Pu
>>>
>>> Igor Fedotov  wrote on Tue, Jul 7, 2020 at 4:10 PM:
>>>
 I think you're facing the issue covered by the following ticket:

 https://tracker.ceph.com/issues/44213


 Unfortunately the only known solution is migrating to 4K min alloc size
 which to be available since Pacific.


 Thanks,

 Igor

 On 7/7/2020 6:38 AM, Jerry Pu wrote:
 > Hi:
 >
 >   We have a cluster (v13.2.4), and we do some tests on a EC
 k=2, m=1
 > pool "VMPool0". We deploy some VMs (Windows, CentOS7) on the pool and
 then
 > use IOMeter to write data to these VMs. After a period of time, we
 observe
 > a strange thing that pool actual usage is much larger than stored
 data *
 > 1.5 (stored_raw).
 >
 > [root@Sim-str-R6-4 ~]# ceph df
 > GLOBAL:
 >     CLASS     SIZE        AVAIL       USED        RAW USED    %RAW USED
 >       hdd     3.5 TiB     1.8 TiB     1.7 TiB     1.7 TiB         48.74
 >     TOTAL     3.5 TiB     1.8 TiB     1.7 TiB     1.7 TiB         48.74
 > POOLS:
 >     NAME                 ID     USED        %USED      MAX AVAIL     OBJECTS
 >     cephfs_data          1      29 GiB      100.00     0 B              2597
 >     cephfs_md            2      831 MiB     100.00     0 B               133
 >     erasure_meta_hdd     3      22 MiB      100.00     0 B               170
 >     VMPool0              4      1.2 TiB     56.77      644 GiB        116011
 >     stresspool           5      2.6 MiB     100.00     0 B                32
 >
 > [root@Sim-str-R6-4 ~]# ceph df detail -f json-pretty
 > -snippet-
 >  {
 >  "name": "VMPool0",
 >  "id": 4,
 >  "stats": {
 >  "kb_used": 132832,
 >  "bytes_used": 1360782163968,<
 >  "percent_used": 0.567110,
 >  "max_avail": 692481687552,
 >  "objects": 116011,
 >  "quota_objects": 0,
 >  "quota_bytes": 0,
 >  "dirty": 116011,
 >  "rd": 27449034,
 >  "rd_bytes": 126572760064,
 >  "wr": 20675381,
 >  "wr_bytes": 1006460652544,
 >  "comp_ratio": 1.00,
 >  "stored": 497657610240,
 >  "stored_raw": 746486431744,   <
 >  }
 >  },
 >
 >  The perf counters of all osds (all hdd) used by VMPool0 also
 show
 > that bluestore_allocated is much larger than bluestore_stored.
 >
 > [root@Sim-str-R6-4 ~]# for i in {0..3}; do echo $i; ceph daemon
 osd.$i perf
 > dump | grep bluestore | head -6; done
 > 0
 >  "bluestore": {
 >  "bluestore_allocated": 175032369152,<
 >  "bluestore_stored": 83557936482,<
 >  "bluestore_compressed": 958795770,
 >  "bluestore_compressed_allocated": 6431965184,
 >  "bluestore_compressed_original": 18576584704,
 > 1
 >  "bluestore": {
 >  "bluestore_allocated": 119943593984,<
 >  "bluestore_stored": 53325238866,<
 >  "bluestore_compressed": 670158436,
 >  "bluestore_compressed_allocated": 4751818752,
 >  "bluestore_compressed_original": 13752328192,
 > 2
 >  "bluestore": {
 >  "bluestore_allocated": 155444707328,<
 >  "bluestore_stored": 69067116553,<
>

[ceph-users] Re: Questions on Ceph on ARM

2020-07-09 Thread norman

Anthony,

I just used normal HDDs. I intend to test the same HDDs on two clusters, one
x86 and one ARM, to compare CephFS performance.

Best regards,

Norman

On 8/7/2020 11:51 AM, Anthony D'Atri wrote:

Bear in mind that ARM and x86 are architectures, not CPU models.  Both are 
available in a vast variety of core counts, clocks, and implementations.
E.g., an 80-core Ampere Altra will likely smoke an Intel Atom D410 in every way.

That said, what does “performance” mean?  For object storage, it might focus on 
throughput; for block attached to VM instances, chances are that latency (iops) 
is the dominant concern.

And if you’re running LFF HDDs, it’s less likely to matter than if your nodes 
are graced with an abundance of NVMe devices split into multiple OSDs and using 
dmcrypt.

You might like to visit softiron.com if they aren’t already on your radar.  If
they fit your use-case, their ARM servers boast really low power usage compared
to e.g. a dual-Xeon chassis: less power, less cooling, RUs become the limiting
factor in your racks instead of amps, etc.

— me



Aaron,

Is there a significant performance difference for OSDs? Can you tell me the
percentage of performance degradation?

Best regards,

Norman

On 8/7/2020 9:56 AM, Aaron Joue wrote:

Hi Norman,

It works well. We mix Arm and x86. For example, OSDs and MONs are on Arm, RGWs
are on x86. We can also put x86 OSDs in the same cluster. All of the Ceph
daemons talk to each other with the same protocol. Just separate the OSDs into
different CRUSH roots if the OSDs on Arm and x86 have a significant performance
difference.

Best regards,
Aaron
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Questions on Ceph on ARM

2020-07-09 Thread norman

Aaron,

That's the same consideration I have. If I mix them, I will worry about
performance jitter.

Best regards,

Norman

On 8/7/2020 1:33 PM, Aaron Joue wrote:

Hi Norman
There is no fixed percentage for that. If you mix slow and fast OSDs in a PG,
the overall performance of the pool will be limited by the slower devices.
This is the same reason why you would not mix SSD and HDD OSDs in a pool. If the
performance difference between OSDs is small, it is OK to mix them. The same
recommendation applies if you mix x86 servers with different performance.

Best regards,
Aaron
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RBD thin provisioning and time to format a volume

2020-07-09 Thread Jason Dillaman
On Thu, Jul 9, 2020 at 12:02 AM Void Star Nill  wrote:
>
>
>
> On Wed, Jul 8, 2020 at 4:56 PM Jason Dillaman  wrote:
>>
>> On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill  
>> wrote:
>> >
>> > Hello,
>> >
>> > My understanding is that the time to format an RBD volume is not dependent
>> > on its size as the RBD volumes are thin provisioned. Is this correct?
>> >
>> > For example, formatting a 1G volume should take almost the same time as
>> > formatting a 1TB volume - although accounting for differences in latencies
>> > due to load on the Ceph cluster. Is that a fair assumption?
>>
>> Yes, that is a fair comparison when creating the RBD image. However, a
>> format operation might initialize and discard extents on the disk, so
>> a larger disk will take longer to format.
>
>
> Thanks for the response Jason. Could you please explain a bit more on the the 
> format operation?

I'm not sure what else there is to explain. When you create a file
system on top of any block device, it needs to initialize the block
device. Depending on the file system, it might take more time for
larger block devices because it's doing more work.
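
As an illustration (a sketch, not specific to this setup; the device and image
names are placeholders), the amount of up-front work can often be reduced at
mkfs time, and the space a thin image actually consumes can be checked
afterwards:

# XFS: -K skips issuing discards for the whole device at format time
mkfs.xfs -K /dev/rbd0

# ext4: defer inode table initialization and skip the discard pass
mkfs.ext4 -E lazy_itable_init=1,nodiscard /dev/rbd0

# how much of the thin-provisioned RBD image is really allocated
rbd du mypool/myimage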

> Is there a relative time that we can determine based on the volume size?
>
> Thanks
> Shridhar
>
>
>>
>>
>> > Thanks,
>> > Shridhar
>> > ___
>> > ceph-users mailing list -- ceph-users@ceph.io
>> > To unsubscribe send an email to ceph-users-le...@ceph.io
>> >
>>
>>
>> --
>> Jason
>>


-- 
Jason
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] bucket index nvme

2020-07-09 Thread Szabo, Istvan (Agoda)
Hello,

Can someone explain the object storage (RGW) indexing to me a bit? It's not
really clear: Red Hat says one of the important tunings for object storage is
to put the indexes on a fast drive, yet when I check our current Ceph cluster I
see petabytes of read operations while the size of the index pool is 0. So how
should I size 1 NVMe drive in each server (each with 4 OSDs) to host the index
pool? I'm also thinking of sharing the NVMe, with separate partitions for
journaling and the bucket index, but it's no problem to order 1 more drive for it.
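
One common way to do this, assuming the NVMe OSDs carry the nvme device class,
is to pin the index pool to them with a CRUSH rule; a sketch with example names
(not our real pool/rule names):

# replicated rule that only selects OSDs with the nvme device class
ceph osd crush rule create-replicated rgw-index-nvme default host nvme

# move the bucket index pool onto that rule
ceph osd pool set default.rgw.buckets.index crush_rule rgw-index-nvme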

Thank you


This message is confidential and is for the sole use of the intended 
recipient(s). It may also be privileged or otherwise protected by copyright or 
other legal rules. If you have received it by mistake please let us know by 
reply email and delete it from your system. It is prohibited to copy this 
message or disclose its content to anyone. Any confidentiality or privilege is 
not waived or lost by any mistaken delivery or unauthorized disclosure of the 
message. All messages sent to and from Agoda may be monitored to ensure 
compliance with company policies, to protect the company's interests and to 
remove potential malware. Electronic messages may be intercepted, amended, lost 
or deleted, or contain viruses.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph df Vs Dashboard pool usage mismatch

2020-07-09 Thread Ernesto Puerta
Hi Richard,

Here you can find the PR  for this
issue. Feel free to leave your feedback.

Thanks!

Kind regards,
Ernesto
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RBD thin provisioning and time to format a volume

2020-07-09 Thread Void Star Nill
Thanks Jason.

Do you mean that some filesystems will initialize the entire disk during
format? Does that mean we will see the entire size of the volume getting
allocated during formatting?
Or do you mean that some filesystems just take longer to format than
others, because they do more initialization?

I am just trying to understand whether there are cases where Ceph will allocate
all the blocks for a filesystem during the format operation, or whether they
continue to be thin provisioned (allocated as you go based on real data). So
far I have tried ext3, ext4 and xfs, and none of them allocates all
the blocks during format.
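
A quick way to double-check this on a test image (a sketch; pool and image
names are placeholders):

rbd create --size 1T testpool/fmt-test
rbd map testpool/fmt-test          # maps to e.g. /dev/rbd0
time mkfs.xfs /dev/rbd0
rbd du testpool/fmt-test           # "used" stays far below 1T after a plain format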

-Shridhar


On Thu, 9 Jul 2020 at 06:58, Jason Dillaman  wrote:

> On Thu, Jul 9, 2020 at 12:02 AM Void Star Nill 
> wrote:
> >
> >
> >
> > On Wed, Jul 8, 2020 at 4:56 PM Jason Dillaman 
> wrote:
> >>
> >> On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill 
> wrote:
> >> >
> >> > Hello,
> >> >
> >> > My understanding is that the time to format an RBD volume is not
> dependent
> >> > on its size as the RBD volumes are thin provisioned. Is this correct?
> >> >
> >> > For example, formatting a 1G volume should take almost the same time
> as
> >> > formatting a 1TB volume - although accounting for differences in
> latencies
> >> > due to load on the Ceph cluster. Is that a fair assumption?
> >>
> >> Yes, that is a fair comparison when creating the RBD image. However, a
> >> format operation might initialize and discard extents on the disk, so
> >> a larger disk will take longer to format.
> >
> >
> > Thanks for the response Jason. Could you please explain a bit more on
> the the format operation?
>
> I'm not sure what else there is to explain. When you create a file
> system on top of any block device, it needs to initialize the block
> device. Depending on the file system, it might take more time for
> larger block devices because it's doing more work.
>
> > Is there a relative time that we can determine based on the volume size?
> >
> > Thanks
> > Shridhar
> >
> >
> >>
> >>
> >> > Thanks,
> >> > Shridhar
> >> > ___
> >> > ceph-users mailing list -- ceph-users@ceph.io
> >> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >> >
> >>
> >>
> >> --
> >> Jason
> >>
>
>
> --
> Jason
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] default.rgw.data.root pool

2020-07-09 Thread Seena Fallah
Hi all.

Is there any docs related to default.rgw.data.root pool? I have this
pool and there are no objects in default.rgw.meta pool.

Thanks for your help.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RBD thin provisioning and time to format a volume

2020-07-09 Thread Marc Roos
 
What about NTFS? It has a non-quick format option; maybe it writes some
pattern to the whole disk. Why do you ask?



-Original Message-
Cc: ceph-users
Subject: [ceph-users] Re: RBD thin provisioning and time to format a 
volume

Thanks Jason.

Do you mean to say some filesystems will initialize the entire disk 
during format? Does that mean we will see the entire size of the volume 
getting allocated during formatting?
Or do you mean to say, some filesystem formatting just takes longer than 
others, as it does more initialization?

I am just trying to understand if there are cases where Ceph will 
allocate all the blocks for a filesystem during formation operations, OR 
if they continue to be thin provisioned (allocate as you go based on 
real data). So far I have tried with ext3, ext4 and xfs and none of them 
dont allocate all the blocks during format.

-Shridhar


On Thu, 9 Jul 2020 at 06:58, Jason Dillaman  wrote:

> On Thu, Jul 9, 2020 at 12:02 AM Void Star Nill 
> 
> wrote:
> >
> >
> >
> > On Wed, Jul 8, 2020 at 4:56 PM Jason Dillaman 
> wrote:
> >>
> >> On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill 
> >> 
> wrote:
> >> >
> >> > Hello,
> >> >
> >> > My understanding is that the time to format an RBD volume is not
> dependent
> >> > on its size as the RBD volumes are thin provisioned. Is this 
correct?
> >> >
> >> > For example, formatting a 1G volume should take almost the same 
> >> > time
> as
> >> > formatting a 1TB volume - although accounting for differences in
> latencies
> >> > due to load on the Ceph cluster. Is that a fair assumption?
> >>
> >> Yes, that is a fair comparison when creating the RBD image. 
> >> However, a format operation might initialize and discard extents on 

> >> the disk, so a larger disk will take longer to format.
> >
> >
> > Thanks for the response Jason. Could you please explain a bit more 
> > on
> the the format operation?
>
> I'm not sure what else there is to explain. When you create a file 
> system on top of any block device, it needs to initialize the block 
> device. Depending on the file system, it might take more time for 
> larger block devices because it's doing more work.
>
> > Is there a relative time that we can determine based on the volume 
size?
> >
> > Thanks
> > Shridhar
> >
> >
> >>
> >>
> >> > Thanks,
> >> > Shridhar
> >> > ___
> >> > ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send 

> >> > an email to ceph-users-le...@ceph.io
> >> >
> >>
> >>
> >> --
> >> Jason
> >>
>
>
> --
> Jason
>
>
___
ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 
email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: post - bluestore default vs tuned performance comparison

2020-07-09 Thread Mark Nelson
I believe they were chosen based on a 3rd party recommendation. I would 
suggest carefully considering each of those options and what they do 
before blindly using them.
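
For example, the value actually in effect can be inspected before overriding it
(a sketch, assuming access to an OSD's admin socket):

# value the running OSD is currently using
ceph daemon osd.0 config get bluestore_rocksdb_options

# what the central config database would hand out to OSDs
ceph config get osd bluestore_rocksdb_options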



Mark


On 7/8/20 3:30 PM, Frank Ritchie wrote:

Hi,

For this post:

https://ceph.io/community/bluestore-default-vs-tuned-performance-comparison/

I don't see a way to contact the authors so I thought I would try here.

Does anyone know how the rocksdb tuning parameters of:

"
bluestore_rocksdb_options =
compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,compaction_style=kCompactionStyleLevel,write_buffer_size=67108864,target_file_size_base=67108864,max_background_compactions=31,level0_file_num_compaction_trigger=8,level0_slowdown_writes_trigger=32,level0_stop_writes_trigger=64,max_bytes_for_level_base=536870912,compaction_threads=32,max_bytes_for_level_multiplier=8,flusher_threads=8,compaction_readahead_size=2MB
"

were chosen?

Some of the settings seem to not be in line with the rocksdb tuning guide:

https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide

thx
Frank
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RBD thin provisioning and time to format a volume

2020-07-09 Thread Void Star Nill
On Thu, Jul 9, 2020 at 10:33 AM Marc Roos  wrote:

>
> What about ntfs? You have there a not quick option. Maybe it writes to
> the whole disk some random pattern. Why do you ask?
>

I am writing an API layer to plug into our platform, so I want to know whether
the format times are deterministic or unbounded. From what I saw with
ext3, ext4 and xfs volumes, the format time is actually not dependent on
the size of the volume, so I just wanted to confirm whether we can assume that,
or whether I am missing something.

Thanks,
Shridhar



>
>
> -Original Message-
> Cc: ceph-users
> Subject: [ceph-users] Re: RBD thin provisioning and time to format a
> volume
>
> Thanks Jason.
>
> Do you mean to say some filesystems will initialize the entire disk
> during format? Does that mean we will see the entire size of the volume
> getting allocated during formatting?
> Or do you mean to say, some filesystem formatting just takes longer than
> others, as it does more initialization?
>
> I am just trying to understand if there are cases where Ceph will
> allocate all the blocks for a filesystem during formation operations, OR
> if they continue to be thin provisioned (allocate as you go based on
> real data). So far I have tried with ext3, ext4 and xfs and none of them
> dont allocate all the blocks during format.
>
> -Shridhar
>
>
> On Thu, 9 Jul 2020 at 06:58, Jason Dillaman  wrote:
>
> > On Thu, Jul 9, 2020 at 12:02 AM Void Star Nill
> > 
> > wrote:
> > >
> > >
> > >
> > > On Wed, Jul 8, 2020 at 4:56 PM Jason Dillaman 
> > wrote:
> > >>
> > >> On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill
> > >> 
> > wrote:
> > >> >
> > >> > Hello,
> > >> >
> > >> > My understanding is that the time to format an RBD volume is not
> > dependent
> > >> > on its size as the RBD volumes are thin provisioned. Is this
> correct?
> > >> >
> > >> > For example, formatting a 1G volume should take almost the same
> > >> > time
> > as
> > >> > formatting a 1TB volume - although accounting for differences in
> > latencies
> > >> > due to load on the Ceph cluster. Is that a fair assumption?
> > >>
> > >> Yes, that is a fair comparison when creating the RBD image.
> > >> However, a format operation might initialize and discard extents on
>
> > >> the disk, so a larger disk will take longer to format.
> > >
> > >
> > > Thanks for the response Jason. Could you please explain a bit more
> > > on
> > the the format operation?
> >
> > I'm not sure what else there is to explain. When you create a file
> > system on top of any block device, it needs to initialize the block
> > device. Depending on the file system, it might take more time for
> > larger block devices because it's doing more work.
> >
> > > Is there a relative time that we can determine based on the volume
> size?
> > >
> > > Thanks
> > > Shridhar
> > >
> > >
> > >>
> > >>
> > >> > Thanks,
> > >> > Shridhar
> > >> > ___
> > >> > ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send
>
> > >> > an email to ceph-users-le...@ceph.io
> > >> >
> > >>
> > >>
> > >> --
> > >> Jason
> > >>
> >
> >
> > --
> > Jason
> >
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
> email to ceph-users-le...@ceph.io
>
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Bucket index logs (bilogs) not being trimmed automatically (multisite, ceph nautilus 14.2.9)

2020-07-09 Thread david . piper
Hi all,

We're seeing a problem in our multisite Ceph deployment, where bilogs aren't 
being trimmed for several buckets. This is causing bilogs to accumulate over 
time, leading to large OMAP object warnings for the indexes on these buckets.

In every case, Ceph reports that the bucket is in sync and the data is 
consistent across both sites, so we're perplexed as to why the logs aren't 
being trimmed. It's not affecting all of our buckets, we're not sure what's 
'different' in the affected cases causing them to accumulate. We're seeing this 
in both unsharded and sharded buckets. Some buckets with heavy activity (lots 
of object updates) have accumulated millions of bilogs, but this does not 
affect all of our very active buckets.

I've tried running 'radosgw-admin bilog autotrim' against an affected bucket, 
and it doesn't appear to do anything. I've used 'radosgw-admin bilog trim' with 
a suitable 'end-marker' to trim all of the bilogs, but the implications of 
doing this aren't clear to me, and the logs continue to accumulate afterwards.
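
For reference, the manual trim sequence I've been using is roughly the
following (a sketch; the bucket name is a placeholder and the end marker comes
from the newest entry in the bilog listing):

# find the newest bilog marker for the bucket
radosgw-admin bilog list --bucket=<bucket> --max-entries=<large enough> | grep '"op_id"' | tail -1

# trim everything up to that marker
radosgw-admin bilog trim --bucket=<bucket> --end-marker=<marker>

# confirm the index omap entries start shrinking again
rados -p siteA.rgw.buckets.index listomapkeys .dir.<bucket-instance-id> | wc -l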

We're running Ceph Nautilus 14.2.9 and running the services in containers, we 
have 3 hosts on each site with 1 OSD on each host.

We're adjusting a fairly minimal set of config and I don't think it includes 
anything that would affect bilog trimming. Checking the running config on the 
mon service, I think these defaulted parameters are relevant:

"rgw_sync_log_trim_concurrent_buckets": "4",
"rgw_sync_log_trim_interval": "1200",
"rgw_sync_log_trim_max_buckets": "16",
"rgw_sync_log_trim_min_cold_buckets": "4",

I can't find any documentation on these parameters but we have more than 16 
buckets, so is it possible that some buckets are just never being selected for 
trimming?

Any other ideas as to what might be causing this, or anything else we could try 
to help diagnose or fix this? Thanks in advance!


I've included an example below for one such affected bucket, showing its 
current state. Zone details (as per 'radosgw-admin zonegroup get') are at the 
bottom.

$ radosgw-admin bucket sync status --bucket=edin2z6-sharedconfig
  realm b7f31089-0879-4fa2-9cbc-cfdf5f866a35 (geored_realm)
  zonegroup 5d74eb0e-5d99-481f-ae33-43483f6cebc0 (geored_zg)
   zone c48f33ad-6d79-4b9f-a22f-78589f67526e (siteA)
 bucket 
edin2z6-sharedconfig[033709fc-924a-4582-b00d-97c90e9e61b6.3634407.1]

source zone 0a3c29b7-1a2c-432d-979b-d324a05cc831 (siteApubsub)
full sync: 0/1 shards
incremental sync: 0/1 shards
bucket is caught up with source
source zone 9f5fba56-4a32-46a6-8695-89253be81614 (siteB)
full sync: 0/1 shards
incremental sync: 1/1 shards
bucket is caught up with source
source zone c72b3aa8-a051-4665-9421-909510702412 (siteBpubsub)
full sync: 0/1 shards
incremental sync: 0/1 shards
bucket is caught up with source

$ radosgw-admin bilog list --bucket edin2z6-sharedconfig --max-entries 
6 | grep op_id | wc -l
1299392

$ rados -p siteA.rgw.buckets.index listomapkeys 
.dir.033709fc-924a-4582-b00d-97c90e9e61b6.3634407.1 | wc -l
1299083



$ radosgw-admin bucket stats --bucket=edin2z6-sharedconfig
{
"bucket": "edin2z6-sharedconfig",
"num_shards": 0,
"tenant": "",
"zonegroup": "5d74eb0e-5d99-481f-ae33-43483f6cebc0",   
"placement_rule": "default-placement",
"explicit_placement": {
"data_pool": "",
"data_extra_pool": "",
"index_pool": ""
},
"id": "033709fc-924a-4582-b00d-97c90e9e61b6.3634407.1",
"marker": "033709fc-924a-4582-b00d-97c90e9e61b6.3634407.1",
"index_type": "Normal",
"owner": "edin2z6",
"ver": "0#1622676",
"master_ver": "0#0",
"mtime": "2020-01-14 14:30:18.606142Z",
"max_marker": "0#1622675.2115836.5",
"usage": {
"rgw.main": {
"size": 15209,
"size_actual": 40960,
"size_utilized": 15209,
"size_kb": 15,
"size_kb_actual": 40,
"size_kb_utilized": 15,
"num_objects": 7
}
},
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
}
}

$ radosgw-admin bucket limit check
...
{
"bucket": "edin2z6-sharedconfig",
"tenant": "",
"num_objects": 7,
"num_shards": 0,
"objects_per_shard": 7,
"fill_status": "OK"
},
...

$ radosgw-admin zonegroup get
 ++ sudo docker ps --filter name=ceph-rgw-.*rgw -q
 ++  sudo docker exec d2c999b1f3f8 radosgw-admin
{
"id": "5d74eb0e-5d99-481f-ae33-43483f6cebc0",
"name": "geored_zg",
"api_name": "geored_zg",
"is_master": "true",
"endpoints": [
"https://10.254.2.93:7480";
],
"hostnames": [],
"hostnames_s3

[ceph-users] Lost Journals for XFS OSDs

2020-07-09 Thread Mike Dawson
Tonight an old Ceph cluster we run suffered a hardware failure that 
resulted in the loss of Ceph journal SSDs on 7 nodes out of 36. Overview 
of this old setup:


- Super-old Ceph Dumpling v0.67
- 3x replication for RBD w/ 3 failure domains in replication hierarchy
- OSDs on XFS on spinning disks with Journals on SSD

In total we lost 7 SSDs hosting Journals for 21 OSDs (3 each). The lost 
nodes span all three failure domains which makes me nervous that there 
are likely missing Placement Groups in the pool. Due to how Ceph shards 
data across the Placement Groups, I'm concerned I may have lost all the 
RBD volumes in this pool.


The obvious solution is to attempt to bring the OSDs back online (for at 
least one failure domain) to ensure there is at least one complete copy 
of the data then rebuild everything else. The issue is I lost the 
journals when the SSDs died.


I don't see much published about recovering OSDs in the event of a lost 
journal except:


https://ceph.io/geen-categorie/ceph-recover-osds-after-ssd-journal-failure/

And that doesn't mention if the data is valid afterwards. I think I 
recall Inktank used to deal with this situation and may have had a 
potential solution. At this point, I'll take any constructive advice.
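
For reference, the sequence described in that article for a FileStore OSD with
a dead journal boils down to roughly the sketch below (the osd id and partition
are placeholders; anything that was only in the lost journal is gone, so
consistency is not guaranteed and a deep scrub afterwards is essential):

# stop the OSD (it will not run without a journal anyway)
service ceph stop osd.N

# point the OSD at a replacement journal partition
ln -sf /dev/disk/by-partuuid/<new-journal-partition> /var/lib/ceph/osd/ceph-N/journal

# create a fresh, empty journal on it
ceph-osd -i N --mkjournal

# bring the OSD back, let it peer, then deep-scrub its PGs
service ceph start osd.N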


Thank you in advance,
Mike
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Error on upgrading to 15.2.4 / invalid service name using containers

2020-07-09 Thread Mario J . Barchéin Molina
Hello. I'm trying to upgrade to Ceph 15.2.4 from 15.2.3. The upgrade is
almost finished, but it has entered a service start/stop loop. I'm using
a container deployment on Debian 10 with 4 nodes. The problem is with a
service named literally "mds.label:mds". It contains the colon character, which
has special meaning in docker: it can't appear in the name of a container,
and it also breaks the volume binding syntax.

In /var/lib/ceph/UUID/ I can see the files for this service:

root@ceph-admin:/var/lib/ceph/0ce93550-b628-11ea-9484-f6dc192416ca# ls -la
total 48
drwx-- 12167 167 4096 jul 10 02:54 .
drwxr-x---  3 ceph   ceph4096 jun 24 16:36 ..
drwx--  3 nobody nogroup 4096 jun 24 16:37 alertmanager.ceph-admin
drwx--  3167 167 4096 jun 24 16:36 crash
drwx--  2167 167 4096 jul 10 01:35 crash.ceph-admin
drwx--  4998 996 4096 jun 24 16:38 grafana.ceph-admin
drwx--  2167 167 4096 jul 10 02:55
mds.label:mds.ceph-admin.rwmtkr
drwx--  2167 167 4096 jul 10 01:33 mgr.ceph-admin.doljkl
drwx--  3167 167 4096 jul 10 01:34 mon.ceph-admin
drwx--  2 nobody nogroup 4096 jun 24 16:38 node-exporter.ceph-admin
drwx--  4 nobody nogroup 4096 jun 24 16:38 prometheus.ceph-admin
drwx--  4 root   root4096 jul  3 02:43 removed


root@ceph-admin:/var/lib/ceph/0ce93550-b628-11ea-9484-f6dc192416ca/mds.label:mds.ceph-admin.rwmtkr#
ls -la
total 32
drwx--  2  167  167 4096 jul 10 02:55 .
drwx-- 12  167  167 4096 jul 10 02:54 ..
-rw---  1  167  167  295 jul 10 02:55 config
-rw---  1  167  167  152 jul 10 02:55 keyring
-rw---  1  167  167   38 jul 10 02:55 unit.configured
-rw---  1  167  167   48 jul 10 02:54 unit.created
-rw---  1 root root   24 jul 10 02:55 unit.image
-rw---  1 root root0 jul 10 02:55 unit.poststop
-rw---  1 root root  981 jul 10 02:55 unit.run

root@ceph-admin:/var/lib/ceph/0ce93550-b628-11ea-9484-f6dc192416ca/mds.label:mds.ceph-admin.rwmtkr#
cat unit.run
/usr/bin/install -d -m0770 -o 167 -g 167
/var/run/ceph/0ce93550-b628-11ea-9484-f6dc192416ca
/usr/bin/docker run --rm --net=host --ipc=host --name
ceph-0ce93550-b628-11ea-9484-f6dc192416ca-mds.label:mds.ceph-admin.rwmtkr
-e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=ceph-admin -v
/var/ru
n/ceph/0ce93550-b628-11ea-9484-f6dc192416ca:/var/run/ceph:z -v
/var/log/ceph/0ce93550-b628-11ea-9484-f6dc192416ca:/var/log/ceph:z -v
/var/lib/ceph/0ce93550-b628-11ea-9484-f6dc192416ca/crash:/var/lib/ceph/c
rash:z -v
/var/lib/ceph/0ce93550-b628-11ea-9484-f6dc192416ca/mds.label:mds.ceph-admin.rwmtkr:/var/lib/ceph/mds/ceph-label:mds.ceph-admin.rwmtkr:z
-v /var/lib/ceph/0ce93550-b628-11ea-9484-f6dc192416ca/mds.l
abel:mds.ceph-admin.rwmtkr/config:/etc/ceph/ceph.conf:z --entrypoint
/usr/bin/ceph-mds docker.io/ceph/ceph:v15 -n
mds.label:mds.ceph-admin.rwmtkr -f --setuser ceph --setgroup ceph
--default-log-to-file=fal
se --default-log-to-stderr=true --default-log-stderr-prefix="debug "

If I try to manually run the docker command, this is the error:

docker: Error response from daemon: Invalid container name
(ceph-0ce93550-b628-11ea-9484-f6dc192416ca-mds.label:mds.ceph-admin.rwmtkr),
only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed.

If I try with a different container name, then the volume binding error
rises:

docker: Error response from daemon: invalid volume specification:
'/var/lib/ceph/0ce93550-b628-11ea-9484-f6dc192416ca/mds.label:mds.ceph-admin.rwmtkr:/var/lib/ceph/mds/ceph-label:mds.ceph-admin.rwmtkr:z'.

This MDS is not needed and I would be happy to simply remove it, but I don't
know how to do that. The documentation explains how to do it for "normal"
services, but my installation is a container deployment. I have tried removing
the directory and restarting the upgrade process, but then the directory for
this service appears again.

Please, how can I remove or rename this service so I can complete the
upgrade?
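
A possible way out, if `ceph orch` accepts the colon in the name, would be to
remove the service spec so cephadm stops redeploying it (an untested sketch;
the service and daemon names are taken from the directory listing above):

# list the mds service specs cephadm keeps applying
ceph orch ls mds

# remove the offending service so it is no longer (re)deployed
ceph orch rm mds.label:mds

# if the daemon itself is left behind, remove it explicitly
ceph orch daemon rm mds.label:mds.ceph-admin.rwmtkr --force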

Also, I think it's a bug to allow docker-forbidden characters in service
names when using a container deployment; they should be validated.

Thank you very much.

-- 
*Mario J. Barchéin Molina*
*Departamento de I+D+i*
ma...@intelligenia.com
Madrid: +34 911 86 35 46
US: +1 (918) 856 - 3838
Granada: +34 958 07 70 70
――
intelligenia · Intelligent Engineering · Web & APP & Intranet
www.intelligenia.com  · @intelligenia  ·
fb.com/intelligenia · blog.intelligenia.com
Madrid · C/ de la Alameda 22, 28014, Madrid, Spain

Miami · 2153 Coral Way #400, Miami, FL, US, 33145

Granada‎ · C/ Luis Amador nº 24, 18014, Granada, Spain


[ceph-users] about replica size

2020-07-09 Thread Zhenshi Zhou
Hi,

As we all know, the default replica setting of 'size' is 3, which means there
are 3 copies of an object. What are the disadvantages if I set it to 2, other
than getting fewer copies?

Thanks
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: about replica size

2020-07-09 Thread Scottix
I think you said it yourself: you have fewer copies, which makes you more
prone to data loss. The other downside is that recovery could be slower,
because there would be only one other copy to recover from. You could look into
erasure coding if you are trying to save storage cost, but that requires more
CPU.
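
As a concrete illustration (the pool name is a placeholder), 'size' is a
per-pool setting and is easy to inspect and change; lowering it frees space
immediately, while raising it again triggers recovery to create the extra
copies:

# current replica count for a pool
ceph osd pool get mypool size

# drop to two copies -- the pool can then lose only one replica before data loss
ceph osd pool set mypool size 2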

On Thu, Jul 9, 2020 at 19:12 Zhenshi Zhou  wrote:

> Hi,
>
> As we all know, the default replica setting of 'size' is 3 which means
> there
> are 3 copies of an object. What is the disadvantages if I set it to 2,
> except
> I get fewer copies?
>
> Thanks
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
-- 
T: @Thaumion
IG: Thaumion
scot...@gmail.com
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: about replica size

2020-07-09 Thread Zhenshi Zhou
Hi, I'm not trying to save storage; I just want to know what would be impacted
if I modify the total number of object copies.

Scottix  wrote on Fri, Jul 10, 2020 at 10:52 AM:

> I think you said it yourself, you have fewer copies. Which make you more
> prone for data loss. The other downside is recovery could be slower because
> they're would only be one other copy to get it from. You could look into
> erasure coding if you are trying to save storage cost but that takes higher
> CPU process.
>
> On Thu, Jul 9, 2020 at 19:12 Zhenshi Zhou  wrote:
>
>> Hi,
>>
>> As we all know, the default replica setting of 'size' is 3 which means
>> there
>> are 3 copies of an object. What is the disadvantages if I set it to 2,
>> except
>> I get fewer copies?
>>
>> Thanks
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
> --
> T: @Thaumion
> IG: Thaumion
> scot...@gmail.com
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: about replica size

2020-07-09 Thread Lindsay Mathieson

On 10/07/2020 1:33 pm, Zhenshi Zhou wrote:

Hi, not trying to save storage, I just wanna know what would be impacted if
I modify the total number of object copies.


Storage is cheap, data is expensive.

--
Lindsay
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io