[ceph-users] ceph nautilus repository index is incomplete

2020-07-09 Thread Francois Legrand
Hello, It seems that the index of https://download.ceph.com/rpm-nautilus/el7/x86_64/ repository is wrong. Only the 14.2.10-0.el7 version is available (all previous versions are missing despite the fact that the rpms are present in the repository). It thus seems that the index needs to be corre
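The mismatch between the repodata and the directory contents can be confirmed from a client; a sketch (read-only, against the real repo URLs):

```shell
# What the repo index advertises (this is what yum consults):
curl -s https://download.ceph.com/rpm-nautilus/el7/x86_64/repodata/repomd.xml | head

# Compare with the raw directory listing, where older builds are still present:
curl -s https://download.ceph.com/rpm-nautilus/el7/x86_64/ \
  | grep -o 'ceph-14\.2\.[0-9]*-0\.el7' | sort -u
```

If an older build is needed before the index is regenerated, the RPMs can still be downloaded directly from the listing and installed locally.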

[ceph-users] Ceph multisite secondary zone not sync new changes

2020-07-09 Thread Amit Ghadge
Hello All, In our test environment we set up Ceph multisite in Active/Passive. Cluster A was migrated to the master zone without deleting any data, and a fresh secondary zone was set up. First we stopped pushing data to the master zone and the secondary zone synced all buckets and objects, but 1 hour later it started uploa
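A first step when a secondary zone stops picking up changes is to ask the sync machinery what it thinks is pending; a sketch run on the secondary zone (the bucket name is a placeholder):

```shell
# Overall metadata/data sync progress for this zone:
radosgw-admin sync status

# Per-bucket view, useful when only some buckets lag:
radosgw-admin bucket sync status --bucket=mybucket

# Re-initialize sync state for a stuck bucket (use with care):
radosgw-admin bucket sync init --bucket=mybucket
```

Sync status usually distinguishes "behind on shards" (catching up) from errors, which narrows down whether the problem is throughput or a stalled shard.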

[ceph-users] Re: bluestore: osd bluestore_allocated is much larger than bluestore_stored

2020-07-09 Thread Jerry Pu
Hi Igor, We are curious why blob garbage collection is not backported to mimic or luminous? https://github.com/ceph/ceph/pull/28229 Thanks, Jerry. Jerry Pu wrote on Wed, Jul 8, 2020 at 6:04 PM: > OK. Thanks for your reminder. We will think about how to make the > adjustment to our cluster. > > Best > Jerry Pu

[ceph-users] Re: bluestore: osd bluestore_allocated is much larger than bluestore_stored

2020-07-09 Thread Igor Fedotov
Hi Jerry, we haven't heard of frequent occurrences of this issue, and the backport didn't look trivial, hence we decided to omit it for Mimic and Luminous. Thanks, Igor On 7/9/2020 1:50 PM, Jerry Pu wrote: Hi Igor We are curious why blob garbage collection is not backported to mimic or luminous?

[ceph-users] Re: bluestore: osd bluestore_allocated is much larger than bluestore_stored

2020-07-09 Thread Jerry Pu
Understood. Thank you! Best, Jerry. Igor Fedotov wrote on Thu, Jul 9, 2020 at 18:56: > Hi Jerry, > > we haven't heard about frequent occurrences of this issue and the backport > didn't look trivial hence we decided to omit it for M and L. > > > Thanks, > > Igor > On 7/9/2020 1:50 PM, Jerry Pu wrote: > > Hi I

[ceph-users] Re: Questions on Ceph on ARM

2020-07-09 Thread norman
Anthony, I just used normal HDDs. I intend to test the same HDDs on two clusters, x86 and ARM, to compare the CephFS performance difference. Best regards, Norman On 8/7/2020 at 11:51 AM, Anthony D'Atri wrote: Bear in mind that ARM and x86 are architectures, not CPU models. Both are available in a vast variety o

[ceph-users] Re: Questions on Ceph on ARM

2020-07-09 Thread norman
Aaron, It's the same consideration I had: if I mix them, I worry about performance jitter. Best regards, Norman On 8/7/2020 at 1:33 PM, Aaron Joue wrote: Hi Norman There is no fixed percentage for that. If you mix slow and fast OSDs in a PG, the overall performance of the pool will be af

[ceph-users] Re: RBD thin provisioning and time to format a volume

2020-07-09 Thread Jason Dillaman
On Thu, Jul 9, 2020 at 12:02 AM Void Star Nill wrote: > > > > On Wed, Jul 8, 2020 at 4:56 PM Jason Dillaman wrote: >> >> On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill >> wrote: >> > >> > Hello, >> > >> > My understanding is that the time to format an RBD volume is not dependent >> > on its size

[ceph-users] bucket index nvme

2020-07-09 Thread Szabo, Istvan (Agoda)
Hello, Can someone explain to me a bit about the objectstore indexing? It's not really clear: Red Hat says one of the important tunings for objectstore is to put the indexes on a fast drive, yet when I check our current ceph cluster I see petabytes of read operations but the size of the index p
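The apparent mismatch is because the index pool holds OMAP (RocksDB) metadata rather than object payload, so its byte size stays small even while serving heavy read traffic. A sketch of moving the RGW index pool onto NVMe via a device-class CRUSH rule (the rule name is an example):

```shell
# Rule targeting OSDs whose device class is "nvme", replicated across hosts:
ceph osd crush rule create-replicated index-on-nvme default host nvme

# Point the bucket-index pool at that rule:
ceph osd pool set default.rgw.buckets.index crush_rule index-on-nvme

# Verify placement and per-pool stats:
ceph df detail
```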

[ceph-users] Re: Ceph df Vs Dashboard pool usage mismatch

2020-07-09 Thread Ernesto Puerta
Hi Richard, Here you can find the PR for this issue. Feel free to leave your feedback. Thanks! Kind regards, Ernesto ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-

[ceph-users] Re: RBD thin provisioning and time to format a volume

2020-07-09 Thread Void Star Nill
Thanks Jason. Do you mean to say some filesystems will initialize the entire disk during format? Does that mean we will see the entire size of the volume getting allocated during formatting? Or do you mean to say, some filesystem formatting just takes longer than others, as it does more initializa

[ceph-users] default.rgw.data.root pool

2020-07-09 Thread Seena Fallah
Hi all. Is there any docs related to default.rgw.data.root pool? I have this pool and there are no objects in default.rgw.meta pool. Thanks for your help. ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...
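The pool contents themselves can be inspected directly; a sketch (note that in older RGW layouts default.rgw.data.root held bucket instance/placement metadata, which later releases moved under default.rgw.meta, so an empty pool may simply be a layout difference):

```shell
# List raw RADOS objects in the pool:
rados -p default.rgw.data.root ls

# Show the zone's pool placement configuration, which names these pools:
radosgw-admin zone get
```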

[ceph-users] Re: RBD thin provisioning and time to format a volume

2020-07-09 Thread Marc Roos
What about NTFS? It has a non-quick format option; maybe that writes some random pattern to the whole disk. Why do you ask? -Original Message- Cc: ceph-users Subject: [ceph-users] Re: RBD thin provisioning and time to format a volume Thanks Jason. Do you mean to say some filesystems

[ceph-users] Re: post - bluestore default vs tuned performance comparison

2020-07-09 Thread Mark Nelson
I believe they were chosen based on a 3rd party recommendation. I would suggest carefully considering each of those options and what they do before blindly using them. Mark On 7/8/20 3:30 PM, Frank Ritchie wrote: Hi, For this post: https://ceph.io/community/bluestore-default-vs-tuned-perf

[ceph-users] Re: RBD thin provisioning and time to format a volume

2020-07-09 Thread Void Star Nill
On Thu, Jul 9, 2020 at 10:33 AM Marc Roos wrote: > > What about ntfs? You have there a not quick option. Maybe it writes to > the whole disk some random pattern. Why do you ask? > I am writing an API layer to plug into our platform, so I want to know if the format times can be deterministic or u
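Format time on a thin-provisioned image depends mostly on how much the filesystem initializes up front, not on the image size itself; a sketch with ext4's lazy-init switches (pool, image, and device names are placeholders):

```shell
rbd create mypool/vol1 --size 100G
rbd map mypool/vol1                 # maps to e.g. /dev/rbd0

# ext4 defers inode-table and journal init by default, so format is fast
# and roughly size-independent:
mkfs.ext4 -E lazy_itable_init=1,lazy_journal_init=1 /dev/rbd0

# Forcing full initialization writes the tables at mkfs time, so format
# time (and allocated space on the thin image) grows with size:
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/rbd0
```

With lazy init enabled, the deferred work happens in the background after first mount, which is one reason format times can look non-deterministic across filesystems.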

[ceph-users] Bucket index logs (bilogs) not being trimmed automatically (multisite, ceph nautilus 14.2.9)

2020-07-09 Thread david.piper
Hi all, We're seeing a problem in our multisite Ceph deployment, where bilogs aren't being trimmed for several buckets. This is causing bilogs to accumulate over time, leading to large OMAP object warnings for the indexes on these buckets. In every case, Ceph reports that the bucket is in sync
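Bilogs are only trimmed automatically once every peer zone has consumed them, so a per-bucket look at sync progress usually explains the accumulation; a sketch (the bucket name is a placeholder):

```shell
# Sample the untrimmed index log entries for the bucket:
radosgw-admin bilog list --bucket=mybucket --max-entries=10

# Check whether peer zones report this bucket as caught up:
radosgw-admin bucket sync status --bucket=mybucket

# Manual trim, if the entries are confirmed consumed everywhere:
radosgw-admin bilog trim --bucket=mybucket
```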

[ceph-users] Lost Journals for XFS OSDs

2020-07-09 Thread Mike Dawson
Tonight an old Ceph cluster we run suffered a hardware failure that resulted in the loss of Ceph journal SSDs on 7 nodes out of 36. Overview of this old setup: - Super-old Ceph Dumpling v0.67 - 3x replication for RBD w/ 3 failure domains in replication hierarchy - OSDs on XFS on spinning disks
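With FileStore, a lost journal means any unflushed writes on that OSD are gone; one commonly described recovery path is to recreate an empty journal and let replication reconcile the data, at the risk of leaving the local store inconsistent. A hedged sketch (the OSD id is a placeholder, and rebuilding the OSD from its replicas is the safer alternative):

```shell
ceph osd set noout                 # avoid rebalancing while OSDs are down
ceph-osd -i 12 --mkjournal         # write a fresh journal at the configured path
# start the OSD, then deep-scrub/repair its PGs so replicas reconcile
ceph osd unset noout
```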

[ceph-users] Error on upgrading to 15.2.4 / invalid service name using containers

2020-07-09 Thread Mario J. Barchéin Molina
Hello. I'm trying to upgrade from ceph 15.2.3 to 15.2.4. The upgrade is almost finished, but it has entered a service start/stop loop. I'm using a container deployment over Debian 10 with 4 nodes. The problem is with a service named literally "mds.label:mds". It has the colon character, which is
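One way to get the malformed entry out of the orchestrator is to remove the service spec by its literal name; a sketch (quoting keeps the colon intact through the shell):

```shell
# See what cephadm thinks the service is called:
ceph orch ls

# Remove the malformed service spec; the MDS can then be re-added
# under a valid name:
ceph orch rm "mds.label:mds"
```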

[ceph-users] about replica size

2020-07-09 Thread Zhenshi Zhou
Hi, As we all know, the default replica setting of 'size' is 3, which means there are 3 copies of an object. What are the disadvantages if I set it to 2, other than getting fewer copies? Thanks ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe se
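The capacity side of the trade-off is simple arithmetic, and the availability side hinges on min_size; a sketch (the pool name is a placeholder, and 100 TB raw is an assumed figure):

```shell
# Usable capacity scales as raw / size:
raw_tb=100
for size in 2 3; do
  echo "size=$size usable_tb=$((raw_tb / size))"
done

# Pool settings; with size=2, min_size=2 blocks I/O on any single OSD
# failure, while min_size=1 accepts writes with no redundancy at all:
# ceph osd pool set mypool size 2
# ceph osd pool set mypool min_size 2
```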

[ceph-users] Re: about replica size

2020-07-09 Thread Scottix
I think you said it yourself: you have fewer copies, which makes you more prone to data loss. The other downside is that recovery could be slower, because there would only be one other copy to get it from. You could look into erasure coding if you are trying to save storage cost, but that takes higher C

[ceph-users] Re: about replica size

2020-07-09 Thread Zhenshi Zhou
Hi, not trying to save storage, I just want to know what would be impacted if I modify the total number of object copies. Scottix wrote on Fri, Jul 10, 2020 at 10:52 AM: > I think you said it yourself, you have fewer copies. Which make you more > prone for data loss. The other downside is recovery could be slowe

[ceph-users] Re: about replica size

2020-07-09 Thread Lindsay Mathieson
On 10/07/2020 1:33 pm, Zhenshi Zhou wrote: Hi, not trying to save storage, I just wanna know what would be impacted if I modify the total number of object copies. Storage is cheap, data is expensive. -- Lindsay ___ ceph-users mailing list -- ceph-use