[ceph-users] Re: Change crush rule on pool

2020-09-12 Thread jesper
> I would like to change the crush rule so data lands on ssd instead of hdd,
> can this be done on the fly and migration will just happen or do I need to
> do something to move data?

I would actually like to relocate my object store to a new storage tier.
Is the best approach to:

1) create a new pool on the new storage tier (SSD)
2) stop activity
3) rados cppool the data to the new pool
4) rename the new pool back to "default.rgw.buckets.data"

Done?
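
In CLI terms that would be roughly the following; the new pool name and the
"rule-ssd" crush rule are placeholders, and the commands are an untested
sketch:

  # 1) new pool on the SSD tier, using an existing SSD-backed crush rule
  ceph osd pool create default.rgw.buckets.data.new 128 128 replicated rule-ssd

  # 2) stop all RGW activity, then
  # 3) copy the objects into the new pool
  rados cppool default.rgw.buckets.data default.rgw.buckets.data.new

  # 4) swap the names so RGW keeps using "default.rgw.buckets.data"
  ceph osd pool rename default.rgw.buckets.data default.rgw.buckets.data.old
  ceph osd pool rename default.rgw.buckets.data.new default.rgw.buckets.data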

Thanks.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Change crush rule on pool

2020-09-12 Thread Anthony D'Atri
If you have capacity to have both online at the same time, why not add the SSDs 
to the existing pool, let the cluster converge, then remove the HDDs?  Either 
all at once or incrementally?  With care you’d have zero service impact.  If 
you want to change the replication strategy at the same time, that would be 
more complex.
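
A rough sketch of the incremental variant, assuming the SSD OSDs have already
been added under the same crush root as the pool, and with osd.12 standing in
for one of the HDD OSDs:

  # drain the HDDs a few at a time
  ceph osd crush reweight osd.12 0
  # wait for backfill to finish and the cluster to settle, then repeat
  # for the next HDD; once an HDD is empty it can be removed
  ceph osd out osd.12
  ceph osd purge osd.12 --yes-i-really-mean-it   # after stopping the daemon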

— Anthony

> On Sep 12, 2020, at 12:42 AM, jes...@krogh.cc wrote:
> 
>> I would like to change the crush rule so data lands on ssd instead of hdd,
>> can this be done on the fly and migration will just happen or do I need to
>> do something to move data?
> 
> I would actually like to relocate my object store to a new storage tier.
> Is the best approach to:
> 
> 1) create a new pool on the new storage tier (SSD)
> 2) stop activity
> 3) rados cppool the data to the new pool
> 4) rename the new pool back to "default.rgw.buckets.data"
> 
> Done?
> 
> Thanks.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Change crush rule on pool

2020-09-12 Thread jesper

Can I do that when the SSDs are already used in another crush rule - backing and kvm_ssd RBDs?
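
For context, which rules exist and which pools use them can be checked
read-only; the rule name below is only an example:

  # list the crush rules, and inspect what device class / root one selects
  ceph osd crush rule ls
  ceph osd crush rule dump replicated_ssd

  # see which crush_rule each pool is currently assigned
  ceph osd pool ls detail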

Jesper



Sent from myMail for iOS


Saturday, 12 September 2020, 11.01 +0200 from anthony.da...@gmail.com:
>If you have capacity to have both online at the same time, why not add the 
>SSDs to the existing pool, let the cluster converge, then remove the HDDs?  
>Either all at once or incrementally?  With care you’d have zero service 
>impact.  If you want to change the replication strategy at the same time, that 
>would be more complex.
>
>— Anthony
>
>> On Sep 12, 2020, at 12:42 AM,  jes...@krogh.cc wrote:
>> 
>>> I would like to change the crush rule so data lands on ssd instead of hdd,
>>> can this be done on the fly and migration will just happen or do I need to
>>> do something to move data?
>> 
>> I would actually like to relocate my object store to a new storage tier.
>> Is the best approach to:
>> 
>> 1) create a new pool on the new storage tier (SSD)
>> 2) stop activity
>> 3) rados cppool the data to the new pool
>> 4) rename the new pool back to "default.rgw.buckets.data"
>> 
>> Done?
>> 
>> Thanks.
>> ___
>> ceph-users mailing list --  ceph-users@ceph.io
>> To unsubscribe send an email to  ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Choosing suitable SSD for Ceph cluster

2020-09-12 Thread Seena Fallah
Hi. Why do you say the 883DCT is faster than the 970 EVO?
I looked at the specifications and the 970 EVO has higher rated IOPS than the 883DCT!
Can you please explain why the 970 EVO performs worse than the 883DCT?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Choosing suitable SSD for Ceph cluster

2020-09-12 Thread Anthony D'Atri
Is this a reply to Paul’s message from 11 months ago?

https://bit.ly/32oZGlR

The PM1725b is interesting in that it has explicitly configurable durability vs 
capacity, which may be even more effective than user-level short-stroking / 
underprovisioning.


> 
> Hi. How do you say 883DCT is faster than 970 EVO?
> I saw the specifications and 970 EVO has higher IOPS than 883DCT!
> Can you please tell why 970 EVO act lower than 883DCT?

The thread above explains that.  Basically it’s not as simple as “faster”.  IOPS 
describe behavior along one axis, under a certain workload, for a certain length 
of time.  Subtle factors:

* With increasing block size, queue depth, or operation rate / duration, some 
less-robust drives will exhibit cliffing, where their performance falls off 
dramatically:


——
|
|
—

(that may or may not render usefully; your MUA may vary)

Or they may lose your data when there’s a power event.

* Is IOPS what you’re really concerned with?  As your OSD nodes are 
increasingly saturated by parallel requests (or if you’re overly aggressive 
with your PG ratio), you may see more IOPS / throughput, at the risk of 
latency going down the drain.  This may be reasonably acceptable for RGW 
bucket data, but maybe not for indexes, and for sure not for RBD volumes.

* The nature of the workload can dramatically affect performance (see the fio sketch after this list):

** block size
** queue depth
** r/w mix
** sync
** phoon
** etc
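
As an illustration only, those knobs map directly onto fio parameters; 
something like the sketch below, where the device path and all values are 
placeholders (and note that writing to a raw device destroys its contents):

  # 4 KiB blocks, queue depth 32, 70/30 read/write mix, fsync after each write
  fio --name=workload-sketch --filename=/dev/nvme0n1 --direct=1 \
      --ioengine=libaio --rw=randrw --rwmixread=70 --bs=4k \
      --iodepth=32 --numjobs=4 --fsync=1 \
      --runtime=300 --time_based --group_reporting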

Handling these workloads gracefully is one thing that (hopefully) distinguishes 
“enterprise” drives from “consumer” drives.  There’s one “enterprise” drive (now 
EOL) that turned out to develop UREs and dramatically increased latency when 
presented with an actual enterprise Ceph workload, as opposed to a desktop 
workload.  I fought that for a year and found that older drives actually fared 
better than newer ones, though the vendor denied any engineering or process 
change.  Consider the total cost of saving a few bucks on cheap drives that 
appear *on paper* to have attractive marketing specs, vs the nightmares you will 
face and the other things you won’t have time to work on if you’re consumed with 
pandemic drive failures.

Look up the history of performance-related firmware updates for the various 
840/860 EVO drives, even when used in desktops, which is not to say that the 
970 does or doesn’t exhibit the same or similar issues.  Consider whether you 
want to risk your corporate/production data, applications, and users on 
desktop-engineered drives.

In the end, you really need to buy or borrow eval drives and measure how they 
perform under both benchmarks and real workloads.  And Ceph mon / OSD service 
is *not* the same as any FIO or other benchmark tool load.

https://github.com/louwrentius/fio-plot

is a delightfully visual tool that shows the IOPS / BW / latency tradeoffs
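
For the raw-drive side of that comparison, one common starting point (a 
sketch, not a full methodology; the device path is a placeholder and the test 
overwrites it) is the single-job synchronous 4 KiB write test, which is much 
closer to what an OSD journal/WAL does to a drive than spec-sheet numbers:

  # WARNING: writes directly to the device and destroys its contents
  fio --name=sync-write-test --filename=/dev/nvme0n1 --direct=1 --fsync=1 \
      --ioengine=psync --rw=write --bs=4k --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based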

Ideally one would compare FIO benchmarks across drives and also provision 
multiple models on a given system, slap OSDs on them, throw your real workload 
at them, and after at least a month gather drive/OSD iops/latency/bw metrics 
for each and compare them.  I’m not aware of a simple tool to manage this 
process, though I’d love one.
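
For the in-cluster side, a couple of built-in commands give quick, if coarse, 
per-OSD numbers to line up against each drive model; osd.0 is a placeholder:

  # per-OSD commit/apply latency as currently seen by the cluster
  ceph osd perf

  # simple write benchmark against a single OSD
  ceph tell osd.0 bench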

ymmocv
— aad


> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

