> Op 10 januari 2017 om 22:05 schreef Nick Fisk <n...@fisk.me.uk>:
> 
> 
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
> Stuart Harland
> Sent: 10 January 2017 11:58
> To: Wido den Hollander <w...@42on.com>
> Cc: ceph new <ceph-users@lists.ceph.com>; n...@fisk.me.uk
> Subject: Re: [ceph-users] Write back cache removal
> 
>  
> 
> Yes Wido, you are correct. There is a RBD pool in the cluster, but is not 
> currently running with a cache attached. The Pool I’m trying to manage here 
> is only used by Librados to write objects directly to the pool as opposed to 
> any of the other niceties that ceph provides.
> 
>  
> 
> Specifically I ran:
> 
>  
> 
> `ceph osd tier cache-mode <hot-storage> forward`
> 
>  
> 
> which returned `Error EPERM: 'forward' is not a well-supported cache mode and 
> may corrupt your data.  pass --yes-i-really-mean-it to force.`
> 
>  
> 
> Currently we are running 10.2.5. I suspect that it’s fine in our use case, 
> however given the sparsity of the documentation I didn’t like to assume 
> anything.
> 
>  
> 
>  
> 
> Regards
> 
>  
> 
> Stuart
> 
>  
> 
>  
> 
> Yep, sorry, I got this post mixed up with the one from Daznis yesterday who 
> was using RBD’s. I think that warning was introduced as some bugs were found 
> that corrupted some users data after frequently switching between writeback 
> and forward modes. As it is very rarely used mode and so wasn’t worth the 
> testing I believe the decision was taken to just implement the warning. If 
> you are using it as part of removing a cache tier and you have already 
> flushed the tier, then I believe it should be fine to use. 
> 
>  

I suggest that you stop writes if possible so that nothing changes.

Then drain the cache and set the mode to forward.

Wido

> 
> Another way would probably be to set the min promote thresholds to higher 
> than your hit set counts, this will abuse the tiering logic but should also 
> stop anything getting promoted into your cache tier.
> 
>  
> 
>  
> 
>  
> 
>  
> 
> On 10 Jan 2017, at 09:52, Wido den Hollander <w...@42on.com 
> <mailto:w...@42on.com> > wrote:
> 
>  
> 
> 
> Op 10 januari 2017 om 9:52 schreef Nick Fisk <n...@fisk.me.uk 
> <mailto:n...@fisk.me.uk> >:
> 
> 
> 
> 
> 
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido 
> den Hollander
> Sent: 10 January 2017 07:54
> To: ceph new <ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com> >; 
> Stuart Harland <s.harl...@livelinktechnology.net 
> <mailto:s.harl...@livelinktechnology.net> >
> Subject: Re: [ceph-users] Write back cache removal
> 
> 
> 
> 
> 
> Op 9 januari 2017 om 13:02 schreef Stuart Harland 
> <s.harl...@livelinktechnology.net <mailto:s.harl...@livelinktechnology.net> >:
> 
> 
> Hi,
> 
> We’ve been operating a ceph storage system storing files using librados 
> (using a replicated pool on rust disks). We implemented a
> 
> cache over the top of this with SSDs, however we now want to turn this off.
> 
> 
> 
> 
> The documentation suggests setting the cache mode to forward before draining 
> the pool, however the ceph management
> 
> controller spits out an error about this saying that it is unsupported and 
> hence dangerous.
> 
> 
> 
>  
> 
> 
> What version of Ceph are you running?
> 
> And can you paste the exact command and the output?
> 
> Wido
> 
> 
> Hi Wido,
> 
> I think this has been discussed before and looks like it might be a current 
> limitation. Not sure if it's on anybody's radar to fix.
> 
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg24472.html
> 
> 
> Might be, but afaik they are using their own application which writes to 
> RADOS using librados, not RBD.
> 
> Is that correct Stuart?
> 
> Wido
> 
> 
> 
> 
> Nick
> 
> 
> 
> 
> 
> 
> 
> 
> The thing is I cannot really locate any documentation as to why it’s 
> considered unsupported and under what conditions it is expected
> 
> to fail: I have read a passing comment about EC pools having data corruption, 
> but we are using replicated pools.
> 
> 
> 
> 
> Is this something that is safe to do?
> 
> Otherwise I have noted the read proxy mode of cache tiers which is documented 
> as a mechanism to transition from write back to
> 
> disabled, however the documentation is even sparser on this than forward 
> mode. Would this be a better approach if there is some
> unsupported behaviour in the forward mode cache option?
> 
> 
> 
> 
> Any thoughts would be appreciated - we really cannot afford to corrupt the 
> data, and I really do not want to have to do some
> 
> manual software based eviction on this data.
> 
> 
> 
> 
> regards
> 
> Stuart
> 
> 
> − Stuart Harland:
> Infrastructure Engineer
> Email: s.harl...@livelinktechnology.net 
> <mailto:s.harl...@livelinktechnology.net>  
> <mailto:s.harl...@livelinktechnology.net>
> 
> 
> 
> LiveLink Technology Ltd
> McCormack House
> 56A East Street
> Havant
> PO9 1BS
> 
> IMPORTANT: The information transmitted in this e-mail is intended only for 
> the person or entity to whom it is addressed and may
> 
> contain confidential and/or privileged information. If you are not the 
> intended recipient of this message, please do not read, copy, use
> or disclose this communication and notify the sender immediately. Any review, 
> retransmission, dissemination or other use of, or
> taking any action in reliance upon this information by persons or entities 
> other than the intended recipient is prohibited. Any views or
> opinions presented in this e-mail are solely those of the author and do not 
> necessarily represent those of LiveLink. This e-mail
> message has been checked for the presence of computer viruses. However, 
> LiveLink is not able to accept liability for any damage
> caused by this e-mail.
> 
> 
> 
> 
> 
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com> 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com> 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
>  
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Reply via email to