I'm trying to configure an active/passive iSCSI gateway on OSD nodes serving an
RBD image. Clustering is done with pacemaker/corosync. Does anyone have a
similar working setup? Anything I should be aware of?
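The rough shape I have in mind is below - untested, and the resource agent
names/parameters, IQN, IP and image name are my assumptions, so please treat
it as a sketch only:

# Map the RBD image on whichever node is active. ocf:ceph:rbd ships with
# ceph-resource-agents on some distros; parameter names may differ, see
# 'pcs resource describe ocf:ceph:rbd'.
pcs resource create rbd-demo ocf:ceph:rbd \
    pool=rbd name=demo cephconf=/etc/ceph/ceph.conf user=admin
# LIO target and a LUN backed by the mapped device (/dev/rbd/<pool>/<image>).
pcs resource create iscsi-target ocf:heartbeat:iSCSITarget \
    implementation=lio-t iqn=iqn.2016-01.com.example:demo
pcs resource create iscsi-lun ocf:heartbeat:iSCSILogicalUnit \
    implementation=lio-t target_iqn=iqn.2016-01.com.example:demo \
    lun=0 path=/dev/rbd/rbd/demo
# Floating IP the initiators log in to.
pcs resource create iscsi-vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.200 cidr_netmask=24
# One group so everything starts in order and fails over together.
pcs resource group add iscsi-group rbd-demo iscsi-target iscsi-lun iscsi-vip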
Thanks
Dominik
On Mon, Jan 18, 2016 at 11:35 AM, Dominik Zalewski
wrote:
Hi,
I'm looking into implementing iscsi gateway with MPIO using lrbd -
https://github.com/swiftgist/lrbd
https://www.suse.com/docrep/documents/kgu61iyowz/suse_enterprise_storage_2_and_iscsi.pdf
https://www.susecon.com/doc/2015/sessions/TUT16512.pdf
From the above examples:
*For iSCSI failover and
Hi,
I would like to hear from people who use a cache tier in Ceph about best
practices and things I should avoid.
I remember hearing that it wasn't that stable back then. Has that changed in
the Hammer release?
Any tips and tricks are much appreciated!
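For context, the writeback wiring I've been looking at follows the upstream
cache tiering docs; the pool names and thresholds below are placeholders, not
recommendations:

ceph osd tier add cold-storage hot-cache          # attach cache pool to base pool
ceph osd tier cache-mode hot-cache writeback
ceph osd tier set-overlay cold-storage hot-cache  # route client IO via the cache
ceph osd pool set hot-cache hit_set_type bloom
ceph osd pool set hot-cache target_max_bytes 1099511627776   # ~1TiB cap, adjust
ceph osd pool set hot-cache cache_target_dirty_ratio 0.4
ceph osd pool set hot-cache cache_target_full_ratio 0.8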
Thanks
Dominik
urnal size = 1
On Wed, Aug 5, 2015 at 4:48 PM, Dominik Zalewski
wrote:
> Yes, there should be a separate partition per OSD. You are probably looking
> at a 10-20GB journal partition per OSD. If you are creating your cluster
> using ceph-deploy, it can create the journal partitions for you
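Concretely, something along these lines (host/device names are placeholders,
and the journal size value in ceph.conf is in MB):

# in ceph.conf before creating the OSDs - roughly 10GB journal per OSD:
#   [osd]
#   osd journal size = 10240
# HOST:DATA[:JOURNAL]; ceph-deploy carves a new journal partition on the
# shared SSD (/dev/sdc here) for each OSD it creates:
ceph-deploy osd create node1:/dev/sdb:/dev/sdc
ceph-deploy osd create node1:/dev/sdd:/dev/sdc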
"wear out" at the same time due to writes happening on both of them.
You're only going to get a journal write performance penalty with RAID-1.
Dominik
On Wed, Aug 5, 2015 at 3:37 PM, Dominik Zalewski
wrote:
> I would suggest splitting OSDs across two or more SSD journals (depending
> on OSD
Hi,
I’ve asked the same question last week or so (just search the mailing list
archives for EnhanceIO :) and got some interesting answers.
It looks like the project is pretty much dead since it was bought out by HGST.
Even their website has some broken links with regard to EnhanceIO.
I’m keen to try f
Hi,
I'm wondering if anyone is using NVMe SSDs for journals?
The Intel 750 series 400GB NVMe SSD offers good performance and price compared
to, say, the Intel S3700 400GB.
http://ark.intel.com/compare/71915,86740
My concern would be the MTBF/TBW, which is only 1.2M hours and 70GB per day
for 5yrs o
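Rough back-of-the-envelope numbers (please double-check the data sheets, I may
be misreading them): 70GB/day x 365 x 5 years is only about 128TB written in
total for the 750, whereas the S3700 400GB is rated for roughly 10 drive writes
per day, i.e. about 400GB x 10 x 365 x 5 ~= 7.3PB - a big difference for a
journal device that sees every client write.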
running it in writecache mode, there is no requirement I
> can think of for it to keep on running gracefully.
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Dominik Zalewski
> *Sent:* 26 June 2015 10:54
> *To:* Nick Fisk
> *Cc:* ceph-us
year.
Dominik
On Fri, Jun 26, 2015 at 10:28 AM, Nick Fisk wrote:
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Dominik Zalewski
> > Sent: 26 June 2015 09:59
> > To: ceph-users@lists.ceph.com
>
Hi,
I came across this blog post mentioning the use of EnhanceIO (a fork of
flashcache) as a cache for OSDs.
http://www.sebastien-han.fr/blog/2014/10/06/ceph-and-enhanceio/
https://github.com/stec-inc/EnhanceIO
I'm planning to test it with 5x 1TB HGST Travelstar 7k1000 2.5-inch OSDs,
using a 256GB Trans
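If I'm reading the EnhanceIO README right, setting up the cache should be
roughly the following (device names, cache name and the exact flags are from
memory, so treat this as a sketch and double-check against the README):

# -d source (slow) device, -s SSD partition used as cache,
# -p replacement policy, -m mode (wb = write-back), -c cache name
eio_cli create -d /dev/sda -s /dev/sdb1 -p lru -m wb -c osd0_cache
eio_cli info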
>
> Be warned that running SSD and HDD based OSDs in the same server is not
> recommended. If you need the storage capacity, I'd stick to the
> journals-on-SSDs plan.
Can you please elaborate more on why running SSD and HDD based OSDs in the
same server is not recommended?
Thanks
Dominik