--- Original message ---
Subject: Re: [ceph-users] RBD as backend for iSCSI SAN Targets
From: Jianing Yang <jianingy.y...@gmail.com>
To: Karol Kozubal <karol.kozu...@elits.com>
Cc: ceph-users@lists.ceph.com <ceph-users@lists.ceph.com>
Date: Tuesday, 01/04/2014 07:39
On Fri 28 Mar 2014 08:55:30 AM CST, Karol Kozubal wrote:
> Hi Jianing,
>
> Sorry for the late reply, I missed your contribution to the thread.
> Thank you for your response. I am still waiting for some of my hardware
> and will begin testing the new setup with firefly once it is available as
> a long-term support release. I am looking forward to testing the new setup.
>
> I am curious about more details on your proxy node configuration for the
> tgt daemons. Does your setup tolerate node failure on the iSCSI end of
> things, and if so, how is it configured?
Actually, the fault tolerance is provided by the MS Exchange servers. We
just set up two proxy nodes (servers with the tgt daemon), one for the
master database and the other for the backup database. The Exchange
servers handle the switchover on failure.
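For reference, a minimal sketch of what an RBD-backed tgt target could look
like on such a proxy node; the IQN, pool and image names here are
hypothetical, and this assumes a tgt build with the rbd backing-store type
compiled in:

    # /etc/tgt/conf.d/exchange.conf (same definition on both proxy nodes)
    <target iqn.2014-04.com.example:exchange-db>
        driver iscsi
        bs-type rbd
        backing-store exchange-pool/db-master
    </target>

With the same image exported from both nodes and no writeback caching on the
targets, either node can present the LUN and the initiator side (here,
Exchange) decides which target to use and when to switch.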
> Thanks,
> Karol
> On 2014-03-19, 6:58 AM, "Jianing Yang" <jianingy.y...@gmail.com> wrote:
> > Hi, Karol
> >
> > Here is something that I can share. We are running Ceph as an Exchange
> > backend via iSCSI. We currently host about 2000 mailboxes, which is about
> > 7 TB of data overall. Our configuration is:
> >
> > - Proxy node (with tgt daemon) x 2
> > - Ceph monitor x 3 (virtual machines)
> > - Ceph OSD x 50 (SATA 7200 rpm, 2 TB), replica = 2, journal on OSD
> >   (I know it is bad, but ...)
> >
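The replica count mentioned above is a per-pool setting; as a sketch with a
hypothetical pool name, it would be set like this:

    ceph osd pool set exchange-pool size 2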
> > We tested RBD using fio and got randwrite results of around 1500 IOPS. On
> > the live system, I have seen op/s peak at around 3.1k.
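As a point of reference, a randwrite test of that sort could be run with a
fio job along the following lines; this is only a sketch against a
kernel-mapped RBD device, and the device path and sizes are illustrative:

    # map the image first, e.g.: rbd map exchange-pool/db-master  -> /dev/rbd0
    [rbd-randwrite]
    ioengine=libaio
    direct=1
    filename=/dev/rbd0
    rw=randwrite
    bs=4k
    iodepth=32
    runtime=60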
> >
> > I've benchmarked "tgt with librbd" vs "tgt with kernel rbd" using my
> > virtual machines. It seems that "tgt with librbd" doesn't perform well:
> > it achieves only about 1/5 of the IOPS of kernel rbd.
> >
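The two variants being compared correspond roughly to the following target
definitions (a sketch with hypothetical names; the first goes through librbd
in userspace, the second exports a kernel-mapped block device):

    # tgt with librbd (userspace)
    <target iqn.2014-04.com.example:rbd-userspace>
        bs-type rbd
        backing-store testpool/testimg
    </target>

    # tgt with kernel rbd: map the image first, then export the device
    #   rbd map testpool/testimg   -> /dev/rbd0
    <target iqn.2014-04.com.example:rbd-kernel>
        backing-store /dev/rbd0
    </target>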
> > We are new to Ceph and still finding ways to improve the performance.
> > I am really looking forward to your benchmark.
> >
> > On Sun 16 Mar 2014 12:40:53 AM CST, Karol Kozubal wrote:
> >
> > > Hi Wido,
> > >
> > > I will have some new hardware for running tests in the next two weeks
> > > or so and will report my findings once I get a chance to run some
> > > tests. I will disable writeback on the target side, as I will be
> > > attempting to configure an SSD caching pool of 24 SSDs with writeback
> > > in front of the main pool of 360 disks, with a ratio of 5 spinner OSDs
> > > to 1 SSD journal. I will be running everything through 10 Gig SFP+
> > > Ethernet interfaces, with a dedicated cluster network interface, a
> > > dedicated public Ceph interface, and a separate iSCSI network, also on
> > > 10 Gig interfaces, for the target machines.
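For what it's worth, a writeback cache tier of that kind is set up in firefly
roughly as follows; this is a sketch, with hypothetical pool names and an
illustrative PG count:

    # create the SSD-backed cache pool
    ceph osd pool create ssd-cache 1024 1024
    # attach it as a writeback tier in front of the main pool
    ceph osd tier add rbd-main ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay rbd-main ssd-cache
    # the cache pool needs a hit set so the agent can track object usage
    ceph osd pool set ssd-cache hit_set_type bloom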
>
> > > I am ideally looking for 20,000 to 60,000 IOPS from this system if I
> > > can get the caching pool configuration right. The application has a
> > > 30 ms maximum latency requirement for the storage.
>
> > > In my current tests I have only spinners: SAS 10K disks with a 4.2 ms
> > > write latency, with separate journaling on SAS 15K disks with a 3.3 ms
> > > write latency. With 20 OSDs and 4 journals, my only concern is the
> > > overall operation apply latency I have been seeing (1-6 ms when idle
> > > is normal, but up to 60-170 ms under a moderate workload using rbd
> > > bench-write). However, I am on a network where I am bound to a 1500
> > > MTU, and with the next setup I will get to test jumbo frames in
> > > addition to the SSDs. I suspect the overall performance will be good
> > > in the new test setup and I am curious to see what my tests will
> > > yield.
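The rbd bench-write figures above come from the benchmark built into the rbd
tool; a run of that sort might look like this (pool/image name and sizes are
illustrative):

    # 4 KB writes, 16 threads, 1 GB total, against an existing image
    rbd bench-write rbd-main/testimg --io-size 4096 --io-threads 16 --io-total 1073741824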
>
> > > Thanks for the response!
> > >
> > > Karol
> > >
> > > On 2014-03-15, 12:18 PM, "Wido den Hollander" <w...@42on.com> wrote:
>
> > > > On 03/15/2014 04:11 PM, Karol Kozubal wrote:
> > > >> Hi Everyone,
> > > >>
> > > >> I am just wondering if any of you are running a Ceph cluster with
> > > >> an iSCSI target front end? I know this isn't available out of the
> > > >> box; unfortunately, in one particular use case we are looking at
> > > >> providing iSCSI access and it's a necessity. I am liking the idea
> > > >> of having RBD devices serving block-level storage to the iSCSI
> > > >> target servers while providing a unified backend for native RBD
> > > >> access by OpenStack and various application servers. On multiple
> > > >> levels this would reduce the complexity of our SAN environment and
> > > >> move us away from expensive proprietary solutions that don't scale
> > > >> out.
> > > >>
> > > >> If any of you have deployed any HA iSCSI targets backed by RBD I
> > > >> would really appreciate your feedback and any thoughts.
> > > >>
> > > >
> > > > I haven't used it in production, but a couple of things which come
> > > > to mind:
> > > >
> > > > - Use TGT so you can run it all in userspace backed by librbd
> > > > - Do not use writeback caching on the targets
> > > >
> > > > You could use multipathing if you don't use writeback caching. Using
> > > > writeback would also cause data loss/corruption in the case of
> > > > multiple targets.
> > > >
> > > > It will probably just work with TGT, but I don't know anything about
> > > > the performance.
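Following that advice, client-side writeback caching for librbd-backed
targets is controlled from ceph.conf on the target (proxy) nodes; a minimal
sketch:

    # ceph.conf on the iSCSI target nodes
    [client]
        rbd cache = false

With caching off on both target nodes, initiators can then use multipathing
across the two targets, as suggested above.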
> > >
> > > >> Karol
> > > >>
> > > > --
> > > > Wido den Hollander
> > > > 42on B.V.
> > > >
> > > > Phone: +31 (0)20 700 9902
> > > > Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com