Hello,
On Sat, 06 Sep 2014 10:28:19 -0700 JIten Shah wrote:
> Thanks Christian. Replies inline.
> On Sep 6, 2014, at 8:04 AM, Christian Balzer wrote:
>
> >
> > Hello,
> >
> > On Fri, 05 Sep 2014 15:31:01 -0700 JIten Shah wrote:
> >
> >> Hel
my systems don't support it.
>
Wasn't aware of that yet, thanks for that information.
But yeah, no support for that in any of my systems either, and probably not
for some time to come. ^^
Christian
> >
> > Thanks
> >
> > Andrei
> >
>
>
>
>
be to "use ceph osd
reweight" to lower the weight of osd.7 in particular.
Lastly, given the utilization of your cluster, you really ought to deploy
more OSDs and/or more nodes; if a node went down you'd easily get into
a "real" near full or full situation.
Rega
Hello,
On Mon, 08 Sep 2014 09:53:58 -0700 JIten Shah wrote:
>
> On Sep 6, 2014, at 8:22 PM, Christian Balzer wrote:
>
> >
> > Hello,
> >
> > On Sat, 06 Sep 2014 10:28:19 -0700 JIten Shah wrote:
> >
> >> Thanks Christian. Replies inline.
&
have is, will this pg number remain effective on the
> cluster, even if we restart MON or OSD’s on the individual disks? I
> haven’t changed the values in /etc/ceph/ceph.conf. Do I need to make a
> change to the ceph.conf and push that change to all the MON, MSD and
> OSD’s ?
>
>
is self-contained and all nodes in it are completely
> loaded (i.e., I can't add any more nodes nor disks). It's also not an
> option at the moment to upgrade to firefly (can't make a big change
> before sending it out the door).
>
>
>
> On 9/8/201
9533
> > volumes - 594078601
> > total used 2326235048 285923
> > total avail 1380814968
> > total space 3707050016
> >
> > So should I up the number of PGs for the rbd and volumes pools?
> >
> > I'
stat.
Christian
> Regards,
> Quenten Grasso
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Christian Balzer Sent: Sunday, 7 September 2014 1:38 AM
> To: ceph-users
> Subject: Re: [ceph-users] SSD journ
, JR wrote:
> > Hi Christian,
> >
> > Ha ...
> >
> > root@osd45:~# ceph osd pool get rbd pg_num
> > pg_num: 128
> > root@osd45:~# ceph osd pool get rbd pgp_num
> > pgp_num: 64
> >
> > That's the explanation! I did run the command b
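For anyone else hitting this: pgp_num has to be raised to match pg_num
before data actually gets redistributed, e.g. (value taken from the output
above):

  ceph osd pool set rbd pgp_num 128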
heads on VMs, thus using librbd
and having TRIM (with the correct disk device type). And again use
pacemaker to quickly fail over things.
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.
However if you're planning on growing this cluster further and your
current hardware has plenty of reserves, I would go with the 1024 PGs for
big pools and 128 or 256 for the small ones.
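For reference, raising the counts later (pool name hypothetical) is just:

  ceph osd pool set <poolname> pg_num 1024
  ceph osd pool set <poolname> pgp_num 1024

pg_num first, then pgp_num, and keep in mind that on these versions PG
counts can only ever be increased, never decreased.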
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.
pedia.org/wiki/Header_(computing)>, the data
> area and the error-correcting
> code <http://en.wikipedia.org/wiki/Error-correcting_code> (ECC)". So if
> the data is not correct, the disk can recover it or return an I/O error.
>
> Can anyone explain this?
>
http://
On Tue, 9 Sep 2014 10:57:26 -0700 Craig Lewis wrote:
> On Sat, Sep 6, 2014 at 9:27 AM, Christian Balzer wrote:
>
> > On Sat, 06 Sep 2014 16:06:56 + Scott Laird wrote:
> >
> > > Backing up slightly, have you considered RAID 5 over your SSDs?
> > &g
s is just a test cluster.
So finding bugs is hardly surprising, especially since you're using an
experimental backend on top of that.
Also, as this is a development version, you might get more feedback on
the ceph-devel mailing list.
Christian
--
Christian Balzer        Network/Sy
System:
> >> Linux ceph-osd-bs04 3.14-0.bpo.1-amd64 #1 SMP Debian 3.14.12-1~bpo70+1
> >> (2014-07-13) x86_64 GNU/Linux
> >>
> >> Since this is happening on other Hardware as well, I don't think it's
> >> Hardware related. I have no Idea if this is an OS issue (which w
not using these two functions to
> >> achieve more performance. Can anyone provide some hints?
> >> BR
> >>
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ce
e store systems will be both faster and more
reliable than BTRFS within a year or so.
Christian
> Thanks for helping
>
> Christoph
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph
of course
isn't supported yet by ZOL. ^o^
>
> On Thu, Sep 18, 2014 at 6:36 AM, Christian Balzer wrote:
>
> >
> > Hello,
> >
> > On Thu, 18 Sep 2014 13:07:35 +0200 Christoph Adomeit wrote:
> >
> >
> > > Presently we use Solaris ZFS Boxes
clusters.
There are export functions for RBD volumes, not sure about S3 and the
mtimes as I don't use that functionality.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
Hello,
On Sun, 21 Sep 2014 21:00:48 +0200 Udo Lembke wrote:
> Hi Christian,
>
> On 21.09.2014 07:18, Christian Balzer wrote:
> > ...
> > Personally I found ext4 to be faster than XFS in nearly all use cases
> > and the lack of full, real kernel integration of ZFS is
slots on a specific CPU might
actually make a lot of sense, especially if not all the traffic generated
by that card will have to be transferred to the other CPU anyway.
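A rough sketch of such pinning (interface name, IRQ number and CPU mask are
made up and need to be looked up in /proc/interrupts on the actual system):

  grep eth2 /proc/interrupts           # find the IRQ(s) used by the NIC
  echo 2 > /proc/irq/63/smp_affinity   # pin IRQ 63 to CPU1 (bitmask 0x2)

Alternatively irqbalance can be left to do this, but for a static setup like
a storage node doing it explicitly is often more predictable.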
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com
why you often see the need for adding a 2nd CPU to use all slots) and
> > in that case pinning the IRQ handling for those slots on a specific
> > CPU might actually make a lot of sense, especially if not all the
> > traffic generated by that card will have to be transferred to the other
On Mon, 22 Sep 2014 13:35:26 +0200 Udo Lembke wrote:
> Hi Christian,
>
> On 22.09.2014 05:36, Christian Balzer wrote:
> > Hello,
> >
> > On Sun, 21 Sep 2014 21:00:48 +0200 Udo Lembke wrote:
> >
> >> Hi Christian,
> >>
> >> On 21.09.2014
Hello,
On Mon, 22 Sep 2014 08:55:48 -0500 Mark Nelson wrote:
> On 09/22/2014 01:55 AM, Christian Balzer wrote:
> >
> > Hello,
> >
> > not really specific to Ceph, but since one of the default questions by
> > the Ceph team when people are facing performance p
e MON
with the lowest IP becomes master (and thus the busiest).
This way you can survive a loss of 2 nodes and still have a valid quorum.
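To check which monitor currently is the leader (exact output format varies a
bit between versions):

  ceph quorum_status --format json-pretty | grep quorum_leader_name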
Christian
> 2 x 10 GbE
>
> What do you think?
> Any feedbacks, advices, or ideas are welcome!
>
> Thanks so much
>
> Regards,
--
Hello,
On Wed, 4 Feb 2015 09:20:24 + Colombo Marco wrote:
> Hi Christian,
>
>
>
> On 04/02/15 02:39, "Christian Balzer" wrote:
>
> >On Tue, 3 Feb 2015 15:16:57 + Colombo Marco wrote:
> >
> >> Hi all,
> >> I hav
Hello,
re-adding the mailing list.
On Mon, 16 Feb 2015 17:54:01 +0300 Mike wrote:
> Hello
>
> 05.02.2015 08:35, Christian Balzer wrote:
> >
> > Hello,
> >
> >>>
> >>>> LSI 2308 IT
> >>>> 2 x SSD Intel DC S3700 400GB
d to update things constantly.
In general you will want the newest stable kernel you can run; from what I
remember, the 3.13 in one Ubuntu version was particularly bad.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion
lusters running mixed versions."
Which I interpret as major version differences anyway.
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
-ceph.com
> _______
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLin
ith a flat, host
only topology is having its data distributed as expected by the weights.
Christian
> Thanks and all the best,
>
> Frank
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listin
to fiddle around the quorum and starting the cluster with
> >>> only one mon, didn't help either. It behaves in the very same way:
> >>> Having the
> >>> mon stuck and the ceph-create-keys waiting forever.
> >>>
> >>> Could someone ple
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/
Another group in-house here runs XenServer, they would love to use a Ceph
cluster made by me for cheaper storage than NetAPP or 3PAR, but since
XenServer only supports NFS or iSCSI, I don't see that happening any time
soon.
Christian
>
> Kind regards
>
> Kevin Walker
> +968
rnals
(partitions had been created with fdisk before).
Christian
>
>
> To fix it I carried out the following steps:-
>
>
>
> 1. Used gdisk on both SSD's to create a new partition from sector
> 34 to 2047, of type EF02
>
> 2. Ran grub-install against
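For reference, a scripted equivalent of step 1 (device name and partition
number are hypothetical and depend on what is already on the SSD):

  sgdisk --new=3:34:2047 --typecode=3:ef02 /dev/sdX
  grub-install /dev/sdX

EF02 is the BIOS boot partition type that GRUB needs on GPT disks booted via
legacy BIOS.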
o DC level SSDs tend to wear out too fast,
have no power-loss capacitors and tend to have unpredictable (caused by garbage
collection) and steadily decreasing performance.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communicat
e left 50% underprovisioned. I've got 10GB for journals
> > and I am using 4 osds per ssd.
> >
> > Andrei
> >
> >
> > ------
> >
> > *From: *"Tony Harris"
> > *To: *"Andrei Mikhailovsky"
> > *Cc
On Sun, 1 Mar 2015 21:26:16 -0600 Tony Harris wrote:
> On Sun, Mar 1, 2015 at 6:32 PM, Christian Balzer wrote:
>
> >
> > Again, penultimately you will need to sit down, compile and compare the
> > numbers.
> >
> > Start with this:
> > http://ark.intel.
On Sun, 1 Mar 2015 22:47:48 -0600 Tony Harris wrote:
> On Sun, Mar 1, 2015 at 10:18 PM, Christian Balzer wrote:
>
> > On Sun, 1 Mar 2015 21:26:16 -0600 Tony Harris wrote:
> >
> > > On Sun, Mar 1, 2015 at 6:32 PM, Christian Balzer
> > > wrote:
> > >
RIM to clean up things.
OTOH with a 240GB drive you can leave most of it empty to prevent or at
least forestall these issues.
Finally, there is NO specification on the maker's homepage at all. So when it
comes to durability and the buzzwords plus Toshiba eMLC mentioned in the
reviews do not so
iostat I can see all the IO's are getting coalesced into nice
> large 512kb IO's at a high queue depth, which Ceph easily swallows.
>
>
>
> If librbd could support writing its cache out to SSD it would hopefully
> achieve the same level of performance and
-----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Christian Balzer
> Sent: 04 March 2015 08:40
> To: ceph-users@lists.ceph.com
> Cc: Nick Fisk
> Subject: Re: [ceph-users] Persistent Write Back Cache
>
>
> Hello,
>
>
a key indicator of the problem must be this from the
> > > monitor log:
> > >
> > > 2015-03-04 16:53:20.715132 7f3cd0014700 1
> > > mon.ceph-mon-00@0(leader).mds e1 warning, MDS mds.?
> > > [2001:8b0::5fb3::1fff::9054]:6800/4036 up but files
t may trigger movement *back* to the more correct location.
> For this reason, you must manually opt-in to the fixed behavior.
>
It would be nice to know at what version of Ceph those bugs were
introduced.
Christian
--
Christian Balzer        Network/Systems Engineer
c
> > sdc     0.00   0.00   0.00   0.00    0.00     0.00    0.00   0.00   0.00   0.00   0.00   0.00   0.00
> > sdd     0.00   0.00   0.00  76.00    0.00   760.00   20.00   0.99  13.11   0.00  13.11  13.05  99.20
>
Regards,
Christian
> Cheers,
> Josef
> _______
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com
forming a ceph-users based
> > performance team? There's a developer performance meeting which is
> > mainly concerned with improving the internals of Ceph. There is also a
> > raft of information on the mailing list archives where people have
> > said "hey look at my S
On Wed, 18 Mar 2015 08:59:14 +0100 Josef Johansson wrote:
> Hi,
>
> > On 18 Mar 2015, at 05:29, Christian Balzer wrote:
> >
> >
> > Hello,
> >
> > On Wed, 18 Mar 2015 03:52:22 +0100 Josef Johansson wrote:
>
[snip]
> >> We thought of doi
Hello,
On Wed, 18 Mar 2015 11:41:17 +0100 Francois Lafont wrote:
> Hi,
>
> Christian Balzer wrote :
>
> > Consider what you think your IO load (writes) generated by your
> > client(s) will be, multiply that by your replication factor, divide by
> > the number o
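To put illustrative numbers on that (mine, not from this thread): 1000 write
IOPS from the clients times a replication factor of 3 is 3000 IOPS hitting
the cluster, doubled again by filestore journals on the same disks to 6000;
spread over 30 OSDs that is 200 IOPS per OSD, more than a single 7200 RPM
SATA disk will sustain for random writes.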
On Mon, 23 Mar 2015 02:33:20 +0100 Francois Lafont wrote:
> Hi,
>
> Sorry Christian for my late answer. I was a little busy.
>
> Christian Balzer wrote:
>
> > You're asking the wrong person, as I'm neither a Ceph nor a kernel
> > developer. ^o^
>
>
educe amplification.
>
> I don't know how ssd internal algorithms work for this.
>
>
> ----- Original Message -----
> From: "aderumier"
> To: "Christian Balzer"
> Cc: "ceph-users"
> Sent: Monday, 23 March 2015 07:36:48
> Subject: Re: [
h cluster became degraded until
> I removed all affected tiered pools (cache & EC).
> So this is just my observation of what kind of problems can be faced if
> you choose the wrong filesystem for the OSD backend.
> And now I *strongly* recommend you to choose *XFS* or *Btrfs* filesy
W, is it through ceph-fs ? or rbd/rados ?
>
See the link below, it was rados bench.
But anything that would generate small writes would cause this, I bet.
> ----- Original Message -----
> From: "Christian Balzer"
> To: "ceph-users"
> Cc: "aderumier
ct size is 4Mb, and client was an /kernel-rbd /client
> > each SSD disk is 60G disk, 2 disk per node, 6 nodes in total = 12
> > OSDs in total
> >
> >
> > 23.03.2015 12:00, Christian Balzer wrote:
> >> Hello,
> >>
> >> Thi
> >> Does the time of the procedure depend on anything else besides the
> >> amount of
> >> data and the available connection (bandwidth)?
> >>
> >>
> >> Looking forward for your answers!
> >>
> >>
> >> All the best,
> >>
&
he OSDs
> run.
>
>
> Any suggestions where to look, or what could cause that problem?
> (because I can't believe you're losing that much performance through ceph
> replication)
>
> Thanks in advance.
>
> If you need any info please tell me.
>
&g
sired extra isize: 28
> Journal inode:8
> Default directory hash: half_md4
> Directory Hash Seed: 148ee5dd-7ee0-470c-a08a-b11c318ff90b
> Journal backup: inode blocks
>
> *fsck.ext4 /dev/sda1*
> e2fsck 1.42.5 (29-Jul-2012)
> /dev/sda1: clean, 33358
> This basically stopped IO on the cluster, and I had to revert it and
> restart some of the osds with requests stuck
>
>
>
> And I tried moving the monitor from a VM to the hardware where the OSDs
> run.
>
>
>
>
>
> Any suggestions where to look, or what could cause th
> How many iops do you see with "#ceph -w" ?
>
Very few with the cache enabled, but exactly the bandwidth indicated by
the test.
Another mystery solved. ^.^
And another data point for the OP on how to compare these results.
Christian
>
>
>
> ----- Original Message
> > net.core.rmem_max = 524287
> > net.core.wmem_max = 524287
> > net.core.rmem_default = 524287
> > net.core.wmem_default = 524287
> > net.core.optmem_max = 524287
> > net.core.netdev_max_backlog = 30
> >
> > AND
> >
> >
we are still
> going to start on XFS. Our plan is to build a cluster as a target for our
> backup system and we will put BTRFS on that to prove it in a production
> setting.
>
> Robert LeBlanc
>
> Sent from a mobile device please excuse any typos.
> On Mar 24, 2015 7:00
as you can afford.
Regards,
Christian
> thanks,
> Sreenath
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
--
Christian Balzer        Network/Systems Engineer
ch.
rather tricky to me.
Lastly, the fact that they "die" precisely at the time the TBW is exceeded
makes them particularly unattractive when compared to the same sized and
priced 3610, which is about 15 times cheaper per TB written.
Regards,
Christian
--
Christian Balzer        Network/Systems Eng
On Thu, 09 Apr 2015 02:33:44 + Shawn Edwards wrote:
> On Wed, Apr 8, 2015 at 9:23 PM Christian Balzer wrote:
>
> > On Wed, 08 Apr 2015 14:25:36 + Shawn Edwards wrote:
> >
> > > We've been working on a storage repository for xenserver 6.5, which
> &g
a mailbox server being somewhat
comparable) here with Platinum (supposedly 94% efficient) PSUs consumes,
while basically idle, 105W on the input side (100V in Japan) and 95W
on the output side.
This roughly triples during peak utilization times.
Christian
> Thanks in advance for your
o be deleted on the local filesystem of the OSD and various maps
updated cluster wide. Rinse and repeat until all objects have been dealt
with.
Quite a bit more involved, but that's the price you have to pay when you
have a DISTRIBUTED storage architecture that doesn't rely on a single item
(l
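If you want to see what an image decomposes into (pool and image name
hypothetical, format 2 image assumed):

  rbd info rbd/myimage              # shows the block_name_prefix
  rados -p rbd ls | grep rbd_data   # lists the individual (4MB) objects

which makes it fairly obvious why deleting a large image takes a while.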
ver. Painful for the same reasons as
> above, but in the opposite direction.
>
Probably the better of the 2 choices, as you gain many other improvements
as well. Including support of newer hardware. ^o^
Regards,
Christian
--
Christian Balzer
On Sun, 12 Apr 2015 18:03:52 +0200 Francois Lafont wrote:
> Hi,
>
> Christian Balzer wrote:
>
> >> I'm not sure to well understand: the model that I indicated in the
> >> link above (page 2, model SSG-6027R-OSD040H in the table) already
> >>
> > > > > > 2015-04-10 19:33:09.021851 7fad8e142700 0
> > > > > > log_channel(default)
> > > > > > log
> > > > > > [WRN] : slow request 94.284115 seconds old, received at
> > > > > > 2015-04-10
> > > > > > 19:31:34.737603: osd_sub_o
out-of-quorum monitors will probably keep trying to form a
> quorum and that might make mon.5 unhappy. You should test what happens
> with that kind of net split. :)
> -Greg
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
&g
ub.com/ceph/ceph/pull/3318.
> >
> > You could set osd_scrub_begin_hour and osd_scrub_end_hour to values which
> > are suitable for you. This feature is available since 0.93.
> >
> > But it has not been backported to 0.87 (giant).
> >
> > 2015-04-13 12:55 GMT+08:00 Lindsay Mathieson
&
start with something like 10 racks (ToR switches),
losing one switch would mean a loss of 10% of your cluster, which is
something it should be able to cope with.
Especially if you configured Ceph to _not_ start re-balancing data
automatically if a rack goes down (so that you have a chance to
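A minimal sketch of that, assuming the CRUSH map actually uses the default
"rack" type for these failure domains:

  [mon]
      mon osd down out subtree limit = rack

This keeps the cluster from automatically marking a whole rack worth of OSDs
out and re-balancing everything when just a ToR switch died.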
to totally flatten this cluster or, if pools with
important data exist, to copy them to new, correctly sized pools and
delete all the oversized ones after that.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Co
for me, as I have clusters of similar small size and only one type
of usage, RBD images. So they have 1 or 2 pools and that's it.
This also results in the smoothest data distribution possible of course.
Christian
> /Steffen
--
Christian Balzer        Network/Systems Engineer
re precisely?
>
You need to consider the speed of the devices, their local bus (hopefully
fast enough) and the network.
All things considered you probably want a redundant link (but with
bandwidth aggregation if both links are up).
10Gb/s per link would do, but 40Gb/s links (or your storage netw
Hello,
On Mon, 20 Apr 2015 04:16:01 +0200 Francois Lafont wrote:
> Hi,
>
> Christian Balzer wrote:
>
> > For starters, make that 5 MONs.
> > It won't really help you with your problem of keeping a quorum when
> > losing a DC, but being able to loo
about right.
PG counts should be powers of 2 if possible, so with 8-10 pools (of equal
expected data size) a value of 1024 would be better.
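The usual back-of-the-envelope formula, for anybody finding this in the
archives: (number of OSDs * 100) / replica count, rounded up to the next
power of 2, gives the total PG count across all pools. For example 30 OSDs
with replication 3 gives 1000, thus 1024 in total, to then be split up
according to the expected relative pool sizes.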
Regards,
Christian
> [client]
> rbd cache = true
> rbd cache writethrough until flush = true
>
> [osd]
> filestore_fd_cache_size = 100
On Mon, 20 Apr 2015 13:17:18 -0400 J-P Methot wrote:
> On 4/20/2015 11:01 AM, Christian Balzer wrote:
> > Hello,
> >
> > On Mon, 20 Apr 2015 10:30:41 -0400 J-P Methot wrote:
> >
> >> Hi,
> >>
> >> This is similar to another thread running rig
Hello,
On Tue, 21 Apr 2015 08:33:21 +0200 Götz Reinicke - IT Koordinator wrote:
> Hi Christian,
> Am 13.04.15 um 12:54 schrieb Christian Balzer:
> >
> > Hello,
> >
> > On Mon, 13 Apr 2015 11:03:24 +0200 Götz Reinicke - IT Koordinator
> > wrote:
> &g
e of thumb estimate :)?
>
> BTW: The numbers I got are from the recommendation and sample
> configurations from DELL, HP, Intel, Supermicron, Emulex, CERN and some
> more... Like this list.
>
> Thanks a lot for any suggestion and feedback . Regards . Götz
>
--
Chri
On Wed, 22 Apr 2015 13:50:21 -0500 Mark Nelson wrote:
>
>
> On 04/22/2015 01:39 PM, Francois Lafont wrote:
> > Hi,
> >
> > Christian Balzer wrote:
> >
> >>> thanks for the feedback regarding the network questions. Currently I
> >>> try to
o know
is that in all likelihood you won't need to stop the OSD to apply the
change.
As in, use the admin socket interface to inject the new setting into the
respective OSD.
Keeping ceph.conf up to date (if only for reference) is of course helpful,
too.
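A sketch of both ways (OSD number and setting are just examples):

  ceph tell osd.12 injectargs '--osd_max_backfills 1'    # via the monitors
  ceph daemon osd.12 config set osd_max_backfills 1      # local admin socket

Then mirror the same value in ceph.conf so a later restart doesn't silently
revert it.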
Christian
--
Christian B
from O_DSYNC writes.
>
They will get burned, as in have their cells worn out by any and all
writes.
> 5.In term of HBA controller, did you guys have made any test for a full
> SSD cluster or even just for SSD Journal.
>
If you have separate journals and OSDs, it often makes good sens
m able to reach around 20-25000 iops with 4k blocks with the s3500 (with
> o_dsync) (so yes, around 80-100MB/s).
>
> I'l bench new s3610 soon to compare.
>
>
> ----- Original Message -----
> From: "Anthony Levesque"
> To: "Christian Balzer"
> Cc: "ceph
.
Christian
> Has anyone tested a similar environment?
>
> Anyway guys, lets me know what you think since we are still testing this
> POC. ---
> Anthony Lévesque
>
>
> > On Apr 25, 2015, at 11:46 PM, Christian Balzer wrote:
> >
> >
> > Hel
e the
hardware would be capable of.
But then again, my guestimate is that aside from the significant code that
gets executed per Ceph IOP, any such Ceph IOP results in 5-10 real IOPs
down the line.
Christian
> Anyway still brainstorm this so we can work on some POC. Will you guys
> posted here
write bandwidth (about 70MB/s per SATA HDD).
IOPS (see thread above) are unlikely to be the limiting factor with SSD
journals.
>
> > > We're using Ceph for OpenStack storage (kvm). Enabling RBD cache
> > > didn't really help all that much.
> > The read s
1-(514)-907-0750
> aleves...@gtcomm.net <mailto:aleves...@gtcomm.net>
> http://www.gtcomm.net <http://www.gtcomm.net/>
> > On Apr 30, 2015, at 9:32 PM, Christian Balzer wrote:
> >
> > On Thu, 30 Apr 2015 18:01:44 -0400 Anthony Levesque wrote:
> >
> >> I
ounds far more sensible. I'd look at the I2 (iops) or D2
> (density) class instances, depending on use case.
>
It might still be a fool's errand; take a look at the recent/current thread
here called "long blocking with writes on rbds".
I'd make 200% sure that AWS
d writes, but that's something to keep
in mind, not a disadvantage per se.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
cep
to run out of CPU for IOPS long before scratching
the IOPS potential of these NVMe SSDs.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> _______
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
--
Christian Balzer        Network/Systems Engineer
ch...@gol.
lled and wake up, just to issue a
> > single command. Do you see any problems in triggering this command
> > automatically via a monitoring event? Is there a reason why ceph isn't
> > resolving these errors itself when it has enough replicas to do so?
> >
> > R
f failures.
>
> From the redundancy POV I'd go with standalone servers, but space
> could be a bit of a problem currently ....
>
> What do you think?
>
> Regards . Götz
--
Christian Balzer        Network/Systems Engineer
ch...@gol.co
for that.
Christian
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Monday, May 11, 2015 11:28 PM
> To: ceph-users@lists.ceph.com
> Cc: Somnath Roy; Loic Dachary (l...@dachary.org)
> Subject: Re: [ceph-us
for EC vs
> > replication.
> >
> Thanks, take the above parts into consideration for that.
>
> Christian
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: Christian Balzer [mailto:ch...@gol.com]
> > Sent: Monday, May 11, 20