On Fri, 03 Oct 2014 11:56:42 +0200 Massimiliano Cuttini wrote:
>
> Il 02/10/2014 17:24, Christian Balzer ha scritto:
> > On Thu, 02 Oct 2014 12:20:06 +0200 Massimiliano Cuttini wrote:
> >> Il 02/10/2014 03:18, Christian Balzer ha scritto:
> >>> On Wed, 01 Oct
ly
flogged as 40Gb/s even though it is 32Gb/s, but who's counting), which
tends to be the same basic hardware as the 10Gb/s Ethernet offerings from
Mellanox?
A brand new 18-port switch of that caliber will only cost about $180 per
port, too.
Christian
--
Christian Balzer        Network/Systems Engineer
"infiniband ethernet bridge".
But even "infiniband ethernet switch" has a link telling you pretty much
what was said here now at the 6th position:
http://www.tomshardware.com/forum/44997-42-connect-infiniband-switch-ethernet
Christian
> > Thanks,
> > Max
> >
^o^
Christian
> On Tue Oct 07 2014 at 5:34:57 PM Christian Balzer wrote:
>
> > On Tue, 07 Oct 2014 20:40:31 + Scott Laird wrote:
> >
> > > I've done this two ways in the past. Either I'll give each machine
> > > an Infiniband network link and a 10
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Christian Balzer        Network/Systems Engineer
5
> >>> sdc     0.00   69.00    0.00  4855.50     0.00  105771.50
> >>>        21.78    0.95    0.20     0.10    46.40
> >>> dm-0    0.00    0.00    0.00     0.00     0.00       0.00
> >>>         0.00    0.00    0.00     0.00     0.00
(both with 0.80.1 and 0.80.6)?
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
I of course know that, that is how I found out that things didn't get
picked up.
>
> While we try to figure this out, you can tell the running daemons to use
> your values with:
> ceph tell osd.\* injectargs '--osd_op_threads 10'
>
That I'm also aware o
oute, as we do
> not know anything yet. But we are unsuccessful. We can go as far as
> creating a virtual machine but it fails as provisioning stage (i.e.
> mons;osds;mdss;rgws etc do not get created)
>
>
>
>
>
> Any suggestions?
>
>
>
>
>
> Thanks
e journals on them. How to solve this ?
>
That is very odd, maybe some of the Ceph developers have an idea or
recollection of seeing this before.
In general you will want to monitor all your cluster nodes with something
like atop in a situation like this to spot potential problems like slow
di
no effect.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/li
Hello,
On Wed, 22 Oct 2014 17:41:45 -0300 Ricardo J. Barberis wrote:
> El Martes 21/10/2014, Christian Balzer escribió:
> > Hello,
> >
> > I'm trying to change the value of mon_osd_down_out_subtree_limit from
> > rack to something, anything else with ceph 0.80.(6
hat would help us debug.
> >
> > In the test, we run vdbench with the following parameters on one host:
> >
> > sd=sd1,lun=/dev/rbd2,threads=128
> > sd=sd2,lun=/dev/rbd0,threads=128
> > sd=sd3,lun=/dev/rbd1,threads=128
> > *sd=sd4,lun=/dev/rbd3,threads=128
>
thing, but nobody cares about those in journals. ^o^
Obvious things that come to mind in this context would be the ability to
disable journals (difficult, I know, not touching BTRFS, thank you) and
probably K/V store in the future.
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
sition Ceph
against SolidFire here.
Christian
> > On 24 Oct 2014, at 07:58, Christian Balzer wrote:
> >
> >
> > Hello,
> >
> > as others have reported in the past and now having tested things here
> > myself, there really is no point in having journal
will
definitely be faster than your original design.
With something dense and fast like this you will probably want 40GbE or
Infiniband on the network side, though.
> It's too big or normal use case for ceph?
Not too big, but definitely needs a completely different design and lots of
forethought.
> > Cheers,
> > Robert van Leeuwen
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ___
On Tue, 28 Oct 2014 10:33:55 +0100 Mariusz Gronczewski wrote:
> On Tue, 28 Oct 2014 11:32:34 +0900, Christian Balzer
> wrote:
>
> > On Mon, 27 Oct 2014 19:30:23 +0400 Mike wrote:
>
> > The fact that they make you buy the complete system with IT mode
> > controller
and I hope that this doesn't
even generate a WARN in the "ceph -s" report.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ournal perf counters in Ceph
that show how effective and utilized it is.
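For example (a sketch; the OSD id is assumed and counter names vary a bit
per release):
ceph daemon osd.0 perf dump | python -m json.tool | grep -i journal
That dumps one OSD's admin socket perf counters and filters for the
journal-related ones (journal_latency, journal_wr_bytes and friends).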
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users m
and do a rebuild of that object itself.
On another note, have you done any tests using ZFS compression?
I'm wondering what the performance impact and efficiency are.
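Something along these lines would show both sides (dataset name assumed,
lz4 availability depends on your ZoL version):
zfs set compression=lz4 tank/ceph-osd              # enable compression on the OSD dataset
zfs get compression,compressratio tank/ceph-osd    # check the achieved ratio later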
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLi
ieve is already being discussed in the context of cache
tier pools, having to promote/demote 4MB blobs for a single hot 4KB of
data is hardly efficient.
Regards,
Christian
> Cheers,
>
>
>
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent:
> /var/lib/ceph/osd/ceph-0
>
> on Ceph3
> /dev/sdb1 5.0G 1.1G 4.0G 21%
> /var/lib/ceph/osd/ceph-1
>
> My Linux OS
>
> lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:Ubuntu 14.04 LTS
On Sun, 2 Nov 2014 14:07:23 -0800 (PST) Sage Weil wrote:
> On Mon, 3 Nov 2014, Christian Balzer wrote:
> > c) But wait, you specified a pool size of 2 in your OSD section! Tough
> > luck, because since Firefly there is a bug that at the very least
> > prevents OSD and RGW
or
OSDs isn't particularly helpful either.
Christian
> Ian R. Colle
> Global Director
> of Software Engineering
> Red Hat (Inktank is now part of Red Hat!)
> http://www.linkedin.com/in/ircolle
> http://www.twitter.com/ircolle
> Cell: +1.303.601.7713
> Email: ico...@re
On Mon, 3 Nov 2014 06:02:08 -0800 (PST) Sage Weil wrote:
> On Mon, 3 Nov 2014, Mark Kirkwood wrote:
> > On 03/11/14 14:56, Christian Balzer wrote:
> > > On Sun, 2 Nov 2014 14:07:23 -0800 (PST) Sage Weil wrote:
> > >
> > > > On Mon, 3 Nov 2014, Christian B
number of good reasons why OSDs (HDDs)
become slower when getting fuller besides fragmentation.
Christian
> Udo
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
assume this would likely expose counters that hold the information
you're after.
If you're using libvirt, using virt-top should make really busy VMs stand
out pretty quickly.
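For instance (the domain name and disk target below are just placeholders):
virt-top                      # top-like view of the running guests
virsh domblkstat somevm vda   # raw read/write counters for one guest disk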
I'm using ganeti, so I'd _really_ love to have what you're asking for
implemented in Ceph.
e read orgy, clearly reading all the data in
the PGs.
Why? And what is it comparing that data with, the cosmic background
radiation?
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communication
osd.13 up 1
> 14 0.35 osd.14 up 1
> 15 0.35 osd.15 up 1
> -4 1.75 host rack6-storage-6
> 16 0.35 osd.16 up 1
> 17 0.35
On Tue, 11 Nov 2014 10:21:49 -0800 Gregory Farnum wrote:
> On Mon, Nov 10, 2014 at 10:58 PM, Christian Balzer wrote:
> >
> > Hello,
> >
> > One of my clusters has become busy enough (I'm looking at you, evil
> > Window VMs that I shall banish elsewhere soo
on here and what I
> should try to make solaris 10 Guests faster on ceph storage ?
>
> Many Thanks
> Christoph
>
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
__
ards people should (not)
> > use in their systems ?
> >
> > Any magical way to “blink” a drive in linux ?
> >
> >
> >
> > Thanks && regards
> >
> > ___
> > ceph-users mailing list
speed since the total
> > > > size written (a few GB) is lower than the size of the cache
> > > > pool. Instead the write speed is consistently at least twice
> > > > slower (12.445 * 2 < 34.581).
> > > >
> > > >
25698460 Swap: 31249404 0
> 31249404
>
> Darryl
>
>
> From: Christian Balzer
> Sent: Friday, 21 November 2014 10:06 AM
> To: 'ceph-users'
> Cc: Bond, Darryl
> Subject: Re: [ceph-users] Kernel memory allocation oo
ting did not.
>
Well, at least it's good to know that.
Guess I'll keep cargo-culting that little setting for some time to come.
> Darryl
>
> ________
> From: Christian Balzer
> Sent: Friday, 21 November 2014 2:39 PM
> To: 'ceph-user
neck in certain situations. ^o^
As for setting things on the fly, look for the "injectargs" syntax, like:
ceph tell osd.* injectargs '--filestore_max_sync_interval 30'
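To keep the value across restarts, the same setting would go into
ceph.conf, e.g. (a sketch):
[osd]
filestore_max_sync_interval = 30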
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Glo
their top directory. Could
this have an effect on things?
3. Is it really that easy to trash your OSDs?
In the case a storage node crashes, am I to expect most if not all OSDs or
at least their journals to require manual loving?
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
On Fri, 5 Dec 2014 11:23:19 -0800 Gregory Farnum wrote:
> On Thu, Dec 4, 2014 at 7:03 PM, Christian Balzer wrote:
> >
> > Hello,
> >
> > This morning I decided to reboot a storage node (Debian Jessie, thus
> > 3.16 kernel and Ceph 0.80.7, HDD OSDs with SSD
to maybe coax more information
out of Ceph if this happens again.
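For example by temporarily raising debug levels on the OSDs (the values
here are just a common starting point, revert them later as the logs grow
quickly):
ceph tell osd.\* injectargs '--debug_osd 10 --debug_filestore 10'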
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-us
few
> > options that can cause data corruption. -- justin
> > --
> > To unsubscribe from this list: send the line "unsubscribe
> > ceph-devel" in the body of a message to majord...@vger.kernel.org
>
nch, but I'd have to check the tracker to be sure.
>
> On Mon, Dec 8, 2014 at 8:23 PM, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Mon, 8 Dec 2014 19:51:00 -0800 Gregory Farnum wrote:
> >
> >> On Mon, Dec 8, 2014 at 6:39 PM, Christian Balzer
Hello,
On Mon, 8 Dec 2014 19:51:00 -0800 Gregory Farnum wrote:
> On Mon, Dec 8, 2014 at 6:39 PM, Christian Balzer wrote:
> >
> > Hello,
> >
> > Debian Jessie cluster, thus kernel 3.16, ceph 0.80.7.
> > 3 storage nodes with 8 OSDs (journals on 4 SSDs)
d:
> >>>> https://github.com/ceph/ceph/pull/1233
> >>>
> >>> Nope, this has nothing to do with it.
> >>>
> >>>>
> >>>> Is that what we're seeing here? Can anyone point u
, even though there
> > > >>>> are three mons in his config file. I would think that the
> > > >>>> radosgw client would connect
> > > >>>> to any of the nodes in the config file to get the state of the
> > > >>>>
se SSDs can do, see above.
> Any ideas?
>
Somebody who actually has upgraded an SSD cluster from Firefly to Giant
would be in the correct position to answer that.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Com
all the time?
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http
o
> -rw-r--r-- 1 root root 0 Dec 15 01:35 bar
> drwx-- 2 root root 16384 Dec 12 16:05 lost+found
>
> Am I doing something wrong? Is there something to be configured? Is it
> impossible to use RBD as a shared network block device?
>
Shared block device != shared file s
d once you use a larger amount of threads and 4KB blocks, your CPUs will
melt.
Try "rados -p poolname bench 30 write -t 64 -b 4096" for some fireworks.
Regards,
Christian
> Thanks for any help...
>
>
> Samuel Terburg
>
>
>
>
> _
mesh with other needs I have here.
Thanks for reminding me, though.
Christian
> Cheers,
> Josef
>
> > On 15 Dec 2014, at 04:10, Christian Balzer wrote:
> >
> >
> > Hello,
> >
> > What are people here using to provide HA KVMs (and with that I me
t; [mon.deis-2]
> host = deis-2
> mon addr = 10.132.183.192:6789
>
> [client.radosgw.gateway]
> host = deis-store-gateway
> keyring = /etc/ceph/ceph.client.radosgw.keyring
> rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
> log file = /dev/stdout
>
> Thank you.
>
> - J
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> > Cheers.
> >
> > Sébastien Han
> > Cloud Architect
> >
> > "Always give 100%. Unless y
r improvements.
They show great promise/potential and I'm looking forward to using them, but
right now (and probably for the next 1-2 releases) the best bang for the
buck in speeding up Ceph is classic SSD journals for writes and lots of
RAM for reads.
Christian
--
Christian Balzer
tely I don't think there has been any progress with this.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users m
st.com/ssd-endurance-test-report/Samsung-840-EVO-120
The only reliable game in town at this point in time are Intel DC S3700
models, the 200GB model for example has a TBW of 1.8PB and will keep
its speed w/o the need for TRIM or massive underprovisioning.
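Put differently (taking that 1.8PB figure and the 5 year warranty period):
1.8PB / 200GB is about 9000 full drive writes, or roughly 5 drive writes
per day, every day, for 5 years.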
Christian
--
Christian Balzer
% of my
> >> OSDs. In the reference architecture, it could lose a whole row, and
> >> still keep under that limit. My 5 node cluster is noticeably better
> >> than the 3 node cluster. It's faster, has lower latency, and
> >> latency doesn
On Fri, 19 Dec 2014 12:28:48 +1000 Lindsay Mathieson wrote:
> On 19 December 2014 at 11:14, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Thu, 18 Dec 2014 16:12:09 -0800 Craig Lewis wrote:
> >
> > Firstly I'd like to confirm what Craig said abou
a fully
> functioning cluster.
>
> Please help. I am in the #ceph room on OFTC every day as 'seapasulli' as
> well.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.
s, I'll grab the model once I get home as well.
>
> The 300 was completely incorrect and it can push more, it was just meant
> for a quick comparison but I agree it should be higher.
>
> Thank you so much. Please hold up and I'll grab the extra info ^~^
>
>
>
ng on.
>
> That said I didn't see much last time.
>
You may be looking at something I saw the other day; find my
"Unexplainable slow request" thread, which unfortunately still remains a
mystery.
But the fact that it worked until recently suggests loading issues.
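If it happens again, dumping the slowest recent ops on the implicated OSD
via its admin socket is worth a look (OSD id assumed):
ceph daemon osd.12 dump_historic_ops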
> On 12/19/2014 1
s
> 0 rbd,
>
> cephadmin@ceph1:/etc/ceph$ cat ceph.conf
> [global]
> fsid = bce2ff4d-e03b-4b75-9b17-8a48ee4d7788
> public_network = 192.168.30.0/24
> cluster_network = 10.1.1.0/24
> mon_initial_members = ceph1, ceph2
> mon_host = 192.168.30.21,192.168.30.22
> auth_cluster_req
ternative for some use cases (also far from production ready at this
time).
Regards,
Christian
> You are right with the round up. I forgot about that.
>
> Thanks for your help. Much appreciated.
> Jiri
>
> - Reply message -
> From: "Christian Balzer"
>
e already have 150% more space than we need, redundancy
> and performance is more important.
>
You really, really want size 3 and a third node for both performance
(reads) and redundancy.
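For an existing pool that would be something like (pool name assumed):
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2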
> Now I think on it, we can live with the slow write performance, but
> reducing iow
dmap to serve Ceph in the long-term*, but XFS and ext4 provide the
> necessary stability for today’s deployments. btrfs development is
> proceeding rapidly: users should be comfortable installing the latest
> released upstream kernels and be able to track development activity for
> critic
Hello,
On Mon, 29 Dec 2014 00:05:40 +1000 Lindsay Mathieson wrote:
> Appreciate the detailed reply Christian.
>
> On Sun, 28 Dec 2014 02:49:08 PM Christian Balzer wrote:
> > On Sun, 28 Dec 2014 08:59:33 +1000 Lindsay Mathieson wrote:
> > > I'm looking to improve th
orage there would be some
things like deduplication that need to be handled in Ceph, not the layer
below.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
__
Hello,
On Mon, 29 Dec 2014 13:49:49 +0400 Andrey Korolyov wrote:
> On Mon, Dec 29, 2014 at 12:47 PM, Tomasz Kuzemko
> wrote:
> > On Sun, Dec 28, 2014 at 02:49:08PM +0900, Christian Balzer wrote:
> >> You really, really want size 3 and a third node for both perform
Hello,
On Tue, 30 Dec 2014 08:12:21 +1000 Lindsay Mathieson wrote:
> On Mon, 29 Dec 2014 11:12:06 PM Christian Balzer wrote:
> > Is that a private cluster network just between Ceph storage nodes or is
> > this for all ceph traffic (including clients)?
> > The latter would
On Tue, 30 Dec 2014 08:22:01 +1000 Lindsay Mathieson wrote:
> On Mon, 29 Dec 2014 11:29:11 PM Christian Balzer wrote:
> > Reads will scale up (on a cluster basis, individual clients might
> > not benefit as much) linearly with each additional "device" (host/OSD).
>
On Tue, 30 Dec 2014 14:08:32 +1000 Lindsay Mathieson wrote:
> On Tue, 30 Dec 2014 12:48:58 PM Christian Balzer wrote:
> > > Looks like I misunderstood the purpose of the monitors, I presumed
> > > they were just for monitoring node health. They do more than that?
> >
monitor to retain a viable quorum if one node goes down.
That is also why the next useful increase of monitors is from 3 to 5, so
you can lose 2 at a time.
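The majority math in a nutshell (plain shell, nothing Ceph specific):
for N in 3 4 5; do echo "$N mons: quorum $((N/2+1)), tolerates $((N-(N/2+1))) down"; done
which prints 3 -> 1, 4 -> 1 and 5 -> 2, i.e. a 4th monitor buys you nothing
over 3.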
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion
> >>
> >>
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
> >
/lib/ceph/osd/ceph-19
/dev/sdk1 2112738204 140998368 1864349324 8% /var/lib/ceph/osd/ceph-21
---
OSD 19 holds 157 PGs, OSD 21 just 105, neatly explaining the size
difference of about 33%.
That's on a Firefly cluster with 24 OSDs and a more than adequate number of
PGs per OSD (128).
C
This is a question for the Ceph developers, I was under the impression that
with Giant adding OSDs would just result in misplaced objects, not
degraded ones...
Christian
--
Christian Balzer        Network/Systems Engineer
e, etc.
Note that getting even a fraction of the performance of those SSDs will
require quite a lot of CPU power.
Also given your setup, you should be able to saturate your network now, so
probably negating the need for super fast storage to some extent.
Christian
--
Christian Balzer        Network/Systems Engineer
's gist with info about my cluster:
> https://gist.github.com/bobrik/fb8ad1d7c38de0ff35ae
>
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
_
On Mon, 05 Jan 2015 13:53:56 +0100 Wido den Hollander wrote:
> On 01/05/2015 01:39 PM, ivan babrou wrote:
> > On 5 January 2015 at 14:20, Christian Balzer wrote:
> >
> >> On Mon, 5 Jan 2015 14:04:28 +0400 ivan babrou wrote:
> >>
> >>> Hi!
> &
'd remove that line and replace it with something like:
filestore_max_sync_interval = 30
(I use 10 with 10GB journals)
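Whether the running OSDs actually picked the value up can be checked via
the admin socket (OSD id assumed):
ceph daemon osd.0 config show | grep filestore_max_sync_interval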
Regards,
Christian
> filestore xattr use omap = true
> osd crush update on start = false
> osd pool default size = 3
> osd pool default min size = 1
> osd p
ng
they wind up on just one PG.
Again, would love to hear something from the devs on this one.
Christian
> On 5 January 2015 at 15:39, ivan babrou wrote:
>
> >
> >
> > On 5 January 2015 at 14:20, Christian Balzer wrote:
> >
> >> On Mon, 5 Jan 2015 14:04:28
t; > dumped all in format plain
> > 10.f26 1018 0 1811 0 2321324247 3261 3261
> > active+remapped+wait_backfill+backfill_toofull 2015-01-05
> > 15:06:49.504731 22897'359132 22897:48571 [91,1] 91 [8,40] 8
> > 19248'358872 2015-01-05 11:58:03.062029 18326'358786
On Wed, 7 Jan 2015 00:54:13 +0900 Christian Balzer wrote:
> On Tue, 6 Jan 2015 19:28:44 +0400 ivan babrou wrote:
>
> > Restarting OSD fixed PGs that were stuck:
> > http://i.imgur.com/qd5vuzV.png
> >
> Good to hear that.
>
> Funny (not really) how often rest
ll the info it
needs from said pagecache on your storage nodes, never having to go to the
actual OSD disks and thus be fast(er) than the initial test.
Finally to potentially improve the initial scan that has to come from the
disks obviously, see how fragmented your OSDs are and depending on the
5-01-07
> 15:48:34.429997 7fc0e9bfd700 0 log [WRN] : slow request 60.742016
> seconds old, received at 2015-01-07 15:47:33.687935:
> osd_op(client.92886.0:4711 benchmark_data_tvsaq1_29431_object4710 [write
> 0~4194304] 3.1639422f ack+ondisk+ write e1464) v4 currently waiting for
> s
why not having min_size at 1
permanently, so that in the (hopefully rare) case of losing 2 OSDs at the
same time your cluster still keeps working (as it should with a size of 3).
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLi
st settles, you ought to have a better, but no way perfect
distribution.
It comes with the territory, Ceph needs to be deterministic first. See the
mentioned threads again.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fu
/2014-April/028552.html
> Are these values normal with my configuration and hardware? -> The
> read-performance seems slow.
>
See the above thread, in short yes.
Unless there is a Windows tunable that is equivalent to "read_ahead_kb" or
Ceph addresses this problem on their end
On Thu, 8 Jan 2015 11:41:37 -0700 Robert LeBlanc wrote:
> On Wed, Jan 7, 2015 at 10:55 PM, Christian Balzer wrote:
> > Which of course begs the question of why not having min_size at 1
> > permanently, so that in the (hopefully rare) case of losing 2 OSDs at
> > the same ti
luster,
but drawing conclusions about performance with what you have at hand will
be tricky.
Lastly, Ceph can get very compute-intensive (OSDs, particularly with small
I/Os), so I'm skeptical that ARM will cut the mustard.
Christian
--
Christian Balzer        Network/Systems Engineer
On Thu, 8 Jan 2015 21:17:12 -0700 Robert LeBlanc wrote:
> On Thu, Jan 8, 2015 at 8:31 PM, Christian Balzer wrote:
> > On Thu, 8 Jan 2015 11:41:37 -0700 Robert LeBlanc wrote:
> > Which of course currently means a strongly consistent lockup in these
> > scenarios. ^o^
he directory listing seems much
> >> faster and the responsiveness of the PHP app has increased as well.
> >>
> >> Hopefully nothing else will need to be done here, however it seems
> >> that worst case...a daily or weekly cronjob that traverses the
> >> d
same OSD as last time, and I'm not seeing anything in
the 0.80.8 changelog that looks like this bug. Would a "windows solution"
maybe be the way forward, as in deleting and re-creating that OSD?
Christian
On Tue, 9 Dec 2014 13:51:39 +0900 Christian Balzer wrote:
> On Mon, 8 Dec 2014
_
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> Cheers.
>
> Sébastien Han
> Cloud Architect
>
> "Always give 100%. Unless you're givi
> I'll try to post full benchmark results next month (including qemu-kvm
> optimisations)
>
Looking forward to that.
Christian
> Regards
>
> Alexandre
>
> - Original Message -
> From: "Christian Balzer"
> To: "ceph-users"
> Sent:
ate_pool('PE-TEST4')
> >
> >
> >
> > After the pool is created via the librados python library, I check the
> > pg_num of the pool and it's 8; not 256.
> >
> > [root@DCOS01 lib]# ceph osd pool get PE-TEST4 pg_num
> >
> >
oval procedure everything was "ok" and the osd.0 had
> been emptied and seemingly rebalanced.
>
> Any ideas why its rebalancing again?
>
> we're using Ubuntu 12.04 w/ Ceph 80.8 & Kernel 3.13.0-43-generic
> #72~precise1-Ubuntu SMP Tue Dec 9 12:14:18 UTC 2014 x
up 1
> 52 1 osd.52 up 1
> 53 1 osd.53 up 1
> 54 1 osd.54 up 1
>
> Regards,
> Quenten Grasso
>
> -Original Message-
> From: Christian Balzer
ady been rebalanced anyway?
>
I don't think so, but somebody else correct me if I'm wrong.
If you're actually _replacing_ those OSDs and not permanently removing
them, search the ML archives for some tricks (by Craig Lewis IIRC) to
minimize the balancing song and dance.
Christ
thinking about using consumer grade SSD's for OSD's and
> Enterprise SSD's for journals.
>
> Reasoning is enterprise SSD's are a lot faster at journaling than
> consumer grade drives plus this would effectively half the overall write
> requirements on the consum
't change (and just looking at my journal SSDs and OSD
HDDs with atop I don't think so) your writes go to the HDDs pretty much in
parallel.
In either case, an SSD that can _write_ fast enough to satisfy your needs
will definitely have no problems reading fast enough.
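A quick way to gauge the journal-style (small, synchronous) write speed of
a candidate SSD is something like the following, run against a file on the
mounted SSD (path assumed, writes about 400MB):
dd if=/dev/zero of=/mnt/ssd/testfile bs=4k count=100000 oflag=direct,dsync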
Christian
--
Ch