Hi all,
I've seen several chats on the monitor elections, and how the one with the
lowest IP is always the master.
Is there any way to change or influence this behaviour, other than changing
the IP of the monitors themselves?
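For reference, the current leader can be checked (though not changed) with
something like the command below; it only reports which mon holds the role:
# ceph quorum_status | grep quorum_leader_name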
thanks
On Wed, Jun 3, 2015 at 8:03 PM, Nick Fisk wrote:
>
> Hi All,
>
>
>
> Am I correct in thinking that in latest kernels, now that krbd is supported
> via blk-mq, the maximum queue depth is now 128 and cannot be adjusted
>
>
>
> http://xo4t.mj.am/link/xo4t/jw0u7zr/1/VnVTVD2KMuL7gZiTD1iRXQ/aHR0cHM6Ly9
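Not an answer to the tunability question, but for checking what the effective
depth actually is, the mapped device's queue can be inspected via sysfs
(assuming the image is mapped as /dev/rbd0; the device name will vary):
# cat /sys/block/rbd0/queue/nr_requests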
Sorry Christian,
I did briefly wonder, then thought, oh yeah, that fix is already merged
in...However - on reflection, perhaps *not* in the 0.80 tree...doh!
On 04/06/15 18:57, Christian Balzer wrote:
Hello,
Actually after going through the changelogs with a fine-tooth comb and the ole
Mark I eyeb
Hi!
> My deployments have seen many different versions of ceph. Pre 0.80.7, I've
> seen those numbers being pretty high. After upgrading to 0.80.7, all of a
> sudden, commit latency of all OSDs drop to 0-1ms, and apply latency remains
> pretty low most of the time.
We now use Ceph 0.80.7-1~bpo70+
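(For comparison, the commit/apply latencies I mean are the per-OSD figures
reported by the command below, assuming your release includes it:)
# ceph osd perf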
Hi All,
I have 2 pools, both on the same set of OSDs. The 1st is the default rbd pool
created at installation 3 months ago, the other has just recently been
created, to verify performance problems.
As mentioned, both pools are on the same set of OSDs, same crush ruleset, and
RBDs on both are id
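(For completeness, confirming the shared ruleset is just a matter of something
like the following; "testpool" is only a placeholder for the new pool's name.)
# ceph osd dump | grep "^pool"
# ceph osd pool get rbd crush_ruleset
# ceph osd pool get testpool crush_ruleset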
Hi,
A Hammer cluster can provide only one Cephfs and my problem is about
security.
Currently, if I want to share a Cephfs for 2 nodes foo-1 and foo-2 and
another Cephfs for 2 other nodes bar-1 and bar-2, I just mount a dedicated
directory in foo-1/foo-2 and another dedicated directory in bar-1/b
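Concretely, the mounts look roughly like this (the mon address, credentials
and mountpoints are only examples):
# mount -t ceph 192.168.0.1:6789:/foo /mnt/cephfs-foo -o name=admin,secretfile=/etc/ceph/admin.secret
# mount -t ceph 192.168.0.1:6789:/bar /mnt/cephfs-bar -o name=admin,secretfile=/etc/ceph/admin.secret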
Nick,
Did you preinitialize the new rbd volume?
If not, do a sequential write to fill up the entire volume first and then do a
random read.
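For example with fio, assuming the volume is mapped as /dev/rbd0 (device name
and parameters are only illustrative):
# fio --name=fill --filename=/dev/rbd0 --rw=write --bs=4M --direct=1 --ioengine=libaio --iodepth=16
# fio --name=randread --filename=/dev/rbd0 --rw=randread --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --runtime=60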
Thanks & Regards
Somnath
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Nick
Fisk
Sent: Thursday, June 04, 2015 6:32 AM
To: ceph-users
Are there any safety/consistency or other reasons we wouldn't want to try
using an external XFS log device for our OSDs? I realize if that device
fails the filesystem is pretty much lost, but beyond that?
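For context, the layout I have in mind is the standard XFS external log,
roughly as below (device names purely illustrative, SSD partition for the log):
# mkfs.xfs -l logdev=/dev/sdc1,size=128m /dev/sdb1
# mount -t xfs -o logdev=/dev/sdc1 /dev/sdb1 /var/lib/ceph/osd/ceph-0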
--
David Burley
NOC Manager, Sr. Systems Programmer/Analyst
Slashdot Media
e: da...@slashdo
I've always used Ubuntu for my Ceph client OS and found out in the lab that
CentOS/RHEL 6.x doesn't have kernel rbd support. I wanted to investigate
using RHEL 7.1 for the client OS. Is there a kernel rbd module that installs
with RHEL 7.1? If not, are there 7.1 RPMs or source tarballs availa
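(If someone with a 7.1 box could check, I'd expect something along these lines
to show whether the module ships with the stock kernel:)
# modprobe rbd
# modinfo rbd
# lsmod | grep rbd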
On 2015-06-04T12:42:42, David Burley wrote:
> Are there any safety/consistency or other reasons we wouldn't want to try
> using an external XFS log device for our OSDs? I realize if that device
> fails the filesystem is pretty much lost, but beyond that?
I think with the XFS journal on the same
On Thu, Jun 4, 2015 at 6:31 AM, Nick Fisk wrote:
>
> Hi All,
>
> I have 2 pools both on the same set of OSD’s, 1st is the default rbd pool
> created at installation 3 months ago, the other has just recently been
> created, to verify performance problems.
>
> As mentioned both pools are on the sa
On Thu, Jun 4, 2015 at 7:25 AM, François Lafont wrote:
> Hi,
>
> A Hammer cluster can provide only one Cephfs and my problem is about
> security.
> Currently, if I want to share a Cephfs for 2 nodes foo-1 and foo-2 and
> another Cephfs for
> 2 another nodes bar-1 and bar-2, I just mount a dedicate
With a write-heavy RBD workload, I add the following to ceph.conf:
osd_max_backfills = 2
osd_recovery_max_active = 2
If things are going well during recovery (i.e. guests happy and no slow
requests), I will often bump both up to three:
# ceph tell osd.* injectargs '--osd-max-backfills 3
--osd-recovery-max-active 3'
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Gregory Farnum
> Sent: 04 June 2015 21:22
> To: Nick Fisk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Old vs New pool on same OSDs - Performance
> Difference
>
> On Thu, Jun 4,
Nick,
I noticed that dropping the page cache sometimes helps, as I was hitting an
Ubuntu page cache compaction issue (I shared that with the community some time
back). Perf top should show compaction-related stack traces in that case.
Setting the sysctl vm option min_free_kbytes to a big number (like 5/10 GB in
my 64 GB RAM
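Something along these lines, with the value only an example of ~5 GB
expressed in kB for a 64 GB box:
# echo 1 > /proc/sys/vm/drop_caches
# sysctl -w vm.min_free_kbytes=5242880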
In our tests we got better performance with OSD journals on SSD than
with bcache. We abandoned the complexity of bcache. YMMV.
From an ease-of-use standpoint, and depending on the situation in which you
are setting up your environment, the idea is as follows:
It seems like it would be nice to have some easy on-demand control where
you don't have to think a whole lot other than knowing how it is going to
affect your cluster in a gene
On 06/03/2015 04:15 AM, Jan Schermer wrote:
Thanks for a very helpful answer.
So if I understand it correctly, what I want (crash consistency with RPO>0)
isn’t possible now in any way.
If there is no ordering in the RBD cache, then ignoring barriers sounds like a
very bad idea as well.
Yes, that'
Hi Bruce
Yes, RHEL 6 comes with kernel 2.6.32 and you don't have krbd. No backport;
already asked :)
It's good on RHEL 7.1, which comes with the 3.10 kernel and krbd integrated.
Thanks
Sent from my iPhone
> On 4 juin 2015, at 13:26, Bruce McFarland
> wrote:
>
> I’ve always used Ubuntu for my Ceph cl
Hello Mark,
On Thu, 04 Jun 2015 20:34:55 +1200 Mark Kirkwood wrote:
> Sorry Christian,
>
> I did briefly wonder, then thought, oh yeah, that fix is already merged
> in...However - on reflection, perhaps *not* in the 0.80 tree...doh!
>
No worries, I'm just happy to hear that you think it's the
Hi Ceph folks,
We want to use rbd format v2, but find it is not supported by kernel 3.10.0 on
CentOS 7:
[ceph@ ~]$ sudo rbd map zhi_rbd_test_1
rbd: sysfs write failed
rbd: map failed: (22) Invalid argument
[ceph@ ~]$ dmesg | tail
[662453.664746] rbd: image zhi_rbd_test_1: unsupported
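Assuming that dmesg message is about unsupported image features, one
workaround we are considering is creating images with only the layering
feature enabled, e.g. via the client's ceph.conf (if the release supports
the option):
[client]
rbd default features = 1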
Hello, seeing issues with OSDs stalling and error messages such as:
2015-06-04 06:48:17.119618 7fc932d59700 0 -- 10.80.4.15:6820/3501 >> 10.80.4.30:6811/3003603 pipe(0xb6b4000 sd=19 :33085 s=1 pgs=311 cs=4 l=0 c=0x915c6e0).connect claims to be 10.80.4.30:6811/4106 not 10.80.4.30:6811/3003603 -
Hi Cephers,
I recently had a power problem and the entire cluster was brought down,
came up, went down, and came up again. Afterward, 3 OSDs were mostly dead
(HDD failures). Luckily (I think) the drives were alive enough that I could
copy the data off and leave the journal alone.
Since my pool "d
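(In case it matters for suggestions: the per-PG export route would look
roughly like the following with ceph-objectstore-tool; the OSD id, pgid and
paths are only placeholders.)
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
    --journal-path /var/lib/ceph/osd/ceph-12/journal \
    --op export --pgid 5.3f --file /backup/5.3f.export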
Looking quickly at the relevant code:
FileJournal::stop_writer() in src/os/FileJournal.cc
I see that we didn't start seeing the (original) issue until changes in
0.83, which suggests that the 0.80 tree might not be doing the same thing.
*However* I note that I'm not happy with the placement of the
Hello,
On Fri, 05 Jun 2015 16:33:46 +1200 Mark Kirkwood wrote:
Well, whatever it is, I appear to not be the only one after all:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=773361
> Looking quickly at the relevant code:
>
> FileJournal::stop_writer() in src/os/FileJournal.cc
>
> I see th