Nice work, Mark!
I don't see any sharding tuning in the sample config file
(osd_op_num_threads_per_shard, osd_op_num_shards, ...).
Since you only used one SSD for the benchmark, I think tuning these should
improve the Hammer results?
- Original Message -
From: "Mark Nelson"
To: "ceph-devel"
Cc: "ceph-us
To be exact, the platform used throughout is CentOS 6.4... I am reading my copy
right now :)
Best -F
- Original Message -
From: "SUNDAY A. OLUTAYO"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Monday, February 16, 2015 3:28:45 AM
Subject: Re: [ceph-users] Introducing
https://github.com/ceph/ceph-tools/tree/master/cbt
On Tue, Feb 17, 2015 at 12:16 PM, Stephen Hindle wrote:
> I was wondering what the 'CBT' tool is? Google is useless for that
> acronym...
>
> Thanks!
> Steve
>
> On Tue, Feb 17, 2015 at 10:37 AM, Mark Nelson wrote:
> > Hi All,
> >
> > I wrote
Mark, many thanks for your effort and ceph performance tests. This puts things
in perspective.
Looking at the results, I was a bit concerned that the IOPS performance in
neither release comes even marginally close to the capabilities of the
underlying SSD device. Even the fastest PCIe SSDs have
Hi Alex,
Thanks! I didn't tweak the sharding settings at all, so they are just
at the default values:
OPTION(osd_op_num_threads_per_shard, OPT_INT, 2)
OPTION(osd_op_num_shards, OPT_INT, 5)
I don't have really good insight yet into how tweaking these would
affect single-osd performance. I know the PCIe SSDs do have multiple
controllers on-board so perhaps increasing the number of shards would
improve things, but I suspect that going too high could maybe start
hurting per
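For anyone who wants to experiment along these lines, a minimal sketch of
what such a tweak might look like (the shard count of 10 is just a guess for
a PCIe SSD, not a recommendation; these options are read at OSD start-up, so
the OSD has to be restarted):

cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
osd_op_num_shards = 10
osd_op_num_threads_per_shard = 2
EOF
/etc/init.d/ceph restart osd.0     # or: systemctl restart ceph-osd@0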
Hi Andrei,
On 02/18/2015 09:08 AM, Andrei Mikhailovsky wrote:
Mark, many thanks for your effort and ceph performance tests. This puts
things in perspective.
Looking at the results, I was a bit concerned that the IOPS performance
in neither release comes even marginally close to the capabilities of the
underlying SSD device.
On Wed, Feb 18, 2015 at 6:56 AM, Gregory Farnum wrote:
> On Tue, Feb 17, 2015 at 9:48 PM, Florian Haas wrote:
>> On Tue, Feb 17, 2015 at 11:19 PM, Gregory Farnum wrote:
>>> On Tue, Feb 17, 2015 at 12:09 PM, Florian Haas wrote:
Hello everyone,
I'm seeing some OSD behavior that I c
>>I don't have really good insight yet into how tweaking these would
>>affect single-osd performance. I know the PCIe SSDs do have multiple
>>controllers on-board so perhaps increasing the number of shards would
>>improve things, but I suspect that going too high could maybe start
>>hurting per
Thanks Brad. That solved the problem. I had mistakenly assumed all
dependencies were in http://ceph.com/rpm-giant/el6/x86_64/.
Regards,
Wenxiao
On Tue, Feb 17, 2015 at 10:37 PM, Brad Hubbard wrote:
> On 02/18/2015 12:43 PM, Wenxiao He wrote:
>
>>
>> Hello,
>>
>> I need some help as I am getting pac
Note that ceph-deploy would enable EPEL for you automatically on
CentOS. When doing a manual installation, the requirement for EPEL is
called out here:
http://ceph.com/docs/master/install/get-packages/#id8
Though looking at that, we could probably update it to use the now
much easier to use "yum
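For reference, a rough sketch of the manual route (on CentOS 7 epel-release
comes from the extras repository; on CentOS 6 you may need to install the
epel-release RPM from dl.fedoraproject.org first):

yum install -y epel-release    # pulls in the EPEL repo definition
yum install -y ceph            # EPEL-hosted dependencies now resolve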
We're running ceph version 0.87
(c51c8f9d80fa4e0168aa52685b8de40e42758578), and seeing this:
HEALTH_WARN 1 pgs degraded; 1 pgs stuck degraded; 1 pgs stuck unclean; 1
pgs stuck undersized; 1 pgs undersized
pg 4.2af is stuck unclean for 77192.522960, current state
active+undersized+degraded, las
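A few commands commonly used to dig into a PG stuck like this (the pg id
4.2af is taken from the output above):

ceph health detail     # lists the affected PGs and the OSDs involved
ceph pg 4.2af query    # shows the up/acting sets and the recovery state
ceph osd tree          # check that enough hosts/OSDs are up for the rule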
How do I update the ceph monmap after extracting it and removing an unwanted
IP, so that I end up with a clean monmap?
Hey cephers,
We still have a couple of speaking slots open for Ceph Day San
Francisco on 12 March. I'm open to both high level "what have you been
doing with Ceph" type talks as well as more technical "here is what
we're writing and/or integrating with Ceph."
I know many folks will be at VAULT, b
Hi,
use the following command line: ceph-mon -i {monitor_id} --inject-monmap
{updated_monmap_file}
JC
> On 18 Feb 2015, at 11:15, SUNDAY A. OLUTAYO wrote:
>
> How do I update the ceph monmap after extracting it and removing an unwanted
> IP, so that I end up with a clean monmap?
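For completeness, a sketch of the whole extract/edit/inject cycle (mon-a is
the surviving monitor id and mon-b the entry being removed, both placeholders;
the monitor must be stopped while the map is injected):

ceph-mon -i mon-a --extract-monmap /tmp/monmap   # dump the current map
monmaptool --print /tmp/monmap                   # inspect it
monmaptool --rm mon-b /tmp/monmap                # drop the unwanted entry
ceph-mon -i mon-a --inject-monmap /tmp/monmap    # write the cleaned map back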
On Wed, Feb 18, 2015 at 7:53 PM, Brian Rak wrote:
> We're running ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578),
> and seeing this:
>
> HEALTH_WARN 1 pgs degraded; 1 pgs stuck degraded; 1 pgs stuck unclean; 1 pgs
> stuck undersized; 1 pgs undersized
> pg 4.2af is stuck unclean for 7
On 2/18/2015 3:01 PM, Florian Haas wrote:
On Wed, Feb 18, 2015 at 7:53 PM, Brian Rak wrote:
We're running ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578),
and seeing this:
HEALTH_WARN 1 pgs degraded; 1 pgs stuck degraded; 1 pgs stuck unclean; 1 pgs
stuck undersized; 1 pgs undersiz
Hey everyone,
I must confess I'm still not fully understanding this problem and
don't exactly know where to start digging deeper, but perhaps other
users have seen this and/or it rings a bell.
System info: Ceph giant on CentOS 7; approx. 240 OSDs, 6 pools using 2
different rulesets where the prob
We've been running some tests to try to determine why our FreeBSD VMs
are performing much worse than our Linux VMs backed by RBD, especially
on writes.
Our current deployment is:
- 4x KVM Hypervisors (QEMU 2.0.0+dfsg-2ubuntu1.6)
- 2x OSD nodes (8x SSDs each, 10Gbit links to hypervisors, pool has 2
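One way to separate the OSD/RBD backend from the guest I/O path is to
benchmark the pool directly from a hypervisor with rados bench (a sketch;
the pool name "rbd", block size and queue depth are assumptions):

rados -p rbd bench 30 write -b 4096 -t 16 --no-cleanup   # small writes, 16 in flight
rados -p rbd bench 30 seq -t 16                          # read the same objects back
rados -p rbd cleanup                                     # remove the benchmark objects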
On Wed, Feb 18, 2015 at 9:09 PM, Brian Rak wrote:
>> What does your crushmap look like (ceph osd getcrushmap -o
>> /tmp/crushmap; crushtool -d /tmp/crushmap)? Does your placement logic
>> prevent Ceph from selecting an OSD for the third replica?
>>
>> Cheers,
>> Florian
>
>
> I have 5 hosts, and i
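Beyond decompiling, crushtool can also simulate placements against the map,
which makes it easy to spot rules that cannot find a third OSD (the rule
number and replica count below are assumptions):

ceph osd getcrushmap -o /tmp/crushmap
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt    # human-readable dump
crushtool -i /tmp/crushmap --test --rule 0 --num-rep 3 --show-bad-mappings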
Hey folks,
I have a ceph cluster supporting about 500 VMs using RBD. I am seeing
around 10-12k IOPS cluster-wide and IO wait time creeping up within the
VMs.
My suspicion is that I am pushing my ceph cluster to its limit in terms of
overall throughput. I am curious if there are metrics that can b
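A few of the usual starting points for that kind of question (a sketch, not
an exhaustive list; osd.0 is a placeholder):

ceph -s                        # cluster-wide client IO and ops/s
ceph osd pool stats            # per-pool client IO
ceph osd perf                  # per-OSD commit/apply latencies
ceph daemon osd.0 perf dump    # detailed counters, run on the OSD's host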
On 02/18/2015 02:19 PM, Florian Haas wrote:
Hey everyone,
I must confess I'm still not fully understanding this problem and
don't exactly know where to start digging deeper, but perhaps other
users have seen this and/or it rings a bell.
System info: Ceph giant on CentOS 7; approx. 240 OSDs, 6 p
On 2/18/2015 3:24 PM, Florian Haas wrote:
On Wed, Feb 18, 2015 at 9:09 PM, Brian Rak wrote:
What does your crushmap look like (ceph osd getcrushmap -o
/tmp/crushmap; crushtool -d /tmp/crushmap)? Does your placement logic
prevent Ceph from selecting an OSD for the third replica?
Cheers,
Floria
On Wed, Feb 18, 2015 at 9:32 PM, Mark Nelson wrote:
> On 02/18/2015 02:19 PM, Florian Haas wrote:
>>
>> Hey everyone,
>>
>> I must confess I'm still not fully understanding this problem and
>> don't exactly know where to start digging deeper, but perhaps other
>> users have seen this and/or it rin
Dear Ceph Experts,
is it possible to define a Ceph user/key with privileges
that allow for read-only CephFS access but do not allow
write or other modifications to the Ceph cluster?
I would like to export a sub-tree of our CephFS via HTTPS.
Alas, web-servers are inviting targets, so in the (hope
On Wed, Feb 18, 2015 at 10:28 PM, Oliver Schulz wrote:
> Dear Ceph Experts,
>
> is it possible to define a Ceph user/key with privileges
> that allow for read-only CephFS access but do not allow
> write or other modifications to the Ceph cluster?
Warning, read this to the end, don't blindly do as
> From: "Logan Barfield"
> We've been running some tests to try to determine why our FreeBSD VMs
> are performing much worse than our Linux VMs backed by RBD, especially
> on writes.
>
> Our current deployment is:
> - 4x KVM Hypervisors (QEMU 2.0.0+dfsg-2ubuntu1.6)
> - 2x OSD nodes (8x SSDs each,
On Wed, Feb 18, 2015 at 1:58 PM, Florian Haas wrote:
> On Wed, Feb 18, 2015 at 10:28 PM, Oliver Schulz wrote:
>> Dear Ceph Experts,
>>
>> is it possible to define a Ceph user/key with privileges
>> that allow for read-only CephFS access but do not allow
>> write or other modifications to the Ceph
Hi,
Impatient as I was, I used the "quick" guide
(http://ceph.com/docs/master/start/quick-start-preflight/), which merely
states "On CentOS, you may need to install EPEL" :)
I have a separate question: why does ceph-deploy always show "Error in
sys.exitfunc:" even though things look fine?
$ ceph-deploy
Hi Florian,
On 18.02.2015 22:58, Florian Haas wrote:
is it possible to define a Ceph user/key with privileges
that allow for read-only CephFS access but do not allow
All you should need to do is [...]
However, I've just tried the above with ceph-fuse on firefly, and [...]
So I believe you've un
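For reference, the kind of cap set being discussed would look roughly like
this (the client name and data pool are placeholders, and as Greg points out
below, on Giant-era code the MDS/OSD caps may not actually enforce read-only
behaviour):

ceph auth get-or-create client.webro \
    mon 'allow r' \
    mds 'allow r' \
    osd 'allow r pool=data'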
On 02/12/2015 05:59 PM, Blair Bethwaite wrote:
My particular interest is for a less dynamic environment, so manual
key distribution is not a problem. Re. OpenStack, it's probably good
enough to have the Cinder host creating them as needed (presumably
stored in its DB) and just send the secret key
Dear Greg,
On 18.02.2015 23:41, Gregory Farnum wrote:
is it possible to define a Ceph user/key with privileges
that allow for read-only CephFS access but do not allow
...and deletes, unfortunately. :( I don't think this is presently a
thing it's possible to do until we get a much better user au
On Wed, Feb 18, 2015 at 11:41 PM, Gregory Farnum wrote:
> On Wed, Feb 18, 2015 at 1:58 PM, Florian Haas wrote:
>> On Wed, Feb 18, 2015 at 10:28 PM, Oliver Schulz wrote:
>>> Dear Ceph Experts,
>>>
>>> is it possible to define a Ceph user/key with privileges
>>> that allow for read-only CephFS acc
On Wed, Feb 18, 2015 at 3:30 PM, Florian Haas wrote:
> On Wed, Feb 18, 2015 at 11:41 PM, Gregory Farnum wrote:
>> On Wed, Feb 18, 2015 at 1:58 PM, Florian Haas wrote:
>>> On Wed, Feb 18, 2015 at 10:28 PM, Oliver Schulz wrote:
Dear Ceph Experts,
is it possible to define a Ceph use
Hi,
yesterday we had a problem where one of our cluster clients remounted an
RBD device read-only. We found this[1] stack trace
in the logs. We investigated further and found similar traces on all
other machines that are using the rbd kernel module. It seems to me that
whenever th
Hi Cephers,
What is your "best practice" for starting up OSDs?
I am trying to determine the most robust technique on CentOS 7, where I
have too many choices:
udev/gpt/uuid or /etc/init.d/ceph or /etc/systemd/system/ceph-osd@X
1. Use udev/gpt/UUID: no OSD sections in /etc/ceph/mycluster.conf or
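For comparison, rough sketches of two of those options (the device path and
OSD id are placeholders, and the systemd unit assumes your packages ship, or
you have created, a ceph-osd@.service):

# udev/GPT route: ceph-disk prepare sets the Ceph GPT type UUID on the
# partition, so udev triggers activation at boot; it can also be run by hand:
ceph-disk activate /dev/sdb1

# explicit systemd route: one unit instance per OSD id
systemctl enable ceph-osd@3
systemctl start ceph-osd@3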