Wanted to check if there are any readily available tools that the community is
aware of or using for parsing/plotting CBT run results. I am particularly interested
in tools for the CBT librbdfio runs, where the aggregated BW/IOPS/latency
reports are generated either as CSV or graphically.
Thanks!
Hi All,
The CFP for linux.conf.au 2017 (January 16-20 in Hobart, Tasmania,
Australia) opened on Monday:
https://linux.conf.au/proposals/
This is one of the best F/OSS conferences in the world, and is being
held in one of the most beautiful places in the world (although I might
be slightly bias
Hi List,
I just setup my first ceph demo cluster by following the step-by-step quick
start guide.
However, I noted that there is a FAQ,
http://tracker.ceph.com/projects/ceph/wiki/How_Can_I_Give_Ceph_a_Try, that says
it may be problematic to use the Ceph client from a Ceph cluster node.
Is that still the
Hi,
I think I found a solution for my problem; here are my findings:
This Bug can be easily reproduced in a test environment:
1. Delete all rgw related pools.
2. Start infernalis radosgw to initialize them again.
3. Create user.
4. User creates bucket.
5. Upgrade radosgw to jewel
6. User creat
Hello.
I've been testing an Intel 3500 as a journal store for a few HDD-based OSDs. I
stumbled on issues with multiple partitions (>4) and udev (sda5, sda6, etc.
sometimes do not appear after partition creation). And I'm thinking that
partitions are not that useful for OSD management, because Linux do no
a
Hi
Maybe the easiest way would be to just create files on the SSD and use those
as journals. I don't know if this creates too much overhead, but at least it
would be simple.
Br,
T
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
George Shuklin
Se
We have 5 journal partitions per SSD. Works fine (on el6 and el7).
Best practice is to use ceph-disk:
ceph-disk prepare /dev/sde /dev/sdc # where sde is the OSD data disk and sdc is an SSD.
-- Dan
On Wed, Jul 6, 2016 at 2:03 PM, George Shuklin wrote:
> Hello.
>
> I've been testing Intel 3500 as journal stor
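To illustrate Dan's suggestion (a sketch only; the device names are examples, not
from this thread): running ceph-disk prepare once per data disk against the same
SSD lets ceph-disk carve a new journal partition on that SSD each time, e.g.
ceph-disk prepare /dev/sde /dev/sdc
ceph-disk prepare /dev/sdf /dev/sdc
ceph-disk prepare /dev/sdg /dev/sdc
# sde/sdf/sdg are data disks; sdc ends up with one journal partition per OSD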
Hi George,
interesting result for your benchmark. Could you please supply some more numbers?
We didn't get that good a result
in our tests.
Thanks.
Cheers,
Alwin
On 07/06/2016 02:03 PM, George Shuklin wrote:
> Hello.
>
> I've been testing Intel 3500 as journal store for few HDD-based OSD
Hi George,
We have several journal partitions on our SSDs too. Using the ceph-deploy
utility (as Dan mentioned before) is, I think, the best way:
ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
where JOURNAL is the path to the journal disk (not to a partition):
ceph-deploy osd
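For example (a sketch; the hostname and devices are placeholders, not from this
thread):
ceph-deploy osd create node1:sdb:/dev/sdc node1:sdd:/dev/sdc
Here sdb and sdd are data disks on node1 and /dev/sdc is the shared journal SSD;
ceph-deploy/ceph-disk should create a separate journal partition on sdc for each
OSD.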
Hi list,
Given a single node Ceph cluster (lab), I started out with the following
CRUSH rule:
> # rules
> rule replicated_ruleset {
> ruleset 0
> type replicated
> min_size 1
> max_size 10
> step take default
> step choose firstn 0 type osd
> step emit
> }
Meanwhile, t
Hi,
> Is this the way to go? I would like as little performance degradation
while rebalancing as possible.
Please advise if I need to take into account certain preparations.
Set these in your ceph.conf beforehand:
osd recovery op priority = 1
osd max backfills = 1
I would also suggest
Hi Micha,
Thank you very much for your prompt response. In an earlier process, I
already ran:
> $ ceph tell osd.* injectargs '--osd-max-backfills 1'
> $ ceph tell osd.* injectargs '--osd-recovery-op-priority 1'
> $ ceph tell osd.* injectargs '--osd-client-op-priority 63'
> $ ceph tell osd.* inject
Hi Goncalo,
On Wed, Jul 6, 2016 at 2:18 AM, Goncalo Borges
wrote:
> Just to confirm that, after applying the patch and recompiling, we are no
> longer seeing segfaults.
>
> I just tested with a user application which would kill ceph-fuse almost
> instantaneously. Now it is running for quite some
Hi all,
I'm trying to use the ceph-ansible playbooks to deploy onto baremetal. I'm
currently testing with the approach that uses an existing filesystem rather
than giving raw access to the disks.
The OSDs are failing to start:
2016-07-06 10:48:50.249976 7fa410aef800 0 set uid:gid to 167:167
(c
Try strace.
-Sam
On Wed, Jul 6, 2016 at 7:53 AM, RJ Nowling wrote:
> Hi all,
>
> I'm trying to use the ceph-ansible playbooks to deploy onto baremetal. I'm
> currently testing with the approach that uses an existing filesystem rather
> than giving raw access to the disks.
>
> The OSDs are failin
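In case it helps, a possible way to run it (the paths, OSD id and flags below are
just assumptions about this setup): start the failing OSD in the foreground under
strace and look for the failing syscall, e.g.
strace -f -e trace=open,openat -o /tmp/osd.1.strace \
    /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
grep EACCES /tmp/osd.1.strace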
Yes.
In my lab (not production yet), with 9 x 7200 RPM SATA drives (OSDs) and one Intel
SSDSC2BB800G4 (800G, 9 journals), during random writes I got ~90%
utilization of the 9 HDDs with ~5% utilization of the SSD (2.4k IOPS). With
linear writing it is somehow worse: I got 250Mb/s on the SSD, which translated
to 240Mb of all O
In my experience, 50% of 'permission denied' errors for OSDs were coming not
from the filesystem, but from the monitors:
check whether the key in 'ceph auth list' matches osd_dir/keyring.
On 07/06/2016 06:08 PM, Samuel Just wrote:
Try strace.
-Sam
On Wed, Jul 6, 2016 at 7:53 AM, RJ Nowling wrote:
Hi all,
I'm tr
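For example (a sketch; osd.1 and the path are just placeholders): compare the key
the monitors have with the one on disk,
ceph auth get osd.1
cat /var/lib/ceph/osd/ceph-1/keyring
If the two keys differ, the OSD will be rejected even though the filesystem
permissions are fine.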
I used strace and got entries like this:
Jul 06 11:07:29 et32.et.eng.bos.redhat.com strace[19754]:
open("/var/lib/ceph/osd/ceph-1/type", O_RDONLY) = -1 EACCES (Permission
denied)
I can't recreate the permission issues when running as the ceph user: I
changed the ceph user's shell to /bin/bash, di
Sam, George, thanks for your help!
The problem is that the data directory is symlinked to /home and the
systemd unit file had `ProtectHome=true`
On Wed, Jul 6, 2016 at 9:53 AM, RJ Nowling wrote:
> Hi all,
>
> I'm trying to use the ceph-ansible playbooks to deploy onto baremetal.
> I'm currently
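In case anyone hits the same thing, one way to override it (a sketch; the
instance name ceph-osd@1 is an example) is a systemd drop-in rather than editing
the packaged unit:
mkdir -p /etc/systemd/system/ceph-osd@.service.d
printf '[Service]\nProtectHome=false\n' > /etc/systemd/system/ceph-osd@.service.d/override.conf
systemctl daemon-reload
systemctl restart ceph-osd@1
(Keeping the data directory off /home is of course the cleaner fix.)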
Check whether the keyring belongs to the OSD.
One more thing I saw in the lab: the number of allowed open files was not
enough. After I raised it via ulimit, this type of bug disappeared.
On 07/06/2016 06:28 PM, RJ Nowling wrote:
Assuming I did this correctly :) :
[root@et32 ceph-1]# cat keyring
[osd.1]
ke
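For the open-files limit, a sketch of what I mean (the number is just an
example): either raise it for the shell that starts the daemon,
ulimit -n 65536
or, with systemd, set it in the unit or a drop-in:
[Service]
LimitNOFILE=65536
and then daemon-reload and restart the OSD.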
So just to update, I decided to ditch XenServer and go with Openstack… Thanks
for everyone’s help with this!
Cheers,
Mike
> On Jul 1, 2016, at 1:29 PM, Mike Jacobacci wrote:
>
> Yes, I would like to know too… I decided not to update the kernel as it
> could possibly affect xenserver’s stabili
Kees,
See http://dachary.org/?p=3189 for some simple instructions on testing your
crush rule logic.
Bob
On Wed, Jul 6, 2016 at 7:07 AM, Kees Meijs wrote:
> Hi Micha,
>
> Thank you very much for your prompt response. In an earlier process, I
> already ran:
> > $ ceph tell osd.* injectargs '--os
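For reference, a quick way to exercise a rule offline (a sketch; the rule number
and replica count are examples):
ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings
This shows which OSDs each sampled PG would map to, so you can sanity-check the
new rule before injecting it.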
Hi Pavan,
A couple of us have some pretty ugly home-grown scripts for doing this.
Basically just bash/awk that loop through the directories and grab the
fio bw/latency lines. Eventually the whole way that cbt stores data
should be revamped since the way data gets laid out in a nested
directo
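As a very rough sketch of that kind of loop (the directory layout and file names
below are assumptions; adjust to wherever cbt wrote the librbdfio output):
for f in $(find /tmp/cbt/results -type f -name 'output*'); do
    echo "== $f"
    grep -E 'bw=|iops=|lat.*avg=' "$f"
done
i.e. just walk the nested result directories and pull the bw/iops/latency summary
lines out of the raw fio output; turning that into CSV is a small awk step on top.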
Hey,
Those out there who are running production clusters: have you upgraded already
to Jewel?
I usually wait until .2 is out (which it is now for Jewel) but just looking for
largish deployment experiences in the field before I pull the trigger over the
weekend. It’s a largish upgrade going from
I noticed on that USE list that the 10.2.2 ebuild introduced a new
cephfs emerge flag, so I enabled that and emerged everywhere again. The
active mon is still crashing on the assertion though.
Bill Sharer
On 07/05/2016 08:14 PM, Bill Sharer wrote:
Relevant USE flags FWIW
# emerge -pv c
Thank you very much, I'll start testing the logic prior to implementation.
K.
On 06-07-16 19:20, Bob R wrote:
> See http://dachary.org/?p=3189 for some simple instructions on testing
> your crush rule logic.
On Wed, Jul 6, 2016 at 9:31 AM, RJ Nowling wrote:
> Sam, George, thanks for your help!
>
> The problem is that the data directory is symlinked to /home and the systemd
> unit file had `ProtectHome=true`
Good to know that feature works! :D
- Ken
Manual downgrade to 10.2.0 put me back in business. I'm going to mask
10.2.2 and then try to let 10.2.1 emerge.
Bill Sharer
On 07/06/2016 02:16 PM, Bill Sharer wrote:
I noticed on that USE list that the 10.2.2 ebuild introduced a new
cephfs emerge flag, so I enabled that and emerged everywher
Hi,
I am installing Ceph Hammer and integrating it with OpenStack Liberty for
the first time.
My local disk has only 500 GB but I need to create a 600 GB VM. So I have
created a soft link to the Ceph filesystem as
lrwxrwxrwx 1 root root 34 Jul 6 13:02 instances ->
/var/lib/ceph/osd/ceph-0/instances [r
On Thu, Jul 7, 2016 at 12:31 AM, Patrick Donnelly wrote:
>
> The locks were missing in 9.2.0. There were probably instances of the
> segfault unreported/unresolved.
Or even unseen :)
Race conditions are funny things and extremely subtle changes in
timing introduced
by any number of things can af
Hi Haomai,
I noticed your PR about supporting DPDK in Ceph:
https://github.com/ceph/ceph/pull/9230
It's great work for Ceph.
I want to do some tests based on the PR, but cannot use it yet. First, I cannot
find a package for DPDK on Debian/Ubuntu, so I downloaded the source
code of DPDK and compile
Thanks for your warmhearted response!
I did not find the root cause of 'client did not provide
supported auth type'.
I just rebuilt the keys and it's OK.
-- Original message --
From: "Goncalo Borges"
Date: 2016-06-28 1:18
To: "
Hi, All:)
When we make an OSD, we usually use SAS/SATA as data and an SSD (or a partition)
as the journal.
So does a pure SSD OSD also need a journal?
Maybe this is a low-level question, but I really do not understand :(
Regards,
XiuCai.
Hello,
I have a multitude of problems with the benchmarks and conclusions
here, more below.
But firstly, to address the question of the OP: definitely not filesystem-based
journals.
Another layer of overhead and delays, something I'd be willing to ignore
if we're talking about a full SSD as O
On Thu, 7 Jul 2016 09:57:44 +0800 秀才 wrote:
> Hi, All:)
>
>
> When we make an OSD, we usually use SAS/SATA as data and an SSD (or a
> partition) as the journal.
>
>
> So does a pure SSD OSD also need a journal?
>
Of course.
The journal is needed for data consistency with filestore.
This will go away with bluest
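For a pure-SSD OSD the journal is typically just colocated on the same device; a
sketch (the device name is an example):
ceph-disk prepare /dev/sdf
With no separate journal argument, ceph-disk creates both the data and the
journal partition on that one SSD.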
Hi George...
On my latest deployment we have set
# grep journ /etc/ceph/ceph.conf
osd journal size = 2
and configured the OSDs for each device by running 'ceph-disk prepare':
# ceph-disk -v prepare --cluster ceph --cluster-uuid XXX --fs-type
xfs /dev/sdd /dev/sdb
# ceph-disk -v
I have 12 journals on 1 SSD, but I wouldn't recommend it if you want any
real performance.
I use it on an archive type environment.
On Wed, Jul 6, 2016 at 9:01 PM Goncalo Borges
wrote:
> Hi George...
>
>
> On my latest deployment we have set
>
> # grep journ /etc/ceph/ceph.conf
> osd journal si
I copied rte_config.h to /usr/include/ and it passes ./configure, but when I
did 'make', I hit these errors:
CXXLD    libcommon_crc.la
../libtool: line 6000: cd: yes/lib: No such file or directory
libtool: link: cannot determine absolute directory name of `yes/lib'
Makefile:13645: recipe for
Thanks Mark, we did look at tools from Ben England here
https://github.com/bengland2/cbt/blob/fio-thru-analysis/tools/parse-fio-result-log.sh
but not with much luck; that’s partly because we didn’t bother to look into
the gory details when things didn’t work.
Thanks for the scripts you have atta
My previous email did not go through because of its size. Here goes a
new attempt:
Cheers
Goncalo
--- * ---
Hi Patrick, Brad...
Unfortunately, the other user application breaks ceph-fuse again (it is
a completely different application than in my previous test).
We have tested it in 4 machi
Previously the DPDK plugin only supported cmake.
Currently I'm working on splitting that PR into multiple clean PRs to get
them merged, so the previous PR isn't on my work list. Please move on to the
following changes.
On Thu, Jul 7, 2016 at 1:25 PM, 席智勇 wrote:
> I copy rte_config.h to /usr/include/ and it can pass the ./conf
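In case it helps, building that branch with cmake instead of autotools would
look roughly like this (a sketch; the WITH_DPDK option name is my assumption for
that PR and may differ on the branch):
./do_cmake.sh -DWITH_DPDK=ON
cd build
make
do_cmake.sh lives in the top of the ceph tree and forwards extra -D options to
cmake.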
Much appreciated~
2016-07-07 14:18 GMT+08:00 Haomai Wang :
> Previously dpdk plugin only support cmake.
>
> Currently I'm working on split that PR into multi clean PR to let
> merge. So previous PR isn't on my work list. plz move on the following
> changes
>
> On Thu, Jul 7, 2016 at 1:25 PM, 席智勇
Hi Gaurav,
Unfortunately I'm not completely sure about your setup, but I guess it
makes sense to configure Cinder and Glance to use RBD as a backend. It
seems to me you're trying to store VM images directly on an OSD's filesystem.
Please refer to http://docs.ceph.com/docs/master/rbd/rbd-openstack
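A minimal sketch of the settings involved (the pool and user names are the usual
examples from that document; adjust to your deployment):

cinder.conf:
[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = <libvirt secret uuid>

glance-api.conf:
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf

and, for the instance disks themselves, nova.conf:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf

That avoids putting anything under /var/lib/ceph/osd/... by hand.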