Hello,
On Mon, 8 Jun 2015 18:01:28 +1200 cameron.scr...@solnet.co.nz wrote:
> Just used the method in the link you sent me to test one of the EVO
> 850s, with one job it reached a speed of around 2.5MB/s but it didn't
> max out until around 32 jobs at 24MB/s:
>
I'm not the author of that page,
I recently did some testing of a few SSDs and found some surprising, and some
not so surprising things:
1) performance varies wildly with firmware, especially with cheaper drives
2) performance varies with time - even with S3700 - slows down after ~40-80GB
and then creeps back up
3) cheaper driv
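For anyone who wants to reproduce that kind of comparison, the usual single-job
synchronous 4k write test looks roughly like the following sketch (device path,
runtime and job count are placeholders; note this writes straight to the raw
device and destroys its contents):
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test
# raise --numjobs (e.g. to 32) to approximate the multi-job numbers quoted above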
On 05/06/15 21:50, Jelle de Jong wrote:
> I am new to ceph and I am trying to build a cluster for testing.
>
> after running:
> ceph-deploy osd prepare --zap-disk ceph02:/dev/sda
>
> It seems the udev rules find the disk and try to activate it, but then it
> gets stuck:
>
> http://paste.debian.net/pl
Hello,
All I can tell you is that I'm seeing the same thing frequently on
Debian Jessie and that it indeed seems to be a race condition between udev
and ceph-deploy (ceph-disk).
I "solved" this by killing of the process stuck on the target node (the
one with the tmp/mnt directory) and then doing
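A rough sketch of that kind of manual cleanup (the temporary mount point and
the partition name here are hypothetical, adjust to what ceph-disk actually
left behind on the node):
umount /var/lib/ceph/tmp/mnt.XXXXXX   # whatever temporary mount is still held
ceph-disk activate /dev/sda1          # then re-run activation on the data partition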
Trying out some tests on my pet VMs with 0.80.9 does not elicit any
journal failures... However, ISTR that running on bare metal was the
most reliable way to reproduce... (proceeding - I currently cannot get
ceph-deploy to install this configuration... I'll investigate further
tomorrow)!
Cheers
On Mon, 8 Jun 2015 09:44:54 +0200 Jan Schermer wrote:
> I recently did some testing of a few SSDs and found some surprising, and
> some not so surprising things:
>
> 1) performance varies wildly with firmware, especially with cheaper
> drives
> 2) performance varies with time - even with S3700 -
Mark,
one would hope you can't with 0.80.9 as per the release notes, while
0.80.7 definitely was susceptible.
Christian
On Mon, 08 Jun 2015 20:05:20 +1200 Mark Kirkwood wrote:
> Trying out some tests on my pet VMs with 0.80.9 does not elicit any
> journal failures...However ISTR that running
> On 08 Jun 2015, at 10:07, Christian Balzer wrote:
>
> On Mon, 8 Jun 2015 09:44:54 +0200 Jan Schermer wrote:
>
>> I recently did some testing of a few SSDs and found some surprising, and
>> some not so surprising things:
>>
>> 1) performance varies wildly with firmware, especially with cheape
On Mon, 8 Jun 2015 10:12:02 +0200 Jan Schermer wrote:
>
> > On 08 Jun 2015, at 10:07, Christian Balzer wrote:
> >
> > On Mon, 8 Jun 2015 09:44:54 +0200 Jan Schermer wrote:
> >
> >> I recently did some testing of a few SSDs and found some surprising,
> >> and some not so surprising things:
> >>
> On 08 Jun 2015, at 10:40, Christian Balzer wrote:
>
> On Mon, 8 Jun 2015 10:12:02 +0200 Jan Schermer wrote:
>
>>
>>> On 08 Jun 2015, at 10:07, Christian Balzer wrote:
>>>
>>> On Mon, 8 Jun 2015 09:44:54 +0200 Jan Schermer wrote:
>>>
I recently did some testing of a few SSDs and found
Thank you for taking the time to reply.
I removed the file /lib/udev/rules.d/95-ceph-osd.rules from all my nodes
and tried to recreate the OSDs.
The pastebin below is an example of the commands:
ceph-deploy disk zap ceph02:/dev/sdc
ceph-deploy osd prepare --zap-disk ceph02:sda:/dev/sdc
ceph-dep
Hi
Currently we use libvirt VMs with a ceph rbd pool for storage.
By default we want to have "disk cache=writeback" for all disks in libvirt.
In /etc/ceph/ceph.conf we have "rbd cache = true", and in each VM's XML
we set "cache=writeback" for all disks in the VM configuration.
We want to use one ocfs
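Concretely, each RBD disk in the domain XML carries its cache mode on the
<driver> element, along these lines (pool, image, monitor address and target
device are placeholders, and the cephx <auth> element is omitted for brevity):
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='libvirt-pool/vm-disk-1'>
    <host name='10.0.0.1' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>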
On Mon, Jun 8, 2015 at 1:24 PM, Arnaud Virlet wrote:
> Hi
>
> Actually we use libvirt VM with ceph rbd pool for storage.
> By default we want to have "disk cache=writeback" for all disks in libvirt.
> In /etc/ceph/ceph.conf, we have "rbd cache = true" and for each VMs XML we
> set "cache=writeback
Hello everybody,
I could not get ceph to work with the ceph packages shipped with debian
jessie: http://paste.debian.net/211771/
So I tried to use apt-pinning to use the eu.ceph.com apt repository, but
there are too many dependencies that are unresolved.
This is my apt configuration: http://paste
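In essence the pinning raises the eu.ceph.com origin above the Debian archive,
roughly like this (release name, repository path and priority are illustrative,
not necessarily the exact contents of the paste):
# /etc/apt/sources.list.d/ceph.list
deb http://eu.ceph.com/debian-hammer/ jessie main
# /etc/apt/preferences.d/ceph.pref
Package: *
Pin: origin "eu.ceph.com"
Pin-Priority: 1001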
Hello,
On a fresh install of ceph, I started to get these errors on one osd:
0> 2015-06-08 14:22:25.582417 7f21d9239880 -1 osd/OSD.h: In function
'OSDMapRef OSDService::get_map(epoch_t)' thread 7f21d9239880 time
2015-06-08 14:22:25.579846
osd/OSD.h: 716: FAILED assert(ret)
ceph version 0.94
Thanks for your reply,
On 06/08/2015 12:31 PM, Andrey Korolyov wrote:
On Mon, Jun 8, 2015 at 1:24 PM, Arnaud Virlet wrote:
Hi
Actually we use libvirt VM with ceph rbd pool for storage.
By default we want to have "disk cache=writeback" for all disks in libvirt.
In /etc/ceph/ceph.conf, we have "
Isn’t the right parameter “network=writeback” for network devices like RBD?
Jan
> On 08 Jun 2015, at 12:31, Andrey Korolyov wrote:
>
> On Mon, Jun 8, 2015 at 1:24 PM, Arnaud Virlet wrote:
>> Hi
>>
>> Actually we use libvirt VM with ceph rbd pool for storage.
>> By default we want to have "dis
On Mon, Jun 8, 2015 at 2:48 PM, Arnaud Virlet wrote:
> Thanks for you reply,
>
> On 06/08/2015 12:31 PM, Andrey Korolyov wrote:
>>
>> On Mon, Jun 8, 2015 at 1:24 PM, Arnaud Virlet
>> wrote:
>>>
>>> Hi
>>>
>>> Actually we use libvirt VM with ceph rbd pool for storage.
>>> By default we want to hav
On 08/06/15 13:22, Jelle de Jong wrote:
> I could not get ceph to work with the ceph packages shipped with debian
> jessie: http://paste.debian.net/211771/
>
> So I tried to use apt-pinning to use the eu.ceph.com apt repository, but
> there are to many dependencies that are unresolved.
>
> This i
On Mon, 08 Jun 2015 14:14:51 +0200 Jelle de Jong wrote:
> On 08/06/15 13:22, Jelle de Jong wrote:
> > I could not get ceph to work with the ceph packages shipped with debian
> > jessie: http://paste.debian.net/211771/
> >
> > So I tried to use apt-pinning to use the eu.ceph.com apt repository,
>
Hey,
The next ceph breizh camp will take place in Rennes (Brittany) on the 16th of June.
The meetup will begin at 10h00 at:
IRISA - Institut de Recherche en Informatique et Systèmes Aléatoires
263 Avenue Général Leclerc
35000 Rennes
building IRISA/Inria 12 F, allée Jean Perrin
https://goo.gl/maps/hok
On 06/08/2015 01:59 PM, Andrey Korolyov wrote:
Do I understand you right that you are using a certain template engine
for both OCFS- and RBD-backed volumes within a single VM's config, and
it does not allow per-disk cache mode separation in the suggested way?
My VM has 3 disks on the RBD backend. disks
On Mon, Jun 8, 2015 at 3:44 PM, Arnaud Virlet wrote:
>
>
> On 06/08/2015 01:59 PM, Andrey Korolyov wrote:
>>
>>
>> Am I understand you right that you are using certain template engine
>> for both OCFS- and RBD-backed volumes within a single VM` config and
>> it does not allow per-disk cache mode s
On 06/08/2015 03:17 PM, Andrey Korolyov wrote:
On Mon, Jun 8, 2015 at 3:44 PM, Arnaud Virlet wrote:
On 06/08/2015 01:59 PM, Andrey Korolyov wrote:
Am I understand you right that you are using certain template engine
for both OCFS- and RBD-backed volumes within a single VM` config and
it
On Mon, Jun 8, 2015 at 6:36 PM, Arnaud Virlet wrote:
>
>
> On 06/08/2015 03:17 PM, Andrey Korolyov wrote:
>>
>> On Mon, Jun 8, 2015 at 3:44 PM, Arnaud Virlet
>> wrote:
>>>
>>>
>>>
>>> On 06/08/2015 01:59 PM, Andrey Korolyov wrote:
Am I understand you right that you are using c
Hmm ... looking at the latest version of QEMU, it appears that the RBD cache
settings are changed prior to reading the configuration file instead of
overriding the value after the configuration file has been read [1]. Try
specifying the path to a new configuration file via the
"conf=/path/to/m
Hi Patrick,
It looks confusing to use this. Do we need to upload a txt file
to describe the blueprint instead of editing it directly online?
On Wed, May 27, 2015 at 5:05 AM, Patrick McGarry wrote:
> It's that time again, time to gird up our loins and submit blueprints
> for all work slated for the
On Mon, Jun 8, 2015 at 6:50 PM, Jason Dillaman wrote:
> Hmm ... looking at the latest version of QEMU, it appears that the RBD cache
> settings are changed prior to reading the configuration file instead of
> overriding the value after the configuration file has been read [1]. Try
> specifying
Hi,
I am trying to compile/create packages for the latest ceph version (519c3c9) from
the hammer branch on an ARM platform. For google-perftools I am compiling from
https://code.google.com/p/gperftools/ .
The packages are generated fine. I have used the same branch/commit and commands
to create package
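The build steps themselves are just the stock Debian packaging run from the
checkout, roughly along these lines (a sketch; the exact branch handling and
flags may differ from what I actually ran):
git clone --recursive -b hammer https://github.com/ceph/ceph.git
cd ceph
apt-get build-dep ceph           # pull in the declared build dependencies
dpkg-buildpackage -us -uc -j4    # unsigned packages, 4 parallel jobs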
Hi,
>>looking at the latest version of QEMU,
It seems it was already this behaviour since the addition of rbd_cache
parsing in rbd.c by Josh in 2012:
http://git.qemu.org/?p=qemu.git;a=blobdiff;f=block/rbd.c;h=eebc3344620058322bb53ba8376af4a82388d277;hp=1280d66d3ca73e552642d7a60743a0e2ce05f664;
On 06/08/2015 11:19 AM, Alexandre DERUMIER wrote:
Hi,
looking at the latest version of QEMU,
It seems it was already this behaviour since the addition of rbd_cache
parsing in rbd.c by Josh in 2012:
http://git.qemu.org/?p=qemu.git;a=blobdiff;f=block/rbd.c;h=eebc3344620058322bb53ba8376af4a82
On Mon, Jun 8, 2015 at 10:43 PM, Josh Durgin wrote:
> On 06/08/2015 11:19 AM, Alexandre DERUMIER wrote:
>>
>> Hi,
looking at the latest version of QEMU,
>>
>>
>> It's seem that it's was already this behaviour since the add of rbd_cache
>> parsing in rbd.c by josh in 2012
>>
>>
>> http://
> On Mon, Jun 8, 2015 at 10:43 PM, Josh Durgin wrote:
> > On 06/08/2015 11:19 AM, Alexandre DERUMIER wrote:
> >>
> >> Hi,
>
> looking at the latest version of QEMU,
> >>
> >>
> >> It's seem that it's was already this behaviour since the add of rbd_cache
> >> parsing in rbd.c by josh in 2
Hi,
Gregory Farnum wrote:
>> 1. Can you confirm to me that currently it's impossible to restrict the read
>> and write access of a ceph account to a specific directory of a cephfs?
>
> It's sadly impossible to restrict access to the filesystem hierarchy
> at this time, yes. By making use of the
On Thu, Jun 4, 2015 at 1:13 AM, Luis Periquito wrote:
> Hi all,
>
> I've seen several chats on the monitor elections, and how the one with the
> lowest IP is always the master.
>
> Is there any way to change or influence this behaviour? Other than changing
> the IP of the monitor themselves?
Nope
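You can at least see which monitor currently holds the lead, and the rank
ordering, with something like:
ceph quorum_status --format json-pretty   # check "quorum_leader_name" and the ranks
ceph mon dump                             # ranks follow the monitors' address ordering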
Right - I see from the 0.80.8 notes that we merged a fix for #9073.
However (unfortunately) there were a number of patches that we
experimented with on this issue - and this looks like one of the earlier
ones (i.e. not what we merged into master at the time), which is a bit
confusing (maybe it was t
Hi,
On 27/05/2015 22:34, Gregory Farnum wrote:
> Sorry for the delay; I've been traveling.
No problem, me too, I'm not very quick to answer either. ;)
>> Ok, I see. According to the online documentation, the way to close
>> a cephfs client session is:
>>
>> ceph daemon mds.$id session ls
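As a sanity check against the documentation, the admin socket itself will list
which commands the running MDS actually supports:
ceph daemon mds.$id help   # lists the admin socket commands available on this MDS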
Hi Ilya,
Thanks for the reply. I knew that a v2 image can be mapped when using the default
striping parameters, without --stripe-unit or --stripe-count.
It is just that the rbd performance (IOPS & bandwidth) we tested hasn't met our
goal. We found that at this point the OSDs did not seem to be the bottleneck, so we want
-drive
file=rbd:poolceph1/vm-106-disk-1:mon_host=10.5.0.11;10.5.0.12;10.5.0.13:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/cephzimbra.keyring,if=none,id=drive-virtio0,aio=native,cache=none,detect-zeroes=on
- Original Message -
From: "Jason Dillaman"
To: "Andrey Korolyov"
Cc: "Josh
>>In the short-term, you can remove the "rbd cache" setting from your ceph.conf
That's not true, you need to remove the ceph.conf file.
Removing rbd_cache is not enough, or the default rbd_cache=false will apply.
I have done tests; here is the result matrix:
host ceph.conf : no rbd_cache: guest c
The previous matrix was with ceph < giant.
With ceph >= giant, rbd_cache=true by default, so cache=none does not work if a
ceph.conf exists.
host conf : no value : guest cache=writeback : result : cache
host conf : rbd_cache=false : guest cache=writeback : result : nocache (wrong)
host co
Oops, sorry, my bad, I had wrong settings when testing.
You are right, removing rbd_cache from ceph.conf is enough to remove the override.
host conf : no value : guest cache=writeback : result : cache
host conf : rbd_cache=false : guest cache=writeback : result : nocache (wrong)
host con
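So if you really do want to force a cache behaviour from the host side,
regardless of the guest flag, it has to be set explicitly, e.g. in the [client]
section of ceph.conf (illustrative):
[client]
rbd cache = false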
Hi,
I'm doing benchmarks (ceph master branch) with randread 4k qdepth=32,
and rbd_cache=true seems to limit the iops to around 40k.
no cache
1 client - rbd_cache=false - 1osd : 38300 iops
1 client - rbd_cache=false - 2osd : 69073 iops
1 client - rbd_cache=false - 3osd : 78292 iops
cache
--
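For reference, a fio job roughly equivalent to these runs would be (pool, image
and client names are placeholders; the real tests also vary the client count):
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
rw=randread
bs=4k
runtime=60
time_based
[rbd-qd32]
iodepth=32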
Hi Patrick,
I am facing a 403 error while trying to upload the blueprint.
With regards,
Shishir
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Haomai
Wang
Sent: Monday, June 08, 2015 10:16 PM
To: Patrick McGarry
Cc: Ceph Devel; Ceph-User; ceph-annou