On 11/14/2014 12:52 PM, Alexandre DERUMIER wrote:
>>> Unfortunately I didn't, do you think hdparm could give wrong results?
> I really don't know how hdparm runs its benchmark (block size? number of
> threads?).
>
>
> BTW, have you also upgraded librbd on your KVM node? (and restarted the
> VM
Unfortunately I didn't, do you think hdparm could give wrong results?
On 11/14/2014 12:37 PM, Alexandre DERUMIER wrote:
> Have you tried to benchmark with something like fio?
>
>
> ----- Original Message -----
>
> From: "Florent Bautista"
> To: "Alexandre DERUMIER"
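As a rough sketch of the fio run suggested above (device path, block
sizes and runtime are assumptions, not from the thread), a sequential
and a random read test inside the VM could look like this:

  fio --name=seqread --filename=/dev/vda --direct=1 --rw=read --bs=4M \
      --ioengine=libaio --iodepth=16 --runtime=60 --time_based --group_reporting
  fio --name=randread --filename=/dev/vda --direct=1 --rw=randread --bs=4k \
      --ioengine=libaio --iodepth=32 --runtime=60 --time_based --group_reporting

Running the same jobs before and after the Giant upgrade would show
whether the regression depends on block size or queue depth.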
On 11/14/2014 12:20 PM, Alexandre DERUMIER wrote:
> can you try to disable the rbd cache? (it is enabled by default in Giant)
>
> [client]
> rbd_cache = false
>
>
> ----- Original Message -----
>
> From: "Florent Bautista"
> To: ceph-us...@ceph.com
> Sent: Friday
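If editing ceph.conf on the hypervisor is inconvenient, the same setting
can also be passed per disk in the QEMU rbd drive string (a sketch; pool
and image names are placeholders). Note that since QEMU 1.2 the drive
cache= option also controls rbd_cache (cache=none disables it,
cache=writeback enables it):

  qemu-system-x86_64 ... \
      -drive format=raw,file=rbd:rbd/vm-disk-1:rbd_cache=false,if=virtio

Either way the VM has to be restarted so that librbd reopens the image
with the new setting.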
Hi all,
On a testing cluster, I upgraded from Firefly to Giant.
Without changing anything, read performance on an RBD volume (in a VM)
has been divided by 4!
With Firefly, in the VM (/dev/vda is a Virtio RBD device):
hdparm -Tt /dev/vda
/dev/vda:
Timing cached reads: 20568 MB in 1.99 sec
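Note that hdparm -T measures cached (in-memory) reads while -t measures
buffered device reads, so it is a fairly blunt benchmark. A quick
cross-check that bypasses the page cache (a sketch, assuming the same
/dev/vda device):

  dd if=/dev/vda of=/dev/null bs=4M count=256 iflag=direct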
Hi Wido,
For a few days I have been getting "connection refused" on your mirror
when using rsync (via IPv4).
Is it a blacklist or an issue?
Thank you
Florent
On 04/09/2014 08:04 AM, Wido den Hollander wrote:
> Hi,
>
> I just enabled rsync on the eu.ceph.com mirror.
>
> eu.ceph.com mirrors from Ceph.com every
Sorry, problem solved: I forgot "/swift/v1" at the end of the URLs when
creating the Keystone endpoints...
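For anyone hitting the same problem, the endpoint creation with the
IceHouse-era keystone CLI would look roughly like this (host name,
region and service id are placeholders):

  keystone service-create --name swift --type object-store \
      --description "RadosGW Swift API"
  keystone endpoint-create --region RegionOne --service-id <swift-service-id> \
      --publicurl   http://radosgw.example.com/swift/v1 \
      --internalurl http://radosgw.example.com/swift/v1 \
      --adminurl    http://radosgw.example.com/swift/v1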
Hi all,
I want to set up a RadosGW (Firefly) + Keystone (IceHouse) environment,
but I have a problem I can't solve.
It seems that authentication is OK: the user gets a token.
But when he tries to create a bucket, he gets a 403 error.
I have this in the RadosGW logs:
2014-09-24 13:02:37.894674 7fd2b6db470
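For reference, a RadosGW + Keystone setup of that era typically needs a
ceph.conf section along these lines (a sketch; the section name, host
and values are assumptions, not taken from this thread):

  [client.radosgw.gateway]
  rgw keystone url = http://keystone-host:35357
  rgw keystone admin token = <keystone admin token>
  rgw keystone accepted roles = Member, admin
  rgw keystone token cache size = 500
  rgw keystone revocation interval = 600
  rgw s3 auth use keystone = true
  nss db path = /var/lib/ceph/nss

A 403 on bucket creation with an otherwise valid token is often caused
by the user's Keystone role not matching "rgw keystone accepted roles".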
Hi all,
Today I have a problem using CephFS. I use the latest Firefly release,
with a kernel 3.16 client (Debian experimental).
I have a directory in CephFS associated with a pool "pool2" (via
set_layout).
Everything works fine: I can add and remove files, and objects are
stored in the right pool.
But when C
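For context, associating a directory with a separate pool in that era
looks roughly like this (a sketch; the mount point is a placeholder, the
pool name is taken from the message, and the commands are assumptions
about the setup rather than quotes from it):

  ceph mds add_data_pool pool2                               # make the pool usable by CephFS
  setfattr -n ceph.dir.layout.pool -v pool2 /mnt/cephfs/dir  # point the directory at it
  getfattr -n ceph.dir.layout /mnt/cephfs/dir                # verify the layout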
On 09/05/2014 02:16 PM, Yan, Zheng wrote:
> On Fri, Sep 5, 2014 at 4:05 PM, Florent Bautista wrote:
>> Firefly :) the latest release.
>>
>> After a few days, the second MDS is still "stopping" and sometimes
>> consuming CPU... :)
> Try restarting the stopping MDS an
Firefly :) the latest release.
After a few days, the second MDS is still "stopping" and sometimes
consuming CPU... :)
On 09/04/2014 09:13 AM, Yan, Zheng wrote:
> which version of MDS are you using?
>
> On Wed, Sep 3, 2014 at 10:48 PM, Florent Bautista wrote:
>> Hi John and
cesses. If the MDS daemons are running but apparently
> unresponsive, you may be able to get a little bit of extra info from
> the running MDS by doing "ceph daemon mds.<id> <command>", where
> interesting commands are dump_ops_in_flight, status, objecter_ops
>
> Hopefully that will give u
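For reference, with a hypothetical MDS id of "a", those admin-socket
queries would look roughly like this (exact command names can vary
between releases):

  ceph daemon mds.a dump_ops_in_flight
  ceph daemon mds.a status
  ceph daemon mds.a objecter_requests   # called "objecter_ops" in the message above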
Hi everyone,
I use the Ceph Firefly release.
I had an MDS cluster with only one MDS until yesterday, when I tried to add
a second one to test multi-MDS. I thought I could go back to one MDS
whenever I wanted, but it seems I can't!
Both crashed last night, and I am unable to get them back today.
They ap
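For context, the intended rollback would normally be something along
these lines (a sketch; the rank number is an assumption and exact
behaviour differs between releases):

  ceph mds set_max_mds 1   # only one active MDS wanted
  ceph mds stop 1          # ask rank 1 to export its state and shut down
  ceph mds dump            # watch rank 1 go through "stopping"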
re is an osd cap that mentions the data pool but not the new
> pool you created; that would explain your symptoms.
>
> sage
>
> On Fri, 28 Feb 2014, Florent Bautista wrote:
>
>> Hi all,
>>
>> Today I'm testing CephFS with client-side kernel drivers.
>>
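A hedged sketch of checking and fixing the cap Sage describes (client
name and pool names are placeholders):

  ceph auth get client.cephfs-user        # inspect the current osd cap
  ceph auth caps client.cephfs-user \
      mon 'allow r' \
      mds 'allow' \
      osd 'allow rwx pool=data, allow rwx pool=pool2'   # include the new pool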
Hi all,
Today I'm testing CephFS with client-side kernel drivers.
My installation is composed of 2 nodes, each one with a monitor and an OSD.
One of them is also an MDS.
root@test2:~# ceph -s
cluster 42081905-1a6b-4b9e-8984-145afe0f22f6
health HEALTH_OK
monmap e2: 2 mons at
{0=192.168
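For completeness, a kernel-client mount of that setup would look roughly
like this (monitor address and secret file are placeholders):

  mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret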
>
> You can't change it afterwards, but when creating an image you can
> supply the --order value and change the default 22 into something you
> like:
>
> 22 = 4096KB
> 23 = 8192KB
> 24 = 16384KB
> 25 = 32768KB
> 26 = 65536KB
>
>> Or is it a fixed value in the Ceph architecture?
>>
>
> No, you can se
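As a sketch of what Wido describes (pool and image names are
placeholders), creating an image with 8 MB objects would look like this:

  rbd create CephTest/test-disk --size 32768 --order 23
  rbd -p CephTest info test-disk    # should report "order 23"

The order only applies at creation time; an existing image would have to
be copied into a new one created with the desired order (e.g. with
rbd export/import or qemu-img convert).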
Hi all,
I'm new to Ceph and I would like to know if there is any way to change
the size of Ceph's internal objects.
For example, when I put an image on RBD, I can see this:
rbd -p CephTest info base-127-disk-1
rbd image 'base-127-disk-1':
size 32768 MB in 8192 objects
order 22 (4096 kB objects)