On Wed, Jul 10, 2013 at 3:28 AM, Gandalf Corvotempesta
wrote:
> Thank you for the response.
> You are talking about median expected writes, but should I consider the single
> disk's write speed or the network speed? A single disk is 100MB/s, so
> 100*30=3000MB of journal for each OSD? Or should I consid
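For what it's worth, the sizing rule I usually see quoted (my reading of the
docs, so treat this as an assumption rather than an authoritative answer)
takes the *slower* of disk and network bandwidth:

  journal size >= 2 * expected throughput * filestore max sync interval
  expected throughput = min(100 MB/s disk, ~110 MB/s for 1GbE) = 100 MB/s
  with the default 5s sync interval:  2 * 100 * 5  = 1000 MB
  with a 30s sync interval:           2 * 100 * 30 = 6000 MB

so the answer depends both on which link is the bottleneck and on what your
filestore max sync interval is set to.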
Hi,
I've just subscribed to the mailing list. I may be breaking the thread as I
cannot "reply to all" ;o)
I'd like to share my research on understanding this behavior.
A rados put shows the expected behavior, while rados bench does not, even
with concurrency set to one.
As a newcomer,
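For reference, a minimal pair of commands that reproduces the comparison
(pool name, object name and sizes below are placeholders of mine, not the
exact commands used in the original test):

  rados -p testpool put testobj ./4mb-file
  rados bench -p testpool 60 write -t 1 -b 4194304 --no-cleanup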
Hi all,
I have been able to create a user with --caps="usage=read, write" and retrieve
its usage with
GET /admin/usage?format=json HTTP/1.1
Host: ceph-server
Authorization: AWS {access-key}:{hash-of-header-and-secret}
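(For anyone trying to reproduce that request by hand, here is a rough
curl/openssl sketch of the signing; the canonical resource "/admin/usage" and
the GMT date format are my assumptions about radosgw's S3-style signing, so
verify them against your gateway:)

  access_key=...; secret_key=...      # the user with usage=read,write caps
  date_hdr="$(LC_ALL=C TZ=GMT date '+%a, %d %b %Y %H:%M:%S GMT')"
  string_to_sign="$(printf 'GET\n\n\n%s\n/admin/usage' "$date_hdr")"
  signature="$(printf '%s' "$string_to_sign" \
      | openssl dgst -sha1 -hmac "$secret_key" -binary | base64)"
  curl -H "Date: $date_hdr" \
       -H "Authorization: AWS $access_key:$signature" \
       "http://ceph-server/admin/usage?format=json"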
After this, I created a user with --caps="user=read, write", but I
Hi all,
I have met the same problem.
I want to use the default "admin" user to manage Ceph users
through the "Admin Ops API", which is a REST API.
The documentation says that it should be used like the S3 API.
However, I tried the AWS Java API and composed a program, but the
program can't wo
Thank you Mark! This is very interesting work ;)
Awaiting the other parts!
On Tue, Jul 9, 2013 at 4:41 PM, Mark Nelson wrote:
> Hi Guys,
>
> Just wanted to let everyone know that we've released part 1 of a series of
> performance articles that looks at Cuttlefish vs Bobtail on our Supermicro
> te
Sorry, no updates on my side. My wife just had our second baby and I'm busy
with reality (changing nappies and stuff).
--
Sent from my mobile device
On 09.07.2013, at 22:18, "Jeppesen, Nelson"
<nelson.jeppe...@disney.com> wrote:
Any updates on this? My production cluster has been running on
On 09/07/13 19:32, Sage Weil wrote:
> * mds: support robust lookup by ino number (good for NFS) (Yan, Zheng)
Hello Sage,
I recall that lookups-by-inode could historically return -ESTALE if the
inode in question was no longer in the MDS cache. This presumably
removes that deficiency?
Is there
Hi all,
I have installed v0.37 of tgt.
To test this feature I followed the
http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/ guide.
When I launch the command:
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 0
--backing-store iscsi-image --bstype rbd
it fails.
First I th
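Two quick checks that usually narrow this down (assuming the image lives in
the default 'rbd' pool and tgt reads /etc/ceph/ceph.conf; both are guesses
about this particular setup, and the tgtadm invocation may differ on other
builds -- the point is just to confirm rbd is a known backing store):

  # does this tgt build actually list rbd among its backing stores?
  tgtadm --lld iscsi --mode system --op show | grep -i -A5 'backing store'
  # can the same user reach the cluster and see the image at all?
  rbd -p rbd info iscsi-image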
I was following the tech preview of libvirt/ceph integration in XenServer,
but ran into an issue with ceph auth in setting up the SR. Any help would
be greatly appreciated.
The UUID was generated per: http://eu.ceph.com/docs/wip-dump/rbd/libvirt/
According to Inktank, the storage pool auth syntax differs
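In case it helps, this is the generic libvirt-side secret setup I use (plain
libvirt, not XenServer-specific; the UUID below is a placeholder for the one
generated from the guide, and the SR/pool auth syntax itself is the part I
cannot confirm). Contents of secret.xml:

  <secret ephemeral='no' private='no'>
    <uuid>REPLACE-WITH-GENERATED-UUID</uuid>
    <usage type='ceph'>
      <name>client.admin secret</name>
    </usage>
  </secret>

then:

  virsh secret-define secret.xml
  virsh secret-set-value --secret REPLACE-WITH-GENERATED-UUID \
        --base64 "$(ceph auth get-key client.admin)"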
On Wed, Jul 10, 2013 at 1:07 AM, Alvaro Izquierdo Jimeno
wrote:
> Hi all,
>
> I have been able to create a user with --caps="usage=read, write" and retrieve
> its usage with
> GET /admin/usage?format=json HTTP/1.1
> Host: ceph-server
> Authorization: AWS {access-key}:{hash-of-header-and-secret}
On Wed, 10 Jul 2013, David McBride wrote:
> On 09/07/13 19:32, Sage Weil wrote:
>
> > * mds: support robust lookup by ino number (good for NFS) (Yan, Zheng)
>
> Hello Sage,
>
> I recall that lookups-by-inode could historically return -ESTALE if the
> inode in question was no longer in the MDS cache
Hello again!
Part 2 is now out! We've got a whole slew of results for 4K FIO tests
on RBD:
http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-2-4k-rbd-performance/
Mark
On 07/09/2013 08:41 AM, Mark Nelson wrote:
Hi Guys,
Just wanted to let everyone know that we've released part
Hi Noah,
Some results for the read tests:
I set client_readahead_min=4193404, which is also the default for Hadoop's
dfs.datanode.readahead.bytes. I ran the DFSIO test 6 times each for HDFS,
Ceph with the default readahead, and Ceph with readahead=4193404. Setting
the readahead in Ceph did give about a 10%
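(For anyone reproducing this: the client-side readahead knob can be set in the
[client] section of ceph.conf on the Hadoop nodes -- the section placement is
my assumption here:)

  [client]
      client readahead min = 4193404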
On Wed, Jul 10, 2013 at 9:17 AM, ker can wrote:
>
> Seems like a good readahead value that the Ceph Hadoop client can use as a
> default!
Great, I'll add this tunable to the list of changes to be pushed into the
next release.
> I'll look at the DFS write tests later today; any tuning suggest
On 07/06/2013 04:51 AM, Xue, Chendi wrote:
Hi, all
I want to fetch debug librbd and debug rbd logs when I am using a VM to read/
write.
Details:
I created a volume from Ceph and attached it to a VM.
So I suppose that when I do reads/writes in the VM, I can get some rbd debug
logs in the
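The recipe I know of is to add something like the following to the [client]
section of ceph.conf on the compute node that runs the VM, and make sure the
log path is writable by the qemu/libvirt user -- the paths and debug levels
below are just examples of mine:

  [client]
      debug rbd = 20
      debug rados = 20
      log file = /var/log/ceph/qemu-rbd.$pid.log
      admin socket = /var/run/ceph/rbd-client.$pid.asok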
On Wed, Jul 10, 2013 at 12:38 AM, Erwan Velu wrote:
> Hi,
>
> I've just subscribed to the mailing list. I may be breaking the thread as I
> cannot "reply to all" ;o)
>
> I'd like to share my research on understanding this behavior.
>
> A rados put shows the expected behavior, while rados bench
Ran the DFS IO write tests:
- Increasing the journal log size did not make any difference for me ... I
guess the number I had set was sufficient. For the rest of the tests I kept
it at a generous 10GB.
- Separating out the journal from the data disk did make a difference, as
expected. Unfortunately
I have been running a development radosgw instance for a while and am now
looking to start a larger production one. I am trying to track down
some issues I am having with PUTs of larger files from certain S3
clients. I am running the 0.61.4 RPMs on RHEL6 and have the
following in my ceph.conf on th
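(Not sure this is your issue, but the usual suspect when only certain S3
clients fail on larger PUTs is Expect: 100-continue handling; if you are not
running the Ceph-patched mod_fastcgi, the workaround I have seen is to set
the following in your gateway's client section -- the section name below is
a guess at your setup:)

  [client.radosgw.gateway]
      rgw print continue = false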
On Wed, Jul 10, 2013 at 6:28 PM, Derek Yarnell wrote:
> I have been running a development radosgw instance for a while and am now
> looking to start a larger production one. I am trying to track down
> some issues I am having with PUTs of larger files from certain S3
> clients. I am running the 0.
On 07/10/2013 04:12 AM, Toni F. [ackstorm] wrote:
Hi all,
I have installed v0.37 of tgt.
To test this feature I followed the
http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/ guide.
When I launch the command:
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 0
--backing-store iscsi-image --bstype rbd
-----Original Message-----
From: Yehuda Sadeh [mailto:yeh...@inktank.com]
Sent: Wednesday, July 10, 2013 16:50
To: Alvaro Izquierdo Jimeno
CC: Bright; ceph-users
Subject: Re: [ceph-users] How to use Admin Ops API in Ceph Object Storage
On Wed, Jul 10, 2013 at 1:07 AM, Alvaro Izquierdo