On Tue, 24 Mar 2015 09:41:04 +0300 Kamil Kuramshin wrote:
> Yes I read it and do not understand what you mean when you say *verify
> this*? All 3335808 inodes are definitely files and directories created by
> the ceph OSD process:
>
What I mean is how/why did Ceph create 3+ million files, where in the t
On Tue, 24 Mar 2015 07:56:33 +0100 (CET) Alexandre DERUMIER wrote:
> Hi,
>
> >>dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
> >>
> >>1073741824 bytes (1.1 GB) copied, 2.53986 s, 423 MB/s
>
> How much do you get with o_dsync? (the ceph journal uses o_dsync, and some
> SSDs are
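For reference, the usual way to check this is the classic sync-write test (a sketch; the mount path and count here are just examples, not from this thread):

# bs=4k mimics small journal writes; oflag=direct,dsync forces every write
# to be flushed to the device before the next one, much like the Ceph journal does.
dd if=/dev/zero of=/mnt/ssd/tempfile bs=4k count=100000 oflag=direct,dsync

Many drives that look fine with a plain fdatasync test drop sharply under this pattern.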
>>No, 262144 ops total in 18 seconds.
>>
Oh ok ;)
>>"rbd bench-write" is clearly doing something VERY differently from "rados
>>bench" (and given its output was also written by somebody else), maybe some
>>Ceph dev can enlighten us?
Maybe rbd_cache is merging 4k blocks into 4M rados objects?
do
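One way to compare the two benchmarks more directly is to force the same 4k IO size in both (a sketch; the pool and image names are examples, not from this thread):

# write 4k objects with 16 concurrent ops for 60 seconds
rados bench -p rbd 60 write -b 4096 -t 16
# issue 4k writes against an existing image with 16 threads
rbd bench-write testimg --io-size 4096 --io-threads 16

Setting rbd_cache = false in the [client] section of ceph.conf for the test run would also rule out cache-side write merging.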
Hi,
First of all, thank you for your detailed answer.
My Ceph version is Hammer, sorry, I should have mentioned that.
Yes, we have 2 Intel 320s for the OS; the thought process behind this was that the
OS disk is not that important, and they were cheap but still SSDs (power consumption).
The plan was to put th
On Tue, 24 Mar 2015 08:36:40 +0100 (CET) Alexandre DERUMIER wrote:
> >>No, 262144 ops total in 18 seconds.
> >>
> Oh ok ;)
>
> >>"rbd bench-write" is clearly doing something VERY differently from
> >>"rados bench" (and given its output was also written by somebody
> >>else), maybe some Ceph dev
Hi,
Yeah, my problem is the performance with o_direct and o_dsync.
I think something got mixed up in the rbd bench-write results:
>>elapsed: 18 ops: 262144 ops/sec: 14466.30 bytes/sec: 59253946.11
That means:
elapsed: 18
ops: 262144
ops/sec: 14466.30
bytes/sec: 59253946.11
which works out to ~4 KiB per IO.
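Spelled out, just the arithmetic from the numbers above:

# average IO size = bytes/sec divided by ops/sec
echo "59253946.11 / 14466.30" | bc -l    # ~4096 bytes, i.e. 4 KiB per write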
Di
Hello,
On Tue, 24 Mar 2015 07:43:00 + Rottmann Jonas wrote:
> Hi,
>
> First of all, thank you for your detailed answer.
>
> My Ceph version is Hammer, sorry, I should have mentioned that.
>
> Yes, we have 2 Intel 320s for the OS; the thought process behind this was
> that the OS disk is not that i
I cannot reproduce the snapshot issue with BTRFS on the 3.17 kernel. My
test cluster has had 48 OSDs on BTRFS for four months without an issue
since going to 3.17. The only concern I have is potential slowness over
time. We are not using compression. We are going into production in one month
and alth
On Tue, 24 Mar 2015 07:24:05 -0600 Robert LeBlanc wrote:
> I can not reproduce the snapshot issue with BTRFS on the 3.17 kernel.
Good to know.
I shall give that a spin on one of my test cluster nodes then, once a
kernel over 3.16 actually shows up in Debian sid. ^o^
Christian
>My
> test clust
Hi,
this is ceph version 0.93
I can't create an image in an rbd-erasure-pool:
root@bd-0:~#
root@bd-0:~# ceph osd pool create bs3.rep 4096 4096 replicated
pool 'bs3.rep' created
root@bd-0:~# rbd create --size 1000 --pool bs3.rep test
root@bd-0:~#
root@bd-0:~# ceph osd pool create bs3.era 4096 4096
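For anyone hitting the same thing: in this release RBD cannot write directly into an erasure-coded pool, so the usual approach is to front the EC pool with a replicated cache tier. A rough sketch reusing the pool names above; the cache pool name and PG count are assumptions:

ceph osd pool create bs3.cache 128 128 replicated
ceph osd tier add bs3.era bs3.cache
ceph osd tier cache-mode bs3.cache writeback
ceph osd tier set-overlay bs3.era bs3.cache
rbd create --size 1000 --pool bs3.era test

With the overlay in place, the rbd create against the base pool goes through the cache tier.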
On Tue, Mar 24, 2015 at 12:13 AM, Christian Balzer wrote:
> On Tue, 24 Mar 2015 09:41:04 +0300 Kamil Kuramshin wrote:
>
>> Yes I read it and do not understand what you mean when you say *verify
>> this*? All 3335808 inodes are definitely files and directories created by
>> the ceph OSD process:
>>
> What
Hi Markus,
On 24/03/2015 14:47, Markus Goldberg wrote:
> Hi,
> this is ceph version 0.93
> I can't create an image in an rbd-erasure-pool:
>
> root@bd-0:~#
> root@bd-0:~# ceph osd pool create bs3.rep 4096 4096 replicated
> pool 'bs3.rep' created
> root@bd-0:~# rbd create --size 1000 --pool bs3.re
- Original Message -
> Hi Markus,
> On 24/03/2015 14:47, Markus Goldberg wrote:
> > Hi,
> > this is ceph version 0.93
> > I can't create an image in an rbd-erasure-pool:
> >
> > root@bd-0:~#
> > root@bd-0:~# ceph osd pool create bs3.rep 4096 4096 replicated
> > pool 'bs3.rep' created
> > roo
Hi,
is there any way to use ceph-deploy with lvm ?
Stefan
Excuse my typo, sent from my mobile phone.
Is there an enumerated list of issues with snapshots on cache pools.
We currently have snapshots on a cache tier and haven't seen any
issues (development cluster). I just want to know what we should be
looking for.
On Tue, Mar 24, 2015 at 9:21 AM, Stéphane DUGRAVOT
wrote:
>
>
> __
I'm not sure why crushtool --test --simulate doesn't match what the
cluster actually does, but the cluster seems to be executing the rules
even though crushtool doesn't. Just kind of stinks that you have to
test the rules on actual data.
Should I create a ticket for this?
On Mon, Mar 23, 2015 at
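For anyone trying to reproduce this, the offline test being compared against the live cluster is roughly the following (a sketch; the rule number and file name are assumptions):

# dump the compiled CRUSH map from the cluster and test a rule offline
ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-mappings

The mappings crushtool prints can then be compared with what "ceph pg dump" shows the cluster actually doing.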
On Tue, Mar 24, 2015 at 10:48 AM, Robert LeBlanc wrote:
> I'm not sure why crushtool --test --simulate doesn't match what the
> cluster actually does, but the cluster seems to be executing the rules
> even though crushtool doesn't. Just kind of stinks that you have to
> test the rules on actual da
Hi Experts,
After initially implementing Ceph with 3 OSDs, I am now facing an issue:
the cluster reports healthy, but access to the pools sometimes (or often) fails,
and sometimes it comes back to normal automatically.
For example:
[ceph@gcloudcon ceph-cluster]$ rados -p volumes ls
2015-03-24
> Hi Loic and Markus,
> By the way, Inktank does not support snapshots of a pool with cache tiering:
>
>*
> https://download.inktank.com/docs/ICE%201.2%20-%20Cache%20and%20Erasure%20Coding%20FAQ.pdf
Hi,
You seem to be talking about pool snapshots rather than RBD snapshots. But in
the linke
http://tracker.ceph.com/issues/11224
On Tue, Mar 24, 2015 at 12:11 PM, Gregory Farnum wrote:
> On Tue, Mar 24, 2015 at 10:48 AM, Robert LeBlanc wrote:
>> I'm not sure why crushtool --test --simulate doesn't match what the
>> cluster actually does, but the cluster seems to be executing the rules
On Tue, Mar 24, 2015 at 12:09 PM, Brendan Moloney wrote:
>
>> Hi Loic and Markus,
>> By the way, Inktank does not support snapshots of a pool with cache tiering:
>>
>>*
>> https://download.inktank.com/docs/ICE%201.2%20-%20Cache%20and%20Erasure%20Coding%20FAQ.pdf
>
> Hi,
>
> You seem to be talki
This was excellent advice. It should be on some official Ceph
troubleshooting page. It takes a while for the monitors to deal with new
info, but it works.
Thanks again!
--Greg
On Wed, Mar 18, 2015 at 5:24 PM, Sage Weil wrote:
> On Wed, 18 Mar 2015, Greg Chavez wrote:
> > We have a cuttlefish (0
Hi,
I'm having trouble setting up an object gateway on an existing cluster. The
cluster I'm trying to add the gateway to is running on an Ubuntu 12.04 (Precise)
virtual machine.
The cluster is up and running, with a monitor, two OSDs, and a metadata
server. It returns HEALTH_OK and active+clean, so I am so
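In case it is a configuration issue, here is a minimal sketch of the kind of ceph.conf section a gateway instance of that era expects (FastCGI frontend assumed; all names and paths below are assumptions, not taken from this setup):

[client.radosgw.gateway]
    host = precise-vm
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
    log file = /var/log/ceph/client.radosgw.gateway.log

The matching keyring has to be created with "ceph auth get-or-create" for that client name, given appropriate mon/osd caps, and copied to the gateway host.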
- Original Message -
> From: "Greg Meier"
> To: ceph-users@lists.ceph.com
> Sent: Tuesday, March 24, 2015 4:24:16 PM
> Subject: [ceph-users] Auth URL not found when using object gateway
>
> Hi,
>
> I'm having trouble setting up an object gateway on an existing cluster. The
> cluster I'
Hi,
Sreenath BH wrote:
> consider following values for a pool:
>
> Size = 3
> OSDs = 400
> %Data = 100
> Target PGs per OSD = 200 (This is default)
>
> The PG calculator generates number of PGs for this pool as : 32768.
>
> Questions:
>
> 1. The Ceph documentation recommends around 100 PGs/O
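For context, this is the arithmetic the PG calculator appears to apply to those inputs (assumed formula: OSDs x target PGs per OSD x %data / size, rounded up to a power of two):

# raw PG count = 400 OSDs x 200 target PGs per OSD x 100% data / 3 replicas
echo "400 * 200 * 1.00 / 3" | bc    # ~26666
# rounded up to the next power of two: 2^15 = 32768, which matches the calculator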
Maybe someone can shed some new light on this:
1. Only the SSD-cache OSDs are affected by this issue
2. Total cache OSD count is 12 x 60 GiB, backend filesystem is ext4
3. I have created 2 cache tier pools with replica size=3 on those OSDs,
both with pg_num:400, pgp_num:400
4. There was a crush ruleset:
superuser@ad