Hi,
I measured only the data that I transferred from the client. For example, with a
500MB file, after the transfer completes, if I measure it the size will be 1GB,
not 10GB.
Our configuration is:
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> V.Ranganath
> Sent: 12 June 2015 06:06
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] New to CEPH - VR@Sheeltron
>
> Dear Sir,
>
> I am new to CEPH. I have the following queries:
>
Hi,
I am trying to compile/create packages for the latest ceph version (519c3c9) from
the hammer branch on an ARM platform.
For google-perftools I am compiling from
https://code.google.com/p/gperftools/ .
The packages are generated fine.
I have used the same branch/commit and commands to create pa
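Not from the original message, just for reference: the usual autotools flow for building gperftools from a source checkout looks roughly like this; the configure flags and job count are assumptions.

  cd gperftools            # source tree fetched from https://code.google.com/p/gperftools/
  ./autogen.sh             # only needed for a raw checkout; release tarballs already ship configure
  ./configure              # add --prefix=... here if a non-default install path is wanted
  make -j"$(nproc)"        # parallel build
  sudo make install        # install libtcmalloc/libprofiler before building the ceph packages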
Hi All,
I'm testing erasure coded pools. Is there any protection from bit-rot
errors on object read? If I modify one bit in an object part (directly on the
OSD) I'm getting a *broken* object:
mon-01:~ # rados --pool ecpool get `hostname -f`_16 - | md5sum
bb2d82bbb95be6b9a039d135cc7a5d0d -
# m
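For what it's worth, a minimal sketch of how one might check whether the cluster itself notices the flipped bit: deep-scrub the PG that holds the object. Pool and object names follow the example above; the <pgid> placeholder has to be filled in from the 'ceph osd map' output.

  # Find which PG / OSDs hold the object
  ceph osd map ecpool `hostname -f`_16
  # Ask for a deep scrub of that PG so the shards are re-read and verified
  ceph pg deep-scrub <pgid>
  # Check whether the PG is now flagged inconsistent
  ceph health detail
  # Only after understanding what is broken:
  ceph pg repair <pgid>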
>>> tombo wrote on Tuesday, 9 June 2015 at 21:44:
>
> Hello guys,
>
Hi tombo,
that seems to be related to http://tracker.ceph.com/issues/4282. We had the
same effect, but limited to 1 hour; after that the authentication works again.
When increasing the log level when the problem ap
>
> Be warned that running SSD and HD based OSDs in the same server is not
> recommended. If you need the storage capacity, I'd stick to the journals
> on SSDs plan.
Can you please elaborate on why running SSD and HDD based OSDs in the same
server is not recommended?
Thanks
Dominik
I don't know the official reason, but I would imagine the disparity in
performance would lead to weird behaviors and very spiky overall
performance. I would think that running a mix of SSD and HDD OSDs in the
same pool would be frowned upon, not just the same server.
On Fri, Jun 12, 2015 at 9:00 A
If you are careful about how you balance things, there's probably no
reason why SSDs and Spinners in the same server wouldn't work so long as
they are not in the same pool. I imagine that recommendation is
probably to keep things simple and have folks avoid designing unbalanced
systems.
Mark
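For what it's worth, a rough sketch of how the two kinds of OSDs can be kept in separate pools on hammer using separate CRUSH roots; the bucket, rule and pool names and the OSD id/weight below are made-up examples.

  # Put the SSD OSDs under their own CRUSH root (names are examples)
  ceph osd crush add-bucket ssd root
  ceph osd crush add-bucket node1-ssd host
  ceph osd crush move node1-ssd root=ssd
  ceph osd crush set osd.10 1.0 root=ssd host=node1-ssd
  # A rule that only chooses from the SSD root, and a pool bound to it
  ceph osd crush rule create-simple ssd_rule ssd host
  ceph osd pool create fastpool 128 128
  ceph osd pool set fastpool crush_ruleset <ruleset-id>   # id from 'ceph osd crush rule dump ssd_rule'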
On Fri, 12 Jun 2015 10:18:18 -0500 Mark Nelson wrote:
> If you are careful about how you balance things, there's probably no
> reason why SSDs and Spinners in the same server wouldn't work so long as
> they are not in the same pool. I imagine that recommendation is
> probably to keep things si
Sorry, it was a typo; I meant to say 1GB only.
I would say break the problem down like the following (a sketch of step 1 follows below).
1. Run some fio workload, say 1G, on RBD and run a ceph command like ‘ceph df’ to
see how much data was written. I am sure you will be seeing the same amount of data. Remember
that by default the ceph RADOS object size is 4MB.
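A minimal sketch of step 1; the image name, size and fio options are assumptions, not taken from this thread.

  rbd create test-img --size 4096        # 4GB test image
  rbd map test-img                       # maps to e.g. /dev/rbd0
  fio --name=write-test --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=write --bs=4M --size=1G
  ceph df                                # compare the pool's USED column with the 1G written
  rados df                               # 1G in 4M objects should show up as ~256 objects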
Greetings experts,
I've got a test setup with CephFS configured to use an erasure-coded pool +
cache tier on 0.94.2.
I have been writing lots of data to fill the cache to observe the behavior and
performance when it starts evicting objects to the erasure-coded pool.
The thing I have noticed
I noticed the amd64 Ubuntu 12.04 packages haven't been updated to 0.94.2.
Can you check this?
http://ceph.com/debian-hammer/dists/precise/main/binary-amd64/Packages
Package: ceph
Version: 0.94.1-1precise
Architecture: amd64
On Thu, Jun 11, 2015 at 10:35 AM Sage Weil wrote:
> This Hammer point release
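A quick way to compare what the repo offers with what is installed (assuming the ceph.com hammer repo is already in sources.list):

  apt-get update
  apt-cache policy ceph      # shows installed vs. candidate version and which repo it comes from
  # or inspect the index directly:
  curl -s http://ceph.com/debian-hammer/dists/precise/main/binary-amd64/Packages \
      | grep -A2 '^Package: ceph$'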
Just had a go at reproducing this, and yeah, the behaviour is weird.
Our automated testing for cephfs doesn't include any cache tiering, so
this is a useful exercise!
With a writeback overlay cache tier pool on an EC pool, I write a bunch
of files, then do a rados cache-flush-evict-all, the
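For anyone who wants to reproduce this, a sketch of the usual sequence for putting a writeback cache tier in front of an EC pool; the pool names and EC profile are placeholders.

  ceph osd erasure-code-profile set myprofile k=2 m=1
  ceph osd pool create ecpool 128 128 erasure myprofile
  ceph osd pool create hotpool 128 128              # replicated cache pool
  ceph osd tier add ecpool hotpool
  ceph osd tier cache-mode hotpool writeback
  ceph osd tier set-overlay ecpool hotpool
  ceph osd pool set hotpool hit_set_type bloom      # needed for promotion/flush decisions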
On Fri, Jun 12, 2015 at 11:07 AM, John Spray wrote:
>
> Just had a go at reproducing this, and yeah, the behaviour is weird. Our
> automated testing for cephfs doesn't include any cache tiering, so this is a
> useful exercise!
>
> With a writeback overlay cache tier pool on an EC pool, I write a
Thanks John, Greg.
If I understand this correctly, then, doing this:
rados -p hotpool cache-flush-evict-all
should start appropriately deleting objects from the cache pool. I just started
one up, and that seems to be working.
Otherwise, the cache's configured timeouts/limits should get th
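For reference, these are the kinds of limits that drive automatic flushing/eviction; the values below are placeholders, not recommendations.

  ceph osd pool set hotpool target_max_bytes 100000000000   # hard cap on cache size
  ceph osd pool set hotpool target_max_objects 1000000
  ceph osd pool set hotpool cache_target_dirty_ratio 0.4    # start flushing dirty objects at 40%
  ceph osd pool set hotpool cache_target_full_ratio 0.8     # start evicting clean objects at 80%
  ceph osd pool set hotpool cache_min_flush_age 600         # seconds
  ceph osd pool set hotpool cache_min_evict_age 1800        # seconds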
On Fri, Jun 12, 2015 at 11:59 AM, Lincoln Bryant wrote:
> Thanks John, Greg.
>
> If I understand this correctly, then, doing this:
> rados -p hotpool cache-flush-evict-all
> should start appropriately deleting objects from the cache pool. I just
> started one up, and that seems to be work
On Fri, Jun 12, 2015 at 1:07 AM, Paweł Sadowski wrote:
> Hi All,
>
> I'm testing erasure coded pools. Is there any protection from bit-rot
> errors on object read? If I modify one bit in an object part (directly on the
> OSD) I'm getting a *broken* object:
Sorry, are you saying that you're getting a broken
Okay, Sam thinks he knows what's going on; here's a ticket:
http://tracker.ceph.com/issues/12000
On Fri, Jun 12, 2015 at 12:32 PM, Gregory Farnum wrote:
> On Fri, Jun 12, 2015 at 1:07 AM, Paweł Sadowski wrote:
>> Hi All,
>>
>> I'm testing erasure coded pools. Is there any protection from bit-rot
On 06/08/2015 09:23 PM, Alexandre DERUMIER wrote:
In the short-term, you can remove the "rbd cache" setting from your ceph.conf
That's not true; you need to remove the ceph.conf file.
Removing rbd_cache is not enough, or the default rbd_cache=false will apply.
I have done tests, here the result ma
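For context, the setting under discussion lives in the [client] section of ceph.conf; a minimal example, with values that are illustrative only and not a recommendation:

  [client]
      rbd cache = true                            # set it explicitly instead of relying on the default
      rbd cache writethrough until flush = true   # stays safe until the guest issues its first flush

Checking what a running client actually uses (e.g. through an admin socket, if one is configured) is the safest way to see which setting won.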
On 12/06/2015 09:55, Karanvir Singh wrote:
Hi,
I am trying to compile/create packages for the latest ceph version (519c3c9)
from the hammer branch on an ARM platform.
For google-perftools I am compiling from
https://code.google.com/p/gperftools/ .
The packages are generated fine.
I have used
We've recently found similar problems when creating a new cluster over an older
one, even after using "ceph-deploy purge", because some of the data remained
in /var/lib/ceph/*/* (Ubuntu trusty) and the nodes were trying to use the old
keyrings.
Hope it helps,
Alex
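A rough checklist for wiping the old state before redeploying (node names are placeholders; double-check paths before running anything destructive):

  ceph-deploy purge node1 node2 node3        # removes the packages
  ceph-deploy purgedata node1 node2 node3    # removes /var/lib/ceph and /etc/ceph contents
  ceph-deploy forgetkeys                     # drops the old keyrings cached by ceph-deploy
  # then on each node verify nothing is left behind:
  ls /var/lib/ceph/ /etc/ceph/ 2>/dev/null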