Hello John,
Created a tracker for this issue, see:
http://tracker.ceph.com/issues/18994
Thanks
On Fri, Feb 17, 2017 at 6:15 PM, John Spray wrote:
> On Fri, Feb 17, 2017 at 6:27 AM, Muthusamy Muthiah
> wrote:
> > On one of our platforms, mgr uses 3 CPU cores. Is there a ticket available
> > for
>
On 02/19/2017 12:15 PM, Patrick Donnelly wrote:
On Sat, Feb 18, 2017 at 2:55 PM, Noah Watkins wrote:
The least intrusive solution is to simply change the sandbox to allow the
standard file system module loading function to work as expected. Then any
user would need to make sure that every OSD had consi
On Mon, Feb 20, 2017 at 6:37 AM, Tim Serong wrote:
> Hi All,
>
> Pretend I'm about to upgrade from one Ceph release to another. I want
> to know that the cluster is healthy enough to sanely upgrade (MONs
> quorate, no OSDs actually on fire), but don't care about HEALTH_WARN
> issues like "too man
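(Not an answer, just for reference: a minimal sketch of the checks one might
script with the stock ceph CLI today, ignoring HEALTH_WARN entirely. The
particular selection of commands is only an illustration.)

  ceph quorum_status        # are the MONs quorate? (JSON output)
  ceph osd stat             # how many OSDs are up/in?
  ceph health detail        # full health output; grep away the warnings you accept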
Could this be a synchronization issue where multiple clients access the
same object: one client (the VM/qemu) is updating the object while another
client (the ceph rbd export/export-diff run) is reading the content of the
same object? How does Ceph ensure consistency in this case?
Z
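(Not sure this addresses the root cause, but the usual way to get a
point-in-time consistent read for an export is to export from a snapshot
rather than from the live image head. A rough sketch; pool, image and
snapshot names are made up:)

  rbd snap create rbd/vm-disk@backup1              # freeze a point-in-time view
  rbd export rbd/vm-disk@backup1 /tmp/vm-disk.img  # read the snapshot, not the live head
  rbd snap rm rbd/vm-disk@backup1                  # clean up afterwards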
Refer to my previous post for data you can gather that will help
narrow this down.
On Mon, Feb 20, 2017 at 6:36 PM, Jay Linux wrote:
> Hello John,
>
> Created tracker for this issue Refer-- >
> http://tracker.ceph.com/issues/18994
>
> Thanks
>
> On Fri, Feb 17, 2017 at 6:15 PM, John Spray wrote:
Hey all,
I created a small dev Ceph cluster and dmcrypted the OSDs, but I can't seem
to see where the keys are stored afterwards.
From looking at the debug notes the dir should be "/etc/ceph/dmcrypt-keys",
but that folder does not get created and no keys are stored.
Any help on this would be great.
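(If I recall correctly, newer ceph-disk releases no longer write keys under
/etc/ceph/dmcrypt-keys but store them in the monitors' config-key store
instead; treat the key prefix below as an assumption from memory.)

  ceph config-key list | grep dm-crypt    # dmcrypt keys show up as dm-crypt/osd/<uuid>/... if stored on the mons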
Hi, everyone.
I read the source code. Could this be the case: a "WRITE" op directed at
OBJECT X is followed by a series of ops, the last of which is a "READ" op
directed at the same OBJECT, coming from the "rbd EXPORT" command; although
the "WRITE" op modified the ObjectContext of OBJECT
AFAIK, that fix is scheduled to be included in Hammer 0.94.10 (which
hasn't been released yet).
Is this issue only occurring on cloned images? Since Hammer is nearly
end-of-life, can you repeat this issue on Jewel? Are the affected
images using cache tiering? Can you determine an easy-to-reproduce
Hello, world!\n
I have been using CEPH RBD for a year or so as a virtual machine storage
backend, and I am thinking about moving another of our subsystems to CEPH:
The subsystem in question is a simple replicated object store,
currently implemented in custom C code by yours truly. My ques
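(For context, that kind of workload maps fairly directly onto the rados
object interface; a minimal sketch using the rados CLI, with pool name,
object name and pg count invented:)

  ceph osd pool create objstore 64                   # replicated pool, pg count is a placeholder
  rados -p objstore put my-object /tmp/payload       # store a blob under a key
  rados -p objstore get my-object /tmp/payload.out   # read it back
  rados -p objstore ls                               # list objects in the pool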
Thanks! Seems non-standard, but it works. :)
C.
Anyone know what's wrong?
You can clear these by setting them to zero.
John
Everything is Jewel 10.2.5.
Thanks!
Chad.
Hi Brian,
On 14 February 2017 at 19:33, Brian Andrus
wrote:
>
>
> On Tue, Feb 14, 2017 at 5:27 AM, Tyanko Aleksiev wrote:
>
>> Hi Cephers,
>>
>> At University of Zurich we are using Ceph as a storage back-end for our
>> OpenStack installation. Since we recently reached 70% of occupancy
>> (m
Hi Wido,
Just to make sure I have everything straight,
> If the PG still doesn't recover do the same on osd.307 as I think that 'ceph
> pg X query' still hangs?
> The info from ceph-objectstore-tool might shed some more light on this PG.
You mean run the objectstore command on 307, not remove
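(For reference, the read-only info invocation looks roughly like this; run it
with the OSD stopped. The paths and unit name assume a standard systemd Jewel
deployment, and <pgid> stands for the PG in question.)

  systemctl stop ceph-osd@307
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-307 --op info --pgid <pgid>
  systemctl start ceph-osd@307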
Hi,
again, as I said, in normal operation everything is fine with SMR. They
perform well, in particular for large sequential writes, because of the
on-platter cache (20 GB, I think). All tests we have done were with good
SSDs for the OSD cache.
Things blow up during backfill / recovery because the
On Mon, Feb 20, 2017 at 6:46 AM, Jan Kasprzak wrote:
> Hello, world!\n
>
> I have been using CEPH RBD for a year or so as a virtual machine storage
> backend, and I am thinking about moving our another subsystem to CEPH:
>
> The subsystem in question is a simple replicated object storage,
Gregory Farnum wrote:
: On Mon, Feb 20, 2017 at 6:46 AM, Jan Kasprzak wrote:
: > Hello, world!\n
: >
: > I have been using CEPH RBD for a year or so as a virtual machine storage
: > backend, and I am thinking about moving our another subsystem to CEPH:
: >
: > The subsystem in question is
On Mon, Feb 20, 2017 at 11:57 AM, Jan Kasprzak wrote:
> Gregory Farnum wrote:
> : On Mon, Feb 20, 2017 at 6:46 AM, Jan Kasprzak wrote:
> : > Hello, world!\n
> : >
> : > I have been using CEPH RBD for a year or so as a virtual machine storage
> : > backend, and I am thinking about moving o
On Sat, Feb 18, 2017 at 12:39 AM, Nick Fisk wrote:
> From what I understand, in Jewel+ Ceph has the concept of an authoritative
> shard, so in the case of a 3x replica pool, it will notice that 2 replicas
> match and one doesn't and use one of the good replicas. However, in a 2x
> pool you're out of l
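(As an aside, on Jewel you can at least inspect which copy deep-scrub flagged
before deciding to repair; a short sketch, with <pgid> as a placeholder:)

  rados list-inconsistent-obj <pgid> --format=json-pretty   # per-shard errors from the last deep-scrub
  ceph pg repair <pgid>                                     # repair using the copy the OSDs consider authoritative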
Hello,
Just a quick update since I didn't have time for this yesterday.
I did a similar test to the one below with only the XFS node active and, as
expected, the results are the opposite:
3937 IOPS 3.16
3595 IOPS 4.9
As opposed to what I found out yesterday:
---
Thus I turned off the XFS node and ran the test
Hello,
On Mon, 20 Feb 2017 14:12:52 -0800 Gregory Farnum wrote:
> On Sat, Feb 18, 2017 at 12:39 AM, Nick Fisk wrote:
> > From what I understand, in Jewel+ Ceph has the concept of an authoritative
> > shard, so in the case of a 3x replica pools, it will notice that 2 replicas
> > match and one does
On Mon, Feb 20, 2017 at 4:24 PM, Christian Balzer wrote:
>
> Hello,
>
> On Mon, 20 Feb 2017 14:12:52 -0800 Gregory Farnum wrote:
>
>> On Sat, Feb 18, 2017 at 12:39 AM, Nick Fisk wrote:
>> > From what I understand, in Jewel+ Ceph has the concept of an authoritative
>> > shard, so in the case of a 3x
Hello,
On Mon, 20 Feb 2017 17:15:59 -0800 Gregory Farnum wrote:
> On Mon, Feb 20, 2017 at 4:24 PM, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Mon, 20 Feb 2017 14:12:52 -0800 Gregory Farnum wrote:
> >
> >> On Sat, Feb 18, 2017 at 12:39 AM, Nick Fisk wrote:
> >> > From what I understa
Hi Jason,
Thanks for the reply.
We are not sure this issue only occurs on cloned images. We think it
could be a generic synchronization issue. Our production/test setups are all
based on Hammer, so we don't have a chance to touch Jewel. But we will try
Jewel later.
We don’t use cache tieri
Yes, https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
is enough.
Don't test on your production env. Before you start, back up your crush map:
ceph osd getcrushmap -o crushmap.bin
Below are some hints:
ceph osd getcrushmap -o crushmap.bin
crushtool -d cru
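The usual edit cycle then continues roughly like this (the file names simply
mirror the ones above):

  crushtool -d crushmap.bin -o crushmap.txt   # decompile to editable text
  # edit crushmap.txt: add an ssd root and ruleset next to the existing one
  crushtool -c crushmap.txt -o crushmap.new   # recompile
  ceph osd setcrushmap -i crushmap.new        # inject; roll back with: ceph osd setcrushmap -i crushmap.bin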
Hello,
I have created a Ceph cluster with one admin server, one monitor and two
OSDs. The setup is complete. But when trying to add Ceph as
primary storage in CloudStack, I am getting the below error in the logs.
Am I missing something? Please help.
2017-02-20 21:0
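(In case it helps while the actual error text is cut off here: on the Ceph
side, CloudStack normally only needs an RBD pool plus a cephx user allowed to
use it. A minimal sketch; pool name, user name and pg count are made up:)

  ceph osd pool create cloudstack 128
  ceph auth get-or-create client.cloudstack mon 'allow r' osd 'allow rwx pool=cloudstack'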