Dear Ceph / CephFS supporters,
We are currently running Jewel 10.2.2.
From time to time we experience deep-scrub errors in PGs inside our
CephFS metadata pool. It is important to note that we do not see any
hardware errors on the OSDs themselves, so the error must have some
other source.
The e
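A minimal sketch of the usual triage for such a scrub inconsistency (the PG id 1.234 is a placeholder; rados list-inconsistent-obj is only available from Jewel onwards):
# ceph health detail                                      # lists the inconsistent PGs
# rados list-inconsistent-obj 1.234 --format=json-pretty  # shows which object/replica differs
# ceph pg repair 1.234                                    # asks the primary to repair the PG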
Hello,
On Mon, 29 Aug 2016 16:16:11 -0700 Eric Kolb wrote:
> Hello,
>
> Have read a few items about what occurs if the back-end cluster switch
> were to fail or be rebooted due to code updates. From the
> Troubleshooting OSDs guide
> (http://docs.ceph.com/docs/jewel/rados/troubleshooting/tro
On Tue, Aug 30, 2016 at 2:11 AM, Gregory Farnum wrote:
> On Mon, Aug 29, 2016 at 7:14 AM, Sean Redmond wrote:
>> Hi,
>>
>> I am running cephfs (10.2.2) with kernel 4.7.0-1. I have noticed that
> frequently static files are showing as empty when served via a web server
>> (apache). I have tracked
Hi Mehmet,
OK, so it does come from a rados put.
As you were able to check, the VM device object size is 4 MB.
So we'll see after you have removed the object with rados -p rbd rm.
I'll wait for an update.
JC
While moving. Excuse unintended typos.
> On Aug 29, 2016, at 14:34, Mehmet wrote:
Hello,
Have read a few items about what occurs if the back-end cluster switch
were to fail or be rebooted due to code updates. From the
Troubleshooting OSDs guide
(http://docs.ceph.com/docs/jewel/rados/troubleshooting/troubleshooting-osd/)
it states, "if the cluster (back-end) network fails o
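For context, a minimal ceph.conf sketch of the public/cluster split the guide refers to (the subnets are placeholders); when the cluster network fails, OSDs lose the replication and heartbeat path between each other while mons and clients can still reach them over the public network:
[global]
public network = 192.168.1.0/24     # client and monitor traffic
cluster network = 192.168.2.0/24    # OSD replication, recovery and heartbeat traffic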
Hey JC,
after setting up the Ceph cluster I tried to migrate an image from one
of our production VMs into Ceph via
# rados -p rbd put ...
but I always got "file too large". I guess this file
# -rw-r--r-- 1 ceph ceph 100G Jul 31 01:04
vm-101-disk-2__head_383C3223__0
is the result of th
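A minimal sketch of the cleanup JC suggests and of the usual way to bring a VM disk into Ceph (the object name is inferred from the on-disk filename above, and the source path is a placeholder); rbd import stripes the image into 4 MB objects instead of storing one huge object:
# rados -p rbd rm vm-101-disk-2
# rbd import /path/to/vm-101-disk-2.raw rbd/vm-101-disk-2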
Hi,
Yes, the file has no contents until the page cache is flushed.
I will give the fuse client a try and report back.
Thanks
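For reference, a minimal sketch of mounting the same filesystem with the FUSE client for comparison (monitor address and mount point are placeholders):
# ceph-fuse -m 192.168.1.10:6789 /mnt/cephfs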
On Mon, Aug 29, 2016 at 7:11 PM, Gregory Farnum wrote:
> On Mon, Aug 29, 2016 at 7:14 AM, Sean Redmond
> wrote:
> > Hi,
> >
> > I am running cephfs (10.2.2) with kernel
On Sat, Aug 27, 2016 at 3:01 AM, Francois Lafont
wrote:
> Hi,
>
> I had exactly the same error in my production ceph client node with
> Jewel 10.2.1 in my case.
>
> In the client node :
> - Ubuntu 14.04
> - kernel 3.13.0-92-generic
> - ceph 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269)
> - cep
On Mon, Aug 29, 2016 at 12:53 AM, Christian Balzer wrote:
> On Mon, 29 Aug 2016 12:51:55 +0530 gjprabu wrote:
>
>> Hi Christian,
>>
>>
>>
>> Sorry for subject and thanks for your reply,
>>
>>
>>
>> > That's incredibly small in terms of OSD numbers, how many hosts? What
>> replication
On Mon, Aug 29, 2016 at 7:14 AM, Sean Redmond wrote:
> Hi,
>
> I am running cephfs (10.2.2) with kernel 4.7.0-1. I have noticed that
> frequently static files are showing as empty when served via a web server
> (apache). I have tracked this down further and can see when running a
> checksum against
Hammer RPMs for 0.94.8 are still not available for EL6. Can this
please be addressed?
Thank you in advance,
On 08/27/2016 06:25 PM,
alexander.v.lit...@gmail.com wrote:
RPMs are not available at the distro side.
On Fri, 26 Aug 2016 21:31:45 +0000 (UTC), Sage Weil
wrote:
This Hammer poin
Hello JC,
in short, for the record:
What you can try doing is to change the following settings on all the
OSDs that host this particular PG and see if it makes things better
[osd]
[...]
osd_scrub_chunk_max = 5 #
maximum number of chunks the scrub will
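For the record, a sketch of how the same value can be applied at runtime without restarting the daemons (osd.12 stands in for each OSD hosting the affected PG):
# ceph tell osd.12 injectargs '--osd_scrub_chunk_max 5'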
Hi Mehmet,
see inline
Keep me posted
JC
> On Aug 29, 2016, at 01:23, Mehmet wrote:
>
> Hey JC,
>
> thank you very much! - My answers inline :)
>
> On 2016-08-26 19:26, LOPEZ Jean-Charles wrote:
>> Hi Mehmet,
>> what is interesting in the PG stats is that the PG contains around
>> 700+ obj
Hello dear Ceph users,
I have a problem installing a Ceph storage cluster. When I want to
activate the OSDs, ceph-deploy cannot create bootstrap-osd/ceph.keyring
and times out after 300 seconds. This is my log. I don't know what I
should do. I did everything in the quick reference of the documentation.
root
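A minimal sketch of the usual checks for this timeout (hostnames and device are placeholders); the monitors must have formed quorum before the bootstrap keyrings can be gathered and the OSDs activated:
# ceph -s                                   # monitors should show a quorum
# ceph-deploy gatherkeys mon1               # re-fetch the bootstrap-osd keyring
# ceph-deploy osd activate node1:/dev/sdb1  # retry the activation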
Hi,
I am running cephfs (10.2.2) with kernel 4.7.0-1. I have noticed that
frequently static files are showing as empty when served via a web server
(apache). I have tracked this down further and can see when running a
checksum against the file on the cephfs file system on the node serving the
empty
On Mon, Aug 29, 2016 at 2:38 PM, Ivan Grcic wrote:
> Hi Ilya,
>
> yes, thank you, that was the issue. I was wondering why my mons
> exchange so much data :)
>
> I didn't know we index the buckets using the actual id value; I don't
> recall reading that anywhere.
> One shouldn't be too imaginative w
Hi Ilya,
yes, thank you, that was the issue. I was wondering why my mons
exchange so much data :)
I didn't know we index the buckets using the actual id value; I don't
recall reading that anywhere.
One shouldn't be too imaginative with the id values then, heh :)
Thank you once again,
Ivan
On
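For illustration, a minimal sketch of a decompiled CRUSH bucket showing the id field in question (host and OSD names are placeholders); bucket ids are normally small negative integers chosen by Ceph itself:
host node1 {
        id -2                   # bucket id; keep these small negative integers
        alg straw
        hash 0                  # rjenkins1
        item osd.0 weight 1.000
}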
Hi Christian,
Everything is fine; I have a simple question: with 16 TB of total
CephFS mounted size, what is the usable space with OSD replica 2?
Regards
Prabu GJ
On Mon, 29 Aug 2016 13:23:43 +0530 Christian Balzer
wrote
On Mon, 29 Aug 2016 12:51:55 +0530 g
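For what it's worth, a back-of-the-envelope answer, assuming the 16 TB figure is the raw capacity across all OSDs:
16 TB raw / replication factor 2 = 8 TB usable
and in practice somewhat less, since the near-full and full ratios default to 0.85 and 0.95.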
Hey JC,
thank you very much! - My answers inline :)
On 2016-08-26 19:26, LOPEZ Jean-Charles wrote:
Hi Mehmet,
what is interesting in the PG stats is that the PG contains around
700+ objects and you said that you are using RBD only in your cluster
IIRC. With the default RBD order (4MB obje
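As a rough illustration of that point, assuming the 100 GB image mentioned earlier in the thread and the default order 22 (4 MiB objects):
100 GiB / 4 MiB per object = 25,600 RBD data objects
spread across the pool's PGs, whereas a plain rados put stores the whole file as a single object.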
Hi,
I actually ran into two problems.
The first one is about multipart upload described in
http://tracker.ceph.com/issues/13764. There are thousands of objects like
these in default.rgw.buckets.data pool
e58d3d68-c100-4af4-a611-dd7467e7132f.164106.2__shadow_some.exe.2~6fevHiwonk7H8QSmnzljct-s52z
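A minimal sketch of checking whether those multipart leftovers are simply waiting for garbage collection (this is an assumption about the cause, not a confirmed diagnosis):
# radosgw-admin gc list --include-all   # parts still queued for deletion
# radosgw-admin gc process              # run the garbage collector now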
On Mon, 29 Aug 2016 12:51:55 +0530 gjprabu wrote:
> Hi Christian,
>
>
>
> Sorry for the subject, and thanks for your reply,
>
>
>
> > That's incredibly small in terms of OSD numbers, how many hosts? What
> replication size?
>
> Total host 5.
>
> Replicated size : 2
>
At t
Hi Christian,
Sorry for the subject, and thanks for your reply.
> That's incredibly small in terms of OSD numbers, how many hosts? What
replication size?
Total hosts: 5.
Replicated size: 2
> And also quite large in terms of OSD size, especially with this
configuration.
Hello,
First of all, the subject is misleading.
It doesn't matter whether you're using CephFS; the "toofull" status is
something that OSDs are in.
On Mon, 29 Aug 2016 12:06:21 +0530 gjprabu wrote:
>
>
> Hi All,
>
>
>
> We are new to CephFS and we have 5 OSDs, each 3.3 TB in size.
Tha
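A minimal sketch of checking the utilization behind a toofull state; the near-full and full thresholds default to 0.85 and 0.95:
# ceph df            # per-pool and global usage
# ceph osd df        # per-OSD utilization and variance
# ceph health detail # which OSDs tripped the near-full/full ratios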