Hi,
Here are the slides of the Ceph BlueStore presentation:
http://events.linuxfoundation.org/sites/events/files/slides/LinuxCon%20NA%20BlueStore.pdf
On Wed, Aug 31, 2016 at 2:30 PM, Goncalo Borges
wrote:
> Here it goes:
>
> # xfs_info /var/lib/ceph/osd/ceph-78
> meta-data=/dev/sdu1 isize=2048 agcount=4, agsize=183107519 blks
> = sectsz=512 attr=2, projid32bit=1
> =
Here it goes:
# xfs_info /var/lib/ceph/osd/ceph-78
meta-data=/dev/sdu1 isize=2048 agcount=4, agsize=183107519 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096
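For comparison, the same check can be run across every OSD mount on the box in one go; a rough sketch, assuming the default /var/lib/ceph/osd layout:

for d in /var/lib/ceph/osd/ceph-*; do echo "== $d"; xfs_info "$d" | head -1; done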
On Wed, Aug 31, 2016 at 2:11 PM, Goncalo Borges
wrote:
> Hi Brad...
>
> Thanks for the feedback. I think we are making some progress.
>
> I have opened the following tracker issue:
> http://tracker.ceph.com/issues/17177 .
>
> There I give pointers for all the logs, namely the result of the pg que
Hi Brad...
Thanks for the feedback. I think we are making some progress.
I have opened the following tracker issue: http://tracker.ceph.com/issues/17177.
There I give pointers for all the logs, namely the result of the pg query and
all osd logs after increasing the log levels (debug_ms=1, de
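For anyone reproducing this: the levels can be bumped at runtime instead of editing ceph.conf and restarting. A rough sketch, assuming the usual debug_ms=1 / debug_osd=20 combination, and remembering to revert once the logs are captured:

# raise the debug levels on all OSDs
ceph tell osd.* injectargs '--debug-ms 1 --debug-osd 20'
# ... trigger the deep-scrub / reproduce the problem, collect the logs ...
# put the levels back to their defaults
ceph tell osd.* injectargs '--debug-ms 0/5 --debug-osd 0/5'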
Hello,
for the record, it seems that a changelog/release note for 0.94.9 just
appeared.
http://docs.ceph.com/docs/master/release-notes/#v0-94-9-hammer
And that suggests it is there just to fix the issue you're seeing, no
other changes.
Also I've just upgraded my crappy test cluster to 0.94.
Hi,
I just upgraded from 0.94.7 to 0.94.8 on one of our mon servers. Using
dpkg I can see that several of the package versions increased, but
several others did not (ceph and ceph-common):
root@hqceph1:~# dpkg -l |grep ceph
ii ceph 0.94.7-1trusty amd64 distributed
On Wed, Aug 31, 2016 at 9:56 AM, Brad Hubbard wrote:
>
>
> On Tue, Aug 30, 2016 at 9:18 PM, Goncalo Borges
> wrote:
>> Just a small typo correction to my previous email. Without it the meaning
>> was completely different:
>>
>> "At this point I just need a way to recover the pg safely and I
On Tue, Aug 30, 2016 at 9:18 PM, Goncalo Borges
wrote:
> Just a small typo correction to my previous email. Without it the meaning was
> completely different:
>
> "At this point I just need a way to recover the pg safely and I do NOT see
> how I can do that since it is impossible to understa
Hi,
I have a problem installing a Ceph storage cluster. When I try to activate the
OSDs, ceph-deploy cannot create bootstrap-osd/ceph.keyring and times out
after 300 seconds.
root@admin-node:~/my-cluster# ceph-deploy osd activate node2:/var/local/osd0
node3:/var/local/osd1
[ceph_deploy.conf][
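A frequent cause of that timeout is that the bootstrap-osd keyring never made it onto the OSD nodes (for example because the monitors were not in quorum when the cluster was created). A sketch of the usual check, assuming the monitor runs on node1; hostnames here are just examples:

# re-fetch the bootstrap keyrings from the monitor
ceph-deploy gatherkeys node1
# push ceph.conf and the admin keyring to the OSD nodes
ceph-deploy admin node2 node3
# then retry the activation
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1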
Thank you for the info, although something is still not working
correctly. Here is what I have on the first osd I tried to upgrade:
ii ceph 0.94.7-1trusty amd64 distributed
storage and file system
ii ceph-common 0.94.7-1trusty amd64
that release is in the repo now - can't see any mention of it in release
notes etc.
Get:1 http://download.ceph.com/debian-hammer/ jessie/main librbd1 amd64
0.94.9-1~bpo80+1 [2503 kB]
Get:2 http://download.ceph.com/debian-hammer/ jessie/main librados2 amd64
0.94.9-1~bpo80+1 [2389 kB]
Get:3 http://
You can specify the version of the package to install with apt-get. Check the
installed version of ceph on the mon you already have upgraded with 'apt-cache
policy ceph' and use that version to install the same version on your other
ceph servers by running 'apt-get install ceph=0.94.8-1~bpo80+
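In other words, something along these lines on each remaining server; the exact version string is just whatever 'apt-cache policy ceph' reports on the mon you already upgraded (shown here as a placeholder):

apt-cache policy ceph                                   # note the installed/candidate version
apt-get install ceph=<version> ceph-common=<version>    # pin both packages to that exact version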
I am seeing that now too... I am trying to upgrade the OSDs now and I am
running into an issue :-(
Can someone please take a look?
I am mid-upgrade and this is now causing us not to be able to move
forward with the osd upgrades.
Thanks,
Shain
On 08/30/2016 01:07 PM, Brian :: wrote:
Seems
Seems the version number has jumped another point but the files aren't in
the repo yet? 0.94.9?
Get:1 http://download.ceph.com/debian-hammer/ jessie/main ceph-common amd64
0.94.9-1~bpo80+1 [6029 kB]
Err http://download.ceph.com/debian-hammer/ jessie/main librbd1 amd64
0.94.9-1~bpo80+1
404 Not Found
Ceph has never been updated with an 'apt-get upgrade'. You have always had to
do an 'apt-get dist-upgrade' to upgrade it without specifically naming it like
you ended up doing. This is to prevent accidentally upgrading a portion of
your cluster when you run security updates on a system. When
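So the usual sequence per node is roughly (one node at a time, letting the cluster settle in between):

apt-get update
apt-get dist-upgrade    # unlike plain 'apt-get upgrade', this pulls in the held-back ceph packages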
Just an FYI... it turns out in addition to the normal 'apt-get update'
and 'apt-get upgrade'... I also had to do an 'apt-get upgrade ceph'
(which I have never had to do in the past as far as I can remember).
Now everything is showing up correctly.
Thanks,
Shain
On 08/30/2016 12:39 PM, Shain Mi
Hi all,
We are also interested in EL6 RPMs. My understanding was that EL6 would
continue to be supported through Hammer.
Is there anything we can do to help?
Thanks,
Lincoln
> On Aug 29, 2016, at 11:14 AM, Alex Litvak
> wrote:
>
> Hammer RPMs for 0.94.8 are still not available for EL6. C
Hi,
I have been able to pick through the process a little further and replicate
it via the command line. The flow looks like this:
1) The user uploads an image to webserver server 'uploader01' it gets
written to a path such as '/cephfs/webdata/static/456/JHL/66448H-755h.jpg'
on cephfs
2) T
Hi,
I just upgraded from 0.94.7 to 0.94.8 on one of our mon servers. Using
dpkg I can see that several of the package versions increased, but
several others did not (ceph and ceph-common):
root@hqceph1:~# dpkg -l |grep ceph
ii ceph 0.94.7-1trusty amd64 distributed
Hello,
On 30/08/2016 at 14:08, Steffen Weißgerber wrote:
Hello,
after correcting the configuration for different qemu vm's with rbd disks
(we removed the cache=writethrough option to have the default
writeback mode) we have a strange behaviour after restarting the vm's.
For most of them the
Good news Jean-Charles :)
Now I have deleted the object
[...]
-rw-r--r-- 1 ceph ceph 100G Jul 31 01:04 vm-101-disk-2__head_383C3223__0
[...]
root@:~# rados -p rbd rm vm-101-disk-2
and did run again a deep-scrub on 0.223.
root@gengalos:~# ceph pg 0.223 query
No blocked requests anymore :)
To
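For completeness, re-checking the pg after a cleanup like that looks roughly like this (pg 0.223 as above):

ceph pg deep-scrub 0.223    # ask the primary OSD to deep-scrub the pg again
ceph health detail          # the scrub error / inconsistent flags should clear once it finishes
ceph pg 0.223 query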
Hello,
after correcting the configuration for different qemu vm's with rbd disks
(we removed the cache=writethrough option to have the default
writeback mode) we have a strange behaviour after restarting the vm's.
For most of them the cache mode is now writeback as expected. But some
nevertheless
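To see which mode a given guest actually ended up with, something like the following can help; vm01 is just an example name for a libvirt-managed guest, adapt to your management stack:

virsh dumpxml vm01 | grep 'cache='                  # cache mode in the domain definition
ps -ef | grep [q]emu | tr ',' '\n' | grep cache=    # cache mode on the running qemu command line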
Awesome Goncalo, that is very helpful.
Cheers.
On 08/30/2016 01:21 PM, Goncalo Borges wrote:
> Hi Dennis.
>
> That is the first issue we saw and has nothing to do with the amd processors
> (which only relates to the second issue we saw). So the fix in the patch
>
> https://github.com/ceph/ceph
Hi Gregory,
In our case we have 6TB of data with replica 2, so around 12TB is
occupied; we still have 4TB remaining, yet it reports this error.
51 active+undersized+degraded+remapped+backfill_toofull
Regards
Prabu GJ
On Mon, 29 Aug 2016 23:44:12 +0530 Gregory Fa
Just a small typo correction to my previous email. Without it the meaning was
completely different:
"At this point I just need a way to recover the pg safely and I do NOT see how
I can do that since it is impossible to understand what is the problematic osd
with the incoherent object."
Sorry
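Side note: on Jewel the scrub machinery records which copy failed, so the problematic OSD/object can usually be identified directly. A sketch, with <pgid> standing in for the inconsistent pg:

rados list-inconsistent-obj <pgid> --format=json-pretty   # objects and the shard/OSD that mismatched
ceph pg map <pgid>                                        # acting set, i.e. which OSDs hold the copies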
Hi Dennis.
That is the first issue we saw and has nothing to do with the AMD processors
(which only relate to the second issue we saw). So the fix in the patch
https://github.com/ceph/ceph/pull/10027
should work for you.
In our case we went for the full compilation for our own specific reason
> On 30 August 2016 at 12:59, "Dennis Kramer (DBS)" wrote:
>
>
> Hi Goncalo,
>
> Thank you for providing below info. I'm getting the exact same errors:
> ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
> 1: (()+0x2ae88e) [0x5647a76f488e]
> 2: (()+0x113d0) [0x7f7d14c393d0]
>
Hi Goncalo,
Thank you for providing below info. I'm getting the exact same errors:
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
1: (()+0x2ae88e) [0x5647a76f488e]
2: (()+0x113d0) [0x7f7d14c393d0]
3: (Client::get_root_ino()+0x10) [0x5647a75eb730]
4: (CephFuse::Handle::make_fake
On Tue, Aug 30, 2016 at 10:46 AM, Dennis Kramer (DBS) wrote:
>
>
> On 08/29/2016 08:31 PM, Gregory Farnum wrote:
>> On Sat, Aug 27, 2016 at 3:01 AM, Francois Lafont
>> wrote:
>>> Hi,
>>>
>>> I had exactly the same error in my production ceph client node with
>>> Jewel 10.2.1 in my case.
>>>
>>> I
On 08/29/2016 08:31 PM, Gregory Farnum wrote:
> On Sat, Aug 27, 2016 at 3:01 AM, Francois Lafont
> wrote:
>> Hi,
>>
>> I had exactly the same error in my production ceph client node with
>> Jewel 10.2.1 in my case.
>>
>> In the client node :
>> - Ubuntu 14.04
>> - kernel 3.13.0-92-generic
>> - c
You are correct it only seems to impact recently modified files.
On Tue, Aug 30, 2016 at 3:36 AM, Yan, Zheng wrote:
> On Tue, Aug 30, 2016 at 2:11 AM, Gregory Farnum
> wrote:
> > On Mon, Aug 29, 2016 at 7:14 AM, Sean Redmond
> wrote:
> >> Hi,
> >>
> >> I am running cephfs (10.2.2) with kernel
Hi Wido,
thanks for the response. I had a weird incident where servers in the
cluster all rebooted; I can't pinpoint what the issue could be.
Thanks again.
On Tue, Aug 30, 2016 at 11:06 AM, Wido den Hollander wrote:
>
> > On 30 August 2016 at 10:16, Ishmael Tsoaela <
> ishmae...@gmai
Hello
I've got a small cluster of 3 osd servers and 30 osds between them running
Jewel 10.2.2 on Ubuntu 16.04 LTS with stock kernel version 4.4.0-34-generic.
I am experiencing rather frequent osd crashes, which tend to happen a few times
a month on random osds. The latest one gave me the foll
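For what it's worth, the interesting part of a crash like that is usually the assert/backtrace block in the OSD log; a rough way to pull it out (osd.12 is just an example id):

grep -E -B2 -A30 'FAILED assert|Caught signal' /var/log/ceph/ceph-osd.12.log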
> On 30 August 2016 at 10:16, Ishmael Tsoaela wrote:
>
>
> Hi All,
>
>
> Is there a way to have ceph reweight osd automatically?
No, there is none at the moment.
>
> As well could a osd reaching 92% cause the entire cluster to reboot?
>
No, it will block, but it will not reboot any ma
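There is no automatic mechanism, but a manual rebalance pass is possible; a sketch (check the utilisation first and go gently):

ceph osd df                             # utilisation and weight per OSD
ceph osd reweight-by-utilization 120    # reweight OSDs more than 20% above the average utilisation
ceph osd reweight 12 0.9                # or nudge a single over-full OSD; id 12 is just an example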
Hi Brad
Thanks for replying.
Regarding versions and architectures, everything is using 10.2.2 and the same
architecture. We did have a single client using ceph-fuse 9.2.0 until yesterday.
I can run a deep scrub with the log levels mentioned if it is safe to do it on an
inconsistent pg. I have rea
Hi,
On 08/29/2016 08:30 PM, Gregory Farnum wrote:
> Ha, yep, that's one of the bugs Goncalo found:
>
> ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269)
> 1: (()+0x299152) [0x7f91398dc152]
> 2: (()+0x10330) [0x7f9138bbb330]
> 3: (Client::get_root_ino()+0x10) [0x7f91397df6c0]
>
Hi All,
Is there a way to have ceph reweight osd automatically?
Also, could an OSD reaching 92% cause the entire cluster to reboot?
Thank you in advance,
Ishmael Tsoaela
On Tue, Aug 30, 2016 at 3:50 PM, Goncalo Borges
wrote:
> Dear Ceph / CephFS supporters.
>
> We are currently running Jewel 10.2.2.
>
> From time to time we experience deep-scrub errors in pgs inside our cephfs
> metadata pool. It is important to note that we do not see any hardware
> errors on th
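For anyone wanting to repeat the hardware check, the usual quick look is something like this; /dev/sdu matches the xfs_info output earlier in the thread, adjust to the disk behind the affected OSD:

smartctl -a /dev/sdu | grep -i -E 'reallocated|pending|uncorrect'   # SMART counters that hint at media errors
dmesg | grep -i -E 'sdu|i/o error'                                  # kernel-side I/O errors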