Hi guys,
In the past few months, I've read some posts about upgrading from
Hammer. Maybe I've missed something, but I haven't really seen anything
on QEMU/KVM behaviour in this context.
At the moment, we're using:
> $ qemu-system-x86_64 --version
> QEMU emulator version 2.3.0 (Debian 1:2.3+dfsg-
On 13 December 2016 at 09:05, Kees Meijs wrote:
Hi,
Recently I lost 5 out of 12 journal OSDs (2x SSD failure at the same time).
size=2, min_size=1. I know, it should rather be 3/2; I plan to switch to
that ASAP.
Ceph started to throw many failures, so I removed these two SSDs and
recreated the journal OSDs from scratch. In my case, all data on main O
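For reference, the usual pattern for giving a FileStore OSD a fresh journal on a new device looks roughly like the sketch below (OSD id 12 and the systemd unit name are just examples; if the old journal died while dirty, the OSD's data may be inconsistent, which is why rebuilding the OSD from scratch, as done here, is often the safer route):

$ ceph osd set noout                  # avoid rebalancing while the OSD is down
$ sudo systemctl stop ceph-osd@12     # stop command differs on sysvinit/upstart hosts
$ sudo ceph-osd -i 12 --mkjournal     # write a brand-new journal for osd.12
$ sudo systemctl start ceph-osd@12
$ ceph osd unset noout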
On Tue, Dec 13, 2016 at 4:38 PM, JiaJia Zhong
wrote:
> Hi cephers,
> We are using Ceph Hammer 0.94.9; yes, it's not the latest (Jewel).
> We have some SSD OSDs for tiering, cache-mode is set to readproxy, and
> everything seems to be as expected,
> but when reading some small files from c
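For context, readproxy is one of the cache-mode settings applied to the cache pool with the ceph osd tier commands; a minimal sketch with invented pool names (cold-pool / hot-cache):

$ ceph osd tier add cold-pool hot-cache          # attach the cache pool, if not already attached
$ ceph osd tier cache-mode hot-cache readproxy   # reads are proxied to the base tier instead of promoting objects
$ ceph osd tier set-overlay cold-pool hot-cache  # route client I/O through the tier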
Shinobu, thanks for your help.
#1 How small is the actual data?
23K, 24K, 165K; I didn't record all of them.
#2 Is the symptom reproducible with different data of the same size?
No, we have some processes that create files; the 0-byte files became normal
after they were overwritten by those processes.
On Tue, Dec 13, 2016 at 7:35 AM, Dietmar Rieder
wrote:
> Hi,
>
> this is good news! Thanks.
>
> As far as I can see, RBD now (experimentally) supports EC data pools. Is
> this also true for CephFS? It is not stated in the announcement, so I wonder
> if and when EC pools are planned to be supported by C
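For illustration only, the RBD side of this pairs a small replicated pool (holding the image header) with an EC pool for the data objects; a rough sketch with invented names, and with the --data-pool option assumed to be present in the release that ships the feature:

$ ceph osd erasure-code-profile set ecprofile k=4 m=2
$ ceph osd pool create ecdata 64 64 erasure ecprofile
$ rbd create --size 10G --data-pool ecdata rbd/testimage   # image metadata stays in the replicated 'rbd' pool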
Hi @all,
I have a little remark concerning at least the Trusty ceph packages (maybe
it concerns other distributions as well, I don't know).
I'm pretty sure that before version 10.2.5 the restart of the daemons
wasn't handled during the package upgrade, whereas with 10.2.5 it is. I
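For what it's worth, on Trusty the daemons are upstart jobs (assuming the packages still ship them), so whatever the packages do or don't do on upgrade, the daemons can also be restarted by hand; a small sketch, with the OSD id made up:

$ sudo restart ceph-mon-all       # all monitors on this host
$ sudo restart ceph-osd id=2      # one specific OSD
$ sudo restart ceph-osd-all       # every OSD on this host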
On Tue, Dec 13, 2016 at 6:03 AM, Goncalo Borges
wrote:
> Hi Ceph(FS)ers...
>
> I am currently running in production the following environment:
>
> - ceph/cephfs in 10.2.2.
> - All infrastructure is in the same version (rados cluster, mons, mds and
> cephfs clients).
> - We mount cephfs using ceph-
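For completeness, a typical ceph-fuse mount looks roughly like this (the monitor address, client id and mountpoint are placeholders):

$ sudo mkdir -p /mnt/cephfs
$ sudo ceph-fuse --id admin -m mon1.example.com:6789 /mnt/cephfs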
On 12/13/2016 12:42 PM, Francois Lafont wrote:
> But, _on_ _principle_, in the specific case of Ceph (I know it's not the
> usual case for packages which provide daemons), I think it would be
> safer and more practical if the ceph packages did not manage the restart
> of the daemons.
And I say (even if
On Tue, 13 Dec 2016, Dong Wu wrote:
> Hi all,
> I have a cluster with nearly 1000 OSDs, and each OSD already
> occupies 2.5 GB of physical memory on average, which puts each host at 90%
> memory usage. When using tcmalloc, we can use "ceph tell osd.* release"
> to release unused memory, but in my cluste
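For reference, with tcmalloc builds the release functionality is exposed through the heap subcommand of ceph tell; a small sketch (osd.0 as an example target):

$ ceph tell osd.0 heap stats         # show tcmalloc heap usage for one OSD
$ ceph tell osd.0 heap release       # hand freed memory back to the OS
$ ceph tell 'osd.*' heap release     # or target every OSD at once (quoted to avoid shell globbing)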
Hi John,
Thanks for your answer.
The mentioned modification of the pool validation would then allow
CephFS to have its data pools on EC while keeping the metadata on a
replicated pool, right?
Dietmar
On 12/13/2016 12:35 PM, John Spray wrote:
> On Tue, Dec 13, 2016 at 7:35 AM, Dietmar Rieder
>
On Tue, Dec 13, 2016 at 12:18 PM, Dietmar Rieder
wrote:
> Hi John,
>
> Thanks for your answer.
> The mentioned modification of the pool validation would then allow
> CephFS to have its data pools on EC while keeping the metadata on a
> replicated pool, right?
I would expect so.
John
>
> Diet
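Purely as a sketch of what that might look like once the validation permits it (names invented here; current releases may still refuse an EC data pool without a cache tier in front of it):

$ ceph osd erasure-code-profile set cephfs_ec k=4 m=2
$ ceph osd pool create cephfs_data_ec 128 128 erasure cephfs_ec
$ ceph osd pool create cephfs_metadata 128 128 replicated
$ ceph fs new cephfs cephfs_metadata cephfs_data_ec   # metadata replicated, data on EC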
Thank you for the tip.
Just found out the repo is empty, am I doing something wrong?
http://mirror.centos.org/centos/7/cr/x86_64/Packages/
---
Diego Castro / The CloudFather
GetupCloud.com - Eliminamos a Gravidade
2016-12-12 17:31 GMT-03:00 Ilya Dryomov :
> On Mon, Dec 12, 2016 at 9:16 PM, D
On Tue, Dec 13, 2016 at 2:45 PM, Diego Castro
wrote:
> Thank you for the tip.
> Just found out the repo is empty, am I doing something wrong?
>
> http://mirror.centos.org/centos/7/cr/x86_64/Packages/
The kernel in the OS repo seems new enough:
http://mirror.centos.org/centos/7/os/x86_64/Packages
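A quick way to compare what a host is running against what the enabled repos offer (nothing Ceph-specific, just the usual checks):

$ uname -r                           # kernel currently running
$ yum --showduplicates list kernel   # kernel builds available from the enabled repos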
On Tue, Dec 13, 2016 at 6:45 AM, Diego Castro
wrote:
> Thank you for the tip.
> Just found out the repo is empty, am I doing something wrong?
>
> http://mirror.centos.org/centos/7/cr/x86_64/Packages/
>
Sorry for the confusion. CentOS 7.3 shipped a few hours ago.
- Ken
On Fri, Dec 9, 2016 at 9:42 PM, Gregory Farnum wrote:
> On Fri, Dec 9, 2016 at 6:58 AM, plataleas wrote:
>> Hi all
>>
>> We enabled CephFS on our Ceph Cluster consisting of:
>> - 3 Monitor servers
>> - 2 Metadata servers
>> - 24 OSDs (3 OSDs per server)
>> - Spinning disks, OSD Journal is on SSD
>>
NP, I'll test as soon as it arrives on OpenLogic's mirrors.
---
Diego Castro / The CloudFather
GetupCloud.com - Eliminamos a Gravidade
2016-12-13 11:07 GMT-03:00 Ken Dreyer :
> On Tue, Dec 13, 2016 at 6:45 AM, Diego Castro
> wrote:
> > Thank you for the tip.
> > Just found out the repo is em
On Mon, Dec 5, 2016 at 5:24 PM, Goncalo Borges
wrote:
> Hi Greg, John...
>
> To John: Nothing is done in the background between two consecutive df
> commands,
>
> I have opened the following tracker issue:
> http://tracker.ceph.com/issues/18151
>
> (sorry, all the issue headers are empty apart f
Hi Greg
Thanks for following it up.
We are aiming to upgrade to 10.2.5 in early January. Will let you know once
that is done, and what outputs we get.
Cheers
Goncalo
From: Gregory Farnum [gfar...@redhat.com]
Sent: 14 December 2016 06:59
To: Goncalo B
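In case it is useful when attaching outputs to the tracker, the two views being compared are roughly the cluster-wide one and the one the ceph-fuse mount reports (the mountpoint is an example):

$ ceph df detail      # usage as RADOS accounts for it
$ df -h /mnt/cephfs   # usage as reported through the ceph-fuse mount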
I'm looking at the performance and storage impacts of EC vs. replication.
After an initial EC investigation, LRC is an interesting option. Can anyone
tell me the state of the LRC plugin? Is it considered production ready in
the same sense that EC is production ready in Jewel? Or is the LRC plugin
consid
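For reference, an LRC profile adds a locality parameter l on top of the usual k/m; a minimal sketch (names and values are only an example, failure-domain settings omitted):

$ ceph osd erasure-code-profile set lrc_profile plugin=lrc k=4 m=2 l=3
$ ceph osd erasure-code-profile get lrc_profile
$ ceph osd pool create lrcpool 128 128 erasure lrc_profile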
Ok, thanks for your explanation!
I read those warnings about size 2 + min_size 1 (we are using ZFS raidz2,
the RAID6 equivalent, as OSDs).
Time to raise replication!
Kevin
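Raising the replication on an existing pool is just the two pool settings below; a small sketch (pool name 'rbd' is only an example, and note that going from 2 to 3 copies will trigger backfill):

$ ceph osd pool set rbd size 3       # target number of replicas
$ ceph osd pool set rbd min_size 2   # minimum replicas needed to serve I/O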
2016-12-13 0:00 GMT+01:00 Christian Balzer :
> On Mon, 12 Dec 2016 22:41:41 +0100 Kevin Olbrich wrote:
>
> > Hi,
> >
> > just in c
Hi John.
Comments inline.
Hi Ceph(FS)ers...
I am currently running in production the following environment:
- ceph/cephfs in 10.2.2.
- All infrastructure is in the same version (rados cluster, mons, mds and
cephfs clients).
- We mount cephfs using ceph-fuse.
Since yesterday, we have ou
Are CephFS snapshots still considered unstable/experimental in Kraken?
On Tue, Dec 13, 2016 at 4:34 PM, Darrell Enns wrote:
> Are CephFS snapshots still considered unstable/experimental in Kraken?
Sadly, yes. I had a solution, but it didn't account for hard links.
When we decided we wanted to support those instead of ignoring them or
trying to set up boundary regio
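For context, CephFS snapshots (once the experimental flag is enabled on the filesystem) are driven entirely through the hidden .snap directory; a small sketch with made-up paths:

$ mkdir /mnt/cephfs/projects/.snap/before-cleanup   # snapshot this subtree
$ ls /mnt/cephfs/projects/.snap/                    # snapshots appear as directories
$ rmdir /mnt/cephfs/projects/.snap/before-cleanup   # drop the snapshot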
OK, thanks for the update Greg!
- Darrell
Hello,
On Wed, 14 Dec 2016 00:06:14 +0100 Kevin Olbrich wrote:
> Ok, thanks for your explanation!
> I read those warnings about size 2 + min_size 1 (we are using ZFS raidz2,
> the RAID6 equivalent, as OSDs).
>
This is similar to my RAID6- or RAID10-backed OSDs with regard to having
very resilient, ext
On Wed, 14 Dec 2016, Dong Wu wrote:
> Thanks for your response.
>
> 2016-12-13 20:40 GMT+08:00 Sage Weil :
> > On Tue, 13 Dec 2016, Dong Wu wrote:
> >> Hi all,
> >> I have a cluster with nearly 1000 OSDs, and each OSD already
> >> occupies 2.5 GB of physical memory on average, which puts each host
> ps: When we first met this issue, restarting the mds could cure that. (but
> that was ceph 0.94.1).
Is this still working?
Since you're using 0.94.9, the bug (#12551) you mentioned has been fixed.
Can you do the following to see whether an object that appears to you as
ZERO size is actually there:
# rados -p ${cache
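The kind of check being suggested is, I assume, comparing what rados reports for the object in the cache pool and in the base pool; a sketch with hypothetical pool and object names (not a completion of the truncated command above):

$ rados -p cachepool stat some-object   # size/mtime as seen in the cache tier
$ rados -p basepool stat some-object    # size/mtime in the base tier (with an overlay set, this may itself be proxied)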
-- Original --
From: "Shinobu Kinjo";
Date: Wed, Dec 14, 2016 10:56 AM
To: "JiaJia Zhong";
Cc: "CEPH list"; "ukernel";
Subject: Re: [ceph-users] can cache-mode be set to readproxy for tier cache with
ceph 0.94.9 ?
> ps: When we first met this issue, restarti