On Tue, Mar 28, 2017 at 4:22 PM, Marcus Furlong wrote:
> On 22 March 2017 at 19:36, Brad Hubbard wrote:
>> On Wed, Mar 22, 2017 at 5:24 PM, Marcus Furlong wrote:
>
>>> [435339.965817] ------------[ cut here ]------------
>>> [435339.965874] WARNING: at fs/xfs/xfs_aops.c:1244
>>> xfs_vm_releasepa
On 22 March 2017 at 19:36, Brad Hubbard wrote:
> On Wed, Mar 22, 2017 at 5:24 PM, Marcus Furlong wrote:
>> [435339.965817] ------------[ cut here ]------------
>> [435339.965874] WARNING: at fs/xfs/xfs_aops.c:1244
>> xfs_vm_releasepage+0xcb/0x100 [xfs]()
>> [435339.965876] Modules linked in: vfa
I've copied Dan who may have some thoughts on this and has been
involved with this code.
On Tue, Mar 28, 2017 at 3:58 PM, Mika c wrote:
> Hi Brad,
> Thanks for your help. I found that was my problem: I forgot to include
> the word "keyring" in the file name.
>
> And sorry to bother you again. Is it possib
Hi Brad,
Thanks for your help. I found that was my problem: I forgot to include
the word "keyring" in the file name.
And sorry to bother you again. Is it possible to create a minimum privilege
client for the api to run?
Best wishes,
Mika
2017-03-24 19:32 GMT+08:00 Brad Hubbard :
> On Fri, Mar 24, 201
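On the minimum-privilege question: the usual approach is a cephx user with restricted caps. A rough sketch, where the user name `client.apiuser` and the pool name `apipool` are placeholders (not anything from this thread); trim the caps to the minimum your API actually needs:

```shell
# Create a cephx identity that can only read cluster maps and one pool.
# Use 'allow rw pool=apipool' on the osd cap if the API must write.
ceph auth get-or-create client.apiuser \
    mon 'allow r' \
    osd 'allow r pool=apipool' \
    -o /etc/ceph/ceph.client.apiuser.keyring
```

The client then connects with `--id apiuser` (or the equivalent argument to the library's connect call) and that keyring.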
Nix the second question: as I understand it, Ceph doesn't work in mixed
IPv6 and legacy IPv4 environments.
Still, would like to hear from people running it in SLAAC environments.
On Mon, Mar 27, 2017 at 12:49 PM, Richard Hesse
wrote:
> Has anyone run their Ceph OSD cluster network on IPv6 using
Hi Yang,
> Do you mean getting the perf counters via the API? This counter is
> only for a particular ImageCtx (a connected client); you can then
> read the counters with the perf dump command from my last mail, I think.
Yes, I did mean getting the counters via the API. And it looks like I can adapt this
admin-daem
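For reference, a perf dump over the admin socket returns JSON, so it is easy to consume programmatically. A small sketch; the section name and counter values below are invented sample data, not real output, and a real dump contains many more sections:

```python
import json

# Sample shaped like "ceph --admin-daemon <asok> perf dump" output
# (abbreviated and made up for illustration).
sample = json.loads("""
{
  "librbd-volume1": {
    "rd": 120,
    "rd_bytes": 4915200,
    "wr": 30,
    "wr_bytes": 1228800
  }
}
""")

def counters_for(prefix, dump):
    """Return the counter sections whose name starts with the given prefix."""
    return {k: v for k, v in dump.items() if k.startswith(prefix)}

stats = counters_for("librbd", sample)
print(stats["librbd-volume1"]["rd_bytes"])  # 4915200
```

In practice you would feed it the output of the admin-socket command for the daemon or client you care about instead of the hard-coded sample.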
Hello,
On Mon, 27 Mar 2017 17:48:38 +0200 Mattia Belluco wrote:
> I mistakenly replied to Wido instead of the whole mailing list (weird
> ml settings, I suppose)
>
> Here is my message:
>
>
> Thanks for replying so quickly. I commented inline.
>
> On 03/27/2017 01:34 PM, Wido den Holland
Hello,
On Mon, 27 Mar 2017 16:09:09 +0100 Nick Fisk wrote:
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Wido den Hollander
> > Sent: 27 March 2017 12:35
> > To: ceph-users@lists.ceph.com; Christian Balzer
> > Subject: Re: [ceph-
On Mon, 27 Mar 2017 17:32:53 -0600 Adam Carheden wrote:
> What's the biggest PMR disk I can buy, and how do I tell if a disk is PMR?
>
> I'm well aware that I shouldn't use SMR disks:
> http://ceph.com/planet/do-not-use-smr-disks-with-ceph/
>
> But newegg and the like don't seem to advertise SMR
What's the biggest PMR disk I can buy, and how do I tell if a disk is PMR?
I'm well aware that I shouldn't use SMR disks:
http://ceph.com/planet/do-not-use-smr-disks-with-ceph/
But newegg and the like don't seem to advertise SMR vs PMR and I can't
even find it on manufacturer's websites (at least
You need to recompile Ceph with jemalloc, without having the tcmalloc dev
libraries installed. LD_PRELOAD has never worked for jemalloc and Ceph.
- Original Message -
From: "Engelmann Florian"
To: "ceph-users"
Sent: Monday, 27 March 2017 16:54:33
Subject: [ceph-users] libjemalloc.so.1 not used?
Hi,
we are tes
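A sketch of such a rebuild, assuming a cmake-based source tree (Kraken and later expose an ALLOCATOR switch; check the exact option name against your release before relying on it):

```shell
# Build Ceph against jemalloc instead of tcmalloc (sketch, not verified
# against every release). The tcmalloc dev package must be absent so the
# build cannot pick it up.
sudo apt-get install libjemalloc-dev
sudo apt-get remove libgoogle-perftools-dev
./do_cmake.sh -DALLOCATOR=jemalloc
cd build && make -j"$(nproc)"
```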
Make sure the OSD processes on the Jewel node are running. If you didn't change
the ownership to user ceph, they won't start.
> On Mar 27, 2017, at 11:53, Jaime Ibar wrote:
>
> Hi all,
>
> I'm upgrading ceph cluster from Hammer 0.94.9 to jewel 10.2.6.
>
> The ceph cluster has 3 servers (one
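On the ownership point: the usual Jewel upgrade step is to chown the OSD data to the ceph user before restarting the daemons. A rough sketch assuming default paths and systemd (adjust the OSD id per daemon; this is not taken from the thread):

```shell
# Per OSD: stop the daemon, hand its data over to the ceph user, restart.
systemctl stop ceph-osd@1
chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
chown -R ceph:ceph /var/log/ceph
systemctl start ceph-osd@1
```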
Hi Cephers.
Couldn't find any specific documentation about "S3 object expiration", so I assume it should work like AWS S3 (?!?) ... BUT ...
we have a test cluster based on 11.2.0 - Kraken and I set some object expiration dates via CyberDuck and DragonDisk, but the objects are still there,
I can't guarantee it's the same as my issue, but from that description it
sounds the same.
Jewel 10.2.4, 10.2.5 tested
hypervisors are proxmox qemu-kvm, using librbd
3 ceph nodes with mon+osd on each
- faster journals, more disks, bcache, rbd_cache, fewer VMs on ceph, iops
and bw limits on client side, jumbo
Has anyone run their Ceph OSD cluster network on IPv6 using SLAAC? I know
that ceph supports IPv6, but I'm not sure how it would deal with the
address rotation in SLAAC, permanent vs outgoing address, etc. It would be
very nice for me, as I wouldn't have to run any kind of DHCP server or use
static
In an OpenStack (mitaka) cloud, backed by a ceph cluster (10.2.6 jewel), using
libvirt/qemu (1.3.1/2.5) hypervisors on Ubuntu 14.04.5 compute and ceph hosts,
we occasionally see hung processes (usually during boot, but otherwise as
well), with errors reported in the instance logs as shown below.
I'm following up to myself here, but I'd love to hear if anyone knows
how the global quotas can be set in jewel's radosgw. I haven't found
anything which has an effect - the documentation says to use:
radosgw-admin region-map get > regionmap.json
...edit the json file
radosgw-admin region-map s
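The "edit the json file" step in the middle can be scripted. A sketch of what that edit might look like; the quota field names below are assumptions from memory of Jewel-era radosgw output, so verify them against your own regionmap.json before using this:

```python
import json

# Abbreviated stand-in for the file produced by "region-map get";
# a real region map has many more fields.
regionmap = json.loads("""
{
  "regions": [
    {
      "key": "default",
      "val": {
        "name": "default",
        "bucket_quota": {"enabled": false, "max_size_kb": -1, "max_objects": -1},
        "user_quota": {"enabled": false, "max_size_kb": -1, "max_objects": -1}
      }
    }
  ]
}
""")

# Enable a global per-bucket object cap in every region.
for region in regionmap["regions"]:
    q = region["val"]["bucket_quota"]
    q["enabled"] = True
    q["max_objects"] = 100000  # example cap, pick your own

print(regionmap["regions"][0]["val"]["bucket_quota"]["enabled"])  # True
```

The edited structure would then be written back out and fed to the follow-up radosgw-admin command.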
Hi all,
I'm upgrading the ceph cluster from Hammer 0.94.9 to Jewel 10.2.6.
The ceph cluster has 3 servers (one mon and one mds each) and another 6
servers with
12 osds each.
The mons and mds have been successfully upgraded to the latest jewel
release; however,
after upgrading the first osd server(
I mistakenly replied to Wido instead of the whole mailing list (weird
ml settings, I suppose)
Here is my message:
Thanks for replying so quickly. I commented inline.
On 03/27/2017 01:34 PM, Wido den Hollander wrote:
>
>> Op 27 maart 2017 om 13:22 schreef Christian Balzer :
>>
>>
>>
>> Hell
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Wido den Hollander
> Sent: 27 March 2017 12:35
> To: ceph-users@lists.ceph.com; Christian Balzer
> Subject: Re: [ceph-users] New hardware for OSDs
>
>
> > Op 27 maart 2017 om 13:22 schreef C
Hi,
we are testing Ceph as block storage (XFS based OSDs) running in a hyper
converged setup with KVM as hypervisor. We are using NVMe SSD only (Intel DC
P5320) and I would like to use jemalloc on Ubuntu xenial (current kernel
4.4.0-64-generic). I tried to use /etc/default/ceph and uncommented
Jason,
do you think it's a good idea to introduce an rbd_config object to
record some per-pool configuration, such as default_features?
That means we could set some configuration differently in different
pools. In this way, we could also handle the per-pool setting in rbd-mirror.
Thanks
Yang
O
On Mon, Mar 27, 2017 at 4:00 AM, Dongsheng Yang
wrote:
> Hi Fulvio,
>
> On 03/24/2017 07:19 PM, Fulvio Galeazzi wrote:
>
> Hallo, apologies for my (silly) questions, I did try to find some doc on
> rbd-mirror but was unable to, apart from a number of pages explaining how to
> install it.
>
> My en
@ Niv Azriel : What is your leveldb version and has it been fixed now?
@ Wido den Hollander : I also met a similar problem:
the size of my leveldb
is about 17 GB (300+ osds); there are a lot of sst files (each sst file is
2 MB) in /var/lib/ceph/mon. (a networ
> Op 27 maart 2017 om 8:41 schreef Muthusamy Muthiah
> :
>
>
> Hi Wido,
>
> Yes slow map update was happening and CPU hitting 100%.
So it indeed seems you are CPU bound at that moment. That's indeed a problem
when you have a lot of map changes to work through on the OSDs.
It's recommended t
I suppose the other option here, which I initially dismissed because
Red Hat are not supporting it, is to have a CephFS dir/tree bound to a
cache-tier fronted EC pool. Is anyone having luck with such a setup?
On 3 March 2017 at 21:40, Blair Bethwaite wrote:
> Hi Marc,
>
> Whilst I agree CephFS wo
Thanks for the useful reply Robin and sorry for not getting back sooner...
> On Fri, Mar 03, 2017 at 18:01:00 +, Robin H. Johnson wrote:
> On Fri, Mar 03, 2017 at 10:55:06 +1100, Blair Bethwaite wrote:
>> Does anyone have any recommendations for good tools to perform
>> file-system/tree backup
> Op 27 maart 2017 om 13:22 schreef Christian Balzer :
>
>
>
> Hello,
>
> On Mon, 27 Mar 2017 12:27:40 +0200 Mattia Belluco wrote:
>
> > Hello all,
> > we are currently in the process of buying new hardware to expand an
> > existing Ceph cluster that already has 1200 osds.
>
> That's quite s
> Op 26 maart 2017 om 9:44 schreef Niv Azriel :
>
>
> after network issues, ceph cluster fails.
> leveldb grows and takes a lot of space
> ceph mon can't write to leveldb because there is not enough space on the
> filesystem.
> (there is a lot of ldb file on /var/lib/ceph/mon)
>
It is normal that t
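Not from the thread, but relevant for anyone hitting this: monitor leveldb stores can be compacted to reclaim space. A sketch (verify the option and command names against your release):

```shell
# One-off compaction of a monitor's leveldb store:
ceph tell mon.mon01 compact

# Or compact automatically on daemon start via ceph.conf:
#   [mon]
#   mon compact on start = true
```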
Hello,
On Mon, 27 Mar 2017 12:27:40 +0200 Mattia Belluco wrote:
> Hello all,
> we are currently in the process of buying new hardware to expand an
> existing Ceph cluster that already has 1200 osds.
That's quite sizable, is the expansion driven by the need for more space
(big data?) or to incre
Hey Brad,
Many thanks for the explanation...
> ~~~
> WARNING: the following dangerous and experimental features are enabled:
> ~~~
> Can I ask why you want to disable this warning?
We are using bluestore with Kraken; we are aware that this is a tech preview.
To hide these warnings we compiled like thi
Hello all,
we are currently in the process of buying new hardware to expand an
existing Ceph cluster that already has 1200 osds.
We are currently using 24 * 4 TB SAS drives per osd with an SSD journal
shared among 4 osds. For the upcoming expansion we were thinking of
switching to either 6 or 8 TB
Hi,
Does anyone have any cluster of a decent scale running on Kraken and bluestore?
How are you finding it? Have you had any big issues arise?
Was it running non-bluestore before, and have you noticed any improvement? Read?
Write? IOPS?
Ashley
Sent from my iPhone
On 03/27/2017 04:06 PM, Masha Atakova wrote:
> Hi Yang,
Hi Masha,
> Thank you for your reply. It is indeed very useful to know that there
> are many ImageCtx objects for one image.
> But in my setting, I don't have any particular ceph client connected
> to ceph (I could, but this is not the point). I'm
Hi Yang,
Thank you for your reply. It is indeed very useful to know that there are
many ImageCtx objects for one image.
But in my setting, I don't have any particular ceph client connected to
ceph (I could, but this is not the point). I'm trying to get metrics for a
particular image while not perfor
Hi Fulvio,
On 03/24/2017 07:19 PM, Fulvio Galeazzi wrote:
Hallo, apologies for my (silly) questions, I did try to find some doc
on rbd-mirror but was unable to, apart from a number of pages
explaining how to install it.
My environment is CentOS 7 and Ceph 10.2.5.
Can anyone help me understand
Hello,
We are facing some performance issues with rados benchmarking on a 5 node
cluster with PG num 4096 vs 8192.
As per the PG calculation, below is our specification:

Size  OSD  %Data  Targets  PG count
5     340  100    100      8192
5     340  100    50       4096

With 8192 PG count we got good performance with 409
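Those two rows follow the usual rule of thumb: raw PGs = OSDs * target PGs per OSD * %data / replica size, rounded up to the next power of two. A quick sketch to check the numbers:

```python
def pg_count(osds, targets_per_osd, percent_data, size):
    """PGs-per-pool rule of thumb: round (osds * targets * %data / size)
    up to the next power of two."""
    raw = osds * targets_per_osd * (percent_data / 100.0) / size
    power = 1
    while power < raw:
        power *= 2
    return power

print(pg_count(340, 100, 100, 5))  # 8192 (340*100/5 = 6800, rounded up)
print(pg_count(340, 50, 100, 5))   # 4096 (340*50/5 = 3400, rounded up)
```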