RGW hammer -> jewel
The following method helped me. After upgrading, recreate the RGW instance
from scratch on Jewel:
ceph auth del client.rgw.ceph403
rm -rf /var/lib/ceph/radosgw/ceph-rgw.ceph403/
ceph-deploy --overwrite-conf rgw create ceph403
systemctl stop ceph-radosgw.target
systemctl start ceph-radosgw.target
systemctl status ce
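A quick sanity check after recreating the gateway, assuming the Jewel
default civetweb port 7480 and the instance name used above:
ceph auth get client.rgw.ceph403   # the key recreated by ceph-deploy
curl -s http://ceph403:7480/       # anonymous ListAllMyBuckets XML means RGW is serving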
Hi Jiajia zhong,
I'm running mixed SSDs and HDDs on the same node, and I set it up following
https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/;
I haven't had any problems running SSDs and HDDs on the same node. Now I want
to increase Ceph throughput by increasing network int
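For reference, the approach in that blog post boils down to separate CRUSH
roots/rulesets for SSDs and HDDs and pinning each pool to one of them; a
minimal pre-Luminous sketch (pool name and ruleset id are placeholders):
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: add ssd/hdd roots and one rule per root, as in the post
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
ceph osd pool set ssd-pool crush_ruleset 1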
Hi,
what is the current CPU recommendation for storage nodes with multiple
HDDs attached? In the hardware recommendations [1] it says "Therefore,
OSDs should have a reasonable amount of processing power (e.g., dual
core processors).", but I guess this is for servers with a single OSD.
How many co
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Andreas Gerstmayr
> Sent: 06 March 2017 12:58
> To: Ceph Users
> Subject: [ceph-users] Current CPU recommendations for storage nodes with
> multiple HDDs
>
> Hi,
>
> what is the current CPU
Maxime
I forgot to mention a couple more things that you can try when using SMR HDDs.
You could try to use ext4 with the “lazy” initialization. Another option is
specifying the “lazytime” ext4 mount option. Depending on your workload, you
could possibly see some big improvements.
Rick
> On Feb 18
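For anyone who wants to try those two suggestions, a minimal sketch (device,
mount point and OSD id are placeholders):
mkfs.ext4 -E lazy_itable_init=1,lazy_journal_init=1 /dev/sdX1
mount -o rw,noatime,lazytime /dev/sdX1 /var/lib/ceph/osd/ceph-NN   # lazytime needs kernel >= 4.0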
2017-03-06 14:08 GMT+01:00 Nick Fisk :
>
> I can happily run 12 disks on a 4-core 3.6GHz Xeon E3. I've never seen
> average CPU usage over 15-20%. The only time CPU hits 100% is for the ~10
> seconds when the OSD boots up. Running Jewel BTW.
>
> So, I would say that during normal usage you should h
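As a rough cross-check against the ~1 GHz-per-HDD-OSD rule of thumb that is
often quoted here: 12 OSDs x ~1 GHz ≈ 12 GHz needed, versus 4 cores x 3.6 GHz
= 14.4 GHz available on that E3, which is consistent with the low average
utilization reported above (startup and recovery/backfill being the spiky
exceptions).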
Thanks for the suggestion, however I think my more immediate problem is
the ms_handle_reset messages. I do not think the mds are getting the
updates when I send them.
Dan
On 03/04/2017 09:08 AM, John Spray wrote:
On Fri, Mar 3, 2017 at 9:48 PM, Daniel Davidson
wrote:
ceph daemonperf mds.cep
On Mon, Mar 6, 2017 at 3:03 PM, Daniel Davidson
wrote:
> Thanks for the suggestion, however I think my more immediate problem is the
> ms_handle_reset messages. I do not think the mds are getting the updates
> when I send them.
I wouldn't assume that. You can check the current config state to se
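One way to do that check is via the admin socket on the node where the MDS
runs; a minimal sketch (daemon name and option are placeholders):
ceph daemon mds.ceph-mds1 config show | grep mds_cache   # dump and filter
ceph daemon mds.ceph-mds1 config get mds_cache_size      # or query one option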
Hi, we have a 7-node Ubuntu Ceph hammer cluster (78 OSDs to be exact).
This weekend we've experienced a huge outage of our customers' VMs
(located on pool CUSTOMERS, replica size 3) when lots of OSDs
started to slow-request/block PGs on pool PRIVATE (replica size 1);
basically all PGs blocked wh
Hi,
I am new to Ceph and just trying to get to grips with all the different
concepts.
What I would like to achieve is the following:
1. We have two sites, a main and a backup site. The main site is used
actively for production, and the backup site is there for disaster recovery
but is also used
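Depending on the workload, one common Jewel-era building block for a two-site
setup like this is rbd-mirror for block storage (RGW multisite being the
object-storage equivalent); a minimal sketch, assuming a pool named rbd and a
peer cluster named backup (both placeholders):
rbd feature enable rbd/myimage exclusive-lock,journaling   # mirroring needs journaling
rbd mirror pool enable rbd image        # per-image mode; 'pool' mode mirrors everything
rbd mirror pool peer add rbd client.mirror@backup
# then run the rbd-mirror daemon against the backup cluster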
On 03/03/2017 07:40 AM, K K wrote:
Hello, all!
I have successfully created a 2-zone cluster (se and se2). But my radosgw
machines are sending many GET /admin/log requests to each other after
putting 10k items into the cluster via radosgw. It looks like:
2017-03-03 17:31:17.897872 7f21b9083700 1 civetw
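A steady stream of GET /admin/log between zones is usually just the
metadata/data log polling that multisite sync does; to check whether the zones
are actually caught up rather than stuck retrying (zone name is a placeholder):
radosgw-admin sync status
radosgw-admin data sync status --source-zone=se
radosgw-admin metadata sync status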
Hey cephers,
Just as a heads up, there may be some temporary outages next week
(13-16 Mar) of git.ceph.com and drop.ceph.com as we migrate some
infrastructure. Please plan accordingly.
If you have any questions please feel free to reach out to me in the
meantime. Thanks.
--
Best Regards,
Pa
On 28.02.2017 at 09:48, linux...@boku.ac.at wrote:
> Hi,
>
> actually I can't install hammer on wheezy:
>
> ~# cat /etc/apt/sources.list.d/ceph.list
> deb http://download.ceph.com/debian-hammer/ wheezy main
>
> ~# cat /etc/issue
> Debian GNU/Linux 7 \n \l
>
> ~# apt-cache search c
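For what it's worth, the usual checklist when a download.ceph.com repo turns
up empty is the release key plus an apt update; a minimal sketch (no guarantee
that hammer packages for wheezy are still being published):
wget -q -O- https://download.ceph.com/keys/release.asc | apt-key add -
apt-get update
apt-cache policy ceph   # shows which repo (if any) actually provides the package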
Hello,
It's now 10 months after this thread:
http://www.spinics.net/lists/ceph-users/msg27497.html (plus next message)
and we're at the fifth iteration of Jewel and still
osd_tier_promote_max_objects_sec
and
osd_tier_promote_max_bytes_sec
are neither documented (master or jewel), nor mention
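For anyone else hunting for them: they are the cache-tier promotion throttles
and can be set like any other OSD option, for example (values purely
illustrative, not recommendations):
# ceph.conf, [osd] section
osd_tier_promote_max_objects_sec = 10
osd_tier_promote_max_bytes_sec = 5242880
# or injected at runtime
ceph tell osd.* injectargs '--osd_tier_promote_max_objects_sec 10 --osd_tier_promote_max_bytes_sec 5242880'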
On Tue, Mar 7, 2017 at 12:28 AM, Christian Balzer wrote:
>
>
> Hello,
>
> It's now 10 months after this thread:
>
> http://www.spinics.net/lists/ceph-users/msg27497.html (plus next message)
>
> and we're at the fifth iteration of Jewel and still
>
> osd_tier_promote_max_objects_sec
> and
> osd_tie
Hi,
I'm building Ceph 10.2.5 and doing some benchmarking with Erasure Coding.
However, I notice that perf can't find any symbols in the Erasure Coding libraries.
It seems those have been stripped, whereas most other stuff has the symbols
intact.
How can I build with symbols or make sure they don't get
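One way to confirm it really is stripping rather than a perf setup issue is to
look at the installed plugin libraries directly; a rough sketch (paths and
package names vary by distro):
find /usr/lib* -name 'libec_*.so*' -exec file {} \;   # 'stripped' vs 'not stripped'
# symbols usually live in the distro -dbg / debuginfo packages; for a local
# autotools build of 10.2.x, a plain 'make install' (not install-strip) keeps them:
./autogen.sh && ./configure CFLAGS="-g -O2" CXXFLAGS="-g -O2"
make -j$(nproc) && make install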
On Tue, 7 Mar 2017 01:44:53 + John Spray wrote:
> On Tue, Mar 7, 2017 at 12:28 AM, Christian Balzer wrote:
> >
> >
> > Hello,
> >
> > It's now 10 months after this thread:
> >
> > http://www.spinics.net/lists/ceph-users/msg27497.html (plus next message)
> >
> > and we're at the fifth iteratio
Hello,
On Mon, 6 Mar 2017 16:06:51 +0700 Vy Nguyen Tan wrote:
> Hi Jiajia zhong,
>
> I'm running mixed SSDs and HDDs on the same node, and I set it up following
> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/;
> I haven't had any problems running SSDs and HDDs on t
has anyone on the list done an upgrade from hammer (something later than
0.94.6) to jewel with a cache tier configured? i tried doing one last week
and had a hiccup with it. i'm curious if others have been able to
successfully do the upgrade and, if so, did they take any extra steps
related to the
On Mon, 6 Mar 2017 19:57:11 -0700 Mike Lovell wrote:
> has anyone on the list done an upgrade from hammer (something later than
> 0.94.6) to jewel with a cache tier configured? i tried doing one last week
> and had a hiccup with it. i'm curious if others have been able to
> successfully do the upg
On Fri, Mar 3, 2017 at 11:40 PM, Sage Weil wrote:
> On Fri, 3 Mar 2017, Mike Lovell wrote:
>> i started an upgrade process to go from 0.94.7 to 10.2.5 on a production
>> cluster that is using cache tiering. this cluster has 3 monitors, 28 storage
>> nodes, around 370 osds. the upgrade of the monit