Hi Ceph Users,
I deployed a development cluster using vstart with 3 MONs and 3 OSDs.
In my experiment, I kill one of the monitor nodes by its pid, like this:
$ kill -SIGSEGV 27557
After a new monitor leader is chosen, I would like to re-run the monitor
that I've killed in the previous step. How do I
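In case it helps anyone else: with a vstart cluster you can normally just start the killed daemon again by hand from the build directory. A minimal sketch, assuming vstart defaults (mon id "b", config file in the build directory); adjust the paths and id to your setup:
$ cd ceph/build                                # or ceph/src on older, pre-cmake trees
$ ./bin/ceph-mon -i b -c ./ceph.conf           # restart the monitor that was killed
$ ./bin/ceph -c ./ceph.conf quorum_status      # check that it rejoined the quorum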
Hello,
Can we use rbd journaling without using rbd mirroring in Jewel? That way
we could put the rbd journals on SSD pools and improve write IOPS on standard
(non-mirrored) RBD images,
assuming IOs are acknowledged once written to the journal pool.
Everything I read regarding RBD journaling is relate
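For the discussion: the Jewel rbd CLI appears to let you place an image's journal in a different pool when the journaling feature is enabled. A rough sketch with made-up pool/image names; double-check the option spelling against `rbd help feature enable`:
$ ceph osd pool create rbd-journal-ssd 128                                  # SSD-backed pool (hypothetical)
$ rbd feature enable rbd/myimage journaling --journal-pool rbd-journal-ssd  # journaling requires exclusive-lock
$ rbd journal info --pool rbd --image myimage                               # confirm where the journal lives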
Looks like they are having major challenges getting that ceph cluster
running again. Still down.
On Tuesday, October 11, 2016, Ken Dreyer wrote:
> I think this may be related:
>
http://www.dreamhoststatus.com/2016/10/11/dreamcompute-us-east-1-cluster-service-disruption/
>
> On Tue, Oct 11, 2016
Sure. Thank you for help. I'll check it.
From: John Spray
Sent: 11 October 2016 12:56:02
To: Lu Dillon
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] can I create multiple pools for cephfs
On Tue, Oct 11, 2016 at 1:46 PM, Lu Dillon wrote:
> Hi, John
>
> Thanks.
On Tue, Oct 11, 2016 at 12:20 AM, Davie De Smet
wrote:
> Hi,
>
> We do use hardlinks a lot. The application using the cluster has a built-in
> 'trashcan' functionality based on hardlinks. Obviously, all removed files and
> hardlinks are not visible anymore on the CephFS mount itself. Can I manua
Hi,
I am evaluating RGW Multisite for disaster recovery.
I created us-east zone(master) and us-west zone in us zone group.
How can I see the replication status:
which objects are not replicated yet, which are replicating, and which are replicated?
When one zone is down, I cannot compare the object lists of the two zones.
So this i
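In case it helps: Jewel's multisite code reports sync progress through radosgw-admin at shard granularity rather than as a per-object list, but it does tell you whether a zone is caught up. A sketch, with placeholder zone and bucket names:
$ radosgw-admin sync status --rgw-zone=us-west          # overall metadata/data sync state
$ radosgw-admin bucket sync status --bucket=mybucket    # per-bucket sync view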
HI Chris,
That's an interesting point, I bet the managed switches don't have jumbo
frames enabled.
I think I am going to leave everything at our colo for now.
Cheers,
Mike
On Tue, Oct 11, 2016 at 2:42 PM, Chris Taylor wrote:
>
>
> I see on this list often that peering issues are related to ne
I see on this list often that peering issues are related to networking
and MTU sizes. Perhaps the HP 5400's or the managed switches did not
have jumbo frames enabled?
Hope that helps you determine the issue in case you want to move the
nodes back to the other location.
Chris
On 2016-10-11
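For what it's worth, a quick way to check whether jumbo frames actually survive the whole path is a don't-fragment ping sized just under 9000 bytes (Linux syntax; 8972 = 9000 minus the 28 bytes of IP/ICMP headers; interface name and peer IP are placeholders):
$ ip link show eth0 | grep mtu        # confirm the local interface MTU
$ ping -M do -s 8972 -c 3 10.0.0.2    # fails if any hop in the path lacks jumbo frames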
Hi David,
VLAN connectivity was good, the nodes could talk to each other on either
their private or public network. I really think they were doing something
weird across the fiber, not an issue with Ceph or how it was set up.
Thanks for the help!
Cheers,
Mike
On Tue, Oct 11, 2016 at 2:39 PM, D
There is a config option to have a public and private network configured on
your storage nodes. The private is what they would use to talk to each other
to do backfilling and recovery while the public is where clients access the
cluster. If your servers were able to communicate with each other
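For reference, that split is just two options in ceph.conf on the storage nodes; a minimal sketch with example subnets:
[global]
public network  = 192.168.1.0/24    # client-facing traffic (example subnet)
cluster network = 192.168.2.0/24    # OSD replication/backfill traffic (example subnet)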
Hi Goncalo,
Thanks for your reply! I finally figured out that our issue was with the
physical setup of the nodes. We had one OSD and MON node in our office and
the others are co-located at our ISP. We have an almost dark fiber going
between our two buildings connected via HP 5400's, but it real
You're right that you could be fine above the warning threshold, you can also
be in a bad state while being within the threshold. There is no one size fits
all for how much memory you need. The defaults and recommendations are there
as a general guide to get you started. If you want to push t
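Assuming the warning threshold in question is the PGs-per-OSD warning, the knob behind it is mon_pg_warn_max_per_osd (default 300 in Jewel). A sketch of how one might inspect and, cautiously, raise it; the value below is purely an example, not a recommendation:
$ ceph daemon mon.a config get mon_pg_warn_max_per_osd   # current value (mon id "a" assumed)
# then in ceph.conf on the monitors, e.g.:
#   [mon]
#   mon pg warn max per osd = 400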
On Tue, Oct 11, 2016 at 2:30 PM, Henrik Korkuc wrote:
> On 16-10-11 14:30, John Spray wrote:
>>
>> On Tue, Oct 11, 2016 at 12:00 PM, Henrik Korkuc wrote:
>>>
>>> Hey,
>>>
>>> After a bright idea to pause 10.2.2 Ceph cluster for a minute to see if
>>> it
>>> will speed up backfill I managed to cor
On 11 October 2016 15:10:22 GMT+02:00, David Turner wrote:
>
>
>This is terrifying to think about increasing that threshold. You may
>have enough system resources while running healthy, but when
>recovering, especially during peering, your memory usage can more than
>double. If you have too
First I'm addressing increasing your PG counts, as that is what you specifically
asked about; however, I do not believe that is your problem, and I'll explain
why later.
There are a few recent threads on the ML about increasing the pg_num and
pgp_num on a cluster. But if you learn how to search
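For completeness, the mechanical part is two values per pool, usually raised in modest steps so peering and backfill stay manageable; a sketch with a placeholder pool and target:
$ ceph osd pool get rbd pg_num        # current value ("rbd" pool as an example)
$ ceph osd pool set rbd pg_num 512    # create the new placement groups
$ ceph osd pool set rbd pgp_num 512   # then let data actually rebalance onto them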
Hi Patrick,
1) Université de Lorraine (7,000 researchers and staff members, 60,000
students, 42 schools and education structures, 60 research labs).
2) RHCS cluster: 144 OSDs on 12 nodes for 520 TB raw capacity.
VMware clusters: 7 VMware clusters (40 ESXi hosts). First need is
to provide
Hi,
Yes there is a problem at the moment, there is another ML thread with more
details.
The eu repo mirror should still be working eu.ceph.com
Thanks
On 11 Oct 2016 3:07 p.m., "wenngong" wrote:
> Hi Dear,
>
> I am trying to study and install ceph from official website. But I cannot
> access:
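A possible stop-gap while ceph.com is down, sketched for an el7/yum setup: point the repo at the eu mirror. The paths below assume eu.ceph.com mirrors the download.ceph.com layout, so verify them before relying on this:
# /etc/yum.repos.d/ceph.repo (sketch)
[ceph]
name=Ceph packages
baseurl=http://eu.ceph.com/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=http://eu.ceph.com/keys/release.asc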
Hi Dear,
I am trying to study and install ceph from the official website. But I cannot
access ceph.com and got this error message:
Error establishing a database connection
Meanwhile, I also got error when install ceph-deploy:
http://ceph.com/rpm-firefly/el7/noarch/repodata/repomd.xml: [Errno 1
On 16-10-11 14:30, John Spray wrote:
On Tue, Oct 11, 2016 at 12:00 PM, Henrik Korkuc wrote:
Hey,
After a bright idea to pause 10.2.2 Ceph cluster for a minute to see if it
will speed up backfill I managed to corrupt my MDS journal (should it happen
after cluster pause/unpause, or is it some so
I would actually recommend the exact opposite configuration for a
high-performance, journaled image: a small but fast SSD/NVMe-backed
pool for the journal data, and a large pool for your image data.
With the librbd in-memory, writeback cache enabled, the IO operations
will be completed as soon as
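For context, the writeback cache mentioned here is configured on the client side in ceph.conf; a minimal sketch with illustrative values:
[client]
rbd cache = true                            # enable the librbd in-memory cache
rbd cache writethrough until flush = true   # stay safe until the guest issues its first flush
rbd cache max dirty = 25165824              # example: bytes of dirty data allowed before writeback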
On Oct 11, 2016, at 1:21 AM, Thomas HAMEL <hm...@t-hamel.fr> wrote:
Hello, I have the same problem and I wanted to make a few remarks.
One of the main pieces of advice on PG count in the docs is: if you have fewer than 5
OSDs, use 128 PGs per pool. This kind of rule of thumb is really what you are
On Tue, Oct 11, 2016 at 1:46 PM, Lu Dillon wrote:
> Hi, John
>
> Thanks. Is this a new feature of Jewel? Now, we are using Hammer.
The path-based auth caps and restricting by namespace are new in
Jewel; the pool layouts have existed much longer. I do not recommend
using CephFS on anything older t
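To make that concrete, here is a rough sketch of the two mechanisms mentioned above: a second data pool attached via a file layout, plus a Jewel path-restricted cap. Pool, directory, and client names are placeholders, and the add_data_pool spelling differs between releases (newer ones use `ceph fs add_data_pool`):
$ ceph osd pool create cephfs_data_b 64        # extra data pool (placeholder name)
$ ceph mds add_data_pool cephfs_data_b         # register it with CephFS (Jewel-era command)
$ setfattr -n ceph.dir.layout.pool -v cephfs_data_b /mnt/cephfs/clientB   # pin a directory to the new pool
$ ceph auth get-or-create client.clientB \
    mds 'allow r, allow rw path=/clientB' \
    mon 'allow r' \
    osd 'allow rw pool=cephfs_data_b'          # Jewel path-restricted cap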
Hi, John
Thanks. Is this a new feature of Jewel? Now, we are using Hammer.
Sent from my Samsung Galaxy smartphone.
Original message
From: John Spray
Date: 2016/10/11 19:17 (GMT+08:00)
To: 卢 迪
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] can I create multiple pools for cephfs
On Tue, Oct 11, 2016 at 12
I think this may be related:
http://www.dreamhoststatus.com/2016/10/11/dreamcompute-us-east-1-cluster-service-disruption/
On Tue, Oct 11, 2016 at 5:57 AM, Sean Redmond wrote:
> Hi,
>
> Looks like the ceph website and related sub domains are giving errors for
> the last few hours.
>
> I noticed th
Hi,
Looks like the ceph website and related sub domains are giving errors for
the last few hours.
I noticed the below that I use are in scope.
http://ceph.com/
http://docs.ceph.com/
http://download.ceph.com/
http://tracker.ceph.com/
Thanks
On Tue, Oct 11, 2016 at 12:18 PM, Tomáš Kukrál wrote:
> Hi,
> I wanted to have more control over the configuration than provided by
> ceph-deploy and tried Ceph-ansible https://github.com/ceph/ceph-ansible.
>
> However, it was too complicated, so I created ceph-ansible-simple
> https://github
On Tue, Oct 11, 2016 at 12:00 PM, Henrik Korkuc wrote:
> Hey,
>
> After a bright idea to pause 10.2.2 Ceph cluster for a minute to see if it
> will speed up backfill I managed to corrupt my MDS journal (should it happen
> after cluster pause/unpause, or is it some sort of a bug?). I had "Overall
>
Hi,
We have a production Ceph platform in our OpenStack farm. This platform
has the following specs:
1 Admin Node
3 Monitors
7 Ceph nodes with 160 OSDs on 1.2 TB 10K SAS HDDs. Around 30 OSDs have SSD
journals... we are in the middle of updating... ;-)
The whole network runs on 10 Gb Ethernet links and we have some pro
Hi,
I wanted to have more control over the configuration than provided by
ceph-deploy and tried Ceph-ansible https://github.com/ceph/ceph-ansible.
However, it was too complicated, so I created ceph-ansible-simple
https://github.com/tomkukral/ceph-ansible-simple
Feel free to use it and le
On Tue, Oct 11, 2016 at 12:05 PM, 卢 迪 wrote:
> Hi All,
>
>
> When I create meta and data pools for CEPHFS, all clients use the same pools
> for cephfs. Can I create multiple pools for different users?
>
>
> For example, I have pool_A for client A to mount cephfs; I want to create a
> pool_B for cli
Hi All,
When I create meta and data pools for CEPHFS, all clients use the same pools
for cephfs. Can I create multiple pools for different users?
For example, I have pool_A for client A to mount cephfs; I want to create a
pool_B for client B to mount cephfs. The purpose is to make sure client
Hey,
After a bright idea to pause 10.2.2 Ceph cluster for a minute to see if
it will speed up backfill I managed to corrupt my MDS journal (should it
happen after cluster pause/unpause, or is it some sort of a bug?). I had
"Overall journal integrity: DAMAGED", etc
I was following
http://doc
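For anyone following along, the disaster-recovery page referenced here walks through cephfs-journal-tool; the usual sequence looks roughly like the following. It is destructive, so stop the MDS and keep the exported backup before resetting anything:
$ cephfs-journal-tool journal export backup.bin        # save a copy of the damaged journal
$ cephfs-journal-tool event recover_dentries summary   # salvage whatever metadata can still be read
$ cephfs-journal-tool journal reset                    # discard the damaged journal (destructive)
$ cephfs-journal-tool journal inspect                  # should now report a clean journal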
Hello,
I am looking for information about the radosgw-object-expirer program.
A Google search doesn't help and there is no man page.
Can someone help me?
Thanks
Morgan
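Not a man page, but as far as I understand it, radosgw-object-expirer is the background helper that removes objects whose Swift expiration time has passed, so the user-facing side is just the Swift expiry headers. A sketch with placeholder container/object names:
$ swift post mycontainer myobject -H "X-Delete-After: 86400"   # expire the object in one day
$ swift stat mycontainer myobject | grep -i x-delete           # check the stored expiry time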
Hello, I have the same problem and I wanted to make a few remarks.
One of the main pieces of advice on PG count in the docs is: if you have fewer than 5
OSDs, use 128 PGs per pool. This kind of rule of thumb is really what you are
looking for when you begin. But this one is very misleading, especially if y
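The flat "128 PGs" figure comes from the docs' rough per-pool formula, approximately (number of OSDs x 100) / replica count, rounded up to a power of two. A quick worked example with illustrative numbers:
#  3 OSDs, size 3 pool:  3 * 100 / 3 = 100  -> round up to 128 PGs (the "128" rule)
#  9 OSDs, size 3 pool:  9 * 100 / 3 = 300  -> round up to 512 PGs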
It may look like a boys' club, but I believe that sometimes, for
proof-of-concept projects or at the beginning of a commercial project
without a lot of investment, it is worth considering used hardware. For
example, it's possible to find used Quanta LB6M switches with 24x 10GbE
SFP+ ports for $298
Hello,
On Tue, 11 Oct 2016 08:30:47 +0200 Gandalf Corvotempesta wrote:
> On 11 Oct 2016 3:05 AM, "Christian Balzer" wrote:
> > 10Gb/s MC-LAG (white box) switches are also widely available and
> > affordable.
> >
>
> At which models are you referring to?
> I've never found any 10gb switche
Hi,
We do use hardlinks a lot. The application using the cluster has a built-in
'trashcan' functionality based on hardlinks. Obviously, all removed files and
hardlinks are not visible anymore on the CephFS mount itself. Can I manually
remove the strays on the OSDs themselves? Or do you mean th
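As a side note, strays are bookkeeping in the MDS rather than something to delete by hand on the OSDs; as far as I understand they are purged automatically once the last link (e.g. the trashcan hardlink) goes away. A hedged way to watch the count drain, with the mds id as a placeholder:
$ ceph daemon mds.a perf dump | grep -i stray   # num_strays / strays_purged counters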