Hi,
We recently needed to set up Ceph's object storage, but so far we have not
managed to get it working: connecting over S3 just shows "connection refused".
Could you please give some help, e.g. setup documents or a sample configuration
file? Thanks a lot!
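A few generic checks when an S3 client reports "connection refused" (a sketch only; the hostname, port, and user ID below are placeholders):

# Is the gateway process running, and is anything listening on the port
# the S3 client points at (80 by default with apache+fastcgi)?
ps aux | grep radosgw
netstat -tlnp | grep -E ':80 |radosgw'

# Create a test user; the command prints the access and secret keys to use
radosgw-admin user create --uid=testuser --display-name="Test User"

# A plain HTTP request against the gateway should return an S3-style XML
# response rather than "connection refused"
curl -v http://gw.example.com/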
----- Original Mail -----
From: Derek Yarn
On 14/01/2014 07:49, ZHOU Yuan wrote:
> Hi Loic, thanks for the education!
>
> I’m also trying to understand the new ‘indep’ mode. Is this new mode designed
> for Ceph-EC only? It seems that all of the data in a 3-copy system are
> equivalent, so this new algorithm should also work?
>
In the be
> > When using a pool size of 3, I get the following behavior when one OSD
> > fails:
> > * the affected PGs get marked active+degraded
> >
> > * there is no data movement/backfill
>
> Works as designed, if you have the default crush map in place (all replicas
> must
> be on DIFFERENT hosts). You
On 01/14/2014 09:44 AM, Dietmar Maurer wrote:
>>> When using a pool size of 3, I get the following behavior when one OSD
>>> fails:
>>> * the affected PGs get marked active+degraded
>>>
>>> * there is no data movement/backfill
>>
>> Works as designed, if you have the default crush map in place (a
Hello,
I have 3 servers, with 3 HDD OSDs per server (4 TB WD RE). I'm primarily
using radosgw. Every .rgw.* pool has 3 replicas. Every server is a RADOS
gateway with apache2+fastcgi (Ceph-patched version). Server type:
SuperMicro SSG-6047R-E1R36L.
I've got 10 client machines which upload objects pa
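One quick sanity check for a setup like this is to confirm the replica count and pg_num actually set on the .rgw.* pools (a sketch; exact output varies by version):

# pool definitions, including size (replica count) and pg_num
ceph osd dump | grep '^pool'

# per-pool object counts and usage
rados df
ceph df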
Hi,
One thing to note: when reaching out for help in the future, it's usually
helpful to start a fresh thread rather than reply to an unrelated one. If you
have a follow-up to this message, you might want to start a new email thread.
The documentation here (
http://ceph.com/docs/master/install/insta
> Sorry, it seems as if I had misread your question: Only a single OSD fails,
> not the
> whole server?
Yes, only a single OSD is down and marked out.
> Then there should definitely be backfilling taking place.
no, this does not happen. Many PGs stay in degraded state (I tested this
several
On 01/14/2014 10:06 AM, Dietmar Maurer wrote:
> Yes, only a single OSD is down and marked out.
Sorry for the misunderstanding then.
>> Then there should definitely be backfilling taking place.
>
> no, this does not happen. Many PGs stay in degraded state (I tested this
> several times now).
> Are you aware of this?
> http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
> => Stopping w/out Rebalancing
What do you think is wrong with my setup? I want to re-balance. The problem is
that it does not
happen at all!
I do exactly the same test with and without 'ceph osd c
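For context, the single-OSD failure test looks roughly like this (osd.3 is a placeholder id, and the service command depends on the init system); the expectation is that recovery/backfill starts shortly after the OSD is marked out:

ceph osd tree                  # pick an OSD to fail, e.g. osd.3
service ceph stop osd.3        # stop the daemon on its host
ceph osd out 3                 # mark it out (or wait for the down->out timeout)
ceph -s                        # should show recovery/backfill starting
ceph pg dump_stuck unclean     # PGs that never return to active+clean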
It seems this is a bug, but it only happens with
- 3 nodes
- 4 OSDs per node
- pool size 3
- tunables optimal
Tested with 0.72 and 0.74.
Note: it does not occur when using 3 OSDs per node.
Can somebody reproduce this?
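One way to try to reproduce this without touching a live cluster is to run the compiled crush map through crushtool and look for incomplete mappings (file names below are placeholders):

ceph osd getcrushmap -o crush.bin        # grab the binary crush map
crushtool -d crush.bin -o crush.txt      # decompile for inspection
# simulate placement for a size-3 rule and report any input that does not
# map to 3 OSDs
crushtool -i crush.bin --test --rule 0 --num-rep 3 --show-bad-mappings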
> -----Original Message-----
> From: ceph-users-boun...@lists.ceph.com [mailto:cep
> 4k random write, around 300 IOPS, 1.2 MB/s.
> Do these figures look reasonable to others? What kind of IOPS should I be
> expecting?
Hi,
Bit late to the party, but here are my results with fio:
My results were highly dependent on the number of jobs configured in
fio:
Result jo
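For reference, a typical fio invocation for this kind of 4k random-write test (all values are examples; numjobs is the knob that usually changes results the most, and /dev/rbd0 is a placeholder device that would be overwritten):

fio --name=randwrite-test --filename=/dev/rbd0 \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --numjobs=4 --runtime=60 --group_reporting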
Thanks! Actually, I have now solved the problem. The process always hung because
iptables on CentOS was enabled and did not accept traffic on the corresponding ports.
Best regards!
-----Original Message-----
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Monday, January 13, 2014 9:43 PM
To: You,
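For anyone hitting the same problem: the ports involved are the monitor port (6789) and the OSD port range (6800-7300 with recent defaults); a sketch of rules that open them on a CentOS 6 box:

iptables -I INPUT -p tcp --dport 6789 -j ACCEPT       # Ceph monitors
iptables -I INPUT -p tcp --dport 6800:7300 -j ACCEPT  # OSD/MDS daemons
service iptables save                                 # persist across reboots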
Hi ceph-users and ceph-devel,
I came across an issue after restarting the monitors of the cluster:
authentication fails, which prevents running any ceph command.
After we did some maintenance work, I restarted the OSDs; however, I found that an
OSD would not join the cluster automatically after being
On Tue, 14 Jan 2014, GuangYang wrote:
> Hi ceph-users and ceph-devel,
> I came across an issue after restarting monitors of the cluster, that
> authentication fails which prevents running any ceph command.
>
> After we did some maintenance work, I restart OSD, however, I found that the
> OSD wou
Seems that marking an OSD as 'out' has other effects than removing an OSD from
the crush map.
I guess the weights are not changed if the OSD is marked out?
So how can I test that with crushtool?
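One approach that should approximate 'out' in a crushtool test run is to override the device's weight to 0 rather than removing it from the map (osd id 3 and the file name are placeholders):

# simulate osd.3 being marked out (weight 0) without removing it
crushtool -i crush.bin --test --rule 0 --num-rep 3 \
    --weight 3 0 --show-bad-mappings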
On Tue, 14 Jan 2014, Dietmar Maurer wrote:
>
> Seems that marking an OSD as 'out' has other effects than removing an OSD
> from crush map.
>
> I guess weights are not changed if the OSD is marked out?
>
>
Right. The 'out' is like an exception. The PGs on that OSD are
redistributed uniformly
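Put differently (my reading, with illustrative commands; id 3 is a placeholder): 'out' changes the temporary override weight, while crush reweight changes the crush map itself:

ceph osd out 3                    # roughly the same as: ceph osd reweight 3 0
ceph osd reweight 3 1.0           # bring it back in
ceph osd crush reweight osd.3 0   # change the crush weight in the map itself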
Thanks Sage.
-bash-4.1$ sudo ceph --admin-daemon /var/run/ceph/ceph-mon.osd151.asok
mon_status
{ "name": "osd151",
"rank": 2,
"state": "electing",
"election_epoch": 85469,
"quorum": [],
"outside_quorum": [],
"extra_probe_peers": [],
"sync_provider": [],
"monmap": { "epoch": 1,
Hello,
In http://ceph.com/docs/next/rbd/rbd-config-ref/ it is said that:
"The kernel driver for Ceph block devices can use the Linux page cache to
improve performance."
Is there anywhere that provides more details about this?
As in, "can" implies that it might need to be enabled somewhere, som
On Tuesday, January 14, 2014, Christian Balzer wrote:
>
> Hello,
>
> In http://ceph.com/docs/next/rbd/rbd-config-ref/ it is said that:
>
> "The kernel driver for Ceph block devices can use the Linux page cache to
> improve performance."
>
> Is there anywhere that provides more details about this?
It's referring to the standard Linux page cache
(http://www.moses.uklinux.net/patches/lki-4.html), which is not something
you need to set up.
I use Ceph for OpenNebula storage, which is qemu-kvm based, and have
had no issues with live migrations.
If the storage is marked "shareable", the live mi
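For completeness, the cache settings on that docs page only affect librbd (qemu/kvm), not the kernel driver; a typical ceph.conf client section looks something like this (values are examples, not recommendations):

[client]
    rbd cache = true
    rbd cache size = 33554432                   # 32 MB, example value
    rbd cache writethrough until flush = true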
On Tue, 14 Jan 2014, Gregory Farnum wrote:
> On Tuesday, January 14, 2014, Christian Balzer wrote:
> Also on that page we read:
> "Since the cache is local to the client, there?s no coherency if
> there are
> others accesing the image. Running GFS or OCFS on top of RBD
>
This is a big release, with lots of infrastructure going in for
firefly. The big items include a prototype standalone frontend for
radosgw (which does not require apache or fastcgi), tracking for read
activity on the osds (to inform tiering decisions), preliminary cache
pool support (no snapshots
Hello,
Firstly thanks to Greg and Sage for clearing this up.
Now all I need for a very early Xmas is ganeti 2.10 released and a Debian
KVM release that has RBD enabled. ^o^
Meaning that for now I'm stuck with the kernel route in my setup.
On Wed, 15 Jan 2014 03:31:06 + michael wrote:
> It's
We observe strange behavior with some configurations. PGs stay in a degraded
state after
a single OSD failure.
I can also show the behavior using crushtool with the following map:
--crush map-
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
t