I'm working on a draft document on how to set up regions and zones
with metadata replication. Data replication is on the way, but I
haven't worked with it yet. Let me know how it goes, because this
still requires some testing and user feedback.
http://ceph.com/docs/wip-doc-radosgw/radosgw/federate
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>Sent: Friday, September 13, 2013 3:17 PM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] problem with ceph-deploy hanging
>
>On Fri, Sep 13, 2013 at 5:06 PM, Gruher, Joseph R
> wrote
Maybe a doc bug somewhere? The quick start preflight says:
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
You need to have "sudo" before "apt-key add -".
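If it helps, you can sanity-check afterwards that the key actually landed with:

sudo apt-key list

and look for the Ceph (or Inktank) release key in the output -- exactly how the key is
labelled I'm quoting from memory, so treat that part as a guess.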
On Fri, Sep 13, 2013 at 1:16 PM, Gruher, Joseph R
wrote:
> Hello all-
>
>
>
> I’m setting up a
On Fri, Sep 13, 2013 at 5:06 PM, Gruher, Joseph R
wrote:
> Actually, after further analysis, I think the problem is different from what
> I originally suspected. Let me restate the issue from the top. When I run
> ceph-deploy it hangs on the last line shown here and just sits there
> indefinitely
Actually, after further analysis, I think the problem is different from what I
originally suspected. Let me restate the issue from the top. When I run
ceph-deploy it hangs on the last line shown here and just sits there
indefinitely, that's the top level symptom:
root@cephtest01:~# ceph-deploy
Hello all-
I'm setting up a new Ceph cluster (my first time - just a lab experiment, not
for production) by following the docs on the ceph.com website. The preflight
checklist went fine, I installed and updated Ubuntu 12.04.2, set up my user and
set up passwordless SSH, etc. I ran "ceph-deploy
How can I force-unmap a mapped device?
By force I mean unmapping it while it is still in use, like hot-unplugging an HDD cable.
It would be useful to be able to force-unmap an image from another node.
I need to be certain that an image is mounted on only one node, but sometimes I
have stuck processes which are still working with the image on the old node and can't
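The kind of sequence I have in mind for the old node is roughly this (taking, purely
as an example, an image mapped as /dev/rbd0 and mounted at /mnt/rbd -- the device and
mount point are just placeholders):

rbd showmapped          # confirm which /dev/rbdX belongs to the image
fuser -vm /mnt/rbd      # list the processes still using the mount
fuser -km /mnt/rbd      # (destructive) kill those processes
umount /mnt/rbd         # or "umount -l" for a lazy unmount
rbd unmap /dev/rbd0     # unmap once nothing holds the device

rbd's advisory locks ("rbd lock add/list/remove") might also be worth a look for the
only-one-node guarantee, though as far as I can tell they won't stop a node that
ignores them.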
Grrahh! I checked the spam folder, but not this new "Promotions" tab.
That's where the confirmation email was. However, the link doesn't
work. Perhaps it's expired? As before, the "lost password" form says
"Invalid username or password" and the "Register" form says "Login has
already been taken
Any ideas about this?
On Thu, Sep 12, 2013 at 2:27 PM, sriram wrote:
> Adding to the previous issue: I don't see any of the files specified in 1, 2 and 3
> below. I don't have fastcgi.conf, ceph.conf or s3gw.fcgi. I have followed
> everything up to that point in the wiki. Is there anything missing in t
I believe that's too high an allowed skew for the default lease (and related)
settings. The actual complaint is "I got a lease which has ALREADY expired
and can't do anything with that!"
You'll need to either get your clock skew down to less than, say, 1/4
second (which is perfectly doable over three no
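If you do end up tuning the settings instead of the clocks, the knobs I have in mind
are along these lines in ceph.conf (the values here are only illustrative, and I'm
going from memory on the defaults):

[mon]
    # default lease is 5 seconds; a longer lease tolerates more skew,
    # at the cost of slower detection of a dead leader
    mon lease = 10
    # threshold for the "clocks are too skewed" health warning (default 0.05s)
    mon clock drift allowed = 0.1

Fixing NTP so the skew stays in the tens of milliseconds is still the cleaner option.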
On Fri, 13 Sep 2013, Dominik Mostowiec wrote:
> Hi,
> I have ntpd installed on the servers, and the time seems to be OK.
>
> I have a strange log entry:
> 2013-09-12 07:34:40.238659 7fd63ac3e700 -1
> mon.4@3(peon).paxos(auth active c 581328..581348) lease_expire
> from mon.0 10.177.64.4:6789/0 is 0.075434 seconds in
On 13 September 2013 17:12, Simon Leinen wrote:
>
> [We're not using it *instead* of rbd, we're using it *in addition to*
> rbd. For example, our OpenStack (users') cinder volumes are stored in
> rbd.]
So you probably have cinder volumes in rbd but you boot instances from
images. This is why y
>> Just out of curiosity. Why are you using cephfs instead of rbd?
[We're not using it *instead* of rbd, we're using it *in addition to*
rbd. For example, our OpenStack (users') cinder volumes are stored in
rbd.]
To expand on what my colleague Jens-Christian wrote:
> Two reasons:
> - we are
Hey Alan,
I see that you are using gmail. If you have the new interface make
sure you look under the "promotions" tab (that's where my test email
just went). You can also search for email from
'redm...@tracker.ceph.com' to locate the "Your Ceph account
activation" email. Feel free to email me d
Hi,
I have ntpd installed on the servers, and the time seems to be OK.
I have a strange log entry:
2013-09-12 07:34:40.238659 7fd63ac3e700 -1
mon.4@3(peon).paxos(auth active c 581328..581348) lease_expire
from mon.0 10.177.64.4:6789/0 is 0.075434 seconds in the past; mons are laggy
or clocks are too skewed
But value
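For checking the skew from both Ceph's and NTP's side, something like this should
show it:

ceph health detail   # lists any "clock skew detected" warnings per monitor
ntpq -p              # on each mon host, check the offset/jitter against its NTP peers

(The exact wording of the health warning is from memory.)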
On 09/13/2013 03:38 AM, Sage Weil wrote:
On Thu, 12 Sep 2013, Dominik Mostowiec wrote:
Hi,
Today I had some issues with the ceph cluster.
After a new mon election, many OSDs were marked as failed.
Some time later the OSDs booted and, I think, recovered, because many slow requests appeared.
The cluster came back after ab
Hello,
How can I decrease the logging level of radosgw? I uploaded 400k objects
and my radosgw log has grown to 2 GiB. Current settings:
rgw_enable_usage_log = true
rgw_usage_log_tick_interval = 30
rgw_usage_log_flush_threshold = 1024
rgw_usage_max_shards = 32
rgw_usage_max_user_shards = 1
rgw
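Would lowering the debug levels be the right knob here? I was thinking of something
along these lines in the radosgw section of ceph.conf (the section name below is just
the usual example -- it would be whatever the client.radosgw.* instance is called):

[client.radosgw.gateway]
    debug rgw = 0     # radosgw's own debug output
    debug ms = 0      # messenger-layer chatter, which seems to be the bulk of the log

As far as I understand, the usage-log settings above control usage accounting rather
than the size of the log file, so I assume they can stay as they are.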
> Just out of curiosity. Why are you using cephfs instead of rbd?
Two reasons:
- we are still on Folsom
- Experience with "shared storage", as this is something our customers are
asking for all the time
cheers
jc
Hi
Just out of curiosity. Why are you using cephfs instead of rbd?
regards
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
>
> All servers mount the same filesystem. Needless to say, we are a bit
> worried…
>
> The bug was introduced in the 3.10 kernel and will be fixed in the 3.12 kernel by
> commit 590fb51f1c (vfs: call d_op->d_prune() before unhashing dentry). Sage may
> backport the fix to the 3.11 and 3.10 kernels soon.
On Fri, Sep 13, 2013 at 3:09 PM, Jens-Christian Fischer <
jens-christian.fisc...@switch.ch> wrote:
> Hi all
>
> we have started to use CephFS as the backing storage for OpenStack VM
> instance images. We have one pool (backed by SSDs) in our CephCluster that
> is exposed via CephFS to the differen
Hi all
we have started to use CephFS as the backing storage for OpenStack VM instance
images. We have one pool (backed by SSDs) in our CephCluster that is exposed
via CephFS to the different physical hosts on our machine.
The problem we see is that the different hosts have different views on t