On Mon, Aug 18, 2014 at 7:32 PM, Kenneth Waegeman wrote:
>
> - Message from Haomai Wang -
> Date: Mon, 18 Aug 2014 18:34:11 +0800
> From: Haomai Wang
> Subject: Re: [ceph-users] ceph cluster inconsistency?
> To: Kenneth Waegeman
> Cc: Sage Weil, ceph-users@lists.c
On Mon, 18 Aug 2014, Robert LeBlanc wrote:
> This may be a better question for Federico. I've pulled the systemd stuff
> from git and I have it working, but only if I have the volumes listed in
> fstab. Is this the intended way that systemd will function for now or am I
> missing a step? I'm pretty
This may be a better question for Federico. I've pulled the systemd stuff
from git and I have it working, but only if I have the volumes listed in
fstab. Is this the intended way that systemd will function for now or am I
missing a step? I'm pretty new to systemd.
Thanks,
Robert LeBlanc
On Mon,
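For context, a minimal sketch of the sort of fstab entry that makes the units come up; the device, filesystem and OSD id here are assumptions, not taken from Robert's setup:

    # /etc/fstab -- make sure the OSD data dir is mounted before ceph starts
    /dev/sdb1   /var/lib/ceph/osd/ceph-0   xfs   noatime,inode64   0 0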
I have the same results. The primary zone (with log_meta and log_data
true) has bilog data; the secondary zone (with log_meta and log_data
false) does not have bilog data.
I'm just guessing here (I can't test it right now)... I would think that
disabling log_meta and log_data will stop adding new
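For what it's worth, those two flags live in the region map (per zone entry) rather than in ceph.conf; a rough sketch of the relevant fragment, with placeholder zone names and endpoints, applied with 'radosgw-admin region set':

    "zones": [
        { "name": "primary",   "endpoints": ["http://primary.example.com:80/"],
          "log_meta": "true",  "log_data": "true" },
        { "name": "secondary", "endpoints": ["http://secondary.example.com:80/"],
          "log_meta": "false", "log_data": "false" }
    ]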
After replacing the broken disk and the ceph osd on it, the cluster shows:
ceph health detail
HEALTH_WARN 2 pgs stuck unclean; recovery 60/346857819 degraded (0.000%)
pg 3.884 is stuck unclean for 570722.873270, current state
active+remapped, last acting [143,261,314]
pg 3.154a is stuck unclean for 577659.917066, curr
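A couple of commands that usually help narrow down why those two pgs stay remapped (the pg id is taken from the health output above):

    ceph pg 3.884 query          # shows up/acting sets and the recovery state
    ceph osd tree                # check that the replacement osd is in and up
    ceph pg dump_stuck unclean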
On 08/18/2014 02:20 PM, John Morris wrote:
On 08/18/2014 01:49 PM, Sage Weil wrote:
On Mon, 18 Aug 2014, John Morris wrote:
rule by_bank {
ruleset 3
type replicated
min_size 3
max_size 4
step take default
step choose firstn 0 type bank
Hi Sage,
it seems the pools must be added to the MDS first:
ceph mds add_data_pool 3   # = SSD-r2
ceph mds add_data_pool 4   # = SAS-r2
After these commands the "setfattr -n ceph.dir.layout.pool" worked.
Thanks,
-Dieter
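Putting the whole sequence together (the pool ids and the /mnt/cephfs mount point are assumptions based on the snippet above):

    ceph mds add_data_pool 3        # SSD-r2
    ceph mds add_data_pool 4        # SAS-r2
    setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/cephfs/fast
    setfattr -n ceph.dir.layout.pool -v 4      /mnt/cephfs/slow   # pool id works as well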
On Mon, Aug 18, 2014 at 10:19:08PM +0200, Kasper Dieter wrote:
Hi Sage,
I know about the setfattr syntax from
https://github.com/ceph/ceph/blob/master/qa/workunits/fs/misc/layout_vxattrs.sh
e.g.:
setfattr -n ceph.dir.layout.pool -v data dir
setfattr -n ceph.dir.layout.pool -v 2 dir
But in my case it is not working:
[root@rx37-1 ~]# setfattr -n ceph.dir
Hi Dieter,
There is a new xattr based interface. See
https://github.com/ceph/ceph/blob/master/qa/workunits/fs/misc/layout_vxattrs.sh
The nice part about this interface is no new tools are necessary (just
standard 'attr' or 'setfattr' commands) and it is the same with both
ceph-fuse a
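A quick way to inspect what a directory currently uses, via the same vxattrs (the mount point is a placeholder):

    getfattr -n ceph.dir.layout /mnt/cephfs/mydir
    # prints something like:
    #   ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=data"
    setfattr -n ceph.dir.layout.pool -v data /mnt/cephfs/mydir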
Hi Sage,
a couple of months ago (maybe last year) I was able to change the
assignment of directories and files of CephFS to different pools
back and forth (with cephfs set_layout as well as with setfattr).
Now (with ceph v0.81 and kernel 3.10 on the client side)
neither 'cephfs set_layout' nor
The next Ceph development release is here! This release contains several
meaty items, including some MDS improvements for journaling, the ability
to remove the CephFS file system (and name it), several mon cleanups with
tiered pools, several OSD performance branches, a new "read forward" RADOS
On 08/18/2014 01:49 PM, Sage Weil wrote:
On Mon, 18 Aug 2014, John Morris wrote:
rule by_bank {
ruleset 3
type replicated
min_size 3
max_size 4
step take default
step choose firstn 0 type bank
step choose firstn 0 type osd
Hi Pierre —
You can manipulate your CRUSH map to make use of ‘chassis’ in addition to the
default ‘host’ type. I’ve done this with FatTwin and FatTwin^2 boxes with
great success.
For more reading take a look at:
http://ceph.com/docs/master/rados/operations/crush-map/
In particular the ‘Move
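A rough sketch of the kind of change Greg is describing (bucket and host names are placeholders): create chassis buckets, move each pair of twin hosts into its chassis, and use a rule that separates replicas across chassis:

    ceph osd crush add-bucket chassis1 chassis
    ceph osd crush add-bucket chassis2 chassis
    ceph osd crush move chassis1 root=default
    ceph osd crush move chassis2 root=default
    ceph osd crush move node1 chassis=chassis1   # node1/node2 share one PSU
    ceph osd crush move node2 chassis=chassis1
    ceph osd crush move node3 chassis=chassis2
    ceph osd crush move node4 chassis=chassis2
    ceph osd crush rule create-simple by-chassis default chassis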
Hello guys,
I just acquired some brand new machines I would like to rely upon for a
storage cluster (and some virtualization). These machines are, however,
« twin servers », i.e. each blade (1U) comes with two different machines
but a single PSU.
I think two replicas would be enough for the intend
On Mon, 18 Aug 2014, John Morris wrote:
> rule by_bank {
> ruleset 3
> type replicated
> min_size 3
> max_size 4
> step take default
> step choose firstn 0 type bank
> step choose firstn 0 type osd
> step emit
> }
You probably want:
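The suggestion itself is cut off above. Purely as an illustration, and not necessarily what was actually proposed, a rule that places each replica in a distinct bank would typically collapse the two choose steps into a single chooseleaf step:

    rule by_bank {
        ruleset 3
        type replicated
        min_size 3
        max_size 4
        step take default
        step chooseleaf firstn 0 type bank
        step emit
    }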
On 08/18/2014 12:13 PM, John Morris wrote:
On 08/14/2014 02:35 AM, Christian Balzer wrote:
The default (firefly, but previous ones are functionally identical) crush
map has:
---
# rules
rule replicated_ruleset {
ruleset 0
type replicated
min_size 1
max_siz
I take it that OSD 8, 13, and 20 are some of the stopped OSDs.
I wasn't able to get ceph to execute ceph pg force_create until the OSDs in
[recovery_state][probing_osds] from ceph pg query were online. I ended up
reformatting most of them, and re-adding them to the cluster.
What's wrong with tho
Oh yes, we don't have ARM packages for wheezy.
On Mon, Aug 11, 2014 at 7:12 PM, joshua Kay wrote:
> Hi,
>
>
>
> I am running into an error when I am attempting to use ceph-deploy install
> when creating my cluster. I am attempting to run ceph on Debian 7.0 wheezy
> with an ARM processor. When I
On 08/14/2014 02:35 AM, Christian Balzer wrote:
The default (firefly, but previous ones are functionally identical) crush
map has:
---
# rules
rule replicated_ruleset {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step
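For reference, the default rule being quoted reads in full in a stock firefly crush map:

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }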
Do you have the full paste of the ceph-deploy output?
Tracing the URL, we definitely do not have google-perftools packages for
Wheezy; the full output might help in understanding what is going on.
On Mon, Aug 11, 2014 at 8:01 PM, joshua Kay wrote:
> Hi,
>
> When I attempt to use the ceph-deploy install
Greetings cephalofolk,
Now that the Ceph Day events are becoming much more of a community
undertaking (as opposed to an Inktank-hosted event), we are really
ramping things up. There are currently four events planned in the
near future, and we need speakers for all of them!
http://ceph.com/cephday
Hi
I am trying to use cache tiering and read the topic about mapping OSDs
to pools
(http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds).
I can't see why the OSDs were split into spinner and SSD types at the root
level of the CRUSH map.
Is it possible to to
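The split at the root level is what lets a rule 'step take' only the SSD part of the hierarchy; a pool is then pointed at that rule. A minimal sketch, with placeholder pool names and ruleset number:

    ceph osd pool create cache-pool 512 512
    ceph osd pool set cache-pool crush_ruleset 4   # the rule whose first step is 'step take ssd'
    ceph osd tier add cold-pool cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay cold-pool cache-pool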
Hi all,
We have a small ceph cluster running version 0.80.1 with cephfs on five nodes.
Last week some osd's were full and shut themselves down. To help the osd's start
again I added some extra osd's and moved some placement group directories on
the full osd's (which have a copy on another osd) to anot
Hi Kurt,
I have pointed my DNS '*.gateway.testes.local' and 'gateway.testes.local'
to the same IP (the radosgw server).
I have added rgw_dns_name as you suggested to the config (it was commented
out). I will try everything and give feedback.
By the way, when I restart the ceph-radosgw service, I get
Hi Marco,
Is your DNS set up to use the wildcard (*.gateway.testes.local)?
I noticed that you're using it in the server alias, but that you don't have an
"rgw_dns_name" configured in your ceph.conf. The rgw_dns_name should be set to
"gateway.testes.local" if your DNS is configured to use the wi
Hi,
Is there any configuration option in ceph.conf for enabling/disabling
the bilog list?
I mean the result of this command:
radosgw-admin bilog list
One ceph cluster gives me results - list of operations which were made
to the bucket, and the other one gives me just an empty list. I can't
see w
What has changed in the cluster compared to my first mail: it was able to
repair one pg, but now a different pg is in status
"active+clean+replay".
root@ceph-admin-storage:~# ceph pg dump | grep "^2.92"
dumped all in format plain
2.920000000activ
Hi Craig,
I have brought the cluster into a stable condition. All slow osds are no longer
in the cluster. All remaining 36 osds can be written at more than 100 MB/s (dd
if=/dev/zero of=testfile-2.txt bs=1024 count=4096000). No ceph client is
connected to the cluster and the ceph nodes are idle. Now se
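One caveat about that dd line: without a sync flag it largely measures the page cache. A variant that forces the data to disk (same file name, purely as an example) gives a more honest per-disk number:

    dd if=/dev/zero of=testfile-2.txt bs=1M count=4096 conv=fdatasync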
- Message from Haomai Wang -
Date: Mon, 18 Aug 2014 18:34:11 +0800
From: Haomai Wang
Subject: Re: [ceph-users] ceph cluster inconsistency?
To: Kenneth Waegeman
Cc: Sage Weil , ceph-users@lists.ceph.com
On Mon, Aug 18, 2014 at 5:38 PM, Kenneth Waegeman
wrote:
Hi
On Mon, Aug 18, 2014 at 5:38 PM, Kenneth Waegeman
wrote:
> Hi,
>
> I tried this after restarting the osd, but I guess that was not the aim
> (
> # ceph-kvstore-tool /var/lib/ceph/osd/ceph-67/current/ list _GHOBJTOSEQ_|
> grep 6adb1100 -A 100
> IO error: lock /var/lib/ceph/osd/ceph-67/current//LOCK
Hi there,
I have FastCgiWrapper Off in the fastcgi.conf file; I also have SELinux in
permissive mode; 'ps aux | grep rados' shows me radosgw is running.
The problem stays the same... I can log in with S3 credentials and create
buckets, but uploads write this in the logs:
[Mon Aug 18 12:00:28.636378 201
Hi,
I tried this after restarting the osd, but I guess that was not the aim
(
# ceph-kvstore-tool /var/lib/ceph/osd/ceph-67/current/ list
_GHOBJTOSEQ_| grep 6adb1100 -A 100
IO error: lock /var/lib/ceph/osd/ceph-67/current//LOCK: Resource
temporarily unavailable
tools/ceph_kvstore_tool.cc: In
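The LOCK error just means the osd process still holds the leveldb lock, so ceph-kvstore-tool has to run against a stopped OSD. Roughly (the init commands depend on the distro):

    service ceph stop osd.67          # or on Upstart: stop ceph-osd id=67
    ceph-kvstore-tool /var/lib/ceph/osd/ceph-67/current/ list _GHOBJTOSEQ_ | grep 6adb1100 -A 100
    service ceph start osd.67         # or: start ceph-osd id=67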
Yes, these are recent changes from John. Because of these changes:
commit 90e6daec9f3fe2a3ba051301ee50940278ade18b
Author: John Spray
Date: Tue Apr 29 15:39:45 2014 +0100
osdmap: Don't create FS pools by default
Because many Ceph users don't use the filesystem,
don't create t