I just noticed that one of my OSDs has its XFS filesystem created with
isize=256 instead of the 2048 it should have been created with.
Is this going to hurt performance enough to warrant burning the OSD and
recreating it?
And is there a way to change it on the fly? (I expect not, but maybe
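For what it's worth: XFS inode size is fixed at mkfs time, so it cannot be
changed on the fly; recreating the filesystem is the only route. A rough
sketch, assuming a hypothetical OSD id 0 and data partition /dev/sdb1:
# confirm the current inode size on the mounted OSD
xfs_info /var/lib/ceph/osd/ceph-0 | grep isize
# after taking the OSD out of the cluster, recreate with larger inodes
mkfs.xfs -f -i size=2048 /dev/sdb1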
Most of my OSD nodes are equipped with low-voltage Xeon (L5420) processors,
which is probably somewhat overkill. For that reason I run other
services on them as well. However, the board you are suggesting could be the
answer to lower power consumption on the OSD nodes.
That still leaves me
Dear ceph users,
I just had a problem on a ceph cluster running three nodes, each with:
- 24 cores
- 32 GB RAM
- 2 SATA disks as OSDs
- 2 SSD disks using software RAID for the system + journals
- libvirt+kvm
- the ceph network uses 2 dedicated Gigabit interfaces, active-passive across
two switches
Journal p
Dear users/experts,
Does anyone know how to use radosgw-admin log show? It seems not to read
the --bucket parameter properly.
# radosgw-admin log show --bucket=asdf --date=2013-11-28-09
--bucket-id=default.7750582.1
error reading log 2013-11-28-09-default.7750582.1-: (2) No such file or
directory
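Note that the object name in the error ends with a trailing '-', which
suggests the bucket name was never appended to the '<date>-<bucket-id>-<bucket>'
log object name. A hedged workaround: list the log objects for that hour and
pass the full name via --object instead (the exact object name below is a
guess):
# radosgw-admin log list | grep 2013-11-28-09
# radosgw-admin log show --object=2013-11-28-09-default.7750582.1-asdf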
Hi,
If I have replication = 3, obj A -> PG -> [1,2,3]:
osd.1 is the primary; osd.2 and osd.3 are replicas.
osd.1 -> host1,
osd.2 -> host2,
osd.3 -> host3.
Does radosgw on host2 send its GET request for obj A to osd.1, or to the local osd.2?
--
Regards
Dominik
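For reference: RADOS clients read from the PG's acting primary by default, so
radosgw on host2 would send the GET over the network to osd.1, not to its
local replica on osd.2. You can check which OSD is primary for a given object
with (the pool and object names here are just examples):
ceph osd map .rgw.buckets objA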
Hi,
> Note this breaks AWS S3 compatibility and is why it is a configurable.
So how does AWS S3 handle public access to objects?
Micha Krause
Hi Michael,
> Sounds like what I was having starting a couple of days ago, played
[...]
yes, that sounds only too familiar. :-(
> Updated to 3.12 kernel and restarted all of the ceph nodes and it's now
> happily churning through a rados -p rbd bench 300 write -t 120 that
Weird - but if that s
> our Ceph cluster suddenly went into a state of OSDs constantly having
> blocked or slow requests, rendering the cluster unusable. This happened
> during normal use, there were no updates, etc.
our cluster seems to have recovered overnight and is back
to normal behaviour. This morning, everything
On 11/28/13, 4:44 AM, Micha Krause wrote:
> Hi,
>
>> Note this breaks AWS S3 compatibility and is why it is a configurable.
>
> So how does AWS S3 handle public access to objects?
You have to explicitly set a public ACL on each object.
--
Derek T. Yarnell
University of Maryland
Institute for Advanced Computer Studies
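As an illustration, with s3cmd pointed at either AWS or an RGW endpoint
(bucket and object names hypothetical):
s3cmd setacl --acl-public s3://mybucket/myobject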
Hi all,
I am seeing some weirdness when trying to deploy Ceph Emperor on Fedora 19
using ceph-deploy. The problem occurs when trying to install ceph-deploy, and
seems to point to the version of pushy in your repository:
[root@ceph02 ~]# yum install ceph-deploy
Loaded plugins: priorities, protectbase
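A hedged first step is to ask yum which pushy versions it can see and from
which repo they come, using standard yum commands (the package may be named
pushy or python-pushy depending on the repository):
# yum --showduplicates list pushy python-pushy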
Anybody using MONs and RGW inside docker containers?
I would like to use a server with two docker containers, one for the MON
and one for RGW.
This is to achieve better isolation between services and some reusable
components (the same container can be exported and used multiple times
on multiple servers).
I played with Docker for a while and ran into some issues (perhaps
from my own ignorance of Docker principles). The biggest issue seemed
to be that the IP was relatively ephemeral, which the MON really
doesn't like. I couldn't find a reliably intuitive way to have the
MON get either the same IP o
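One possible workaround, sketched under the assumption of a reasonably recent
Docker and a hypothetical ceph-mon image: run the MON container with host
networking so it inherits the host's stable IP:
docker run --net=host -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ceph-mon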
A long time ago I got my MDS cluster into a state where I have two active
MDS nodes with a third for failover. This setup is not perfectly stable, so
I want to drop down to one active MDS node with two nodes for failover. Is
there any documentation for how to do this?
I tried the command "ceph mds
Hi,
I have made a mistake and created a pool named "-help".
Running the command "ceph osd lspools" returns:
0 data,1 metadata,2 rbd,3 testpool1,4 testpool2,5 -help,6 testpool3,
The problem is that now I want to delete or rename the pool '-help', but when
I run the comm
On Thu, Nov 28, 2013 at 5:52 PM, Walter Huf wrote:
> A long time ago I got my MDS cluster into a state where I have two active
> MDS nodes with a third for failover. This setup is not perfectly stable, so
> I want to drop down to one active MDS node with two nodes for failover. Is
> there any docu
Hmm indeed, it changed my mds status from
e45538: 2/2/1 up {0=2=up:active,1=0=up:active}, 1 up:standby
to
e45541: 1/1/1 up {0=2=up:active}, 2 up:standby
Thank you very much!
On Thu, Nov 28, 2013 at 8:45 PM, Gregory Farnum wrote:
> On Thu, Nov 28, 2013 at 5:52 PM, Walter Huf wrote:
> > A long
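For the archives, the commands behind that transition are presumably along
these lines (Emperor-era syntax, untested here): shrink the target number of
active MDS ranks, then stop the surplus rank so it becomes a standby:
ceph mds set_max_mds 1
ceph mds stop 1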
Hi
ceph osd pool delete --help
OR
ceph osd pool delete -h
2013/11/29 You, RongX
> Hi,
>
> I have made a mistake, and create a pool named "-help",
>
> Execute command "ceph osd lspools", and returns:
>
> 0 data,1 metadata,2 rbd,3 testpool1,4 testpool2,5 -help,6
> te
Hi,
> The problem is: now I want to delete or rename the pool '-help',
maybe you can try using a double hyphen ("--") [1], e.g. something (not
tested) like
ceph osd pool rename -- "-help" aaa
ceph osd pool delete -- -help
regards
Danny
[1] http://unix.stackexchange.com/questi
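For completeness, a hedged sketch of the full cleanup: rename the pool past
the "--" marker first, then delete it normally (on recent releases the delete
also wants the name repeated plus a confirmation flag; "renamed-pool" is just
an example name):
ceph osd pool rename -- -help renamed-pool
ceph osd pool delete renamed-pool renamed-pool --yes-i-really-really-mean-it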
Expensive at launch and limited to Windows as of now, but very
interesting nevertheless: a 120GB SSD and a 1TB spinning disk, separately
addressable, all in a 2.5" form factor:
http://www.wdc.com/en/products/products.aspx?id=1190
Hello to all,
I use ceph-deploy heavily and it works really well.
I have just one question: is there an option (I have not found one), or another
way, to make
ceph-deploy osd create ...
create an OSD with a weight of 0?
My goal is to then reweight the new OSDs step by step, to be sure that
it will not disturb
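One hedged approach, since ceph-deploy itself does not seem to expose a weight
option: set the initial CRUSH weight in ceph.conf before the new OSDs start,
then raise it gradually (the osd id and target weight below are just examples):
[osd]
osd crush initial weight = 0
# later, step by step:
ceph osd crush reweight osd.12 0.5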