Hi ceph-users,
Ceph recommends that the number of PGs for a pool be (100 * OSDs) / replicas. Per my
understanding, the number of PGs for a pool stays fixed even when we scale the cluster
out or in by adding or removing OSDs. Does that mean that if we double the number of
OSDs, the PG count for a pool is no longer optimal?
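For illustration, a rough worked example (numbers and pool name are made up): with 20 OSDs and 3 replicas the guideline gives (100 * 20) / 3 ≈ 667 PGs, usually rounded up to a power of two (1024). If the OSD count later doubles, pg_num can be raised on an existing pool:
# raise placement groups on an existing pool after adding OSDs
# ("rbd" and 2048 are example values; pg_num can only be increased)
ceph osd pool set rbd pg_num 2048
ceph osd pool set rbd pgp_num 2048   # must follow pg_num so data actually rebalances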
Hi,
Try editing /etc/sudoers and changing the following line:
Defaults    requiretty
to
Defaults    !requiretty
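A safer way to make that change is via visudo, which validates the syntax before saving (a sketch; the per-user form is an alternative if you only need it for one account, and the user name shown is just an example):
sudo visudo
# then change
#   Defaults    requiretty
# to
#   Defaults    !requiretty
# or, limited to a single user:
#   Defaults:ceph !requiretty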
Thanks
PK
On Tue, Oct 8, 2013 at 1:49 PM, Alfredo Deza wrote:
> On Tue, Oct 8, 2013 at 1:09 PM, wrote:
> > Hello,
> >
> >
> >
> > I have reached the stage on the install where I am run
On Tue, Oct 8, 2013 at 2:35 PM, Snider, Tim wrote:
> Does increasing the number of monitors affect Ceph cluster performance
> (everything else remaining equal)? If it does, I hope it's positive.
In general it won't affect performance at all since the monitors are
out of the data path. If you managed
Does increasing the number of monitors affect Ceph cluster performance
(everything else remaining equal)? If it does, I hope it's positive.
And - will accessing Ceph monitors thru a haproxy server also improve
performance?
Thanks,
Tim
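On the haproxy question: clients keep a copy of the monitor map and talk to the monitors (and then the OSDs) directly, so there is no single monitor endpoint to load-balance. A sketch of how to inspect what clients already know:
ceph mon dump        # the monmap: monitor names and addresses clients use directly
ceph quorum_status   # which monitors are currently in quorum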
We're actually pursuing a similar configuration where it's easily
conceivable that we would have 230+ block devices that we want to mount
on a server.
We are moving to a configuration where each user in our cluster has a
distinct ceph block device for their storage. We're mapping them on our
nas
On 10/08/2013 07:58 PM, Gaylord Holder wrote:
Always nice to see I've hit a real problem, and not just my being dumb.
May I ask why you are even trying to map so many RBD devices? Do you
need access to >230 all at the same time on each host?
Can't you map them when you need them and unmap them?
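A sketch of that on-demand approach (pool, image and mount point names are examples, and it assumes the stock udev rules that create /dev/rbd/<pool>/<image> symlinks):
# map, use, then unmap to free the device again
rbd map rbd/volume-user42
mount /dev/rbd/rbd/volume-user42 /mnt/user42
# ... use it ...
umount /mnt/user42
rbd unmap /dev/rbd/rbd/volume-user42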
Always nice to see I've hit a real problem, and not just my being dumb.
-Gaylord
On 10/08/2013 01:46 PM, Gregory Farnum wrote:
I believe this is a result of how we used the kernel interfaces
(allocating a major device ID for each RBD volume), and some kernel
limits (only 8 bits for storing major device IDs, and some used for other purposes).
On Tue, Oct 8, 2013 at 1:09 PM, wrote:
> Hello,
>
>
>
> I have reached the stage on the install where I am running the ceph-deploy
> install command from the admin node to the server node.
>
>
>
> I get the following output:
>
>
>
> [ceph_deploy.install][DEBUG ] Installing stable version dumpling
I believe this is a result of how we used the kernel interfaces
(allocating a major device ID for each RBD volume), and some kernel
limits (only 8 bits for storing major device IDs, and some used for
other purposes). See http://tracker.ceph.com/issues/5048
I believe we have discussed not using a m
I'm testing how many RBDs I can map on a single server.
I've created 10,000 RBDs in the rbd pool, but I can only actually map 230 of
them. Mapping one more fails with:
rbd: add failed: (16) Device or resource busy
Is there a way to bump this up?
-Gaylord
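A quick way to see what the kernel has allocated so far (a sketch; paths as on a typical kernel of that era, where each mapped image gets its own block-device major number):
grep rbd /proc/devices       # one line per major number registered by the rbd driver
ls /sys/bus/rbd/devices/     # one entry per currently mapped image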
Given the current status and configuration of a ceph cluster, how can I
determine the amount of data that may be written to each pool before it
becomes full? For this calculation we can assume that no further data is
written to any other pool. Or before any OSD the pool is mapped to becomes
full a
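Roughly speaking, a pool stops accepting writes when the fullest OSD in its CRUSH ruleset hits the full ratio (0.95 by default), so the headroom is about (full ratio minus the fullest OSD's utilization) times the raw capacity behind the rule, divided by the replica count. A sketch of the commands for gathering those inputs (ceph osd df only exists on newer releases):
ceph df                       # cluster-wide and per-pool usage
ceph osd df                   # per-OSD utilization; the fullest OSD caps the pool
ceph osd dump | grep pool     # replica size per pool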
Hello,
I have reached the stage on the install where I am running the ceph-deploy
install command from the admin node to the server node.
I get the following output:
[ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster
ceph hosts ldtdsr02se18
[ceph_deploy.install][DEBUG
On Tue, Oct 8, 2013 at 8:21 AM, wrote:
> Ok,
>
>
>
> Think I spotted the issue. I think the guide has the subuser credentials
> which map to the swift tenant:user in the wrong order.
>
>
>
> Guide states:
>
> sudo radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift
> --access=full
Ok,
Think I spotted the issue. I think the guide has the subuser credentials which
map to the swift tenant:user in the wrong order.
Guide states:
sudo radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift
--access=full
Should be:
sudo radosgw-admin subuser create --uid=johndoe --
Hi All,
Novice question here. I'm stuck on the final step of the "Object Storage Quick
Start" guide whilst building out a 2-node demo POC on Ubuntu 12.04.
sudo radosgw-admin subuser create --uid=swiftuser --subuser=swiftuser:swift
--access=full
could not create subuser: unable to parse request, use
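For reference, a minimal sequence that usually works (a sketch; "swiftuser" and the display name are just examples, and the parent user must exist before the subuser, which is worth checking first):
sudo radosgw-admin user create --uid=swiftuser --display-name="Swift User"
sudo radosgw-admin subuser create --uid=swiftuser --subuser=swiftuser:swift --access=full
sudo radosgw-admin key create --subuser=swiftuser:swift --key-type=swift --gen-secret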
Doh!
I’ve got python packages coming out of my ears. Clearly downloaded the wrong
one there.
That did the job. Thanks.
From: Abhay Sachan [mailto:abhay...@gmail.com]
Sent: Tuesday, October 08, 2013 3:36 PM
To: Whittle, Alistair: Investment Bank (LDN)
Cc: ceph-users@lists.ceph.com
Subject: Re
Thanks.
I sadly have to do this manually, and have been going through the dependencies.
There are a LOT to work through, especially, it seems, around Python.
I am getting the following error when trying to install one of the dependencies
(python-babel):
error: Failed dependencies:
python
You need to install RHEL6 package not EL5, you can get it from here
http://mirror.centos.org/centos/6/os/i386/Packages/python-babel-0.9.4-5.1.el6.noarch.rpm
-Abhay
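A sketch of installing it with the URL above, letting yum pull in anything else it needs from the enabled repos:
wget http://mirror.centos.org/centos/6/os/i386/Packages/python-babel-0.9.4-5.1.el6.noarch.rpm
sudo yum localinstall python-babel-0.9.4-5.1.el6.noarch.rpm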
On Tue, Oct 8, 2013 at 8:02 PM, wrote:
> Thanks.
>
>
> I sadly have to do this manually, and have been going through t
Hi all,
Version 1.2.7 of ceph-deploy, the easy Ceph deployment tool, has been released.
As always, there were a good number of bug fixes in this release and a wealth
of improvements. 1.2.6 was not announced as it was a small bug-fix release
on top of the previous one.
Installation instructions: https://
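For reference, the usual installation routes (a sketch, not the official instructions) are PyPI or the ceph.com package repositories:
sudo pip install ceph-deploy
# or, on Debian/Ubuntu with the ceph.com repo already configured:
sudo apt-get install ceph-deploy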
Hi ceph-users,
After walking through the operations document, I still have several questions
about operating and monitoring Ceph, and I need your help. Thanks!
1. Does Ceph provide a built-in monitoring mechanism for RADOS and RadosGW?
Taking RADOS for example, is it possible to monitor the
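A few commands are the usual starting points for this and are easy to wrap in external checks like Nagios or Zabbix (a sketch; the RGW usage command assumes "rgw enable usage log = true" on the gateway, and "johndoe" is an example uid):
ceph health detail                       # overall cluster health and warnings
ceph -s                                  # status summary: mons, OSDs, PGs, capacity
ceph df                                  # per-pool usage
radosgw-admin usage show --uid=johndoe   # RGW per-user usage statistics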
Hi Abhay
I had a similar mon<->mon communication problem using ceph-deploy which was
down to iptables rules. Depending on what OS you are running, by default the
ports Ceph uses may be blocked. As per
http://ceph.com/docs/master/rados/configuration/network-config-ref/ you need to
open ports
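A sketch of the usual rules (port ranges per the linked network config reference; check the exact OSD range for your release before copying):
# monitors
sudo iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
# OSDs / MDS (6800-7300 in current docs; older docs list 6800-7100)
sudo iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT
# persist on RHEL/CentOS
sudo service iptables save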
On Tue, 2013-10-08 at 11:55 +0200, Kees Bos wrote:
> Hi,
>
> It seems to me that ceph-deploy doesn't consider the [mon] and [osd]
> sections of {cluster}.conf. Is this intentional, or will this be
> implemented down the road?
>
Well, at least some of the [osd] section settings are effective (e.g.
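For context, these are the kind of sections in question (an illustrative fragment, not a recommended configuration):
[mon]
mon clock drift allowed = 0.5

[osd]
osd journal size = 1024
osd mkfs type = xfs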
On 08/10/13 10:58, Abhay Sachan wrote:
Hi Joao,
When I run the following command on the node "ceph --admin-daemon
/var/run/ceph/ceph-mon.dec.asok mon_status"
I get the following output:
{ "name": "dec",
"rank": 0,
"state": "probing",
"election_epoch": 0,
"quorum": [],
"outside_quo
Hi Joao,
When I run the following command on the node "ceph --admin-daemon
/var/run/ceph/ceph-mon.dec.asok mon_status"
I get the following output:
{ "name": "dec",
"rank": 0,
"state": "probing",
"election_epoch": 0,
"quorum": [],
"outside_quorum": [
"dec"],
"extra_probe_peers":
Hi,
It seems to me that ceph-deploy doesn't consider the [mon] and [osd]
sections of {cluster}.conf. Is this intentional, or will this be
implemented down the road?
- Kees
Hi Joao,
I gave the path for the keyring, and these messages are being printed on
the screen:
2013-10-07 15:18:55.048151 7fa1c43bc700 0 -- :/1026989 >>
15.213.24.231:6789/0 pipe(0x7fa1b4000990 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7fa1b400dfe0).fault
2013-10-07 15:18:58.048774 7fa1c42bb700 0 -- :/10269
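Those ":/... >> ip:6789 ... fault" lines often mean the client cannot reach the monitor port at all, so it is worth checking basic connectivity before anything Ceph-specific (a sketch, using the address from the log and nc if available):
nc -z 15.213.24.231 6789 && echo reachable || echo blocked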
Hi All,
I am trying to understand the benefits of running multiple clusters on the same
hardware. Is anyone able to provide any insight into this?
In the documentation,
http://ceph.com/docs/master/rados/deployment/ceph-deploy-new/#naming-a-cluster,
mention is made of running multiple clusters
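Mechanically it comes down to the cluster name: each cluster gets its own {cluster}.conf and keyrings, and most tools take a --cluster flag. A sketch with a made-up second cluster called "backup":
ceph-deploy --cluster backup new mon1 mon2 mon3   # writes backup.conf instead of ceph.conf
ceph --cluster backup -s                          # reads /etc/ceph/backup.conf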
Hi All,
What's the best way to try and track down why this isn't working for us?
It doesn't seem that there are any other options I can provide. Lack of
(working) radosgw integration with keystone would be a huge blocker for us
being able to adopt ceph as part of our product set.
Thanks,
Darren
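One way to get more detail is to turn the gateway's logging up and double-check the keystone settings in ceph.conf (a sketch; the section name and values are examples along the lines of the dumpling-era docs):
[client.radosgw.gateway]
debug rgw = 20
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = {admin-token}
rgw keystone accepted roles = Member, admin
# then restart radosgw and watch its log for keystone errors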
On 08/10/13 05:40, Abhay Sachan wrote:
Hi Joao,
Thanks for replying. All of my monitors are up and running and connected
to each other. "ceph -s" is failing on the cluster with the following error:
2013-10-07 10:12:25.099261 7fd1b948d700 -1 monclient(hunting): ERROR:
missing keyring, cannot use ceph
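That particular error just means the client cannot find a keyring; pointing it at one explicitly usually gets past it (a sketch; the path is the common ceph-deploy default):
ceph -s --keyring /etc/ceph/ceph.client.admin.keyring --id admin
# or copy ceph.client.admin.keyring into /etc/ceph/ on the node where you run ceph -s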
Hi Joao,
I tried using the latest ceph-deploy (1.2.6) and the latest dumpling release
too (0.67.4). I am getting the following messages during monitor creation on
RHEL 6.4.
2013-10-07 13:54:09,864 [ceph_deploy.new][DEBUG ] Creating new cluster
named ceph
2013-10-07 13:54:09,864 [ceph_deploy.new][DEBUG ] R
OK, I found where I have seen info about the bobtail->dumpling upgrade:
http://www.spinics.net/lists/ceph-users/msg03408.html
--
Regards
Dominik
2013/10/8 Dominik Mostowiec :
> OK, if I do not know for sure it is safe, I will do this step by step.
> But I'm almost sure that I have seen instructions to
OK, if I do not know for sure it is safe, I will do this step by step.
But I'm almost sure that I have seen instructions for upgrading bobtail
to dumpling.
--
Regards
Dominik
2013/10/8 Maciej Gałkiewicz :
> On 8 October 2013 09:23, Dominik Mostowiec wrote:
>> Yes,
>> in: V0.67.2 "DUMPLING":
>> "T
Hi ceph-users,
When I tried to use the admin ops API, I ran into two issues so far.
1. The request seems to succeed in getting usage info, but why is the body empty,
with no entries such as bytes_sent, owner, bucket, etc.?
> GET /admin/usage?format=json HTTP/1.1
> Host:
> Accept: */*
> Date: Tue, 08 O
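A common reason for an empty usage body is that the usage log is disabled by default; a sketch of enabling it and cross-checking with the CLI ("johndoe" is an example uid):
# in ceph.conf on the gateway host, then restart radosgw
rgw enable usage log = true
# afterwards, usage should also show up via the CLI
radosgw-admin usage show --uid=johndoe --show-log-entries=true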
On 8 October 2013 09:23, Dominik Mostowiec wrote:
> Yes,
> in: V0.67.2 “DUMPLING”:
> "This is an imporant point release for Dumpling. Most notably, it
> fixes a problem when upgrading directly from v0.56.x Bobtail to
> v0.67.x Dumpling (without stopping at v0.61.x Cuttlefish along the
> way)."
>
>
Yes,
in: V0.67.2 “DUMPLING”:
"This is an imporant point release for Dumpling. Most notably, it
fixes a problem when upgrading directly from v0.56.x Bobtail to
v0.67.x Dumpling (without stopping at v0.61.x Cuttlefish along the
way)."
But there are no instructions on how to upgrade bobtail->dumpling (or I
> I tried putting Flashcache on my spindle OSDs using an Intel SSD and it works
> great.
> This is getting me read and write SSD caching instead of just write
> performance on the journal.
> It should also allow me to protect the OSD journal on the same drive as the
> OSD data and still get bene
http://ceph.com/docs/master/release-notes/
On 08.10.2013 07:37, Dominik Mostowiec wrote:
Hi,
Is it possible to (safely) upgrade directly from bobtail (0.56.6) to
dumpling (latest)? Are there any instructions?
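The usual documented order applies either way (a sketch using the sysvinit script; check the dumpling release notes for the bobtail-specific caveats before starting): upgrade the packages on each host, restart monitors first, then OSDs, then any MDS/radosgw, verifying health between steps.
# per host, after upgrading the ceph packages
sudo service ceph restart mon
ceph -s                      # wait for quorum / HEALTH_OK before continuing
sudo service ceph restart osd
ceph -s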