Thanks! I'm still puzzled as to _what_ data is moving if the OSD was
previously "out" and didn't host any PG (according to pg dump). The
host only had one other OSD which was already "out" and had zero weight.
It looks like Ceph is moving some other data, which wasn't hosted on
the re-weighted OSD.
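If it helps with the debugging, one way to see exactly which PGs move is
to snapshot the PG-to-OSD mapping before and after the reweight and diff
the two. A rough sketch (osd.12 is just a placeholder id):

ceph pg dump pgs_brief > pgmap.before    # pgid plus up/acting OSD sets
ceph osd crush reweight osd.12 0         # or whatever reweight is being applied
ceph pg dump pgs_brief > pgmap.after
diff pgmap.before pgmap.after            # every PG whose up/acting set changed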
Thanks for your input John! This doesn't really match the doc [1],
which suggests just taking them out and only using "reweight" in case of
issues (with small clusters).
Is "reweight" considered a must before removing and OSD?
Cheers
On 13/02/18 12:34, John Petrini wrote:
> The rule of thumb is
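For reference, the sequence the removal doc describes is roughly the
following (osd.12 and the systemd unit are placeholders; older releases
use the sysvinit/upstart equivalents):

ceph osd out osd.12            # triggers remapping; wait for active+clean
systemctl stop ceph-osd@12     # stop the daemon once data has drained
ceph osd crush remove osd.12   # changes CRUSH weights, so it may move data again
ceph auth del osd.12
ceph osd rm osd.12

As I understand it, the usual rationale for doing a CRUSH reweight to 0
first (if that is what was meant) is that the weight change then happens
up front, so the later "crush remove" doesn't trigger a second round of
data movement.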
Hi all,
I'm in the process of decommissioning some OSDs and thought I'd
previously migrated all data off them by marking them "out" (which did
trigger a fair amount of remapping as expected).
Looking at the pgmap ('ceph pg dump') confirmed that none of the "out"
OSDs was hosting any more PGs (col
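In case it's useful, the per-OSD view can also be had directly rather
than grepping the full dump (osd.12 again being a placeholder id):

ceph pg ls-by-osd osd.12   # empty output = no PG has this OSD in its up/acting set
ceph osd df tree           # per-OSD weight, reweight and PG counts at a glance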
I'm planning to migrate an existing Filestore cluster with (SATA)
SSD-based journals fronting multiple HDD-hosted OSDs - which should be a
common enough setup. So I've been trying to parse various contributions
here and the Ceph devs' blog posts (for which, thanks!).
Seems the best way to repurpose that har
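For context on the repurposing angle: the rough BlueStore equivalent of
an SSD journal partition is to reuse that partition as the rebuilt OSD's
block.db device. A sketch with ceph-volume, assuming a Luminous-or-later
cluster and purely hypothetical device names:

ceph-volume lvm zap /dev/sdb --destroy   # wipe the old Filestore data disk
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdf1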
In case this is useful, the steps to make this work are given here:
http://tracker.ceph.com/issues/13833#note-2 (the bug context documents
the shortcoming; I believe this happens if you create the journal
partition manually).
HTH,
Christian
On 12/06/16 10:18, Anthony D'Atri wrote:
The GUID
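For anyone landing here without following the tracker link: the gist, as
I understand it, is that a manually created journal partition needs the
Ceph journal partition type GUID set so the udev/ceph-disk machinery
recognises it. Something along these lines, with a placeholder device and
partition number:

sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdf   # mark partition 2 as a Ceph journal
partprobe /dev/sdf                                                  # re-read the partition table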
Christian
On 12/05/16 22:17, Gregory Farnum wrote:
On Thu, May 12, 2016 at 12:42 PM, Christian Sarrasin
wrote:
Thanks Greg!
If I understood correctly, you're suggesting this:
cd /etc/ceph
grep -v 'mon host' testcluster.conf > testcluster_client.conf
diff testcluster.conf testcluster_client.conf
no monitors specified to connect to.
Error connecting to cluster: ObjectNotFound
So this doesn't seem to work. Any other suggestion is most welcome.
Cheers,
Christian
On 12/05/16 21:06, Gregory Farnum wrote:
On Thu, May 12, 2016 at 6:45 AM, Christian Sarrasin
wrote:
I'm trying to run monitors
I'm trying to run monitors on a non-standard port and having trouble
connecting to them. The below shows the ceph client attempting to
connect to default port 6789 rather than 6788:
ceph --cluster testcluster status
2016-05-12 13:31:12.246246 7f710478c700 0 -- :/2044977896 >>
192.168.10.201:
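A likely fix, assuming a single mon at 192.168.10.201 as in the log
above, is to carry the non-default port in the monitor address itself,
either in the conf the client reads or on the command line:

# in /etc/ceph/testcluster.conf
[global]
mon host = 192.168.10.201:6788

# or as a one-off override
ceph --cluster testcluster -m 192.168.10.201:6788 status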
Hi there,
The docs have an ominous warning that one shouldn't run the RBD client
(to mount block devices) on a machine which also serves OSDs [1].
Due to budget constraints, this topology would be useful in our
situation. Couple of q's:
1) Does the limitation also apply if the OSD daemon is
On 08/10/15 23:46, Yehuda Sadeh-Weinraub wrote:
When you start radosgw, do you explicitly state the name of the region
that gateway belongs to?
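(For anyone else wondering how the region is stated explicitly: it
normally goes in the gateway's section of ceph.conf, something like the
below, where the section name is just the conventional one; radosgw needs
a restart afterwards.)

[client.radosgw.gateway]
rgw region = default
rgw zone = default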
On Thu, Oct 8, 2015 at 2:19 PM, Christian Sarrasin
wrote:
Hi Yehuda,
Hi Shilpa,
Thank you very much for the suggestion.
My understanding of the (admittedly not officially documented)
default_placement setting is precisely that it should act as the name
implies when the client does *not* specify a placement. For my use-case
(multi-tenancy support), relying on tena
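For completeness, the way a per-user default placement is typically set
with this generation of the tooling (as far as I can tell) is by editing
the user's metadata directly; "user2" is the user from this thread and
"custom-placement" is a placeholder target name:

radosgw-admin metadata get user:user2 > user2.json
# edit user2.json: set "default_placement": "custom-placement"
radosgw-admin metadata put user:user2 < user2.json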
Hi Yehuda,
Yes I did run "radosgw-admin regionmap update" and the regionmap appears
to know about my custom placement_target. Any other idea?
Thanks a lot
Christian
radosgw-admin region-map get
{ "regions": [
{ "key": "default",
"val": { "name": "default",
"ap
radosgw-admin user info --uid=user2
{ "user_id": "user2",
"display_name": "User2",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers"
What are the best options to set up the Ceph radosgw so it supports
separate/independent "tenants"? What I'm after:
1. Ensure isolation between tenants, i.e. no overlap/conflict in the
bucket namespace; something that separate radosgw "users" alone don't achieve
2. Ability to backup/restore tenants' pools in
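As a pointer for anyone reading this later: newer radosgw releases
(Jewel onwards, if I recall correctly) added first-class tenant support,
which gives each tenant an isolated bucket namespace without placement
tricks. A sketch with made-up tenant/user names:

radosgw-admin user create --tenant acme --uid alice --display-name "Alice (ACME)"
radosgw-admin user create --tenant globex --uid alice --display-name "Alice (Globex)"
# the two "alice" users are distinct and their buckets cannot collide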