Hi there,
I have a Ceph cluster with radosgw and have been using it in my production
environment for a while. Now I have decided to set up another cluster in another
geographic location to have a disaster recovery plan. I read some docs like
http://docs.ceph.com/docs/jewel/radosgw/federated-config/, but all of them
are abo
I hadn't tried manual compaction, but it did the trick. The db shrunk down to
50MB and the OSD booted instantly. Thanks!
I'm confused as to why the OSDs weren't doing this themselves, especially as
the operation only took a few seconds. But for now I'm happy that this is easy
to rectify if we r
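For reference, a rough sketch of the two usual ways to trigger such a manual
compaction (osd.0 and the data path below are placeholders; the offline form
needs the OSD stopped first):

  # online, on the OSD's host, via the admin socket:
  ceph daemon osd.0 compact

  # offline, with the OSD stopped, against its data directory:
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact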
Hey everyone,
Ceph CERN Day will be a full-day event dedicated to fostering Ceph's
research and non-profit user communities. The event is hosted by the
Ceph team from the CERN IT department.
We invite this community to meet and discuss the status of the Ceph
project, recent improvements, and road
Hi... this page is for an old version (jewel). What used to be called federated
is called multi-site nowadays. Read this one instead:
http://docs.ceph.com/docs/master/radosgw/multisite/ . There are some
instructions at the end about migrating a single site
Marce
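Roughly, the single-site-to-multi-site migration described at the end of that
page boils down to something like the following sketch (realm, zonegroup, zone
names, endpoints and keys are placeholders; check the linked docs for the exact
sequence):

  radosgw-admin realm create --rgw-realm=<realm> --default
  radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=<zonegroup>
  radosgw-admin zone rename --rgw-zone default --zone-new-name=<zone> --rgw-zonegroup=<zonegroup>
  radosgw-admin zonegroup modify --rgw-realm=<realm> --rgw-zonegroup=<zonegroup> \
      --endpoints http://<rgw-host>:80 --master --default
  radosgw-admin zone modify --rgw-realm=<realm> --rgw-zonegroup=<zonegroup> --rgw-zone=<zone> \
      --endpoints http://<rgw-host>:80 --access-key=<key> --secret=<secret> --master --default
  radosgw-admin period update --commit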
On Mon, Jun 17, 2019 at 4:09 PM David Turner wrote:
>
> This was a little long to respond with on Twitter, so I thought I'd share my
> thoughts here. I love the idea of a 12 month cadence. I like October because
> admins aren't upgrading production within the first few months of a new
> release
Tarek,
Of course you are correct about the client nodes. I executed this command
inside of the container that runs the mon, but it can also be done on the
bare-metal node that runs the mon. You are essentially querying the mon
configuration database.
On Tue, Jun 25, 2019 at 8:53 AM Tarek Zegar wrote:
> "config get" o
Hi all,
I have installed Ceph Luminous, with 43 OSDs (3 TB each).
Checking pool statistics
ceph df detail
GLOBAL:
    SIZE       AVAIL      RAW USED    %RAW USED    OBJECTS
    117TiB     69.3TiB    48.0TiB     40.91        4.20M
POOLS:
    NAME    ID    QUOTA OBJECTS    QUOTA BY
On Mon, Jun 24, 2019 at 4:30 PM Alex Litvak
wrote:
>
> Jason,
>
> What are you suggesting we do? Remove this line from the config database
> and keep it in the config files instead?
I think it's a hole right now in the MON config store that should be
addressed. I've opened a tracker ticket [1] t
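If it does come to moving a setting back out of the config database, a minimal
sketch (the daemon name and option name are placeholders):

  ceph config rm osd.<id> <option>     # drop it for one daemon
  ceph config rm global <option>       # or cluster-wide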
On 6/24/19 1:49 PM, David Turner wrote:
> It's aborting incomplete multipart uploads that were left around. First
> it will clean up the cruft like that and then it should start actually
> deleting the objects visible in stats. That's my understanding of it
> anyway. I'm in the middle of cleaning u
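As an aside, leftover multipart uploads can also be listed and aborted from the
client side, e.g. with the aws CLI pointed at the RGW endpoint (bucket, key,
upload id and endpoint below are placeholders):

  aws s3api list-multipart-uploads --bucket <bucket> --endpoint-url http://<rgw-host>
  aws s3api abort-multipart-upload --bucket <bucket> --key <key> \
      --upload-id <upload-id> --endpoint-url http://<rgw-host>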
Thank you..
Looking into the URL...
On Tue, 25 Jun 2019, 12:18 PM Torben Hørup wrote:
> Hi
>
> You could look into the radosgw elasticsearch sync module, and use that
> to find the objects last modified.
>
> http://docs.ceph.com/docs/master/radosgw/elastic-sync-module/
>
> /Torben
>
> On 25.06
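Per that page, hooking the sync module up is roughly along these lines (the
zonegroup, zone names and endpoints are placeholders; see the docs for the full
set of tier-config options):

  radosgw-admin zone create --rgw-zonegroup=<zonegroup> --rgw-zone=<es-zone> \
      --endpoints=http://<rgw-es-host>:80 --tier-type=elasticsearch
  radosgw-admin zone modify --rgw-zone=<es-zone> \
      --tier-config=endpoint=http://<es-host>:9200,num_shards=10,num_replicas=1
  radosgw-admin period update --commit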
Sasha,
Sorry, I don't get it. The documentation for the command states that in
order to see the config DB for everything, do: "ceph config dump".
To see what's in the config DB for a particular daemon, do: "ceph config get
"
To see what's set for a particular daemon (be it from the config db,
override, conf
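For what it's worth, the different views can be compared like this (osd.0 is
just an example daemon, and "ceph config show" is presumably the third command
the truncated line above refers to):

  ceph config dump         # everything stored in the mon config database
  ceph config get osd.0    # what the config database holds for that daemon
  ceph config show osd.0   # what the running daemon actually ended up with
                           # (config db + overrides + its local ceph.conf)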
On Tue, Jun 25, 2019 at 2:40 PM Tarek Zegar wrote:
> Sasha,
>
> Sorry I don't get it, the documentation for the command states that in
> order to see the config DB for all do: *"ceph config dump"*
> To see what's in the config DB for a particular daemon do: *"ceph config
> get "*
> To see what's
MAX AVAIL is the amount of data you can still write to the cluster
before *any one of your OSDs* becomes near full. If MAX AVAIL is not
what you expect it to be, look at the data distribution using ceph osd
tree and make sure you have a uniform distribution.
Mohamad
On 6/25/19 11:46 AM, Davis
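For example, the per-OSD fill levels and the variance behind MAX AVAIL can be
checked with:

  ceph osd df tree    # per-OSD utilisation, weight and variance, laid out by CRUSH tree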
Thank you for the explanation, Jason, and thank you for opening a ticket for my
issue.
On 6/25/2019 1:56 PM, Jason Dillaman wrote:
On Tue, Jun 25, 2019 at 2:40 PM Tarek Zegar wrote:
Sasha,
Sorry I don't get it, the documentation for the command states that in ord
The sizes are determined by rocksdb settings - some details can be found
here: https://tracker.ceph.com/issues/24361
One thing to note, in this thread
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030775.html
it's noted that rocksdb could use up to 100% extra space during compact
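If in doubt, the current DB usage of an OSD can be read from its admin socket
(osd.0 is a placeholder; look for the bluefs counters such as db_total_bytes
and db_used_bytes):

  ceph daemon osd.0 perf dump    # the "bluefs" section shows db_total_bytes / db_used_bytes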
The placement of PGs is random in the cluster and takes into account any
CRUSH rules which may also skew the distribution. Having more PGs will help
give more options for placing PGs, but it still may not be adequate. It is
recommended to have between 100 and 150 PGs per OSD, and you are pretty close.
There may also be more memory copying involved instead of just passing
pointers around, but I'm not 100% sure.
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Mon, Jun 24, 2019 at 10:28 AM Jeff Layton
wrote:
> On Mon, 2019-06-24 at 15
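A worked example of that 100-150 PGs-per-OSD rule of thumb, assuming a
3-replica pool on a 43-OSD cluster like the one mentioned earlier in this
digest:

  target total PGs ≈ (43 OSDs × 100 PGs per OSD) / 3 replicas ≈ 1433
  rounded up to a power of two → pg_num = 2048, i.e. roughly 143 PGs per OSD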
How can I tell ceph to give up on "incomplete" PGs?
I have 12 pgs which are "inactive, incomplete" that won't recover. I
think this is because in the past I have carelessly pulled disks too
quickly without letting the system recover. I suspect the disks that
have the data for these are long gone
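A heavily hedged sketch of the "give up" path, assuming the data in those PGs
is accepted as lost (the pgid is a placeholder; newer releases also ask for
--yes-i-really-mean-it):

  ceph pg ls incomplete               # list the stuck PGs
  ceph pg <pgid> query                # confirm nothing recoverable is left
  ceph osd force-create-pg <pgid>     # recreate the PG empty; its objects are gone for good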
If you are running Luminous or newer, you can simply enable the balancer module
[1].
[1]
http://docs.ceph.com/docs/luminous/mgr/balancer/
From: ceph-users on behalf of Robert
LeBlanc
Sent: Tuesday, June 25, 2019 5:22 PM
To: jinguk.k...@ungleich.ch
Cc: ceph-us
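Roughly, enabling the balancer looks like this (upmap mode additionally needs
all clients to be at least Luminous):

  ceph mgr module enable balancer
  ceph osd set-require-min-compat-client luminous   # only needed for upmap mode
  ceph balancer mode upmap
  ceph balancer on
  ceph balancer status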
Here we go again! As usual the conference theme is intended to
inspire, not to restrict; talks on any topic in the world of free and
open source software, hardware, etc. are most welcome, and Ceph talks
definitely fit.
I've added this to https://pad.ceph.com/p/cfp-coordination as well.
On 6/25/19 12:46 AM, Rudenko Aleksandr wrote:
Hi, Konstantin.
Thanks for the reply.
I know about stale instances and that they remained from a prior version.
I am asking about the “marker” of the bucket. I have a bucket “clx” and I can
see its current marker in the stale-instances list.
As I know, stale-instan
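The current marker and bucket id of a bucket can be read from its metadata,
e.g. for the bucket named clx:

  radosgw-admin metadata get bucket:clx          # shows the current bucket_id / marker
  radosgw-admin reshard stale-instances list     # old instances left behind by resharding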