Hey all.
I was wondering if Ceph Octopus is capable of automating/managing snapshot
creation/retention and then replication? I've seen some notes about it, but
can't seem to find anything solid.
Open to suggestions as well. Appreciate any input!
___
Care to provide any more detail?
___
Does anyone else have any suggestions or options outside of a separate
dedicated OS? It seems like this should be something pretty simple and
straightforward that Ceph is missing.
___
Benji is independent from Ceph. It utilizes Ceph snapshots to do the backups,
but it has nothing to do with managing Ceph snapshots.
I am simply looking for the ability to manage Ceph snapshots. For example:
take a snapshot every 30 minutes and keep eight of those 30-minute snapshots.
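A minimal sketch of that kind of 30-minute/keep-8 policy, assuming a cron job,
the jq tool, and placeholder pool/image names (none of these names come from
this thread):

#!/bin/bash
# Hypothetical rotation script: snapshot one image and keep only the newest
# 8 "auto-" snapshots. Run from cron, e.g.:
#   */30 * * * * /usr/local/bin/rbd-snap-rotate.sh
POOL=rbd
IMAGE=vm-100-disk-0
KEEP=8

# Create a timestamped snapshot.
rbd snap create ${POOL}/${IMAGE}@auto-$(date +%Y%m%d-%H%M%S)

# Delete everything older than the newest $KEEP auto- snapshots.
rbd snap ls ${POOL}/${IMAGE} --format json \
    | jq -r '.[].name' | grep '^auto-' | sort | head -n -${KEEP} \
    | while read -r SNAP; do
        rbd snap rm ${POOL}/${IMAGE}@${SNAP}
      done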
___
I thought Octopus brought the new snapshot replication feature to the table?
Were there issues with it?
___
That is exactly what I am thinking. My mistake, I should have specified RBD.
Is snapshot scheduling/retention for RBD already in Octopus as well?
___
I have been doing some testing with RBD-Mirror Snapshots to a remote Ceph
cluster.
Does anyone know if the images on the remote cluster can be utilized in any
way? I would love the ability to clone them; even read-only access would be
nice.
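For anyone else testing the same thing, the snapshot-based mirroring setup in
Octopus boils down to roughly the following (image name and interval are
placeholders; this assumes the two clusters are already peered, e.g. via
rbd mirror pool peer bootstrap):

# Enable mirroring on the pool in image mode (run on both clusters).
rbd mirror pool enable CephTestPool1 image

# Enable snapshot-based mirroring for a specific image on the primary.
rbd mirror image enable CephTestPool1/vm-100-disk-0 snapshot

# Have the mgr create mirror snapshots automatically, e.g. every 30 minutes.
rbd mirror snapshot schedule add --pool CephTestPool1 --image vm-100-disk-0 30m

# Check the schedule and the replication state.
rbd mirror snapshot schedule ls --pool CephTestPool1 --recursive
rbd mirror image status CephTestPool1/vm-100-disk-0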
___
Two separate 4-node clusters with 10 OSDs in each node. Micron 9300 NVMe
drives are the OSDs, heavily based on the Micron/Supermicro white papers.
When I attempt to protect the snapshot on a remote image, it fails with a
read-only error.
root@Bunkcephmon2:~# rbd snap protect CephTestPool1/vm-100-d
You should be able to clone the mirrored snapshot on the remote cluster even
though it's not protected, IIRC.
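The protect step fails because the non-primary image is read-only, but with
the v2 clone format protection isn't required in the first place. A rough
sketch on the remote cluster, with placeholder image/snapshot names:

# Clone a replicated snapshot on the non-primary side without protecting it.
# Clone v2 is used automatically once require-min-compat-client is mimic or
# later, or it can be forced for a single command:
rbd clone --rbd-default-clone-format 2 \
    CephTestPool1/vm-100-disk-0@snap1 CephTestPool1/vm-100-restore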
Cc: "Eugen Block" , "ceph-users" , "Matt
Wilder"
Sent: Wednesday, January 20, 2021 3:28:39 PM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
On Wed, Jan 20, 2021 at 3:10 PM Adam Boyhan wrote:
> That's what I thought as well, especially based on this.
I have an rbd-mirror snapshot on one image that failed to replicate, and now
it's not getting cleaned up.
The cause of this was my own fault, based on my steps. I'm just trying to
understand how to clean up/handle the situation.
Here is how I got into this situation:
- Created manual rbd snapshot on the
" , "Matt
Wilder"
Sent: Wednesday, January 20, 2021 3:28:39 PM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
On Wed, Jan 20, 2021 at 3:10 PM Adam Boyhan wrote:
>
> That's what I though as well, specially based on this.
>
>
>
I decided to request a resync to see the results. I have a very aggressive
mirror snapshot schedule of 5 minutes, and replication just keeps starting on
the latest snapshot before it finishes. I'm pretty sure this would just loop
over and over if I don't remove the schedule.
root@Ccscephtest1:~# rbd sna
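For reference, one way to break that loop is to drop the schedule first, then
request the resync against the cluster holding the non-primary copy, and only
re-add the schedule once the image reports up+replaying again (the names and
the 5m interval here are illustrative, not taken from the thread):

# Stop the mgr from creating new mirror snapshots for this image.
rbd mirror snapshot schedule remove --pool CephTestPool1 --image vm-100-disk-0 5m

# Flag the non-primary image for a full resync (run on the secondary cluster).
rbd mirror image resync CephTestPool1/vm-100-disk-0

# Watch it catch up, then re-add the schedule.
rbd mirror image status CephTestPool1/vm-100-disk-0
rbd mirror snapshot schedule add --pool CephTestPool1 --image vm-100-disk-0 5m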
On Thu, Jan 21, 2021 at 8:34 AM Adam Boyhan wrote:
>
> When cloning the snapshot on the remote cluster I can't see my ext4
> filesystem.
>
> Using the same exact snapshot on both s
I have noticed that RBD-Mirror snapshot mode can only manage to take one
snapshot per second. For example, I have 21 images in a single pool. When the
schedule is triggered it takes the mirror snapshot of each image one at a
time. It doesn't feel or look like a performance issue, as the OSDs are
Micron NVMe drives.
It looks like a script and cron will be a solid workaround.
I'm still interested to know whether there are any options to make rbd-mirror
take more than one mirror snapshot per second.
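A rough sketch of that cron workaround, assuming a placeholder pool name and
that snapshot mirroring is already enabled on every image in it; the
per-image mirror snapshots are simply fired in parallel instead of one at a
time:

#!/bin/bash
# Take an on-demand mirror snapshot of every image in the pool concurrently.
POOL=CephTestPool1

for IMG in $(rbd ls ${POOL}); do
    rbd mirror image snapshot ${POOL}/${IMG} &
done
wait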
From: "adamb"
To: "ceph-users"
Sent: Thursday, January 21, 2021 11:18:36 AM
Subject: [ceph-users] RBD-Mirror
ot;adamb"
Cc: "ceph-users"
Sent: Thursday, January 21, 2021 2:18:06 PM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Scalability
On Thu, Jan 21, 2021 at 2:00 PM Adam Boyhan wrote:
>
> Looks like a script and cron will be a solid work around.
>
> Still interest
# cat /proc/mounts | grep nbd0
/dev/nbd0 /usr2 ext4 rw,relatime 0 0
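For completeness, a sketch of how a clone like that can be mapped and
inspected on the remote side (the clone name and mount point are
placeholders); mounting with ro,noload avoids replaying the ext4 journal on a
crash-consistent snapshot:

# Map the clone with rbd-nbd and mount it read-only.
rbd-nbd map CephTestPool1/vm-100-restore    # prints the device, e.g. /dev/nbd1
mkdir -p /mnt/restore
mount -o ro,noload /dev/nbd1 /mnt/restore
ls /mnt/restore
umount /mnt/restore
rbd-nbd unmap /dev/nbd1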
From: "Jason Dillaman"
To: "adamb"
Cc: "Eugen Block" , "ceph-users" , "Matt
Wilder"
Sent: Thursday, January 21, 2021 3:01:46 PM
Subject: Re: [ceph-users] Re: RBD-Mi
t;Eugen Block" , "ceph-users" , "Matt
Wilder"
Sent: Thursday, January 21, 2021 3:01:46 PM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
On Thu, Jan 21, 2021 at 11:51 AM Adam Boyhan wrote:
>
> I was able to trigger the issue again.
>
We haven't been able to repeat what you are seeing, and we do have test cases
that really hammer random IO on primary images, create snapshots,
rinse-and-repeat, and they haven't turned up anything yet.
Thanks!
On Fri, Jan 22, 2021 at 1:50 PM Adam Boyhan wrote:
>
> I have been doing a lot of tes
Any chance you can attempt to repeat the process on the latest master
or pacific branch clients (no need to upgrade the MONs/OSDs)?
On Fri, Jan 22, 2021 at 2:32 PM Adam Boyhan wrote:
>
> The steps are pretty straightforward.
>
> - Create rbd image of 500G o
r"
Sent: Friday, January 22, 2021 3:44:26 PM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
On Fri, Jan 22, 2021 at 3:29 PM Adam Boyhan wrote:
>
> I will have to do some looking into how that is done on Proxmox, but most
> definitely.
Thanks, appreciate it.
On Thu, Jan 28, 2021 at 10:31 AM Jason Dillaman wrote:
>
> On Wed, Jan 27, 2021 at 7:27 AM Adam Boyhan wrote:
> >
> > Doing some more testing.
> >
> > I can
This is an odd one. I don't hit it all the time, so I don't think it's
expected behavior.
Sometimes I have no issues enabling rbd-mirror snapshot mode on an RBD when
it's in use by a KVM VM. Other times I hit the following error, and the only
way I can get around it is to power down the KVM VM.
root@
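Not claiming this is the fix, but when the enable fails on an in-use image it
can help to first check which client has the image open and who owns the
exclusive lock before retrying (the image name is a placeholder):

# List watchers; a running KVM/QEMU client shows up here.
rbd status CephTestPool1/vm-100-disk-0

# Show the current exclusive-lock holder, if any.
rbd lock ls CephTestPool1/vm-100-disk-0

# Retry enabling snapshot-based mirroring afterwards.
rbd mirror image enable CephTestPool1/vm-100-disk-0 snapshot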
That makes sense. Appreciate it.
From: "Jason Dillaman"
To: "adamb"
Cc: "ceph-users"
Sent: Friday, January 29, 2021 9:39:28 AM
Subject: Re: [ceph-users] Unable to enable RBD-Mirror Snapshot on image when VM
is using RBD
On Fri, Jan 29, 2021 at 9:34 AM
To: "adamb"
Cc: "ceph-users" , "Matt Wilder"
Sent: Thursday, January 28, 2021 12:53:50 PM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
On Thu, Jan 28, 2021 at 10:31 AM Jason Dillaman wrote:
>
> On Wed, Jan 27, 2021 at 7:27 AM Adam Bo
I have the option of using KRBD, but I'm not sure if that will help in this
situation.
From: "Jason Dillaman"
To: "adamb"
Cc: "ceph-users"
Sent: Friday, January 29, 2021 9:39:28 AM
Subject: Re: [ceph-users] Unable to enable RBD-Mirror Snapshot on image when VM
Isn't this somewhat reliant on the OSD type?
Red Hat/Micron/Samsung/Supermicro have all put out white papers backing the
idea of 2 copies on NVMe drives as safe for production.
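For context, the replica count being debated is just the per-pool
size/min_size setting; a quick sketch with a hypothetical pool name:

# 2x as pushed in the white papers: two copies, refuse I/O below two.
ceph osd pool set nvme-pool size 2
ceph osd pool set nvme-pool min_size 2

# The usual 3x layout: three copies, keep serving I/O with two.
ceph osd pool set nvme-pool size 3
ceph osd pool set nvme-pool min_size 2

With size 2 / min_size 2, writes pause as soon as one OSD holding a PG goes
down; dropping min_size to 1 keeps I/O flowing but risks data loss on a second
failure, which is what most of these threads end up arguing about.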
From: "Magnus HAGDORN"
To: pse...@avalon.org.ua
Cc: "ceph-users"
Sent: Wednesday, February 3, 2021 4:43:08 AM
Subjec
I believe you partition the device, and then create your OSD pointing at a
partition.
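If it helps, a hedged ceph-volume sketch of that approach, with made-up device
names and sizes: carve the drive into partitions and create one OSD per
partition.

# Hypothetical layout: split one NVMe into two partitions, one OSD each.
sgdisk -n 1:0:+3500G -n 2:0:0 /dev/nvme0n1
ceph-volume lvm create --bluestore --data /dev/nvme0n1p1
ceph-volume lvm create --bluestore --data /dev/nvme0n1p2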
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
dhils...@performair.com
www.PerformAir.com
I know there are already a few threads about 2x replication, but I wanted to
start one dedicated to discussion of NVMe. There are some older threads, but
nothing recent that addresses how the vendors are now pushing the idea of 2x.
We are in the process of considering Ceph to replace our Nimble storage.
All great input and points, guys.
It helps me lean towards 3 copies a bit more.
I mean, honestly, NVMe cost per TB isn't that much more than SATA SSD now.
I'm somewhat surprised the salesmen aren't pitching 3x replication, as it
makes them more money.
From: "Anthony D'Atri"
To: "ceph-users"
Sent:
same time? What are the numbers?
Even if you lose the second site entirely, you can always re-sync from
scratch, assuming decent network bandwidth.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
These guys are great.
https://croit.io/
From: "Schweiss, Chip"
To: "ceph-users"
Sent: Tuesday, February 16, 2021 9:42:24 AM
Subject: [ceph-users] SUSE POC - Dead in the water
For the past several months I had been building a sizable Ceph cluster that
will be up t
I have a small cluster on Pacific with roughly 600 RBD images. Out of those
600 images I have 2 which are in a somewhat odd state.
root@cephmon:~# rbd info Cloud-Ceph1/vm-134-disk-0
rbd image 'vm-134-disk-0':
    size 1000 GiB in 256000 objects
    order 22 (4 MiB objects)
    snaps
We are looking to roll out an all-flash Ceph cluster as storage for our cloud
solution. The OSDs will be on slightly slower Micron 5300 PROs, with WAL/DB
on Micron 7300 MAX NVMe drives.
My main concern with Ceph being able to fit the bill is its snapshot
abilities.
For each RBD we would like the
It's my understanding that pool snapshots would basically put us in an
all-or-nothing situation where we would have to revert all RBDs in a pool. If
we could clone a pool snapshot for filesystem-level access like an RBD
snapshot, that would help a ton.
Thanks,
Adam Boyhan
I'm looking to roll out an all-flash Ceph cluster. I wanted to see if anyone
else is using Micron drives, and to get some basic input on my design so far.
Basic Config
Ceph OSD Nodes
8x Supermicro A+ Server 2113S-WTRT
- AMD EPYC 7601 32-core 2.2GHz
- 256GB RAM
- AOC-S3008L-L8e HBA
- 10Gb SFP+ for
Appreciate the input.
Looking at those articles, they make me feel like the 40G they are talking
about is 4x bonded 10G connections.
I'm looking at 40Gbps without bonding, for throughput. Is that still the same?
https://www.fs.com/products/29126.html
Ok, so 100G seems to be the better choice. I will probably go with some of
these: https://www.fs.com/products/75808.html
From: "Paul Emmerich"
To: "EDH"
Cc: "adamb" , "ceph-users"
Sent: Friday, January 31, 2020 8:49:29 AM
Subject: Re: [c