Re: [ceph-users] Questions regarding backing up Ceph

2019-07-24 Thread Fabian Niepelt
Hi, thanks for the reply. Am Mittwoch, den 24.07.2019, 15:26 +0200 schrieb Wido den Hollander: > > On 7/24/19 1:37 PM, Fabian Niepelt wrote: > > Hello ceph-users, > > > > I am currently building a Ceph cluster that will serve as a backend for > > Openstack and

[ceph-users] Questions regarding backing up Ceph

2019-07-24 Thread Fabian Niepelt
d I backup the pools that are used for object storage? Of course, I'm also open to completely other ideas on how to backup Ceph and would appreciate hearing how you people are doing your backups. Any help is much appreciated. Greetings Fabian

[ceph-users] Samsung 983 NVMe M.2 - experiences?

2019-03-29 Thread Fabian Figueredo
.samsung.com/us/business/products/computing/ssd/enterprise/983-dct-960gb-mz-1lb960ne/ The idea is to buy 10 units. Anyone have any thoughts/experiences with these drives? Thanks, Fabian

Re: [ceph-users] Proxmox/ceph upgrade and addition of a new node/OSDs

2018-09-21 Thread Fabian Grünbichler
On Fri, Sep 21, 2018 at 09:03:15AM +0200, Hervé Ballans wrote: > Hi MJ (and all), > > So we upgraded our Proxmox/Ceph cluster, and if we have to summarize the > operation in a few words : overall, everything went well :) > The most critical operation of all is the 'osd crush tunables optimal', I >

Re: [ceph-users] [Ceph-maintainers] download.ceph.com repository changes

2018-08-02 Thread Fabian Grünbichler
On Mon, Jul 30, 2018 at 11:36:55AM -0600, Ken Dreyer wrote: > On Fri, Jul 27, 2018 at 1:28 AM, Fabian Grünbichler > wrote: > > On Tue, Jul 24, 2018 at 10:38:43AM -0400, Alfredo Deza wrote: > >> Hi all, > >> > >> After the 12.2.6 release went out, we've

Re: [ceph-users] [Ceph-maintainers] download.ceph.com repository changes

2018-07-27 Thread Fabian Grünbichler
On Tue, Jul 24, 2018 at 10:38:43AM -0400, Alfredo Deza wrote: > Hi all, > > After the 12.2.6 release went out, we've been thinking on better ways > to remove a version from our repositories to prevent users from > upgrading/installing a known bad release. > > The way our repos are structured toda

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-19 Thread Fabian Grünbichler
On Mon, Jun 18, 2018 at 07:15:49PM +, Sage Weil wrote: > On Mon, 18 Jun 2018, Fabian Grünbichler wrote: > > it's of course within your purview as upstream project (lead) to define > > certain platforms/architectures/distros as fully supported, and others > > as be

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-18 Thread Fabian Grünbichler
On Wed, Jun 13, 2018 at 12:36:50PM +, Sage Weil wrote: > Hi Fabian, thanks for your quick, and sorry for my delayed response (only having 1.5 usable arms atm). > > On Wed, 13 Jun 2018, Fabian Grünbichler wrote: > > On Mon, Jun 04, 2018 at 06:39:08PM +, Sage Weil wrote

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-13 Thread Fabian Grünbichler
On Mon, Jun 04, 2018 at 06:39:08PM +, Sage Weil wrote: > [adding ceph-maintainers] [and ceph-devel] > > On Mon, 4 Jun 2018, Charles Alva wrote: > > Hi Guys, > > > > When will the Ceph Mimic packages for Debian Stretch released? I could not > > find the packages even after changing the sourc

Re: [ceph-users] No more Luminous packages for Debian Jessie ??

2018-03-07 Thread Fabian Grünbichler
On Wed, Mar 07, 2018 at 02:04:52PM +0100, Fabian Grünbichler wrote: > On Wed, Feb 28, 2018 at 10:24:50AM +0100, Florent B wrote: > > Hi, > > > > Since yesterday, the "ceph-luminous" repository does not contain any > > package for Debian Jessie. > > &g

Re: [ceph-users] No more Luminous packages for Debian Jessie ??

2018-03-07 Thread Fabian Grünbichler
On Wed, Feb 28, 2018 at 10:24:50AM +0100, Florent B wrote: > Hi, > > Since yesterday, the "ceph-luminous" repository does not contain any > package for Debian Jessie. > > Is it expected ? AFAICT the packages are all there[2], but the Packages file only references the ceph-deploy package so apt d

Re: [ceph-users] ceph-volume lvm deactivate/destroy/zap

2018-01-09 Thread Fabian Grünbichler
On Tue, Jan 09, 2018 at 02:14:51PM -0500, Alfredo Deza wrote: > On Tue, Jan 9, 2018 at 1:35 PM, Reed Dier wrote: > > I would just like to mirror what Dan van der Ster’s sentiments are. > > > > As someone attempting to move an OSD to bluestore, with limited/no LVM > > experience, it is a completely

Re: [ceph-users] Hangs with qemu/libvirt/rbd when one host disappears

2017-12-07 Thread Fabian Grünbichler
On Thu, Dec 07, 2017 at 09:59:43AM +0100, Marcus Priesch wrote: > Hello Brad, > > thanks for your answer ! > > >> at least the point of all is that a single host should be allowed to > >> fail and the vm's continue running ... ;) > > > > You don't really have six MONs do you (although I know the

Re: [ceph-users] Increasing mon_pg_warn_max_per_osd in v12.2.2

2017-12-04 Thread Fabian Grünbichler
On Mon, Dec 04, 2017 at 11:21:42AM +0100, SOLTECSIS - Victor Rodriguez Cortes wrote: > > > Why are you OK with this? A high amount of PGs can cause serious peering > > issues. OSDs might eat up a lot of memory and CPU after a reboot or such. > > > > Wido > > Mainly because there was no warning

Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Fabian Grünbichler
On Thu, Nov 30, 2017 at 11:25:03AM -0500, Alfredo Deza wrote: > Thanks all for your feedback on deprecating ceph-disk, we are very > excited to be able to move forwards on a much more robust tool and > process for deploying and handling activation of OSDs, removing the > dependency on UDEV which ha

Re: [ceph-users] ceph-disk is now deprecated

2017-11-30 Thread Fabian Grünbichler
On Thu, Nov 30, 2017 at 07:04:33AM -0500, Alfredo Deza wrote: > On Thu, Nov 30, 2017 at 6:31 AM, Fabian Grünbichler > wrote: > > On Tue, Nov 28, 2017 at 10:39:31AM -0800, Vasu Kulkarni wrote: > >> On Tue, Nov 28, 2017 at 9:22 AM, David Turner > >> wrote: >

Re: [ceph-users] ceph-disk is now deprecated

2017-11-30 Thread Fabian Grünbichler
On Tue, Nov 28, 2017 at 10:39:31AM -0800, Vasu Kulkarni wrote: > On Tue, Nov 28, 2017 at 9:22 AM, David Turner wrote: > > Isn't marking something as deprecated meaning that there is a better option > > that we want you to use and you should switch to it sooner than later? I > > don't understand ho

Re: [ceph-users] [Ceph-announce] Luminous v12.2.1 released

2017-10-02 Thread Fabian Grünbichler
On Thu, Sep 28, 2017 at 05:46:30PM +0200, Abhishek wrote: > This is the first bugfix release of Luminous v12.2.x long term stable > release series. It contains a range of bug fixes and a few features > across CephFS, RBD & RGW. We recommend all the users of 12.2.x series > update. > > For more det

Re: [ceph-users] Ceph packages for Debian Stretch?

2017-06-21 Thread Fabian Grünbichler
On Wed, Jun 21, 2017 at 05:30:02PM +0900, Christian Balzer wrote: > > Hello, > > On Wed, 21 Jun 2017 09:47:08 +0200 (CEST) Alexandre DERUMIER wrote: > > > Hi, > > > > Proxmox is maintening a ceph-luminous repo for stretch > > > > http://download.proxmox.com/debian/ceph-luminous/ > > > > > >

[ceph-users] sortbitwise warning broken on Ceph Jewel?

2017-05-16 Thread Fabian Grünbichler
The Kraken release notes[1] contain the following note about the sortbitwise flag and upgrading from <= Jewel to > Jewel: The sortbitwise flag must be set on the Jewel cluster before upgrading to Kraken. The latest Jewel (10.2.4+) releases issue a health warning if the flag is not set, so this is
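The check-and-set described in the release notes maps to two ceph CLI calls; a minimal sketch, assuming admin keyring access on the Jewel cluster (run before upgrading any daemon to Kraken):

```shell
# Check whether sortbitwise is already set; when enabled it appears
# in the cluster-wide flags line of the OSD map.
ceph osd dump | grep ^flags

# Set the flag on the Jewel cluster before starting the Kraken upgrade.
ceph osd set sortbitwise
```

Once the flag is set, the health warning that Jewel 10.2.4+ issues should clear.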

[ceph-users] Question about the OSD host option

2017-04-21 Thread Fabian
specified. Why does the OSD daemon need the host option? What happens if it doesn't exist? Is there any best practice about naming the OSDs? Or a trick to avoid the [OSD.ID] section for each daemon? [1]http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-da
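For context, the question refers to the pre-systemd style of ceph.conf, where the sysvinit script used per-daemon sections to decide which daemons belong to which machine. A sketch of that layout (hostnames and OSD IDs are made up for illustration):

```ini
; One section per daemon; the "host" option tells the init script
; which machine each OSD runs on (example hostnames):
[osd.0]
host = ceph-node-a

[osd.1]
host = ceph-node-b
```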

Re: [ceph-users] Automatic OSD start on Jewel

2017-01-04 Thread Fabian Grünbichler
On Wed, Jan 04, 2017 at 12:55:56PM +0100, Florent B wrote: > On 01/04/2017 12:18 PM, Fabian Grünbichler wrote: > > On Wed, Jan 04, 2017 at 12:03:39PM +0100, Florent B wrote: > >> Hi everyone, > >> > >> I have a problem with automatic start of OSDs o

Re: [ceph-users] Automatic OSD start on Jewel

2017-01-04 Thread Fabian Grünbichler
On Wed, Jan 04, 2017 at 12:03:39PM +0100, Florent B wrote: > Hi everyone, > > I have a problem with automatic start of OSDs on Debian Jessie with Ceph > Jewel. > > My osd.0 is using /dev/sda5 for data and /dev/sda2 for journal, it is > listed in ceph-disk list : > > /dev/sda : > /dev/sda1 other

Re: [ceph-users] Create file bigger than osd

2015-01-19 Thread Fabian Zimmermann
2: 704 pgs: 704 active+clean; 573 GB data, 1150 GB used, 8870 GB / 10020 GB avail; 4965 B/s wr, 1 op/s -- And I'm currently unable to reproduce the problem. Next time I will try your commands to get more information. I also took a look into the logs, but no

Re: [ceph-users] Create file bigger than osd

2015-01-19 Thread Fabian Zimmermann
2 nodes = 2x12x410G HDD/OSDs A user created a 500G rbd-volume. First I thought the 500G rbd may have caused the osd to fill, but after reading your explanations this seems impossible. I just found another 500G file created by this user in cephfs, may this have caused the trouble? Thanks a lot fo

[ceph-users] Create file bigger than osd

2015-01-19 Thread Fabian Zimmermann
Hi, if I understand the pg-system correctly it's impossible to create a file/volume which is bigger than the smallest osd of a pg, isn't it? What could I do to get rid of this limitation? Thanks, Fabian

[ceph-users] rbd cp vs rbd snap flatten

2015-01-16 Thread Fabian Zimmermann
Hi, if I want to clone a running vm-hdd, would it be enough to "cp" or do I have to "snap, protect, flatten, unprotect, rm" the snapshot to get an as-consistent-as-possible clone? Or: Does cp use an internal snapshot while copying the bloc
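As far as I know, `rbd cp` reads the source image while it may still be changing, so for a running VM the snapshot-based sequence the mail mentions is the safer route. A sketch of that sequence (pool and image names are examples):

```shell
rbd snap create rbd/vm-disk@clone-src      # freeze a point-in-time view
rbd snap protect rbd/vm-disk@clone-src     # required before cloning
rbd clone rbd/vm-disk@clone-src rbd/vm-disk-copy
rbd flatten rbd/vm-disk-copy               # copy all data, detach clone from parent
rbd snap unprotect rbd/vm-disk@clone-src
rbd snap rm rbd/vm-disk@clone-src
```

For a crash-consistent result the snapshot should be taken while the guest is quiesced (e.g. via fsfreeze), since the snapshot alone only captures whatever is on disk at that instant.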

Re: [ceph-users] How to backup mon-data?

2014-05-24 Thread Fabian Zimmermann
. nevertheless - thanks a lot, Fabian

Re: [ceph-users] How to backup mon-data?

2014-05-23 Thread Fabian Zimmermann
Hi, > On 23.05.2014 at 17:31, "Wido den Hollander" wrote: > > I wrote a blog about this: > http://blog.widodh.nl/2014/03/safely-backing-up-your-ceph-monitors/ so you assume restoring the old data is working, or did you prove this? Fabian

Re: [ceph-users] How to backup mon-data?

2014-05-23 Thread Fabian Zimmermann
hotting the pool could help? Backup: * create a snapshot * shutdown one mon * backup mon-dir Restore: * import mon-dir * create further mons until quorum is restored * restore snapshot Possible?.. :D Thanks, Fabian
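The backup half of the steps listed in the mail could look roughly like this; the mon id, paths, and init commands are assumptions to be adjusted to the actual deployment:

```shell
# Stop one monitor so its store is quiescent (sysvinit-era syntax;
# on systemd hosts: systemctl stop ceph-mon@a)
service ceph stop mon.a

# Archive that monitor's data directory
tar czf /backup/mon-a-backup.tar.gz /var/lib/ceph/mon/ceph-a

# Bring the monitor back; it resyncs from the remaining quorum
service ceph start mon.a
```

This only covers the backup side; whether a store captured this way restores cleanly is exactly the question raised in the thread.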

[ceph-users] How to backup mon-data?

2014-05-23 Thread Fabian Zimmermann
disaster recover. What’s the correct way to backup mon-data - if there is any? Thanks, Fabian

[ceph-users] Segmentation fault RadosGW

2014-05-15 Thread Fabian Zimmermann
1/ 5 mon 0/10 monc 1/ 5 paxos 0/ 5 tp 1/ 5 auth 1/ 5 crypto 1/ 1 finisher 1/ 5 heartbeatmap 1/ 5 perfcounter 1/ 5 rgw 1/ 5 javaclient 1/ 5 asok 1/ 1 throttle -2/-2 (syslog threshold) -1/-1 (stderr threshold) max_recent 1 max_new 1000 log_file /var/