Hi, thanks for the reply.
On Wednesday, 24.07.2019, 15:26 +0200, Wido den Hollander wrote:
>
> On 7/24/19 1:37 PM, Fabian Niepelt wrote:
> > Hello ceph-users,
> >
> > I am currently building a Ceph cluster that will serve as a backend for
> > Openstack and
d I
back up the pools that are used for object storage?
Of course, I'm also open to completely different ideas on how to back up Ceph and
would appreciate hearing how you people are doing your backups.
Any help is much appreciated.
Greetings
Fabian
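A minimal sketch of one common approach to the RBD side (not from this thread; the
pool "volumes", the image "vm-disk" and the snapshot names are placeholders): take a
snapshot, export it once in full, and afterwards ship only the deltas with export-diff:

   # initial full export
   rbd snap create volumes/vm-disk@base
   rbd export volumes/vm-disk@base /backup/vm-disk.base.img

   # later: snapshot again and export only the changes since @base
   rbd snap create volumes/vm-disk@2019-07-25
   rbd export-diff --from-snap base volumes/vm-disk@2019-07-25 \
       /backup/vm-disk.base-to-2019-07-25.diff

Restoring is rbd import of the full image followed by rbd import-diff of the deltas.
For the RGW/object-storage pools there is no export-diff equivalent; a second zone
with multisite replication is what usually gets recommended there.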
.samsung.com/us/business/products/computing/ssd/enterprise/983-dct-960gb-mz-1lb960ne/
The idea is to buy 10 units.
Does anyone have any thoughts/experiences with these drives?
Thanks,
Fabian
On Fri, Sep 21, 2018 at 09:03:15AM +0200, Hervé Ballans wrote:
> Hi MJ (and all),
>
> So we upgraded our Proxmox/Ceph cluster, and if we have to summarize the
> operation in a few words: overall, everything went well :)
> The most critical operation of all is the 'osd crush tunables optimal', I
>
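For reference, that step is roughly the following (a sketch; setting noout around it
is an extra precaution, not something the upgrade itself requires):

   ceph osd set noout               # keep restarting OSDs from being marked out mid-rebalance
   ceph osd crush tunables optimal  # triggers a large amount of data movement
   ceph -s                          # watch the misplaced objects / backfill progress
   ceph osd unset noout             # once the cluster reports HEALTH_OK again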
On Mon, Jul 30, 2018 at 11:36:55AM -0600, Ken Dreyer wrote:
> On Fri, Jul 27, 2018 at 1:28 AM, Fabian Grünbichler
> wrote:
> > On Tue, Jul 24, 2018 at 10:38:43AM -0400, Alfredo Deza wrote:
> >> Hi all,
> >>
> >> After the 12.2.6 release went out, we've
On Tue, Jul 24, 2018 at 10:38:43AM -0400, Alfredo Deza wrote:
> Hi all,
>
> After the 12.2.6 release went out, we've been thinking on better ways
> to remove a version from our repositories to prevent users from
> upgrading/installing a known bad release.
>
> The way our repos are structured toda
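On the consumer side (not part of the repository proposal itself), a known-bad version
can also be blocked locally on Debian/Ubuntu with an apt pin, e.g. as a rough sketch:

   cat > /etc/apt/preferences.d/ceph-skip-12.2.6 <<'EOF'
   Package: ceph*
   Pin: version 12.2.6*
   Pin-Priority: -1
   EOF
   apt-get update && apt-cache policy ceph-common   # 12.2.6 should no longer be a candidate

The glob only covers the ceph* package names; extend the Package line if librados/librbd
packages from the same repo should be pinned as well.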
On Mon, Jun 18, 2018 at 07:15:49PM +0000, Sage Weil wrote:
> On Mon, 18 Jun 2018, Fabian Grünbichler wrote:
> > it's of course within your purview as upstream project (lead) to define
> > certain platforms/architectures/distros as fully supported, and others
> > as be
On Wed, Jun 13, 2018 at 12:36:50PM +0000, Sage Weil wrote:
> Hi Fabian,
thanks for your quick response, and sorry for my delayed one (only having
1.5 usable arms atm).
>
> On Wed, 13 Jun 2018, Fabian Grünbichler wrote:
> > On Mon, Jun 04, 2018 at 06:39:08PM +0000, Sage Weil wrote
On Mon, Jun 04, 2018 at 06:39:08PM +0000, Sage Weil wrote:
> [adding ceph-maintainers]
[and ceph-devel]
>
> On Mon, 4 Jun 2018, Charles Alva wrote:
> > Hi Guys,
> >
> > When will the Ceph Mimic packages for Debian Stretch be released? I could not
> > find the packages even after changing the sourc
On Wed, Mar 07, 2018 at 02:04:52PM +0100, Fabian Grünbichler wrote:
> On Wed, Feb 28, 2018 at 10:24:50AM +0100, Florent B wrote:
> > Hi,
> >
> > Since yesterday, the "ceph-luminous" repository does not contain any
> > package for Debian Jessie.
> >
>
On Wed, Feb 28, 2018 at 10:24:50AM +0100, Florent B wrote:
> Hi,
>
> Since yesterday, the "ceph-luminous" repository does not contain any
> package for Debian Jessie.
>
> Is it expected?
AFAICT the packages are all there[2], but the Packages file only
references the ceph-deploy package so apt d
On Tue, Jan 09, 2018 at 02:14:51PM -0500, Alfredo Deza wrote:
> On Tue, Jan 9, 2018 at 1:35 PM, Reed Dier wrote:
> > I would just like to mirror Dan van der Ster’s sentiments.
> >
> > As someone attempting to move an OSD to bluestore, with limited/no LVM
> > experience, it is a completely
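For context, the ceph-volume call that hides the LVM details looks like this (a sketch;
/dev/sdb stands in for the disk being converted, and the zap --destroy flag may need a
recent 12.2.x):

   ceph-volume lvm zap /dev/sdb --destroy               # wipe the old OSD's partitions/LVs
   ceph-volume lvm create --bluestore --data /dev/sdb   # ceph-volume creates the VG/LV itself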
On Thu, Dec 07, 2017 at 09:59:43AM +0100, Marcus Priesch wrote:
> Hello Brad,
>
> thanks for your answer !
>
> >> at least the point of all is that a single host should be allowed to
> >> fail and the vm's continue running ... ;)
> >
> > You don't really have six MONs do you (although I know the
On Mon, Dec 04, 2017 at 11:21:42AM +0100, SOLTECSIS - Victor Rodriguez Cortes
wrote:
>
> > Why are you OK with this? A high amount of PGs can cause serious peering
> > issues. OSDs might eat up a lot of memory and CPU after a reboot or such.
> >
> > Wido
>
> Mainly because there was no warning
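For anyone checking their own cluster, the usual sanity check is something like the
following (a sketch; the ~100 PGs per OSD figure is the common rule of thumb, not a
hard limit):

   ceph osd df              # the PGS column shows placement groups per OSD
   ceph osd pool ls detail  # pg_num and replica size per pool
   # rough target: sum(pg_num * size) / number of OSDs  ~=  100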
On Thu, Nov 30, 2017 at 11:25:03AM -0500, Alfredo Deza wrote:
> Thanks all for your feedback on deprecating ceph-disk; we are very
> excited to be able to move forward on a much more robust tool and
> process for deploying and handling activation of OSDs, removing the
> dependency on UDEV which ha
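For OSDs that were provisioned by ceph-disk, the intended takeover path is a sketch
like this (osd.0 and its fsid are placeholders; the fsid comes from the JSON file that
the scan writes under /etc/ceph/osd/):

   ceph-volume simple scan /var/lib/ceph/osd/ceph-0   # record the running OSD's metadata as JSON
   ceph-volume simple activate 0 <osd-fsid>           # enable a systemd unit, no udev involved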
On Thu, Nov 30, 2017 at 07:04:33AM -0500, Alfredo Deza wrote:
> On Thu, Nov 30, 2017 at 6:31 AM, Fabian Grünbichler
> wrote:
> > On Tue, Nov 28, 2017 at 10:39:31AM -0800, Vasu Kulkarni wrote:
> >> On Tue, Nov 28, 2017 at 9:22 AM, David Turner
> >> wrote:
>
On Tue, Nov 28, 2017 at 10:39:31AM -0800, Vasu Kulkarni wrote:
> On Tue, Nov 28, 2017 at 9:22 AM, David Turner wrote:
> > Doesn't marking something as deprecated mean that there is a better option
> > that we want you to use, and that you should switch to it sooner rather than
> > later? I don't understand ho
On Thu, Sep 28, 2017 at 05:46:30PM +0200, Abhishek wrote:
> This is the first bugfix release of Luminous v12.2.x long term stable
> release series. It contains a range of bug fixes and a few features
> across CephFS, RBD & RGW. We recommend that all users of the 12.2.x series
> update.
>
> For more det
On Wed, Jun 21, 2017 at 05:30:02PM +0900, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 21 Jun 2017 09:47:08 +0200 (CEST) Alexandre DERUMIER wrote:
>
> > Hi,
> >
> > Proxmox is maintaining a ceph-luminous repo for stretch
> >
> > http://download.proxmox.com/debian/ceph-luminous/
> >
> >
> >
The Kraken release notes[1] contain the following note about the
sortbitwise flag and upgrading from <= Jewel to > Jewel:
The sortbitwise flag must be set on the Jewel cluster before upgrading
to Kraken. The latest Jewel (10.2.4+) releases issue a health warning if
the flag is not set, so this is
specified.
Why does the OSD daemon need the host option? What happens if it doesn't
exist?
Is there any best practice for naming the OSDs? Or a trick to avoid
the [OSD.ID] section for each daemon?
[1]http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-da
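For what it's worth (a sketch, not a quote from the referenced docs page): the per-daemon
host lines mostly matter for the old sysvinit-style "service ceph start", which reads
ceph.conf to decide which daemons belong to the local host. OSDs prepared with ceph-disk
are activated from their partition metadata by udev/systemd, so they normally need no
[osd.ID] section at all. If you do manage daemons from a hand-written config, a section
looks like:

   [osd.0]
   host = node1        ; "node1" is a placeholder for the short hostname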
On Wed, Jan 04, 2017 at 12:55:56PM +0100, Florent B wrote:
> On 01/04/2017 12:18 PM, Fabian Grünbichler wrote:
> > On Wed, Jan 04, 2017 at 12:03:39PM +0100, Florent B wrote:
> >> Hi everyone,
> >>
> >> I have a problem with automatic start of OSDs o
On Wed, Jan 04, 2017 at 12:03:39PM +0100, Florent B wrote:
> Hi everyone,
>
> I have a problem with automatic start of OSDs on Debian Jessie with Ceph
> Jewel.
>
> My osd.0 is using /dev/sda5 for data and /dev/sda2 for journal; it is
> listed in ceph-disk list:
>
> /dev/sda :
> /dev/sda1 other
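Not an answer from the thread, but the two things usually checked first in this
situation (osd.0 and /dev/sda5 taken from the mail above):

   systemctl status ceph-osd@0     # was the unit generated/enabled at all?
   ceph-disk activate /dev/sda5    # what udev should have done at boot; errors here usually point at the cause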
2: 704 pgs: 704 active+clean; 573 GB data, 1150 GB used, 8870 GB / 10020 GB avail; 4965 B/s wr, 1 op/s
--
And I'm currently unable to reproduce the problem.
Next time I will try your commands to get more information.
I also took a look into the logs, but no
2 nodes = 2x12x410G HDD/OSDs
A user created a 500G rbd-volume. First I thought the 500G rbd may have
caused the osd to fill, but after reading your explanations this seems
impossible.
I just found another 500G file created by this user in cephfs, may this
have caused the trouble?
Thanks a lot fo
Hi,
if I understand the PG system correctly, it's impossible to create a
file/volume that is bigger than the smallest OSD of a PG, isn't it?
What could I do to get rid of this limitation?
Thanks,
Fabian
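A short sketch of why the limit is per-OSD free space rather than per-file: RBD and
CephFS stripe data into ~4 MB RADOS objects spread across many PGs and OSDs, so a 500G
volume is fine as long as no single OSD runs over the full ratio. The pool/image names
below are placeholders, and ceph osd df needs Hammer or later:

   rbd info rbd/some-volume    # "order 22" = 4 MB objects, the striping unit
   ceph osd df                 # per-OSD utilization; one full OSD blocks writes cluster-wide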
Hi,
if I want to clone a running vm-hdd, would it be enough to "cp", or do I
have to "snap, protect, flatten, unprotect, rm" the snapshot to get as
consistent a clone as possible?
Or: Does cp use an internal snapshot while copying the blocks?
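Not from the thread, but for comparison: the crash-consistent variant is to copy from a
snapshot rather than from the live image (pool/image names are placeholders). A plain
"cp" of a running image takes no internal snapshot, so blocks written during the copy
can leave the result inconsistent:

   rbd snap create rbd/vm-disk@clone-src
   rbd cp rbd/vm-disk@clone-src rbd/vm-disk-copy   # flat copy taken from the frozen snapshot
   rbd snap rm rbd/vm-disk@clone-src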
nevertheless - thanks a lot,
Fabian
Hi,
> On 23.05.2014 at 17:31, "Wido den Hollander" wrote:
>
> I wrote a blog about this:
> http://blog.widodh.nl/2014/03/safely-backing-up-your-ceph-monitors/
so you assume restoring the old data works, or did you prove this?
Fabian
hotting the pool could help?
Backup:
* create a snapshot
* shutdown one mon
* backup mon-dir
Restore:
* import mon-dir
* create further mons until quorum is restored
* restore snapshot
Possible?.. :D
Thanks,
Fabian
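For the 'backup mon-dir' step, the approach from the blog post mentioned earlier in the
thread boils down to something like this (a sketch; it assumes systemd, the default data
path, and a mon id equal to the short hostname):

   MON_ID=$(hostname -s)
   systemctl stop ceph-mon@$MON_ID          # only this one mon; quorum survives on the others
   tar czf /backup/mon-$MON_ID-$(date +%F).tar.gz /var/lib/ceph/mon/ceph-$MON_ID
   systemctl start ceph-mon@$MON_ID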
disaster recover.
What’s the correct way to back up mon-data - if there is any?
Thanks,
Fabian
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 1
max_new 1000
log_file /var/
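That block looks like the standard debug/log level table a Ceph daemon prints at startup.
To raise one subsystem temporarily, something like this works (a sketch; mon.$(hostname -s)
assumes the mon id matches the short hostname):

   ceph daemon mon.$(hostname -s) config set debug_mon 10/10   # via the local admin socket
   ceph tell mon.* injectargs '--debug-mon 10/10'              # or for all mons at once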