Re: [ceph-users] unsubscribe

2019-07-12 Thread Brian Topping
It’s in the mail headers on every email: mailto:ceph-users-requ...@lists.ceph.com?subject=unsubscribe > On Jul 12, 2019, at 5:00 PM, Robert Stanford wrote: > > unsubscribe > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph

Re: [ceph-users] How does monitor know OSD is dead?

2019-07-02 Thread Brian :
I wouldn't say that's a pretty common failure. The flaw here perhaps is the design of the cluster and that it was relying on a single power source. Power sources fail. Dual power supplies connected to A and B power sources in the data centre are pretty standard. On Tuesday, July 2, 2019, Bryan Henderso

Re: [ceph-users] Weird behaviour of ceph-deploy

2019-06-17 Thread Brian Topping
I don’t have an answer for you, but it’s going to help others to have shown: Versions of all nodes involved and multi-master configuration Confirm forward and reverse DNS and SSH / remote sudo since you are using deploy Specific steps that did not behave properly > On Jun 17, 2019, at 6:29 AM, CUZA

[ceph-users] one pg blocked at active+undersized+degraded+remapped+backfilling

2019-06-13 Thread Brian Chang-Chien
We want to change the index pool (radosgw) rule from sata to ssd. When we run ceph osd pool set default.rgw.buckets.index crush_ruleset x, all of the index PGs migrated to ssd, but one PG is still stuck in sata and cannot be migrated; its status is active+undersized+degraded+remapped+backfilling ceph
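
A rough sketch of the checks involved, assuming the pool is default.rgw.buckets.index (the option is crush_rule on recent releases, crush_ruleset on older ones):

  ceph osd pool get default.rgw.buckets.index crush_rule   # confirm the rule change took
  ceph pg dump_stuck unclean                               # list PGs that have not finished migrating
  ceph pg <pgid> query                                     # see which OSDs the stuck PG is waiting on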

Re: [ceph-users] pool migration for cephfs?

2019-05-15 Thread Brian Topping
ver. It’s very difficult to move that metadata once a file is copied and even harder to deal with a situation where the destination volume went live and some files on the destination are both newer versions and missing metadata. Brian > On May 15, 2019, at 6:05 AM, Lars Täuber wrote: &

Re: [ceph-users] PG stuck peering - OSD cephx: verify_authorizer key problem

2019-04-26 Thread Brian Topping
> On Apr 26, 2019, at 1:50 PM, Gregory Farnum wrote: > > Hmm yeah, it's probably not using UTC. (Despite it being good > practice, it's actually not an easy default to adhere to.) cephx > requires synchronized clocks and probably the same timezone (though I > can't swear to that.) Apps don’t “se
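
A quick sketch of the clock checks implied here, assuming chrony is the time daemon on the nodes:

  timedatectl status        # NTP sync state and configured timezone
  chronyc sources -v        # peers and current offset (assumes chrony)
  ceph time-sync-status     # clock skew as measured by the monitors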

Re: [ceph-users] SOLVED: Multi-site replication speed

2019-04-20 Thread Brian Topping
ay pretty much sorted out everything I was looking to do as well as what I did not fully understand about the Ceph stack. All in all, a very informative adventure! Hopefully the thread is helpful to others who follow. I’m happy to answer questions off-thread as well. best, Brian [1] http

Re: [ceph-users] Multi-site replication speed

2019-04-19 Thread Brian Topping
estId":"tx02a01-005cba9593-371d-right","HostId":"371d-right-us”} When I stop the `data sync run`, these 404s stop, so clearly the `data sync run` isn’t changing a state in the rgw, but doing something synchronously. In the past, I have done a `data sync in

Re: [ceph-users] Are there any statistics available on how most production ceph clusters are being used?

2019-04-19 Thread Brian Topping
> On Apr 19, 2019, at 10:59 AM, Janne Johansson wrote: > > May the most significant bit of your life be positive. Marc, my favorite thing about open source software is it has a 100% money back satisfaction guarantee: If you are not completely satisfied, you can have an instant refund, just for

Re: [ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-19 Thread Brian :
ware/ > > not saying it definitely is, or isn't malware-ridden, but it sure was shady at that time. > I would suggest not pointing people to it. > > Den tors 18 apr. 2019 kl 16:41 skrev Brian : : >> >> Hi Marc >> >> Filezilla has decent S3 support https://file

Re: [ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-18 Thread Brian :
Hi Marc Filezilla has decent S3 support https://filezilla-project.org/ ymmv of course! On Thu, Apr 18, 2019 at 2:18 PM Marc Roos wrote: > > > I have been looking a bit at the s3 clients available to be used, and I > think they are quite shitty, especially this Cyberduck that processes > files w

Re: [ceph-users] Multi-site replication speed

2019-04-18 Thread Brian Topping
. With that, I’m going to set up a lab rig and see if I can build a fully replicated state. At that point, I’ll have a better understanding of what a working system responds like and maybe I can at least ask better questions, hopefully figure it out myself. Thanks again! Brian > On Apr

Re: [ceph-users] Multi-site replication speed

2019-04-15 Thread Brian Topping
> On Apr 15, 2019, at 5:18 PM, Brian Topping wrote: > > If I am correct, how do I trigger the full sync? Apologies for the noise on this thread. I came to discover the `radosgw-admin [meta]data sync init` command. That’s gotten me with something that looked like this for seve
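
A minimal sketch of the re-initialisation sequence being described here, with the zone and daemon names as placeholders:

  radosgw-admin metadata sync init
  radosgw-admin data sync init --source-zone=<master-zone>
  systemctl restart ceph-radosgw@rgw.<name>   # restart the secondary's rgw so the full sync actually starts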

Re: [ceph-users] Multi-site replication speed

2019-04-15 Thread Brian Topping
syncing > full sync: 0/128 shards > incremental sync: 128/128 shards > data is caught up with source If I am correct, how do I trigger the full sync? Thanks!! Brian ___

Re: [ceph-users] Multi-site replication speed

2019-04-14 Thread Brian Topping
> On Apr 14, 2019, at 2:08 PM, Brian Topping wrote: > > Every so often I might see the link running at 20 Mbits/sec, but it’s not > consistent. It’s probably going to take a very long time at this rate, if > ever. What can I do? Correction: I was looking at statistics

[ceph-users] Multi-site replication speed

2019-04-14 Thread Brian Topping
Hi all! I’m finally running with Ceph multi-site per http://docs.ceph.com/docs/nautilus/radosgw/multisite/ , woo hoo! I wanted to confirm that the process can be slow. It’s been a couple of hours since the sync started and `radosgw-admin sy

Re: [ceph-users] 1/3 mon not working after upgrade to Nautilus

2019-03-25 Thread Brian Topping
Did you check port access from other nodes? My guess is a forgotten firewall re-emerged on that node after reboot. Sent from my iPhone > On Mar 25, 2019, at 07:26, Clausen, Jörn wrote: > > Hi again! > >> moment, one of my three MONs (the then active one) fell out of the > > "active one" i
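
A sketch of how that guess could be confirmed, assuming firewalld on the affected mon node:

  ss -tlnp | grep ceph-mon                       # is the mon listening at all?
  firewall-cmd --list-all                        # did a default zone come back after the reboot?
  firewall-cmd --permanent --add-port=6789/tcp   # msgr v1 mon port
  firewall-cmd --permanent --add-port=3300/tcp   # msgr v2 mon port (Nautilus)
  firewall-cmd --reload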

Re: [ceph-users] Migrating a baremetal Ceph cluster into K8s + Rook

2019-02-19 Thread Brian Topping
ially if you are going to do this on bare metal. I can give you some ideas about how to lay things out if you are running with limited hardware. Brian ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Downsizing a cephfs pool

2019-02-08 Thread Brian Topping
he world. > > (resending because the previous reply wound up off-list) > > On 09/02/2019 10.39, Brian Topping wrote: >> Thanks again to Jan, Burkhard, Marc and Hector for responses on this. To >> review, I am removing OSDs from a small cluster and running up against

Re: [ceph-users] Downsizing a cephfs pool

2019-02-08 Thread Brian Topping
Thanks again to Jan, Burkhard, Marc and Hector for responses on this. To review, I am removing OSDs from a small cluster and running up against the “too many PGs per OSD” problem due to lack of clarity. Here’s a summary of what I have collected on it: The CephFS data pool can’t be changed, only

Re: [ceph-users] Downsizing a cephfs pool

2019-02-08 Thread Brian Topping
Thanks Marc and Burkhard. I think what I am learning is it’s best to copy between filesystems with cpio, if not impossible to do it any other way due to the “fs metadata in first pool” problem. FWIW, the mimic docs still describe how to create a differently named cluster on the same hardware. B
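
A minimal sketch of a metadata-preserving copy with cpio, run as root; the source and destination mount points are only examples:

  cd /mnt/src-fs
  find . -xdev -depth -print0 | cpio -0pdm /mnt/dst-fs   # -p pass-through, -d make dirs, -m keep mtimes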

Re: [ceph-users] Downsizing a cephfs pool

2019-02-08 Thread Brian Topping
m always creating pools starting 8 pg's and when I know I am at > what I want in production I can always increase the pg count. > > > > -Original Message- > From: Brian Topping [mailto:brian.topp...@gmail.com] > Sent: 08 February 2019 05:30 > To: Ceph Users

[ceph-users] Downsizing a cephfs pool

2019-02-07 Thread Brian Topping
Hi all, I created a problem when moving data to Ceph and I would be grateful for some guidance before I do something dumb. I started with the 4x 6TB source disks that came together as a single XFS filesystem via software RAID. The goal is to have the same data on a cephfs volume, but with these

Re: [ceph-users] Rezising an online mounted ext4 on a rbd - failed

2019-01-30 Thread Brian Godette
Did you mkfs with -O 64bit or have it in the [defaults] section of /etc/mke2fs.conf before creating the filesystem? If you didn't, 4TB is as big as it goes and can't be changed after the fact. If the device is already larger than 4TB when you create the filesystem, mkfs does the right thing and si
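
A sketch of creating the filesystem with the feature enabled up front and growing it later; /dev/rbd0 is only an example device:

  mkfs.ext4 -O 64bit /dev/rbd0   # only on a new/empty device -- this is destructive
  resize2fs /dev/rbd0            # later, grows the fs to the current (resized) RBD size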

Re: [ceph-users] One host with 24 OSDs is offline - best way to get it back online

2019-01-26 Thread Brian Topping
I went through this as I reformatted all the OSDs with a much smaller cluster last weekend. When turning nodes back on, PGs would sometimes move, only to move back, prolonging the operation and system stress. What I took away is it’s least overall system stress to have the OSD tree back to tar
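
The usual way to avoid that churn during planned node work, sketched with standard commands:

  ceph osd set noout               # stop the cluster marking the node's OSDs out
  systemctl stop ceph-osd.target   # on the node being serviced
  # ...maintenance / reboot...
  ceph osd unset noout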

Re: [ceph-users] Problem with OSDs

2019-01-21 Thread Brian Topping
> On Jan 21, 2019, at 6:47 AM, Alfredo Deza wrote: > > When creating an OSD, ceph-volume will capture the ID and the FSID and > use these to create a systemd unit. When the system boots, it queries > LVM for devices that match that ID/FSID information. Thanks Alfredo, I see that now. The name co
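
A sketch of how to see that ID/FSID mapping for yourself:

  ceph-volume lvm list             # shows osd id and osd fsid for each LV/device
  lvs -o lv_name,vg_name,lv_tags   # the raw LVM tags (ceph.osd_id, ceph.osd_fsid) that ceph-volume queries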

Re: [ceph-users] quick questions about a 5-node homelab setup

2019-01-21 Thread Brian Topping
> On Jan 18, 2019, at 3:48 AM, Eugen Leitl wrote: > > > (Crossposting this from Reddit /r/ceph , since likely to have more technical > audience present here). > > I've scrounged up 5 old Atom Supermicro nodes and would like to run them > 365/7 for limited production as RBD with Bluestore (ide

[ceph-users] Problem with OSDs

2019-01-20 Thread Brian Topping
dd4--00b6434c84d9-osd--block--4672bb90--8cea--4580--85f2--1e692811a05a >(253:3) How can I debug this? I suspect this is just some kind of a UID swap that happened somewhere, but I don’t know what the chain of truth is through the database files to connect the two together and make sure I have the correct OSD blocks where the mon expects to find them. Thanks! Brian ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Boot volume on OSD device

2019-01-19 Thread Brian Topping
> On Jan 18, 2019, at 10:58 AM, Hector Martin wrote: > > Just to add a related experience: you still need 1.0 metadata (that's > the 1.x variant at the end of the partition, like 0.9.0) for an > mdadm-backed EFI system partition if you boot using UEFI. This generally > works well, except on some
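
A sketch of building such a mirrored EFI system partition; the device names are examples only:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
  mkfs.vfat -F32 /dev/md0   # metadata 1.0 sits at the end of the partition, so firmware still sees plain FAT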

Re: [ceph-users] Today's DocuBetter meeting topic is... SEO

2019-01-18 Thread Brian Topping
g math ("daylight savings time” is a pet peeve, please don’t get me started! :)) Hope this provides some value! Brian > On Jan 18, 2019, at 11:37 AM, Noah Watkins wrote: > > 1 PM PST / 9 PM GMT > https://bluejeans.com/908675367 > > On Fri, Jan 18, 2019 at 10:31 AM Noah Wa

Re: [ceph-users] Boot volume on OSD device

2019-01-18 Thread Brian Topping
> On Jan 18, 2019, at 4:29 AM, Hector Martin wrote: > > On 12/01/2019 15:07, Brian Topping wrote: >> I’m a little nervous that BlueStore assumes it owns the partition table and >> will not be happy that a couple of primary partitions have been used. Will >> this be

Re: [ceph-users] Offsite replication scenario

2019-01-16 Thread Brian Topping
> On Jan 16, 2019, at 12:08 PM, Anthony Verevkin wrote: > > I would definitely see huge value in going to 3 MONs here (and btw 2 on-site > MGR and 2 on-site MDS) > However 350Kbps is quite low and MONs may be latency sensitive, so I suggest > you do heavy QoS if you want to use that link for AN

Re: [ceph-users] /var/lib/ceph/mon/ceph-{node}/store.db on mon nodes

2019-01-16 Thread Brian Topping
ote: > > > >> On 1/16/19 10:36 AM, Matthew Vernon wrote: >> Hi, >> >>> On 16/01/2019 09:02, Brian Topping wrote: >>> >>> I’m looking at writes to a fragile SSD on a mon node, >>> /var/lib/ceph/mon/ceph-{node}/store.db is the big

[ceph-users] /var/lib/ceph/mon/ceph-{node}/store.db on mon nodes

2019-01-16 Thread Brian Topping
there other options? Thanks, Brian ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
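
A sketch of the checks for store.db growth and a one-off manual compaction; the mon id is a placeholder:

  du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db
  ceph tell mon.<id> compact   # 'mon compact on start = true' does the same at daemon startup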

Re: [ceph-users] Offsite replication scenario

2019-01-14 Thread Brian Topping
Ah! Makes perfect sense now. Thanks!! Sent from my iPhone > On Jan 14, 2019, at 12:30, Gregory Farnum wrote: > >> On Fri, Jan 11, 2019 at 10:07 PM Brian Topping >> wrote: >> Hi all, >> >> I have a simple two-node Ceph cluster that I’m comfortable wi

[ceph-users] Boot volume on OSD device

2019-01-11 Thread Brian Topping
partitions have been used. Will this be a problem? Thanks, Brian ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Offsite replication scenario

2019-01-11 Thread Brian Topping
frontier! Brian > [root@gw01 ~]# ceph -s > cluster: >id: >health: HEALTH_OK > > services: >mon: 1 daemons, quorum gw01 >mgr: gw01(active) >mds: cephfs-1/1/1 up {0=gw01=up:active} >osd: 8 osds: 8 up, 8 in > > data: >

Re: [ceph-users] list admin issues

2018-12-22 Thread Brian :
Sorry to drag this one up again. Just got the unsubscribed due to excessive bounces thing. 'Your membership in the mailing list ceph-users has been disabled due to excessive bounces The last bounce received from you was dated 21-Dec-2018. You will not get any more messages from this list until y

Re: [ceph-users] JBOD question

2018-07-20 Thread Brian :
Hi Satish You should be able to choose different modes of operation for each port / disk. Most dell servers will let you do RAID and JBOD in parallel. If you can't do that and can only either turn RAID on or off then you can use SW RAID for your OS On Fri, Jul 20, 2018 at 9:01 PM, Satish Patel

Re: [ceph-users] Ceph snapshots

2018-06-27 Thread Brian :
Hi John Have you looked at ceph documentation? RBD: http://docs.ceph.com/docs/luminous/rbd/rbd-snapshot/ The ceph project documentation is really good for most areas. Have a look at what you can find then come back with more specific questions! Thanks Brian On Wed, Jun 27, 2018 at 2:24 PM
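
The basics from that page, sketched with a placeholder pool/image:

  rbd snap create rbd/myimage@snap1
  rbd snap ls rbd/myimage
  rbd snap rollback rbd/myimage@snap1   # quiesce/unmount the image first
  rbd snap rm rbd/myimage@snap1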

Re: [ceph-users] Ceph Mimic on CentOS 7.5 dependency issue (liboath)

2018-06-23 Thread Brian :
Hi Stefan $ sudo yum provides liboath Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.strencom.net * epel: mirror.sax.uk.as61049.net * extras: mirror.strencom.net * updates: mirror.strencom.net liboath-2.4.1-9.el7.x86_64 : Library for OATH handling Repo

Re: [ceph-users] HDD-only performance, how far can it be sped up ?

2018-06-20 Thread Brian :
ng to give great results. Brian On Wed, Jun 20, 2018 at 8:28 AM, Wladimir Mutel wrote: > Dear all, > > I set up a minimal 1-node Ceph cluster to evaluate its performance. We > tried to save as much as possible on the hardware, so now the box has Asus > P10S-M WS motherboard, Xe

Re: [ceph-users] PM1633a

2018-06-18 Thread Brian :
Thanks Paul Wido and Konstantin! If we give them a go I'll share some test results. On Sat, Jun 16, 2018 at 12:09 PM, Konstantin Shalygin wrote: > Hello List - anyone using these drives and have any good / bad things > to say about them? > > > A few months ago I was asking about PM1725 > http://

[ceph-users] PM1633a

2018-06-15 Thread Brian :
Hello List - anyone using these drives and have any good / bad things to say about them? Thanks! ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] CephFS Single Threaded Performance

2018-02-26 Thread Brian Woods
I have a small test cluster (just two nodes) and after rebuilding it several times I found my latest configuration that SHOULD be the fastest is by far the slowest (per thread). I have around 10 spindles that I have an erasure-coded CephFS on. When I installed several SSDs and recreated it with
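
A sketch of a repeatable single-threaded test for comparing the rebuilds, assuming the CephFS is mounted at /mnt/cephfs:

  fio --name=singlestream --directory=/mnt/cephfs --rw=write --bs=4M --size=4G \
      --numjobs=1 --iodepth=1 --direct=1 --ioengine=libaio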

Re: [ceph-users] mon service failed to start

2018-02-21 Thread Brian :
Hello Wasn't this originally an issue with mon store now you are getting a checksum error from an OSD? I think some hardware here in this node is just hosed. On Wed, Feb 21, 2018 at 5:46 PM, Behnam Loghmani wrote: > Hi there, > > I changed SATA port and cable of SSD disk and also update ceph t

[ceph-users] data_digest_mismatch_oi with missing object and I/O errors (repaired!)

2018-01-17 Thread Brian Andrus
n again it wasn't prior to this either. Seems the right data is in place and the PG is consistent after a deep-scrub. Pretty standard stuff, but might help with alternative ways of dumping byte data in the future as long as others don't see an issue with this. I see at least one other
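
For reference, a sketch of the standard inspection/repair sequence for an inconsistent PG (the pgid is a placeholder):

  rados list-inconsistent-obj <pgid> --format=json-pretty   # which object/shard and what kind of mismatch
  ceph pg deep-scrub <pgid>                                  # re-verify
  ceph pg repair <pgid>                                      # let the primary rewrite the bad copy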

[ceph-users] Ceph not reclaiming space or overhead?

2017-12-21 Thread Brian Woods
I will start with I am very new to ceph and am trying to teach myself the ins and outs. While doing this I have been creating and destroying pools as I experiment on some test hardware. Something I noticed was that when a pool is deleted, the space is not always freed 100%. This is true even aft

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-27 Thread Brian Andrus
tools, the reserved cells in all drives is nearly 100%. > > Restarting the OSDs minorly improved performance. Still betting on > hardware issues that a firmware upgrade may resolve. > > -RG > > > On Oct 27, 2017 1:14 PM, "Brian Andrus" > wrote: > > @Russel,

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-27 Thread Brian Andrus
t;> I have an LVM image on a local RAID of spinning disks. >> I have an RBD image on in a pool of SSD disks. >> Both disks are used to run an almost identical CentOS 7 >> system. >> Both systems were installed with the same kickstart, though >> the di

Re: [ceph-users] Why size=3

2017-10-25 Thread Brian Andrus
Apologies, corrected second link: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-March/016663.html On Wed, Oct 25, 2017 at 9:44 AM, Brian Andrus wrote: > Please see the following mailing list topics that have covered this topic > in detail: > > "2x replication: A BIG

Re: [ceph-users] Why size=3

2017-10-25 Thread Brian Andrus
, min_size=1. > > Can someone help me articulate why we should be keeping 3 copies, beyond > "it's the default"? > > -- Ian > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.
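
For reference, the settings under discussion, applied per pool:

  ceph osd pool set <pool> size 3       # keep three copies
  ceph osd pool set <pool> min_size 2   # stop serving I/O once only one copy is left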

Re: [ceph-users] Blocked requests

2017-09-07 Thread Brian Andrus
m/listinfo.cgi/ceph-users-ceph.com > > > -- > > > CONFIDENTIALITY NOTICE: This message is intended only for the use and > review of the individual or entity to which it is addressed and may contain > information that is pri

[ceph-users] Installing ceph on Centos 7.3

2017-07-18 Thread Brian Wallis
I’m failing to get an install of ceph to work on a new Centos 7.3.1611 server. I’m following the instructions at http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ to no avail. First question, is it possible to install ceph on Centos 7.3 or should I choose a different version or differen

Re: [ceph-users] Adding storage to exiting clusters with minimal impact

2017-07-06 Thread Brian Andrus
> STFC Rutherford Appleton Laboratory >> >> Harwell Oxford >> >> Didcot >> >> OX11 0QX >> >> Tel. +44 ((0)1235) 446621 >> >> >> ___ >> ceph-users mailing list >> c

Re: [ceph-users] corrupted rbd filesystems since jewel

2017-05-04 Thread Brian Andrus
pri...@profihost.ag> wrote: > Hello Brian, > > this really sounds the same. I don't see this on a cluster with only > images created AFTER jewel. And it seems to start happening after i > enabled exclusive lock on all images. > > Did just use feature disable, exclusive-lock,fast-d

Re: [ceph-users] corrupted rbd filesystems since jewel

2017-05-04 Thread Brian Andrus
> ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Brian Andrus | Cloud Systems Engineer | DreamHost brian.and...@dreamhost.com | www.dreamhost.com ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Client's read affinity

2017-04-04 Thread Brian Andrus
> > > -- > > Alejandro Comisario > > CTO | NUBELIU > > E-mail: alejandro@nubeliu.comCell: +54 9 11 3770 1857 > > _ > > www.nubeliu.com > > ___ > > ceph-users mailing list > > ceph-users@lists.ceph.

Re: [ceph-users] Flapping OSDs

2017-04-03 Thread Brian :
and issue hasn't come up since. Brian On Mon, Apr 3, 2017 at 8:03 AM, Vlad Blando wrote: > Most of the time random and most of the time 1 at a time, but I also see > 2-3 that are down at the same time. > > The network seems fine, the bond seems fine, I just don't know where

Re: [ceph-users] disk timeouts in libvirt/qemu VMs...

2017-03-28 Thread Brian Andrus
0x100/0x100 > [Fri Mar 24 20:30:40 2017] [] ? commit_timeout+0x10/0x10 > [Fri Mar 24 20:30:40 2017] [] kthread+0xd2/0xf0 > [Fri Mar 24 20:30:40 2017] [] ? > kthread_create_on_node+0x1c0/0x1c0 > [Fri Mar 24 20:30:40 2017] [] ret_from_fork+0x7c/0xb0 > [Fri Mar 24 20:30:40 2017

Re: [ceph-users] osds down after upgrade hammer to jewel

2017-03-28 Thread Brian Andrus
Well, you said you were running v0.94.9, but are there any OSDs running pre-v0.94.4 as the error states? On Tue, Mar 28, 2017 at 6:51 AM, Jaime Ibar wrote: > > > On 28/03/17 14:41, Brian Andrus wrote: > > What does > # ceph tell osd.* version > > ceph tell osd.21 versio

Re: [ceph-users] osds down after upgrade hammer to jewel

2017-03-28 Thread Brian Andrus
have to upgrade all the osds to jewel first? >>> Any help as I'm running out of ideas? >>> >>> Thanks >>> Jaime >>> >>> -- >>> >>> Jaime Ibar >>> High Performance & Research Computing, IS Services >>>

Re: [ceph-users] active+clean+inconsistent and pg repair

2017-03-17 Thread Brian Andrus
> > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > -- Brian Andrus | Cloud Systems Engineer | DreamHost brian.and...@dreamhost.com | www.dreamhost.com ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph on XenServer

2017-02-25 Thread Brian :
Hi Max, Have you considered Proxmox at all? Nicely integrates with Ceph storage. I moved from Xenserver longtime ago and have no regrets. Thanks Brians On Sat, Feb 25, 2017 at 12:47 PM, Massimiliano Cuttini wrote: > Hi Iban, > > you are running xen (just the software) not xenserver (ad hoc lin

Re: [ceph-users] help with crush rule

2017-02-21 Thread Brian Andrus
ng list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Brian Andrus | Cloud Systems Engineer | DreamHost brian.and...@dreamhost.com | www.dreamhost.com ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Adding multiple osd's to an active cluster

2017-02-17 Thread Brian Andrus
> the cluster > > Thanks > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > -- Brian Andrus | Cloud Systems Engineer | DreamHost brian.and...@dreamhost.com | www.dreamhost.com _

Re: [ceph-users] crushtool mappings wrong

2017-02-16 Thread Brian Andrus
Ds in the > result mappings that are not even in this hierarchy... > > (this is on a 10.2.2 install) > > -- > Cheers, > ~Blairo > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.c

Re: [ceph-users] extending ceph cluster with osds close to near full ratio (85%)

2017-02-14 Thread Brian Andrus
osd-k5-36-fresh weight 72.800 > item osd-k7-41-fresh weight 72.800 > item osd-l4-36-fresh weight 72.800 > } > > Then, by steps of 6 OSDs (2 OSDs from each new host), we move OSDs from > the "fresh-install" to the "sas" bucket. > &g

Re: [ceph-users] would people mind a slow osd restart during luminous upgrade?

2017-02-09 Thread Brian Andrus
l client >> having >> > a hard time locating the objects it needs from a Luminous cluster. >> >> In this case the change would be internal to a single OSD and have no >> effect on the client/osd interaction or placement of objects. >>

Re: [ceph-users] Latency between datacenters

2017-02-08 Thread Brian Andrus
tions, as well as the effects of latency on your monitors. In some cases I'd consider trying to source another MON and running two separate clusters, but simply put, YMMV. > > Thanks in advance > Daniel -- Brian Andrus | Cloud Systems Engineer | DreamHost brian.and...@dreamhost.com |

Re: [ceph-users] Running 'ceph health' as non-root user

2017-02-01 Thread Brian ::
This is great - had no idea you could have this level of control with Ceph authentication. On Wed, Feb 1, 2017 at 12:29 PM, John Spray wrote: > On Wed, Feb 1, 2017 at 8:55 AM, Michael Hartz > wrote: >> I am running ceph as part of a Proxmox Virtualization cluster, which is >> doing great. >>
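
A sketch of that control: a read-only cephx key so the nagios user can run health checks without sudo (the key name and path are examples):

  ceph auth get-or-create client.nagios mon 'allow r' > /etc/ceph/ceph.client.nagios.keyring
  chown nagios /etc/ceph/ceph.client.nagios.keyring
  ceph --id nagios health   # run as the nagios user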

Re: [ceph-users] Running 'ceph health' as non-root user

2017-02-01 Thread Brian ::
And left out - your command line for the ceph checks in nagios should be prefixed by sudo 'sudo ceph health' server# su nagios $ ceph health Error initializing cluster client: Error('error calling conf_read_file: errno EACCES',) $sudo ceph health HEALTH_OK On Wed, Feb 1, 20

Re: [ceph-users] Running 'ceph health' as non-root user

2017-02-01 Thread Brian ::
Hi Michael, Install sudo on proxmox server and add an entry for nagios like: nagios ALL=(ALL) NOPASSWD:/usr/bin/ceph in a file in /etc/sudoers.d Brian On Wed, Feb 1, 2017 at 8:55 AM, Michael Hartz wrote: > I am running ceph as part of a Proxmox Virtualization cluster, which is doing >

Re: [ceph-users] Unique object IDs and crush on object striping

2017-01-31 Thread Brian Andrus
spXfLKKVU?t=9m14s [2] https://youtu.be/lG6eeUNw9iI?t=18m49s -- Brian Andrus Cloud Systems Engineer DreamHost, LLC ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] is docs.ceph.com down?

2017-01-19 Thread Brian Andrus
___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > I think your DNS cache may be preventing you from seeing the site at this point, as it appears the Ceph project guys have got the site back

Re: [ceph-users] Problems with http://tracker.ceph.com/?

2017-01-19 Thread Brian Andrus
aybe an issue with the ceph.com and tracker.ceph.com > website at the moment > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > -- Brian An

Re: [ceph-users] Calamari or Alternative

2017-01-13 Thread Brian Godette
We're using: https://github.com/rochaporto/collectd-ceph for time-series, with a slightly modified Grafana dashboard from the one referenced. https://github.com/Crapworks/ceph-dash for quick health status. Both took a small bit of modification to make them work with Jewel at the time, not

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-10 Thread Brian Andrus
On Mon, Jan 9, 2017 at 3:33 PM, Willem Jan Withagen wrote: > On 9-1-2017 23:58, Brian Andrus wrote: > > Sorry for spam... I meant D_SYNC. > > That term does not run any lights in Google... > So I would expect it has to O_DSYNC. > (https://www.sebastien-han.fr/blog/2014/10/1

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-09 Thread Brian Andrus
Sorry for spam... I meant D_SYNC. On Mon, Jan 9, 2017 at 2:56 PM, Brian Andrus wrote: > Hi Willem, the SSDs are probably fine for backing OSDs, it's the O_DSYNC > writes they tend to lie about. > > They may have a failure rate higher than enterprise-grade SSDs, but are > ot
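
The usual way to check whether an SSD honours synchronous writes is a small sync-write test, roughly as in the widely cited journal-test recipe; the device name is an example and writing to it is destructive:

  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=journal-test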

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-09 Thread Brian Andrus
t a very appealing lookout?? > > --WjW > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Brian Andrus Cloud Systems Engineer DreamHost, LLC ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Cluster pause - possible consequences

2017-01-04 Thread Brian Andrus
gt; > ___ > > > ceph-users mailing list > > > ceph-users@lists.ceph.com > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > > > > > -- > > > Questo messaggio e' stato analizzat

Re: [ceph-users] Pool Sizes

2017-01-04 Thread Brian Andrus
sses putting all his data in a single xattr or single > RADOS object would be the wrong way. > > P.S. Happy New Year! > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Brian Andr

Re: [ceph-users] radosgw setup issue

2017-01-04 Thread Brian Andrus
Numerical result out of range > >> 2016-12-22 17:36:47.055876 7f084beeb9c0 0 failure in zonegroup > create_default: ret -34 (34) Numerical result out of range > >> 2016-12-22 17:36:47.055970 7f084beeb9c0 1 -- 39.0.16.9:0/1011033520 > mark_down 0x7f084c8e9480 -- 0x7f084c8ec0f0 > >> 2016-12-22 17:36:47.056169 7f084beeb9c0 1 -- 39.0.16.9:0/1011033520 > mark_down_all > >> 2016-12-22 17:36:47.056263 7f084beeb9c0 1 -- 39.0.16.9:0/1011033520 > shutdown complete. > >> 2016-12-22 17:36:47.056426 7f084beeb9c0 -1 Couldn't init storage > provider (RADOS) > >> > >> > >> > >> I did not create the pools for rgw, as they get created automatically. > few weeks back, I could setup RGW on jewel successfully. But this time I am > not able to see any obvious issues which I can fix. > >> > >> > >> [0] http://docs.ceph.com/docs/jewel/radosgw/config/ > >> > >> Thanks in advance, > >> Nitin > >> > >> ___ > >> ceph-users mailing list > >> ceph-users@lists.ceph.com > >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Brian Andrus Cloud Systems Engineer DreamHost, LLC ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Unbalanced OSD's

2017-01-03 Thread Brian Andrus
On Mon, Jan 2, 2017 at 4:25 AM, Jens Dueholm Christensen wrote: > On Friday, December 30, 2016 07:05 PM Brian Andrus wrote: > > > We have a set it and forget it cronjob setup once an hour to keep things > a bit more balanced. > > > > 1 * * * * /bin/bash /home/briana/

Re: [ceph-users] Unbalanced OSD's

2016-12-30 Thread Brian Andrus
weight-by-utilization' command or do it manually with 'ceph osd reweight > X 0-1' > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Brian Andrus Cloud Systems Engineer D
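
The manual equivalent of that cron job, sketched (110 is the overload threshold as a percentage of mean utilisation):

  ceph osd test-reweight-by-utilization 110   # dry run, shows what would change
  ceph osd reweight-by-utilization 110        # apply; re-run periodically as data grows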

Re: [ceph-users] How can I ask to Ceph Cluster to move blocks now when osd is down?

2016-12-27 Thread Brian Andrus
s, > Stéphane > -- > Stéphane Klein > blog: http://stephane-klein.info > cv : http://cv.stephane-klein.info > Twitter: http://twitter.com/klein_stephane > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://li

Re: [ceph-users] ceph and rsync

2016-12-16 Thread Brian ::
The fact that you are all SSD I would do exactly what Wido said - gracefully remove the OSD and gracefully bring up the OSD on the new SSD. Let Ceph do what its designed to do. The rsync idea looks great on paper - not sure what issues you will run into in practise. On Fri, Dec 16, 2016 at 12:38

Re: [ceph-users] CEPH mirror down again

2016-11-25 Thread Andrus, Brian Contractor
Hmm. Apparently download.ceph.com = us-west.ceph.com And there is no repomd.xml on us-east.ceph.com This seems to happen a little too often for something that is stable and released. Makes it seem like the old BBS days of “I want to play DOOM, so I’m shutting the services down” Brian Andrus

Re: [ceph-users] how possible is that ceph cluster crash

2016-11-19 Thread Brian ::
HI Lionel, Mega Ouch - I've recently seen the act of measuring power consumption in a data centre (they clamp a probe onto the cable for an AMP reading seemingly) take out a cabinet which had *redundant* power feeds - so anything is possible I guess. Regards Brian On Sat, Nov 19, 2016 at

Re: [ceph-users] how possible is that ceph cluster crash

2016-11-18 Thread Brian ::
these things happen >> >> http://www.theregister.co.uk/2016/11/15/memset_power_cut_service_interruption/ >> >> We had customers who had kit in this DC. >> >> To use your analogy, it's like crossing the road at traffic lights but not >> checking cars ha

Re: [ceph-users] how possible is that ceph cluster crash

2016-11-18 Thread Brian ::
This is like your mother telling you not to cross the road when you were 4 years of age but not telling you it was because you could be flattened by a car :) Can you expand on your answer? If you are in a DC with A/B power, redundant UPS, dual feed from the electric company, onsite generators, dual PSU

Re: [ceph-users] rgw print continue and civetweb

2016-11-14 Thread Brian Andrus
/12640 > > Can anyone please clarify whether civetweb support the default > 100-continue setting? thx will > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listin

Re: [ceph-users] Instance filesystem corrupt

2016-10-27 Thread Brian ::
What is the issue exactly? On Fri, Oct 28, 2016 at 2:47 AM, wrote: > I think this issue may not related to your poor hardware. > > > > Our cluster has 3 Ceph monitor and 4 OSD. > > > > Each server has > > 2 cpu ( Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz ) , 32 GB memory > > OSD nodes has 2 SSD

Re: [ceph-users] ceph website problems?

2016-10-11 Thread Brian ::
Looks like they are having major challenges getting that ceph cluster running again.. Still down. On Tuesday, October 11, 2016, Ken Dreyer wrote: > I think this may be related: > http://www.dreamhoststatus.com/2016/10/11/dreamcompute-us-east-1-cluster-service-disruption/ > > On Tue, Oct 11, 2016

Re: [ceph-users] too many PGs per OSD (326 > max 300) warning when ALL PGs are 256

2016-10-10 Thread Andrus, Brian Contractor
, then, it is wiser to have a very low default for those so the ceph-deploy tool doesn't assign a large value to something that will merely hold control or other metadata? Brian Andrus ITACS/Research Computing Naval Postgraduate School Monterey, California voice: 831-656-6238 From: David Turner

[ceph-users] too many PGs per OSD (326 > max 300) warning when ALL PGs are 256

2016-10-10 Thread Andrus, Brian Contractor
min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 6218 flags hashpspool stripe_width 0 so why would the warning show up, and how do I get it to go away and stay away? Brian Andrus ITACS/Research Computing Naval Postgraduate School Monterey, California voice: 831

[ceph-users] The principle of config Federated Gateways

2016-10-05 Thread Brian Chang-Chien
Hi all, I have a question about configuring federated gateways. Why is data and metadata synced only between zones in the same region, while only metadata is synced between zones in different regions? In different regions, zone data can't be synced; can anyone tell me the concern behind this? Thx

Re: [ceph-users] too many PGs per OSD when pg_num = 256??

2016-09-22 Thread Andrus, Brian Contractor
Hmm. Something happened then. I only have 20 OSDs. What may cause that? Brian Andrus ITACS/Research Computing Naval Postgraduate School Monterey, California voice: 831-656-6238 From: David Turner [mailto:david.tur...@storagecraft.com] Sent: Thursday, September 22, 2016 10:04 AM To: Andrus

Re: [ceph-users] too many PGs per OSD when pg_num = 256??

2016-09-22 Thread Andrus, Brian Contractor
s.email 22 default.rgw.meta 23 default.rgw.buckets.index 24 default.rgw.buckets.data # ceph -s | grep -Eo '[0-9]+ pgs' 3520 pgs Brian Andrus ITACS/Research Computing Naval Postgraduate School Monterey, California voice: 831-656-6238 From: David Turner [mailto:david.tur...@storagecraft.com] Sent:
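
Rough arithmetic for how 256-PG pools still trip the warning: the check counts PG replicas per OSD across all pools. Assuming, purely for illustration, size 2 on every pool here:

  3520 PGs x 2 replicas / 20 OSDs = 352 PG replicas per OSD

which is in the same ballpark as the reported 326 (the exact figure depends on each pool's actual replica count) and well over the default mon_pg_warn_max_per_osd of 300.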

[ceph-users] too many PGs per OSD when pg_num = 256??

2016-09-22 Thread Andrus, Brian Contractor
num pgp_num: 256 How does something like this happen? I did create a radosgw several weeks ago and have put a single file in it for testing, but that is it. It only started giving the warning a couple days ago. Brian Andrus ITACS/Research Computing Naval Postgraduate School Monterey, Califor
