[ceph-users] kvm live migrate with ceph

2013-10-14 Thread Jon
't think rbds are able to handle copy-on-write in the same way kvm does, so maybe a clustered filesystem approach is the ideal way to go. Thanks for your input. I think I'm just missing some piece... I just don't grok... Best Regards, Jon A

Re: [ceph-users] kvm live migrate with ceph

2013-10-16 Thread Jon
ing from format 1 to format 2 images? (I think I read something about not being able to use both at the same time...) Thanks Again, Jon A On Mon, Oct 14, 2013 at 4:42 PM, Michael Lowe wrote: > I live migrate all the time using the rbd driver in qemu, no problems. > Qemu will issue a flush a
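
A minimal sketch of what this looks like in practice (image, guest and destination host names are placeholders, not from the thread); because the disk lives in RADOS rather than on either hypervisor, no block migration is needed, only a shared ceph.conf and client keyring on both hosts:

    # format 2 images support cloning/snapshot features; format 1 is the legacy layout
    rbd create --image-format 2 --size 20480 libvirt-pool/vm01-disk0

    # ordinary libvirt live migration; the RBD image acts as the shared storage
    virsh migrate --live --persistent vm01 qemu+ssh://hypervisor02/system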

[ceph-users] Perl Bindings for Ceph

2013-10-20 Thread Jon
iple vms as readonly? I'm thinking like readonly iso images converted to rbds? (is it even possible to convert an iso to an image?) Thanks for your help. Best Regards, Jon A [1] http://www.spinics.net/lists/ceph-devel/msg04147.html [2] https://github.com/three18ti/PrepVM-App

Re: [ceph-users] Perl Bindings for Ceph

2013-10-20 Thread Jon
re the rbd was mounted. Guess there's only one way to find out. Thanks for your feedback! Best Regards, Jon A [3] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-May/001913.html On Sun, Oct 20, 2013 at 10:26 PM, Michael Lowe wrote: > 1. How about enabling trim/discard support

[ceph-users] Ceph::RADOS list_pools causes segfault

2013-11-04 Thread Jon
his error is coming from the C function "list_pools_c" because of the nature of the error. I was hoping someone could help me debug the error and possibly point me in a direction for extending librbd to manage images. Thanks, Jon A [1] http://www.spinics.net/lists/ceph

[ceph-users] Convert iso to bootable rbd

2014-01-26 Thread Jon
Hello, I'm using kvm and libvirt with ceph. At the moment I'm attaching isos for the initial boot/install as a virtual cdrom. Is it possible to convert an iso image to an rbd and have a vm boot from the rbd like a standard image/cd? Tha
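
One approach, sketched with placeholder pool/image names: qemu-img can write directly to RBD, or the rbd CLI can import the file, after which the image can be attached to the guest (the ISO filesystem itself stays read-only):

    # copy the ISO contents into an RBD image
    qemu-img convert -f raw -O raw ubuntu-14.04.iso rbd:libvirt-pool/installer
    # equivalent import using the rbd tool
    rbd import ubuntu-14.04.iso libvirt-pool/installer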

[ceph-users] Mounting a shared block device on multiple hosts

2013-05-28 Thread Jon
e performance gains using more smaller RBDs vs fewer larger RBDs? Thanks for any feedback, Jon A

Re: [ceph-users] Mounting a shared block device on multiple hosts

2013-05-29 Thread Jon
is running as a service? Thanks for your help, Jon A On May 29, 2013 12:47 AM, "Igor Laskovy" wrote: > Hi Jon, I already mentioned multiple times here - RBD just a block device. > You can map it to multiple hosts, but before doing dd if=/dev/zero > of=/media/tmp/test you have

Re: [ceph-users] Mounting a shared block device on multiple hosts

2013-05-29 Thread Jon
nology like GlusterFS, or mount the rbd on one host and export it as an NFS mount? (Would really like to avoid NFS if at all possible, but if that's the solution, then that's the solution). Thanks for your patience with me. I really feel like an idiot asking, but I really have nowhere els
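
For reference, the "map it on one host and re-export it" option looks roughly like this (pool, image and paths are placeholders); the key constraint is that a non-clustered filesystem such as XFS may only be mounted on one host at a time:

    rbd map datastore/shared01                 # appears as e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0                         # ordinary, single-host filesystem
    mount /dev/rbd0 /export/shared01
    echo '/export/shared01 *(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra                               # other hosts then mount it over NFS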

Re: [ceph-users] Mounting a shared block device on multiple hosts

2013-05-29 Thread Jon
Awesome, thanks Florian! I think this is exactly the information I needed. Best Regards, Jon A On May 29, 2013 12:17 PM, "Smart Weblications GmbH - Florian Wiessner" < f.wiess...@smart-weblications.de> wrote: > Hi Jon, > > Am 29.05.2013 03:24, schrieb Jon: > >

Re: [ceph-users] Mounting a shared block device on multiple hosts

2013-05-31 Thread Jon
. My hypervisors have small disks. Maybe it makes sense to mount each ~/datastores as a unique RBD; at least in my mind, using the hypervisors to copy data from one rbd to another seems like it would be slow. CephFS looks like it might do exactly what I need, but I'm certainly open to any sug

[ceph-users] Help Recovering Ceph cluster

2013-06-09 Thread Jon
in a directory where I ran ceph deploy, and I know about the /etc/ceph/ceph.conf file, but there seems to be some other config that the cluster is pulling from. Maybe I'm mistaken, but there are no osds in my ceph.conf. Thanks for all your help.

[ceph-users] Help Recovering Ceph cluster

2013-06-27 Thread Jon
file? Based on my interpretation of the docs and upstart scripts, I don't think so; the respective daemons start on boot... Thanks for your time, Jon A -- Forwarded message -- From: Jon Date: Sun, Jun 9, 2013 at 12:36 PM Subject: [ceph-users] Help Recovering Ceph cluster To: ceph-

Re: [ceph-users] Help Recovering Ceph cluster

2013-07-02 Thread Jon
running... Now if I could figure out the exact same issue on my other host... Thanks, Jon A On Thu, Jun 27, 2013 at 10:34 AM, Jon wrote: > Hello All, > > I've made some progress, but I'm still having a bit of difficulty. > > I've got all my monitors responding now,

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Jon Meacham
If hammer and firefly bugfix releases will still be packaged for these distros, I don't see a problem with this. Anyone who is operating an existing LTS deployment on CentOS 6, etc. will continue to receive fixes for said LTS release. Jon From: ceph-users on behalf of Jan “Zviratko” Sch

Re: [ceph-users] [Ceph-community] Ceph t-shirts are available

2014-04-08 Thread Jon Mason
I can help, as I would be willing to do the US side of things. Thanks, Jon On Sat, Mar 29, 2014 at 7:35 AM, Loic Dachary wrote: > Hi Ceph, > > The Ceph User Committee is pleased to announce the availability of Ceph > T-Shirts at > >http://ceph.myshopify.com/products/ceph

[ceph-users] Problems with ceph_rest_api after update

2015-10-22 Thread Jon Heese
get_command_descriptions" actually does: ret, outbuf, outs = json_command(cluster, target, prefix='get_command_descriptions', timeout=30) Is this a known issue? If not, does anyone have any sugges

Re: [ceph-users] Problems with ceph_rest_api after update

2015-10-22 Thread Jon Heese
ould do that: public network = 10.197.5.0/24 # skinny pipe, mgmt & MON traffic cluster network = 10.174.1.0/24 # fat pipe, OSD traffic But that doesn't seem to be the case -- iftop and netstat show that little/no OSD communication is happening over the 10.174.1 network and it's all

[ceph-users] Proper Ceph network configuration

2015-10-23 Thread Jon Heese
raffic cluster network = 10.174.1.0/24 # fat pipe, OSD traffic But that doesn't seem to be the case -- iftop and netstat show that little/no OSD communication is happening over the 10.174.1 network and it's all happening over the 10.197.5 network. What configuration should we be ru
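
For reference, the intent of those two settings is captured by a [global] section like the sketch below (subnets taken from the post). Monitors and clients only ever talk on the public network, so the cluster network carries just OSD replication, recovery and heartbeat traffic; each OSD host needs an address in both subnets, and ceph osd dump shows which public and cluster addresses every OSD actually bound to:

    [global]
        public network  = 10.197.5.0/24    # clients, MONs, OSD front side
        cluster network = 10.174.1.0/24    # OSD replication/recovery only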

Re: [ceph-users] Proper Ceph network configuration

2015-10-23 Thread Jon Heese
aren’t currently on that network (we were treating it as an OSD/Client network). I guess I need to put them on that network…? Thanks. Jon Heese Systems Engineer INetU Managed Hosting P: 610.266.7441 x 261 F: 610.266.7434 www.inetu.net

[ceph-users] FAILED assert(p.same_interval_since) and unusable cluster

2017-10-30 Thread Jon Light
Hello, I have three OSDs that are crashing on start with a FAILED assert(p.same_interval_since) error. I ran across a thread from a few days ago about the same issue and a ticket was created here: http://tracker.ceph.com/issues/21833. A very overloaded node in my cluster OOM'd many times which ev

Re: [ceph-users] FAILED assert(p.same_interval_since) and unusable cluster

2017-11-01 Thread Jon Light
I'm currently running 12.2.0. How should I go about applying the patch? Should I upgrade to 12.2.1, apply the changes, and then recompile? I really appreciate the patch. Thanks On Wed, Nov 1, 2017 at 11:10 AM, David Zafman wrote: > > Jon, > > If you are able please test
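
A rough sketch of building a patched ceph-osd from source (the tag and patch file name are placeholders, not from the thread):

    git clone https://github.com/ceph/ceph.git && cd ceph
    git checkout v12.2.1
    git submodule update --init --recursive
    ./install-deps.sh
    git apply ../same-interval-since-fix.patch   # the patch under discussion
    ./do_cmake.sh && cd build && make -j$(nproc) ceph-osd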

Re: [ceph-users] FAILED assert(p.same_interval_since) and unusable cluster

2017-11-02 Thread Jon Light
nt installation? Thanks On Wed, Nov 1, 2017 at 11:39 AM, Jon Light wrote: > I'm currently running 12.2.0. How should I go about applying the patch? > Should I upgrade to 12.2.1, apply the changes, and then recompile? > > I really appreciate the patch. > Thanks > > On Wed, Nov

Re: [ceph-users] FAILED assert(p.same_interval_since) and unusable cluster

2017-11-08 Thread Jon Light
Thanks for the instructions Michael, I was able to successfully get the patch, build, and install. Unfortunately I'm now seeing "osd/PG.cc: 5381: FAILED assert(info.history.same_interval_since != 0)". Then the OSD crashes. On Sat, Nov 4, 2017 at 5:51 AM, Michael wrote: > Jon

[ceph-users] Moving OSDs between hosts

2018-03-16 Thread Jon Light
Hi all, I have a very small cluster consisting of 1 overloaded OSD node and a couple MON/MGR/MDS nodes. I will be adding new OSD nodes to the cluster and need to move 36 drives from the existing node to a new one. I'm running Luminous 12.2.2 on Ubuntu 16.04 and everything was created with ceph-dep
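
An outline of the move, assuming the default osd crush update on start = true so each OSD re-registers under its new host in the CRUSH map when it starts (OSD ids and device names are placeholders):

    ceph osd set noout                  # avoid rebalancing while drives are in transit
    systemctl stop ceph-osd@12          # on the old host, for each OSD being moved
    # physically move the drive, then on the new host:
    ceph-volume lvm activate --all      # or ceph-disk activate /dev/sdX1 on ceph-disk OSDs
    ceph osd unset noout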

[ceph-users] PGs stuck activating after adding new OSDs

2018-03-27 Thread Jon Light
Hi all, I'm adding a new OSD node with 36 OSDs to my cluster and have run into some problems. Here are some of the details of the cluster: 1 OSD node with 80 OSDs 1 EC pool with k=10, m=3 pg_num 1024 osd failure domain I added a second OSD node and started creating OSDs with ceph-deploy, one by
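
For context, a pool like that would have been created roughly as below (profile name is a placeholder). With k=10, m=3 each PG spans 13 OSDs, so per-OSD PG counts jump when the CRUSH map changes, and luminous' PG-per-OSD overdose protection (mon_max_pg_per_osd times osd_max_pg_per_osd_hard_ratio) can leave PGs stuck activating until the limit is raised or the PGs spread out:

    ceph osd erasure-code-profile set ec-k10m3 k=10 m=3 crush-failure-domain=osd
    ceph osd pool create ecpool 1024 1024 erasure ec-k10m3
    ceph osd df tree      # check how many PGs each OSD is carrying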

Re: [ceph-users] PGs stuck activating after adding new OSDs

2018-03-27 Thread Jon Light
, Mar 27, 2018 at 2:29 PM, Peter Linder wrote: > I've had similar issues, but I think your problem might be something else. > Could you send the output of "ceph osd df"? > > Other people will probably be interested in what version you are using as > well. > > Den 2

Re: [ceph-users] PGs stuck activating after adding new OSDs

2018-03-29 Thread Jon Light
u, Mar 29, 2018 at 2:50 AM, Jakub Jaszewski wrote: > Hi Jon, can you reweight one OSD to default value and share outcome of "ceph > osd df tree; ceph -s; ceph health detail" ? > > Recently I was adding new node, 12x 4TB, one disk at a time and faced > activating+remapp

Re: [ceph-users] ceph-users Digest, Vol 50, Issue 1

2017-03-01 Thread Jon Wright
On 02/28/2017 09:53 PM, WRIGHT, JON R (JON R) wrote: >I currently have a situation where the monitors are running at 100% CPU, >and can't run any commands because authentication times out after 300 >seconds. > >I stopped the leader,

[ceph-users] ceph-mds failure replaying journal

2018-10-28 Thread Jon Morby
We accidentally found ourselves upgraded from 12.2.8 to 13.2.2 after a ceph-deploy install went awry (we were expecting it to upgrade to 12.2.9 and not jump a major release without warning) Anyway .. as a result, we ended up with an mds journal error and 1 daemon reporting as damaged Having g

Re: [ceph-users] ceph-mds failure replaying journal

2018-10-30 Thread Jon Morby
mds_wipe_sessions back to 0 Jon I can’t say a big enough thank you to @yanzheng for their assistance though! > On 29 Oct 2018, at 11:13, Jon Morby (Fido) wrote: > > I've experimented and whilst the downgrade looks to be working, you end up > with errors regarding unsupported feature "

Re: [ceph-users] ceph-mds failure replaying journal

2018-10-31 Thread Jon Morby
13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth()) shortly after I set max_mds back to 3 > On 30 Oct 2018, at 18:50, Jon Morby wrote: > > So a big thank you to @yanzheng for his help getting this back online > > The quick answer to what we did was downgrade to 13.2.1 as 13.2.

[ceph-users] Filestore update script?

2016-06-07 Thread WRIGHT, JON R (JON R)
I'm trying to recover an OSD after running xfs_repair on the disk. It seems to be ok now. There is a log message that includes the following: "Please run the FileStore update script before starting the OSD, or set filestore_update_to to 4" What is the FileStore update script? Google search d
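
The option named in the log message can be set in ceph.conf for just the affected OSD while it upgrades its on-disk format; a sketch (the OSD id is a placeholder):

    [osd.3]
        filestore update to = 4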

Re: [ceph-users] Filestore update script?

2016-06-08 Thread WRIGHT, JON R (JON R)
Wido, Thanks for that advice, and I'll follow it. To your knowledge, is there a FileStore Update script around somewhere? Jon On 6/8/2016 3:11 AM, Wido den Hollander wrote: On 7 June 2016 at 23:08, "WRIGHT, JON R (JON R)" wrote: I'm trying to recover an OSD after r

[ceph-users] jewel blocked requests

2016-09-12 Thread WRIGHT, JON R (JON R)
Since upgrading to Jewel from Hammer, we've started to see HEALTH_WARN because of 'blocked requests > 32 sec'. Seems to be related to writes. Has anyone else seen this? Or can anyone suggest what the problem might be? Thanks!
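
The usual first step is to find out which OSDs the slow requests are sitting on and what they are waiting for (the OSD id is a placeholder; the daemon commands run on the node hosting that OSD):

    ceph health detail                        # names the OSDs with blocked requests
    ceph daemon osd.7 dump_ops_in_flight      # what those ops are currently waiting on
    ceph daemon osd.7 dump_historic_ops       # recently completed slow operations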

Re: [ceph-users] jewel blocked requests

2016-09-13 Thread WRIGHT, JON R (JON R)
Yes, I do have old clients running. The clients are all vms. Is it typical that vm clients have to be rebuilt after a ceph upgrade? Thanks, Jon On 9/12/2016 4:05 PM, Wido den Hollander wrote: On 12 September 2016 at 18:47, "WRIGHT, JON R (JON R)" wrote: Since upgrading to

Re: [ceph-users] jewel blocked requests

2016-09-13 Thread WRIGHT, JON R (JON R)
Yes, vms and volumes existed across the ceph releases. But the vms were rebooted and the volumes reattached following the upgrade. The vms were all Ubuntu 14.04 before and after the upgrade. Thanks, Jon On 9/12/2016 8:28 PM, shiva rkreddy wrote: By saying "old clients" did yo

Re: [ceph-users] jewel blocked requests

2016-09-13 Thread WRIGHT, JON R (JON R)
VM Client OS: ubuntu 14.04 Openstack: kilo libvirt: 1.2.12 nova-compute-kvm: 1:2015.1.4-0ubuntu2 Jon On 9/13/2016 11:17 AM, Wido den Hollander wrote: On 13 September 2016 at 15:58, "WRIGHT, JON R (JON R)" wrote: Yes, I do have old clients running. The clients are all v

Re: [ceph-users] jewel blocked requests

2016-09-19 Thread WRIGHT, JON R (JON R)
d and are replacing a disk, and I think the blocked requests may have all been associated with PGs that included the bad OSD/disk. Would this make sense? Jon On 9/15/2016 3:49 AM, Wido den Hollander wrote: On 13 September 2016 at 18:54, "WRIGHT, JON R (JON R)" wrote: VM Cl

Re: [ceph-users] [EXTERNAL] Re: jewel blocked requests

2016-09-22 Thread WRIGHT, JON R (JON R)
rate. Most of the current messages are associated with two hosts. Jon On 9/19/2016 7:45 PM, Will.Boege wrote: Sorry make that 'ceph tell osd.* version' On Sep 19, 2016, at 2:55 PM, WRIGHT, JON R (JON R) wrote: When you say client, we're actually doing everything throu

[ceph-users] monitors at 100%; cluster out of service

2017-02-28 Thread WRIGHT, JON R (JON R)
r correcting the mtu value. Also, we are using a hyperconverged architecture where the same host runs a monitor and multiple OSDs. Any thoughts on recovery would be greatly appreciated. Jon

[ceph-users] hb in and hb out from pg dump

2016-02-04 Thread WRIGHT, JON R (JON R)
New ceph user, so a basic question :) I have a newly setup Ceph cluster. Seems to be working ok. But . . . I'm looking at the output of ceph pg dump, and I see that in the osdstat list at the bottom of the output, there are empty brackets [] in the 'hb out' column for all of the OSDs. It
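
The osdstat table can also be dumped on its own, which makes the heartbeat peer lists easier to read than the full pg dump output (a sketch; the exact field names vary by release):

    ceph pg dump osds -f json-pretty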

[ceph-users] pg dump question

2016-02-04 Thread WRIGHT, JON R (JON R)
New ceph user, so a basic question I have a newly setup Ceph cluster. Seems to be working ok. But . . . I'm looking at the output of ceph pg dump, and I see that in the osdstat list at the bottom of the output, there are empty brackets [] in the 'hb out' column for all of the OSDs. It seem

[ceph-users] erasure code backing pool, replication cache, and openstack

2016-02-09 Thread WRIGHT, JON R (JON R)
ckend configuration should reference the backing pool or the cache tier? Because of the redirected traffic, I'm not sure that it matters. Jon

[ceph-users] Adding another radosgw node

2014-09-22 Thread Jon Kåre Hellan
else? Regards Jon Jon Kåre Hellan, UNINETT AS, Trondheim, Norway

[ceph-users] where to download 0.87 debs?

2014-10-30 Thread Jon Kåre Hellan
Will there be debs? On 30/10/14 10:37, Irek Fasikhov wrote: Hi. Use http://ceph.com/rpm-giant/ 2014-10-30 12:34 GMT+03:00 Kenneth Waegeman: Hi, Will http://ceph.com/rpm/ also be updated to have the giant packages? Thanks Kenneth

[ceph-users] Stuck OSD

2014-11-19 Thread Jon Kåre Hellan
m/Y42GvGjr Can anybody help me understand what is going on? If the process had died instead, would a new one have been started automatically? Regards Jon Jon Kåre Hellan, UNINETT AS, Trondheim, Norway

[ceph-users] OSD in uninterruptible sleep

2014-11-21 Thread Jon Kåre Hellan
now, but we needed a test load. We do not intend to use cephfs in production. Obviously, we would use physical OSD nodes if we were to decide to deploy ceph in production. Jon Jon Kåre Hellan, UNINETT AS, Trondheim, Norway

Re: [ceph-users] Monitors repeatedly calling for new elections

2014-12-09 Thread Jon Kåre Hellan
managed to synch up. If not, NTP has had no effect on your clock. Jon Jon Kåre Hellan, UNINETT AS, Trondheim Norway Is this good enough resolution? $ for node in $nodes; do ssh tvsa${node} sudo date --rfc-3339=ns; done 2014-12-09 09:15:39.404292557-08:00 2014-12-09 09:15:39.521762397-08
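
A quick way to check whether each monitor host has actually synced, rather than eyeballing date output (hostnames are placeholders modeled on the post); ntpq marks the selected upstream peer with an asterisk:

    for node in tvsa01 tvsa02 tvsa03; do ssh $node ntpq -pn; done
    ceph health detail | grep -i skew        # ceph's own view of monitor clock skew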

[ceph-users] Ubuntu repo's broken

2016-10-16 Thread Jon Morby (FidoNet)
-assume-yes -q --no-install-recommends install -o Dpkg::Options::=--force-confnew ceph-osd ceph-mds ceph-mon radosgw Is there any eta for when this might be fixed? — Jon Morby FidoNet - the internet made simple! tel: 0345 004 3050 / fax: 0345 004 3051 twitter: @fido | skype://jmorby | web:

Re: [ceph-users] Ubuntu repo's broken

2016-10-17 Thread Jon Morby (Fido)
08B73419AC32B4E966C1A330E84AC2C0460F3994 uses weak digest algorithm (SHA1) - On 17 Oct, 2016, at 08:19, Wido den Hollander w...@42on.com wrote: >> On 16 October 2016 at 11:57, "Jon Morby (FidoNet)" wrote: >> >> >> Morning >> >> It’s been a few days now

Re: [ceph-users] debian jewel jessie packages missing from Packages file

2016-10-17 Thread Jon Morby (FidoNet)
Hi Dan The repos do indeed seem to be messed up …. it’s been like it for at least 4 days now (since everything went offline) I raised it via IRC over the weekend and also on this list on Saturday … All the mirrors seem to be affected too (GiGo I guess) :( Jon > On 17 Oct 2016, at 11:33,

Re: [ceph-users] debian jewel jessie packages missing from Packages file

2016-10-17 Thread Jon Morby (FidoNet)
Thanks Yes … working again … *phew* :) > On 17 Oct 2016, at 14:01, Dan Milon wrote: > > debian/jessie/jewel is fine now. — Jon Morby FidoNet - the internet made simple! tel: 0345 004 3050 / fax: 0345 004 3051 twitter: @fido | skype://jmorby | web: https://www.fido.net sign

Re: [ceph-users] ceph-mds failure replaying journal

2018-10-29 Thread Jon Morby (Fido)
he best / recommended way of doing this downgrade across our estate? - On 29 Oct, 2018, at 08:19, Yan, Zheng wrote: > We backported a wrong patch to 13.2.2. downgrade ceph to 13.2.1, then run > 'ceph mds repaired fido_fs:1'. > Sorry for the trouble > Yan,

Re: [ceph-users] ceph-mds failure replaying journal

2018-10-29 Thread Jon Morby (Fido)
memdb 1/ 5 kinetic 1/ 5 fuse 1/ 5 mgr 1/ 5 mgrc 1/ 5 dpdk 1/ 5 eventtrace 99/99 (syslog threshold) -1/-1 (stderr threshold) max_recent 1 max_new 1000 log_file /var/log/ceph/ceph-mds.mds04.log --- end dump of recent events --- - On 29 Oct, 2018, at 09:25, Jon Morby wrote

Re: [ceph-users] ceph-mds failure replaying journal

2018-10-29 Thread Jon Morby (Fido)
w viable it would be as an NFS replacement There's 26TB of data on there, so I'd rather not have to go off and redownload it all .. but losing it isn't the end of the world (but it will piss off a few friends) Jon - On 29 Oct, 2018, at 09:54, Zheng Yan wrote: > On Mon, Oct 2