Re: [ceph-users] pg 17.36 is active+clean+inconsistent head expected clone 1 missing?

2018-11-16 Thread Steve Anthony

Re: [ceph-users] unable to remove phantom snapshot for object, snapset_inconsistency

2018-07-06 Thread Steve Anthony
In case anyone else runs into this, I resolved it by running removeall on both bad OSDs and then ceph pg repair, which copied the good object back. -Steve On 06/27/2018 06:17 PM, Steve Anthony wrote: In the process of trying to repair snapshot inconsistencies associated with the issues in this...
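A minimal sketch of that sequence; the OSD ID is illustrative, the object name is taken from the related snapset thread, and the PG ID is a placeholder (the OSD must be stopped before ceph-objectstore-tool is run against its data path):

    systemctl stop ceph-osd@313
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-313 \
        rb.0.2479b45.238e1f29.00125cbb removeall
    systemctl start ceph-osd@313
    # repeat for the second bad OSD, then let repair copy the good replica back
    ceph pg repair <pgid>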

[ceph-users] unable to remove phantom snapshot for object, snapset_inconsistency

2018-06-27 Thread Steve Anthony
  "max": 0,     "pool": -9.2233720368548e+18,     "namespace": ""   }     },     "watchers": {       }   },   "snapset": {     "snap_context": {   "seq": 4896,   "snaps": [     4896   ]     },     "head_exists": 1,     "clones": [       ]   }     },     {   "osd": 313,   "primary": true,   "errors": [       ],   "size": 4194304,   "omap_digest": "0x",   "data_digest": "0x0d99bd77",   "object_info": {     "oid": {   "oid": "rb.0.2479b45.238e1f29.00125cbb",   "key": "",   "snapid": -2,   "hash": 2016338238,   "max": 0,   "pool": 2,   "namespace": ""     },     "version": "943431'2032262",     "prior_version": "942275'2030618",     "last_reqid": "osd.36.0:48196",     "user_version": 2024222,     "size": 4194304,     "mtime": "2018-05-13 08:58:21.359912",     "local_mtime": "2018-05-13 08:58:21.537637",     "lost": 0,     "flags": [   "dirty",   "data_digest",   "omap_digest"     ],     "legacy_snaps": [       ],     "truncate_seq": 0,     "truncate_size": 0,     "data_digest": "0x0d99bd77",     "omap_digest": "0x",     "expected_object_size": 4194304,     "expected_write_size": 4194304,     "alloc_hint_flags": 0,     "manifest": {   "type": 0,   "redirect_target": {     "oid": "",     "key": "",     "snapid": 0,     "hash": 0,     "max": 0,     "pool": -9.2233720368548e+18,     "namespace": ""   }     },     "watchers": {       }   },   "snapset": {     "snap_context": {   "seq": 4896,   "snaps": [     4896   ]     },     "head_exists": 1,     "clones": [       ]   }     }   ]     }   ] } -- Steve Anthony LTS HPC Senior Analyst Lehigh University sma...@lehigh.edu ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] FAILED assert(p != recovery_info.ss.clone_snaps.end())

2018-06-27 Thread Steve Anthony
t;max":0,"pool":2,"namespace":"","max":0}' remove-clone-metadata 4896 Removal of clone 1320 complete Use pg repair after OSD restarted to correct stat information Once that's done, starting the OSD and repairing the PG finally marked it as clean.

Re: [ceph-users] FAILED assert(p != recovery_info.ss.clone_snaps.end())

2018-06-14 Thread Steve Anthony
g_shard_t, PushOp const&, PushReplyOp*, > ObjectStore::Transaction*)+0x2da) [0x5574246715ca] > 4: (ReplicatedBackend::_do_push(boost::intrusive_ptr)+0x12e) > [0x5574246717fe] > 5: > (ReplicatedBackend::_handle_message(boost::intrusive_ptr)+0x2c1) > [0x557424680d71] > 6: (PGBackend::handle_message(boost::

Re: [ceph-users] OSD crash loop - FAILED assert(recovery_info.oi.snaps.size())

2017-06-02 Thread Steve Anthony
...Thanks! -Steve On 05/18/2017 01:06 PM, Steve Anthony wrote: Hmmm, after crashing for a few days every 30 seconds it's apparently running normally again. Weird. I was thinking since it's looking for a snapshot object, maybe re-enabling snaptrimming and removing all the...
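If the idea is to drop the snapshots entirely so the missing snapshot object no longer matters, the rbd-level operation would be something along these lines (image name is illustrative, and snap purge removes every snapshot of the image, so it is not reversible):

    rbd snap ls rbd/myimage        # list the snapshots first
    rbd snap purge rbd/myimage     # remove all snapshots of the image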

Re: [ceph-users] OSD crash loop - FAILED assert(recovery_info.oi.snaps.size())

2017-05-18 Thread Steve Anthony
that point this time, but I'm going to need to cycle more OSDs in and out of the cluster, so if it happens again I might try that and update. Thanks! -Steve On 05/17/2017 03:17 PM, Gregory Farnum wrote: On Wed, May 17, 2017 at 10:51 AM Steve Anthony <sma...@lehigh...

[ceph-users] OSD crash loop - FAILED assert(recovery_info.oi.snaps.size())

2017-05-17 Thread Steve Anthony
...re or has any other ideas. Thanks for taking the time. -Steve

[ceph-users] download.ceph.com metadata problem?

2016-01-21 Thread Steve Anthony
...ckage: radosgw-agent
apt-cache policy ceph
ceph:
  Installed: 0.87.2-1~bpo70+1
  Candidate: 0.87.2-1~bpo70+1
  Version table:
 *** 0.87.2-1~bpo70+1 0
        100 /var/lib/dpkg/status
     0.80.7-1~bpo70+1 0
        100 http://debian.cc.lehigh.edu/debian/ wheezy-backports/main amd64 Packages
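A quick way to check whether the repository metadata itself is at fault (a sketch; the paths are the usual Debian locations and are not taken from the thread):

    grep -rh ceph /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null   # which ceph.com source is configured
    apt-get update 2>&1 | grep -iE 'ceph|err'                                  # watch for failures fetching its indexes
    apt-cache policy ceph                                                      # the ceph.com repo should reappear in the version table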

Re: [ceph-users] nfs over rbd problem

2015-12-24 Thread Steve Anthony
...ent: Action p_rbd_map_1_start_0 (6) confirmed on node2 (rc=4)
Dec 18 17:22:39 [2695] node2 crmd: warning: update_failcount: Updating failcount for p_rbd_map_1 on node2 after failed start: rc=1 (update=INFINITY, time=1450430559)
Dec 18 17:22:39 [2695] node2 crmd:...
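Since the failed start pushes the failcount to INFINITY, pacemaker will not retry the resource on that node until it is cleared; a sketch of the usual follow-up, assuming crmsh is in use (resource and node names are taken from the log above):

    crm resource failcount p_rbd_map_1 show node2    # confirm the failcount
    rbd showmapped                                   # check whether the image actually mapped on the node
    crm resource cleanup p_rbd_map_1                 # clear the failcount once the rbd map problem is fixed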

Re: [ceph-users] Removing OSD - double rebalance?

2015-11-30 Thread Steve Anthony
...ows it had that PG information.
>> My config is pretty vanilla, except for:
>> [osd]
>> osd recovery max active = 4
>> osd max backfills = 4
>> Thanks in advance, Carsten
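The sequence usually suggested to avoid moving data twice when retiring an OSD is to drain it via CRUSH first and only then remove it; a sketch, with osd.12 as a stand-in ID:

    ceph osd crush reweight osd.12 0    # drain: backfill moves the data off once
    ceph -w                             # wait for all PGs to return to active+clean
    ceph osd out 12
    # stop the ceph-osd daemon on its host once the drain completes
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12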

Re: [ceph-users] upgrading 0.94.5 to 9.2.0 notes

2015-11-20 Thread Steve Anthony
...systemctl stop ceph.target stops everything, as expected :) I haven't tested everything thoroughly yet, but has anyone seen the same issues? Thanks! Kenneth
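For reference, 9.2.0 (Infernalis) manages daemons through systemd targets and instanced units; a few illustrative commands, assuming the stock packaging:

    systemctl stop ceph.target          # every ceph daemon on the host
    systemctl start ceph-osd.target     # just the OSDs
    systemctl status ceph-osd@3         # a single OSD instance (id 3 is illustrative)
    systemctl list-units 'ceph*'        # see which ceph units exist on this host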

Re: [ceph-users] Can't activate osd in infernalis

2015-11-20 Thread Steve Anthony
...drwxr-x---.  9 167 167 4,0K 19. Nov 10:32 .
drwxr-xr-x. 28   0   0 4,0K 19. Nov 11:14 ..
drwxr-x---.  2 167 1676 10. Nov 13:06 bootstrap-mds
drwxr-x-...
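The 167:167 ownership in the listing is the unprivileged ceph user that Infernalis daemons run as (uid/gid 167 in the packages); when some paths under /var/lib/ceph are still root-owned after an upgrade, activation fails. A commonly used fix, sketched with default paths and an illustrative journal device:

    chown -R ceph:ceph /var/lib/ceph
    chown ceph:ceph /dev/sdb2    # only if a separate journal partition is used; device name is an assumption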

[ceph-users] Interesting postmortem on SSDs from Algolia

2015-06-17 Thread Steve Anthony
...s can be. Thought the list might find it interesting. https://blog.algolia.com/when-solid-state-drives-are-not-that-solid/ -Steve

Re: [ceph-users] How to backup hundreds or thousands of TB

2015-05-06 Thread Steve Anthony

Re: [ceph-users] Managing larger ceph clusters

2015-04-17 Thread Steve Anthony

Re: [ceph-users] Replication question

2015-03-12 Thread Steve Anthony

Re: [ceph-users] import-diff requires snapshot exists?

2015-03-03 Thread Steve Anthony
...2 ./foo.diff
rbd import-diff ./foo.diff backup/small
** rbd/small and backup/small are now consistent through snap2. import-diff automatically created backup/small@snap2 after importing all changes.
-- Jason Dillaman, Red Hat
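A sketch of the whole incremental cycle Jason is describing, end to end; pool, image, and snapshot names are illustrative:

    rbd snap create rbd/small@snap2                                # new snapshot on the source image
    rbd export-diff --from-snap snap1 rbd/small@snap2 ./foo.diff   # delta between snap1 and snap2
    rbd import-diff ./foo.diff backup/small                        # apply on the backup cluster; backup/small@snap2 is created automatically
    # import-diff expects the starting point, backup/small@snap1, to already exist on the target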

[ceph-users] import-diff requires snapshot exists?

2015-03-03 Thread Steve Anthony
...snapshot on the backup cluster is of no importance, which makes me wonder why it must exist at all. Any thoughts? Thanks! -Steve

Re: [ceph-users] OSD down

2015-02-05 Thread Steve Anthony

[ceph-users] 85% of the cluster won't start, or how I learned why to use disk UUIDs

2015-01-27 Thread Steve Anthony
...ost) all the nodes. Finally, backups are important. Having that safety net helped me focus on the solution rather than the problem, since I knew that if none of my ideas worked, I'd be able to get the most critical data back. Hopefully this saves someone from making the same mistakes! -Steve

Re: [ceph-users] ceph as a primary storage for owncloud

2015-01-27 Thread Steve Anthony
...any advice, or can you point me to some kind of documentation/how-to? I know that maybe this is not the right place for these questions, but I also asked ownCloud's community... in the meantime... Every answer is appreciated! Thanks, Simone

Re: [ceph-users] osd troubleshooting

2014-11-04 Thread Steve Anthony

Re: [ceph-users] journals relabeled by OS, symlinks broken

2014-10-27 Thread Steve Anthony
...ve to a block-special device? On Mon Oct 27 2014 at 12:12:20 PM Steve Anthony <sma...@lehigh.edu> wrote: Nice. Thanks all, I'll adjust my scripts to call ceph-deploy using /dev/disk/by-id for future OSDs. I tried stopping an exist...
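For an existing OSD, repointing the journal symlink at a persistent name is also an option; a sketch, where the OSD ID, device ID, and partition are assumptions (confirm the right partition with ls -l /dev/disk/by-id/ before touching anything):

    service ceph stop osd.12                       # sysvinit-style on Wheezy; adjust for the init system in use
    ls -l /dev/disk/by-id/ | grep part3            # identify the persistent name of the current journal partition
    ln -sf /dev/disk/by-id/wwn-0x5000c500deadbeef-part3 /var/lib/ceph/osd/ceph-12/journal
    service ceph start osd.12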

Re: [ceph-users] journals relabeled by OS, symlinks broken

2014-10-27 Thread Steve Anthony
...'d be best off using /dev/disk/by-path/ or similar links; that way they follow the disks if they're renamed again. On Fri, Oct 24, 2014, 9:40 PM Steve Anthony wrote: Hello, I was having problems with a node in my cluster (Ce...

[ceph-users] journals relabeled by OS, symlinks broken

2014-10-24 Thread Steve Anthony
...'d check here first. Thanks! -Steve

[ceph-users] get amount of space used by snapshots

2014-09-22 Thread Steve Anthony
...keeping daily snapshots for a set of images, I'd like to be able to tell how much space those snapshots are using so I can determine how frequently I need to prune old snaps. Thanks! -Steve
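One rough way to answer this with the tools of that era is to sum the extents rbd diff reports between consecutive snapshots; this counts bytes written between the snaps, not exact on-disk usage after replication. Pool, image, and snapshot names below are illustrative:

    rbd diff --from-snap snap-2014-09-21 rbd/myimage@snap-2014-09-22 \
        | awk '{ total += $2 } END { print total, "bytes changed" }'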

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-08-26 Thread Steve Anthony
...values they increase in that old post are already lower than the defaults set on my hosts. If anyone has any ideas or explanations, I'd appreciate it. Otherwise, I'll keep the list posted if I uncover a solution or make more progress. Thanks. -Steve On 07/28/2014 01:21 PM, Mark Nelson wr...

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-28 Thread Steve Anthony
...d be ready this week, so once it's online I'll move the cluster to that switch and re-test to see if this fixes the issues I've been experiencing. -Steve On 07/24/2014 05:59 PM, Steve Anthony wrote: Thanks for the information! Based on my reading of http://ceph.com/doc...

Re: [ceph-users] Optimal OSD Configuration for 45 drives?

2014-07-24 Thread Steve Anthony

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-24 Thread Steve Anthony
...osd_disk_threads = 4 But I expect much more speed for a single thread... Udo On 23.07.2014 22:13, Steve Anthony wrote: Ah, ok. That makes sense. With one concurrent operation I see numbers...
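To see what a single in-flight operation can do at the RADOS layer, independent of krbd, rados bench can be pinned to one concurrent op; the pool name and durations are illustrative:

    rados bench -p rbd 60 write -t 1 --no-cleanup   # write with one op in flight, keep the objects
    rados bench -p rbd 60 seq -t 1                  # sequential read of those objects, one op in flight
    rados -p rbd cleanup                            # remove the leftover benchmark objects (supported on recent releases)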

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-23 Thread Steve Anthony
...2014 03:11 PM, Sage Weil wrote: On Wed, 23 Jul 2014, Steve Anthony wrote: Hello, Recently I've started seeing very slow read speeds from the rbd images I have mounted. After some analysis, I suspect the root cause is related to krbd;...
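One knob frequently suggested for slow sequential reads through the kernel client is the block-layer read-ahead on the mapped device; the device name and value below are assumptions, not settings confirmed in the thread:

    cat /sys/block/rbd0/queue/read_ahead_kb           # check the current read-ahead (often 128)
    echo 4096 > /sys/block/rbd0/queue/read_ahead_kb   # let reads span several 4 MB objects at a time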

[ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-23 Thread Steve Anthony
...upgraded from 0.79 to 0.80.1 and then to 0.80.4. The rbd clients, monitors, and osd hosts are all running Debian Wheezy with kernel 3.12. Any suggestions appreciated. Thanks! -Steve