In case anyone else runs into this, I resolved it by running removeall on
the object on both bad OSDs and then ceph pg repair, which copied the good
object back.
-Steve
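For anyone reconstructing the steps: "removeall" here is presumably the
ceph-objectstore-tool op of that name. A minimal sketch, run against each
bad OSD while it is stopped (the OSD id, pgid, and object JSON from
"--op list" are placeholders):

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
      '<object-json-from---op-list>' removeall
  systemctl start ceph-osd@<id>
  ceph pg repair <pgid>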
On 06/27/2018 06:17 PM, Steve Anthony wrote:
In the process of trying to repair snapshot inconsistencies associated
with the issues in this
"max": 0,
"pool": -9.2233720368548e+18,
"namespace": ""
}
},
"watchers": {
}
},
"snapset": {
"snap_context": {
"seq": 4896,
"snaps": [
4896
]
},
"head_exists": 1,
"clones": [
]
}
},
{
"osd": 313,
"primary": true,
"errors": [
],
"size": 4194304,
"omap_digest": "0x",
"data_digest": "0x0d99bd77",
"object_info": {
"oid": {
"oid": "rb.0.2479b45.238e1f29.00125cbb",
"key": "",
"snapid": -2,
"hash": 2016338238,
"max": 0,
"pool": 2,
"namespace": ""
},
"version": "943431'2032262",
"prior_version": "942275'2030618",
"last_reqid": "osd.36.0:48196",
"user_version": 2024222,
"size": 4194304,
"mtime": "2018-05-13 08:58:21.359912",
"local_mtime": "2018-05-13 08:58:21.537637",
"lost": 0,
"flags": [
"dirty",
"data_digest",
"omap_digest"
],
"legacy_snaps": [
],
"truncate_seq": 0,
"truncate_size": 0,
"data_digest": "0x0d99bd77",
"omap_digest": "0x",
"expected_object_size": 4194304,
"expected_write_size": 4194304,
"alloc_hint_flags": 0,
"manifest": {
"type": 0,
"redirect_target": {
"oid": "",
"key": "",
"snapid": 0,
"hash": 0,
"max": 0,
"pool": -9.2233720368548e+18,
"namespace": ""
}
},
"watchers": {
}
},
"snapset": {
"snap_context": {
"seq": 4896,
"snaps": [
4896
]
},
"head_exists": 1,
"clones": [
]
}
}
]
}
]
}
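A report in this shape typically comes from querying the inconsistent PG,
e.g.

  rados list-inconsistent-obj <pgid> --format=json-pretty

Note that "pool": -9.2233720368548e+18 in the empty redirect_target is just
-9223372036854775808 (INT64_MIN) rendered as a float, i.e. an unset
sentinel rather than a real pool ID.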
--
Steve Anthony
LTS HPC Senior Analyst
Lehigh University
sma...@lehigh.edu
max":0,"pool":2,"namespace":"","max":0}' remove-clone-metadata 4896
Removal of clone 1320 complete
Use pg repair after OSD restarted to correct stat information
Once that's done, starting the OSD and repairing the PG finally marked
it as clean.
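For reference, the full invocation was presumably along these lines (data
path hypothetical; the object spec is the head object from the
inconsistency report above):

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
      '{"oid":"rb.0.2479b45.238e1f29.00125cbb","key":"","snapid":-2,"hash":2016338238,"max":0,"pool":2,"namespace":"","max":0}' \
      remove-clone-metadata 4896

The tool prints the clone id in hex, which matches the output above:
"clone 1320" is 0x1320 = 4896 decimal.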
> 3: (ReplicatedBackend::handle_push(pg_shard_t, PushOp const&, PushReplyOp*, ObjectStore::Transaction*)+0x2da) [0x5574246715ca]
> 4: (ReplicatedBackend::_do_push(boost::intrusive_ptr<OpRequest>)+0x12e) [0x5574246717fe]
> 5: (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x2c1) [0x557424680d71]
> 6: (PGBackend::handle_message(boost::
Thanks!
-Steve
On 05/18/2017 01:06 PM, Steve Anthony wrote:
>
> Hmmm, after crashing every 30 seconds for a few days, it's apparently
> running normally again. Weird. I was thinking that since it's looking for a
> snapshot object, maybe re-enabling snaptrimming and removing all the
>
that point this time, but I'm going to need to cycle more OSDs in
and out of the cluster, so if it happens again I might try that and update.
Thanks!
-Steve
On 05/17/2017 03:17 PM, Gregory Farnum wrote:
>
>
> On Wed, May 17, 2017 at 10:51 AM Steve Anthony <sma...@lehigh.edu> wrote:
in case anyone has seen this before or has any other ideas. Thanks for taking the time.
-Steve
--
Steve Anthony
LTS HPC Senior Analyst
Lehigh University
sma...@lehigh.edu
Package: radosgw-agent

apt-cache policy ceph
ceph:
  Installed: 0.87.2-1~bpo70+1
  Candidate: 0.87.2-1~bpo70+1
  Version table:
 *** 0.87.2-1~bpo70+1 0
        100 /var/lib/dpkg/status
     0.80.7-1~bpo70+1 0
        100 http://debian.cc.lehigh.edu/debian/ wheezy-backports/main amd64 Packages
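To confirm the running daemons actually match the installed package, the
usual cross-check is something like:

  ceph --version
  ceph tell osd.* version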
--
ent:
> Action p_rbd_map_1_start_0 (6) confirmed on node2 (rc=4)
> Dec 18 17:22:39 [2695] node2 crmd: warning: update_failcount: Updating failcount for p_rbd_map_1 on node2 after failed start: rc=1 (update=INFINITY, time=1450430559)
> Dec 18 17:22:39 [2695] node2 crmd:
ows it had that PG information.
>
>> My config is pretty vanilla, except for:
>> [osd]
>> osd recovery max active = 4
>> osd max backfills = 4
>>
>> Thanks in advance,
>> Carsten
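As an aside, both of those values can also be injected at runtime without
restarting the OSDs; a sketch using the quoted values:

  ceph tell osd.* injectargs '--osd-recovery-max-active 4 --osd-max-backfills 4'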
systemctl stop ceph.target stops everything, as expected :)
>
> I haven't tested everything thoroughly yet, but has anyone seen
> the same issues?
>
> Thanks!
>
> Kenneth
>>> > drwxr-x---. 9 167 167 4,0K 19. Nov 10:32 .
>>> > drwxr-xr-x. 28 0 0 4,0K 19. Nov 11:14 ..
>>> > drwxr-x---. 2 167 167 6 10. Nov 13:06 bootstrap-mds
>>> > drwxr-x-
s can be. Thought the list might find it
interesting.
https://blog.algolia.com/when-solid-state-drives-are-not-that-solid/
-Steve
--
Steve Anthony
LTS HPC Support Specialist
Lehigh University
sma...@lehigh.edu
--
Steve Anthony
LTS HPC Support Specialist
Lehigh University
sma...@lehigh.edu
> rbd export-diff --from-snap snap1 rbd/small@snap2 ./foo.diff
> rbd import-diff ./foo.diff backup/small
>
> ** rbd/small and backup/small are now consistent through snap2.
> import-diff automatically created backup/small@snap2 after importing all
> changes.
>
> --
> Jason Dillaman
> Red Hat
> dilla...@redhat.com
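For anyone reconstructing the whole workflow from this fragment: the usual
pattern is a one-time full copy followed by repeated incrementals. A sketch
using the names from the quoted example (the size is made up and must match
the source image):

  # one-time bootstrap: create the target image, then seed it
  rbd snap create rbd/small@snap1
  rbd create backup/small --size 1024
  rbd export-diff rbd/small@snap1 - | rbd import-diff - backup/small

  # each subsequent cycle ships only the delta
  rbd snap create rbd/small@snap2
  rbd export-diff --from-snap snap1 rbd/small@snap2 ./foo.diff
  rbd import-diff ./foo.diff backup/small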
snapshot on
the backup cluster is of no importance, which makes me wonder why it
must exist at all.
Any thoughts? Thanks!
-Steve
--
Steve Anthony
LTS HPC Support Specialist
Lehigh University
sma...@lehigh.edu
ost)
all the nodes. Finally, backups are important. Having that safety net
helped me focus on the solution, rather than the problem since I knew
that if none of my ideas worked, I'd be able to get the most critical
data back.
Hopefully this saves someone from making the same mistakes!
-Steve
Do you have any advice, or can you point me to some kind of
> documentation/how-to?
>
> I know this may not be the right place for these questions, but in the
> meantime I've also asked ownCloud's community...
>
> Every answer is appreciated!
>
> Thanks
>
> Simone
>
--
> Thanks,
> shiva
--
Steve Anthony
LTS HPC Support Specialist
Lehigh University
sma...@lehigh.edu
resolve to a block-special device?
>
> On Mon Oct 27 2014 at 12:12:20 PM Steve Anthony <sma...@lehigh.edu> wrote:
>
> Nice. Thanks all, I'll adjust my scripts to call ceph-deploy using
> /dev/disk/by-id for future OSDs.
>
> I tried stopping an exist
>> You'd be best off using /dev/disk/by-path/ or similar links; that way they
>> follow the disks if they're renamed again.
>>
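With the ceph-deploy syntax of that era, using a stable link would look
something like this (host and device names made up):

  ceph-deploy osd prepare node3:/dev/disk/by-id/wwn-0x5000c500a1b2c3d4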
>> On Fri, Oct 24, 2014, 9:40 PM Steve Anthony wrote:
>>
>>> Hello,
>>>
>>> I was having problems with a node in my cluster (Ce
I'd check here first. Thanks!
-Steve
--
Steve Anthony
LTS HPC Support Specialist
Lehigh University
sma...@lehigh.edu
Since I'm keeping daily snapshots for a set of images, I'd like
to be able to tell how much space those snapshots are using so I can
determine how frequently I need to prune old snaps. Thanks!
-Steve
--
Steve Anthony
LTS HPC Support Specialist
Lehigh University
sma...@lehigh.edu
The values they increase in that old post are already
lower than the defaults set on my hosts.
If anyone has any ideas or explanations, I'd appreciate it. Otherwise,
I'll keep the list posted if I uncover a solution or make more progress.
Thanks.
-Steve
On 07/28/2014 01:21 PM, Mark Nelson wrote:
The new switch should be ready this week, so
once it's online I'll move the cluster to that switch and re-test to see
if this fixes the issues I've been experiencing.
-Steve
On 07/24/2014 05:59 PM, Steve Anthony wrote:
> Thanks for the information!
>
> Based on my reading of http://ceph.com/doc
--
Steve Anthony
LTS HPC Support Specialist
Lehigh University
sma...@lehigh.edu
>
> osd_disk_threads = 4
>
> But I expect much more speed for a single thread...
>
> Udo
>
> On 23.07.2014 22:13, Steve Anthony wrote:
>> Ah, ok. That makes sense. With one concurrent operation I see numbers
>
2014 03:11 PM, Sage Weil wrote:
> On Wed, 23 Jul 2014, Steve Anthony wrote:
>
>> Hello,
>>
>> Recently I've started seeing very slow read speeds from the rbd images I
>> have mounted. After some analysis, I suspect the root cause is related
>> to krbd;
I upgraded from 0.79 to 0.80.1 and then to 0.80.4.
The rbd clients, monitors, and osd hosts are all running Debian Wheezy
with kernel 3.12. Any suggestions appreciated. Thanks!
-Steve
--
Steve Anthony
LTS HPC Support Specialist
Lehigh University
sma...@lehigh.edu