uster. This cluster has been
operating well for the most part for about a year, but I have noticed
this sort of behavior before. This is going to take many hours to
recover. This is Ceph 10.2.3.
Thanks for any insights you may be able to provide!
--
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
m 112243 up_thru 119571 down_at 112241
last_clean_interval [94396,112240) 10.0.5.16:6804/3196 10.0.5.16:6805/3196
10.0.5.16:6806/3196 10.0.5.16:6807/3196 exists,up
b79d7033-fdf6-4f4d-97bf-26a24f903b98
pg_temp 0.0 [52,16,73]
pg_temp 0.1 [61,26,77]
pg_temp 0.5 [84,48,29]
pg_temp 0.6 [77,70,46]
84 1.81850 osd.84 up 1.0 1.0
85 1.81850 osd.85 up 1.0 1.0
--
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
weights back to 1 to make this
permanent.
That explains it. Thanks!
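If "the weights" here means the override reweight (the REWEANGE column in ceph osd tree is CRUSH weight; the next column is the override), a minimal sketch of setting one back to 1, with the OSD id purely illustrative:
    # Restore the override reweight on one OSD (id 84 is an example)
    ceph osd reweight 84 1.0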
--
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
ng is left mounted on the
OSD but it still times out.
Please let me know if there is any other info I can provide which might help.
Any help you can offer is greatly appreciated! I've been stuck on this for
weeks. Thanks!
--
Tracy Reed
1.el7.noarch
python-cephfs-10.2.3-0.el7.x86_64
ceph-selinux-10.2.3-0.el7.x86_64
ceph-osd-10.2.3-0.el7.x86_64
ceph-mds-10.2.3-0.el7.x86_64
ceph-radosgw-10.2.3-0.el7.x86_64
ceph-base-10.2.3-0.el7.x86_64
ceph-10.2.3-0.el7.x86_64
On Mon, Oct 03, 2016 at 03:34:50PM PDT, Tracy Reed spake thusly:
> H
e for a few hours to get us over the initial roadblock and
advise us occasionally as we move forward. Probably just a few hours of
work, but if there's an experienced Ceph person out there looking to
make a little extra money, please drop me a line.
Thanks!
--
Tracy Reed
1:15,698][ceph03][INFO ] Running command: sudo systemctl
enable ceph.target
More details in other thread.
Where am I going wrong here?
Thanks!
--
Tracy Reed
rking. Sometimes it really helps to have
a second pair of eyes. So this wasn't a Ceph problem at all, really.
Thanks!
On Mon, Oct 03, 2016 at 03:39:45PM PDT, Tracy Reed spake thusly:
> Oops, I said CentOS 5 (old habit, ran it for years!). I meant CentOS 7. And
> I'm
> running th
2016-11-01 21:34:40.554165 7ff02a471700 0 mon.ceph01@0(probing).data_health(0)
update_stats avail 96% total 51175 MB, used 1850 MB, avail 49324 MB
while the mon log on ceph02 contains repetitions of:
2016-11-01 21:34:11.327458 7f33f4284700 0 log_channel(audit) log [DBG] :
from='admin
On Tue, Nov 01, 2016 at 09:36:16PM PDT, Tracy Reed spake thusly:
> I initially setup my ceph cluster on CentOS 7 with just one monitor. The
> monitor runs on an osd server (not ideal, will change soon). I've
Sorry, forgot to add that I'm running the following ceph version fr
the mon about the OSDs?
Any pointers are greatly appreciated.
--
Tracy Reed
has slowly grown over time. I've already run a compact
on it, which gained me only a few percent.
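For reference, an online compact of a monitor's store can be triggered like this (a minimal sketch; the mon id ceph01 is just an example, and the config option does the same thing at startup):
    # Ask one monitor to compact its backing store (leveldb in 10.2.x)
    ceph tell mon.ceph01 compact
    # Or in ceph.conf, [mon] section, to compact on every mon restart:
    # mon compact on start = true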
Thanks!
--
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
mons.
> For example, I'm working on a 2200 OSD cluster which has been doing a
> recovery operation for a week now and the MON DBs are about 50GB now.
Wow. My cluster is only around 70 OSDs.
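A quick way to watch the store size per mon for comparison, assuming the default mon data path:
    du -sh /var/lib/ceph/mon/ceph-*/store.db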
Thanks!
--
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
w says 30% of my objects are
misplaced, so I'm looking at 24 hours of recovery. Maybe the store.db
will be smaller when it finally finishes.
--
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
n CentOS 7. Almost all filestore,
except for one OSD that recently had to be replaced and which I made
bluestore. I plan to slowly migrate everything over to bluestore over
the course of the next month.
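The usual per-OSD conversion is drain, destroy, recreate; a rough sketch, assuming Luminous-era ceph-volume, with the OSD id and device purely illustrative:
    # Drain the filestore OSD and wait for recovery to finish (watch ceph -s)
    ceph osd out 85
    # Once data has moved off, stop and purge it
    systemctl stop ceph-osd@85
    ceph osd purge 85 --yes-i-really-mean-it
    # Recreate the same device as a bluestore OSD
    ceph-volume lvm zap /dev/sdc --destroy
    ceph-volume lvm create --bluestore --data /dev/sdc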
Thanks!
--
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
e:
ceph osd crush rule create-replicated <name> <root> <failure-domain>
then:
ceph osd pool set <pool> crush_rule <rule-name>
but I'm not sure what the values of <name>, <root>, and <failure-domain>
would be in my situation. Maybe:
ceph osd crush rule create-replicated different-host default
but I don't know what failure domain to use.
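For host-level separation the failure-domain argument would be host, so a plausible pair of commands, with the pool name rbd purely illustrative, is:
    # Rule that puts each replica on a different host
    ceph osd crush rule create-replicated different-host default host
    # Switch an existing pool over to it
    ceph osd pool set rbd crush_rule different-host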
On Fri, May 04, 2018 at 12:08:35AM PDT, Tracy Reed spake thusly:
> I've been using ceph for nearly a year and one of the things I ran into
> quite a while back was that it seems like ceph is placing copies of
> objects on different OSDs but sometimes those OSDs can be on the same
>
On Fri, May 04, 2018 at 12:18:15AM PDT, Tracy Reed spake thusly:
> https://jcftang.github.io/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/
> How can I tell which way mine is configured? I could post the whole
> crushmap if necessary but i
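One way to check without posting the whole map is to dump the rules, or decompile the CRUSH map with crushtool (shipped with Ceph); a sketch:
    # Show each rule's placement steps; 'type host' in the chooseleaf
    # step means replicas go on different hosts, 'type osd' means they
    # may share a host
    ceph osd crush rule dump
    # Or decompile the full map for reading
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    grep chooseleaf crushmap.txt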
o do any debugging on it right now but will try
to look into it more in the morning.
Thanks for the feedback!
--
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
check?
Thanks!
--
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
ta.51f32238e1f29.13de  set-alloc-hint,write
34533241  osd73  0.f0ae1f02  rbd_data.51f32238e1f29.13de  set-alloc-hint,write
/sys/kernel/debug/ceph/b2b00aae-f00d-41b4-a29b-58859aa41375.client31276017/monc
have osdmap 232455
want next osdmap
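To see whether the client really is behind, one can compare that epoch with the cluster's current one; a sketch (output formats vary a bit by release):
    # Kernel client's view, from the debugfs file above
    cat /sys/kernel/debug/ceph/*/monc
    # Cluster's current osdmap epoch, e.g. "e232460: 70 osds: ..."
    ceph osd stat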
Thanks!
--
Tracy Reed
d
> come back up automatically) and see if that clears up this batch.
Thanks!
--
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
22'998386622791680 watch
34919986  osd1  2.ba8d973e  rbd_header.dd3b556b8b4567
5305738'894263730634752 watch
/sys/kernel/debug/ceph/b2b00aae-f00d-41b4-a29b-58859aa41375.client31276017/monc
have osdmap 232501
want next osdmap
--
Tracy Reed
http://tracyreed.org
mentioned mkfs's discard pass starting a zeroing of the rbd image,
which can take a long time, but that should be doable in the background
and not hang the whole VM forever, right?
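If the discard pass is the suspect, it can be skipped at mkfs time to test that theory; a sketch, with the rbd device name illustrative:
    # ext4: skip the initial discard of the device
    mkfs.ext4 -E nodiscard /dev/rbd0
    # xfs: -K likewise disables discard at mkfs time
    mkfs.xfs -K /dev/rbd0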
Thanks for any insight you can provide!
--
Tracy Reed
active+clean
2 active+clean+scrubbing+deep
1 active+clean+scrubbing
client io 839 kB/s wr, 0 op/s rd, 159 op/s wr
On Mon, Feb 06, 2017 at 06:57:23PM PST, Tracy Reed spake thusly:
> This is what I'm doing on my CentOS 7/KVM/virtlib server:
>
&
On Tue, Feb 07, 2017 at 12:25:08AM PST, koukou73gr spake thusly:
> On 2017-02-07 10:11, Tracy Reed wrote:
> > Weird. Now the VMs that were hung in interruptable wait state have now
> > disappeared. No idea why.
>
> Have you tried the same procedure but with local storage
ecause we forgot to tell the switch to use jumbo
frames and learned our lesson on that.
Not sure what else I can look at. I'm not seeing any clues.
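For anyone chasing a similar MTU mismatch, a quick end-to-end jumbo frame check (target address illustrative):
    # 8972 bytes of payload + 28 bytes of IP/ICMP header = one 9000-byte
    # frame; -M do forbids fragmentation, so this fails unless the whole
    # path is jumbo-clean
    ping -M do -s 8972 10.0.5.16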
--
Tracy Reed
tice in the use of ceph repair?
Thanks!
--
Tracy Reed
> then do ``ceph pg repair <pgid>``
>
> On Sat, Feb 18, 2017 at 10:02 AM, Tracy Reed wrote:
> > I have a 3 replica cluster. A couple times I have run into inconsistent
> > PGs. I googled it and ceph docs and various blogs say run a repair
> > first. But a couple
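Spelled out, the sequence being suggested looks roughly like this (pg and pool names illustrative; the list-inconsistent commands exist from Jewel on):
    # Find the inconsistent PG(s)
    ceph health detail | grep inconsistent
    rados list-inconsistent-pg rbd
    # Inspect what is actually damaged before repairing
    rados list-inconsistent-obj 0.6 --format=json-pretty
    # Then repair the affected PG
    ceph pg repair 0.6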
ugh as it could be.
> David knows more and correct if I'm missing something. He's also
> working on interfaces for scrub that are more friendly in general and
> allow administrators to make more fine-grained decisions about
> recovery in ways that c