host issues were observed in the rest of the cluster or at the
site.
Thank you for your replies; I'll gather better logging next time.
peter
-01-08 13:33:29.541 7fec1a736700 1 mon.cephmon02@1(probing) e7
handle_auth_request failed to assign global_id
...
There is nothing in the logs of the two remaining/healthy monitors. What is the
best practice for getting this host back into the cluster?
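In case it's useful later, a minimal sketch of the usual remove/re-add path for a
broken mon, assuming the cluster name is "ceph", default data paths, and
systemd-managed daemons; cephmon02 is taken from the log line above:

# on a healthy node: drop the broken mon from the map (optional but tidy)
ceph mon remove cephmon02

# on cephmon02: set the old store aside and rebuild it from the live cluster
mv /var/lib/ceph/mon/ceph-cephmon02 /var/lib/ceph/mon/ceph-cephmon02.bad
ceph auth get mon. -o /tmp/mon.keyring
ceph mon getmap -o /tmp/monmap
ceph-mon -i cephmon02 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
chown -R ceph:ceph /var/lib/ceph/mon/ceph-cephmon02
systemctl start ceph-mon@cephmon02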
peter
of this
be willing to file it as a bug, please?
peter
could confer?
peter
rgw relaxed s3 bucket names = true
rgw s3 auth use keystone = true
rgw thread pool size = 4096
rgw keystone revocation interval = 300
rgw keystone token cache size = 1
rgw swift versioning enabled = true
rgw log nonexistent bucket = true
All tips accepted…
peter
file is deleted but all the segments
remain. Am I misconfigured, or is this a bug where it won’t expire the actual
data? Shouldn’t RGW set the expiration on the uploaded segments too if they’re
managed separately?
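While waiting for a definitive answer, a rough workaround sketch would be to
stamp the segment objects with the same expiry through the Swift API. The
container name bucket_segments, $STORAGE_URL and $TOKEN are placeholders, not
anything from this setup:

# list the segment objects and set the same expiry on each of them
# (note: an object POST replaces any custom metadata on the segment)
for seg in $(curl -s -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/bucket_segments"); do
  curl -s -X POST \
    -H "X-Auth-Token: $TOKEN" \
    -H "X-Delete-After: 86400" \
    "$STORAGE_URL/bucket_segments/$seg"
done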
Thanks,
peter
> Restart of single module is: `ceph mgr module disable devicehealth ; ceph mgr
> module enable devicehealth`.
Thank you for your reply. I receive an error, though, as the module can't be
disabled.
I may have worked through this by restarting the nodes in rapid succession.
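For reference, a gentler alternative to bouncing whole nodes may be to fail over
or restart only the mgr daemon; the daemon name cephmgr-a01 is a placeholder:

# hand the active role to a standby mgr, which reloads all modules
ceph mgr fail cephmgr-a01
# or restart just the mgr service on the node that currently holds it
systemctl restart ceph-mgr@cephmgr-a01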
peter
ns active (cephrgw-a01, cephrgw-a02, cephrgw-a03)
data:
pools: 18 pools, 4901 pgs
objects: 4.28M objects, 16 TiB
usage: 49 TiB used, 97 TiB / 146 TiB avail
pgs: 4901 active+clean
io:
client: 7.4 KiB/s rd, 24 MiB/s wr, 7 op/s rd, 628 op/s wr
store OSD due to missing
devices')
RuntimeError: Unable to activate bluestore OSD due to missing devices
(this is repeated for each of the 16 drives)
Any other thoughts? (I’ll delete/create the OSDs with ceph-deploy otherwise.)
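Before deleting/recreating, a couple of read-only checks might show whether the
"missing devices" are simply not mapped yet (for example a dm-crypt mapping that
never got opened); nothing below is specific to these hosts:

# see which block devices, partitions and crypt/LVM mappings actually exist
lsblk -o NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT
# for LVM-based OSDs, ceph-volume can report everything it knows about
ceph-volume lvm list
# and try to bring up whatever it recognizes
ceph-volume lvm activate --all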
peter
e to activate bluestore OSD due to missing devices
#
Okay, this created /etc/ceph/osd/*.json. This is cool. Is there a command or
option which will read these files and mount the devices?
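As far as I know, the companion step is "ceph-volume simple activate", which
reads those json files, mounts the data devices and starts the OSDs; the
<osd-id>/<osd-fsid> values come from each json file:

# activate every OSD described under /etc/ceph/osd/
ceph-volume simple activate --all
# or a single one, using the id and fsid recorded in its json file
ceph-volume simple activate <osd-id> <osd-fsid>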
peter
't do anything to specific commands for just updating the ceph RPMs in
this process.
peter
[2019-07-24 13:40:49,602][ceph_volume.process][INFO ] Running command:
/bin/systemctl show --no-pager --property=Id --state=running ceph-osd@*
This is the only log event. At the prompt:
# ceph-volume simple scan
#
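When the bare scan exits silently like this, pointing it at a specific OSD data
partition or an already-mounted OSD directory (and dumping to stdout) sometimes
says more; the paths below are placeholders:

# scan one data partition explicitly and print what it finds
ceph-volume simple scan --stdout /dev/sdc1
# or scan a mounted OSD directory
ceph-volume simple scan /var/lib/ceph/osd/ceph-18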
peter
/ceph/osd/ceph-18
├─sdc2 8:34 0 1.7T 0 part
│ └─fdad7618-1234-4021-a63e-40d973712e7b 253:13 0 1.7T 0 crypt
...
Thank you for your time on this,
peter
'type' files but I'm unsure how to get the
lockboxes mounted to where I can get the OSDs running. The osd-lockbox
directory is otherwise untouched from when the OSDs were deployed.
Is there a way to run ceph-deploy or some other tool to rebuild the mounts for
the drives?
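For what it's worth, a rough sketch of the manual path for ceph-disk dmcrypt
OSDs, assuming LUKS and that the keys sit at the usual dm-crypt/osd/<uuid>/luks
location in the mon config-key store; every device name and <uuid> below is a
placeholder:

# mount the small lockbox partition that carries the OSD's keyring
mount /dev/sdc3 /var/lib/ceph/osd-lockbox/<uuid>
# fetch the LUKS key for this OSD (it may need base64-decoding,
# depending on how ceph-disk stored it)
ceph config-key get dm-crypt/osd/<uuid>/luks > /tmp/key
# open the encrypted data partition, then let ceph-volume take over
cryptsetup --key-file /tmp/key luksOpen /dev/sdc2 <uuid>
ceph-volume simple scan && ceph-volume simple activate --all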
peter
earching a
resolution for this?
peter
Hi,
Could someone point me to a blog or documentation page which would help me
resolve the issues noted below?
All nodes are Luminous, 12.2.12; one realm, one zonegroup (clustered haproxies
fronting), two zones (three rgw in each). All endpoint references to each zone
go through an haproxy.
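For reference, a few read-only commands that usually frame multisite questions
(run on an rgw node in each zone); nothing here is specific to this setup:

# how this zone sees metadata/data sync against its peers
radosgw-admin sync status
# the realm/zonegroup/zone layout the gateways are actually using
radosgw-admin period get
radosgw-admin zonegroup get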
[replying to myself]
I set aside cephfs and created an rbd volume. I get the same splotchy
throughput with rbd as I was getting with cephfs. (image attached)
So, I'm withdrawing this question here, since it doesn't appear to be a cephfs issue.
#backingout
peter
Thanks for the thought. It’s mounted with this entry in fstab (one line, if
email wraps it):
cephmon-s01,cephmon-s02,cephmon-s03:/ /loam ceph
noauto,name=clientname,secretfile=/etc/ceph/secret,noatime,_netdev 0 2
Pretty plain, but I'm open to tweaking!
peter
in bandwidth (MB/sec): 1084
Average IOPS: 279
Stddev IOPS: 1
Max IOPS: 285
Min IOPS: 271
Average Latency(s): 0.057239
Stddev Latency(s): 0.0354817
Max latency(s): 0.367037
Min latency(s): 0.0120791
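For reference, numbers of this shape typically come from a write bench along
these lines; the pool name, runtime and concurrency are placeholders:

# 60-second write benchmark, 16 concurrent 4 MiB objects, against a test pool
rados bench -p testpool 60 write -t 16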
peter