Hi Peter,
Over the weekend another host went down due to power problems and was
restarted. These outputs therefore also include some "Degraded data
redundancy" messages, and one OSD crashed due to a disk error.
ceph -s: https://pastebin.com/Tm8QHp52
ceph health detail: https://pastebin.com/SrA7Bi
Here is the output with all OSDs up and running.
ceph -s: https://pastebin.com/5tMf12Lm
ceph health detail: https://pastebin.com/avDhcJt0
ceph osd tree: https://pastebin.com/XEB0eUbk
ceph osd pool ls detail: https://pastebin.com/ShSdmM5a
On Mon, Aug 17, 2020 at 9:38 AM Martin Palma wrote:
>
> Hi
Yes, thanks, that at least gives a 200 OK without XML.
However, I am still getting these:
7f011d796700 1 == req done req=0x562a2a7f85f0 op status=0
http_status=200 latency=0s ==
7f7c2342a700 1 == starting new request req=0x55c6307765f0 =
7f7c2342a700 1 == req done req=0x5
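If those are just RGW's standard per-request log lines at debug level 1,
lowering debug_rgw should silence them. A minimal sketch, assuming your
gateway's ceph.conf section is called [client.rgw.gw1] (the instance name
is a placeholder):

[client.rgw.gw1]
debug rgw = 0/0

and then restart the radosgw service on that host.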
Hi Dan,
I opened the ticket last friday:
https://tracker.ceph.com/issues/46978
Manuel
On Fri, 14 Aug 2020 17:49:55 +0200
Dan van der Ster wrote:
> I think the best course of action would be to open a tracker ticket
> with details about your environment and your observations, then the
> dev
Hi,
I've been struggling with a large omap object in our cluster since my
colleague updated from Jewel to Luminous 12.2.8. This warning has been
present for more than a year:
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'default.rgw.log'
Search the cluster log for 'Large
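In case it's useful, a hedged sketch of tracking down the offending object;
the object name and dates below are placeholders, not from a real cluster:

grep -r 'Large omap object found' /var/log/ceph/
rados -p default.rgw.log ls | head
rados -p default.rgw.log listomapkeys <object-name> | wc -l
# if it turns out to be the RGW usage log, trimming it is one common remedy:
radosgw-admin usage trim --start-date=2019-01-01 --end-date=2020-06-01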
After doing some research, I suspect the problem is that an OSD was
removed while the cluster was backfilling.
Now the PGs which are inactive and incomplete all list the same
(removed) OSD in the "down_osds_we_would_probe" output, and peering
is blocked by "peering_blocked_by_history_les_bound".
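For reference, a hedged sketch of inspecting this; the PG ID is a
placeholder:

ceph pg <pgid> query   # check recovery_state and down_osds_we_would_probe

The workaround often mentioned on this list is to set
osd_find_best_info_ignore_history_les = true in ceph.conf for the PG's
primary OSD, restart that OSD, and revert the option once the PG has
peered; be aware it can discard writes, so read up on it first.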
Hi Dan,
I use the container
docker.io/ceph/daemon:v3.2.10-stable-3.2-mimic-centos-7-x86_64. As far as I can
see, it uses the packages from http://download.ceph.com/rpm-mimic/el7; it's a
CentOS 7 build. The version is:
# ceph -v
ceph version 13.2.8 (5579a94fafbc1f9cc913a0f5d362953a5d9c3ae0) mimic (stable)
Hi all,
I would like to add a follow-up note to the question below about Nautilus
packages for Ubuntu Focal 20.04.
The Ceph Repo ( https://download.ceph.com/debian-nautilus/dists/focal/main/ )
only holds ceph-deploy packages for Nautilus on Focal. Is there a plan to
upload other packages as we
Hello all,
Thanks for the help. I believe we traced this down to be an issue with the
crush rules. It seems somehow osd_crush_chooseleaf_type = 0 got placed into
our configuration. This caused ceph osd crush rule dump to include the line
'"op": "choose_firstn",' instead of '"op": "chooseleaf_firstn",'.
Hi everyone,
We're looking for presentations for the upcoming Ceph Tech Talk dates:
* September 24th @ 17:00 UTC
* October 22nd @ 17:00 UTC
If you're interested or know someone who can present, please let me know!
--
Mike Perez
he/him
Ceph Community Manager
M: +1-951-572-2633
494C 5D25 2
Hi,
Do you have scsi errors around the time of the crash?
`journalctl -k` and look for scsi medium errors.
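For example, something along these lines (the grep pattern is just a
suggestion):

journalctl -k | grep -iE 'scsi|medium error|i/o error'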
Cheers, Dan
On Mon, Aug 17, 2020 at 3:50 PM EDH - Manuel Rios
wrote:
>
> Hi, today one of our SSDs dedicated to the RGW index crashed; maybe a bug, or
> the OSD just crashed.
>
> Our current vers
I would expect that most S3-compatible clients would work with RadosGW.
As for adding it to the Ceph dashboard, I don't think that's a good idea. A
bucket is a flat namespace. Amazon (and then others as well) added semantics
that allow for pseudo-hierarchical behavior, but it's still based o
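To make the pseudo-hierarchy concrete: it's just key prefixes plus a
delimiter at listing time. A hedged example against RGW with the aws CLI;
the endpoint, bucket, and prefix are made up:

aws --endpoint-url http://rgw.example.com:7480 s3api list-objects \
    --bucket mybucket --prefix photos/2020/ --delimiter /

Matching keys come back under Contents and deeper 'subdirectories' under
CommonPrefixes, but the objects still live in one flat namespace.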
Hi, today one of our SSDs dedicated to the RGW index crashed; maybe a bug, or
the OSD just crashed.
Our current version is 14.2.11, and today we're under heavy object
processing... approx. 60TB of data.
ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) nautilus
(stable)
1: (ceph::__ceph_assert_fail(char
Hi Dan,
Hmm, looks like a MegaRAID hang or hardware failure. Curious, because today
we're doing heavy bucket deletions... and today the disk fails... just our luck.
Aug 17 15:44:12 CEPH003 kernel: sd 0:2:8:0: [sdi] tag#0 FAILED Result:
hostbyte=DID_ERROR driverbyte=DRIVER_OK
Aug 17 15:44:12 CEPH003 ker
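If you want a second opinion from the drive itself, something like this,
assuming sdi is the failed index SSD and smartmontools is installed:

smartctl -x /dev/sdi | grep -iE 'error|fail'
# behind a MegaRAID controller the device index may be needed, e.g.:
smartctl -x -d megaraid,8 /dev/sdi | grep -iE 'error|fail'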
Hi all,
We have a bonus Ceph Tech Talk for August. Join us August 20th at 17:00
UTC to hear Neeha Kompala and Jason Weng present on Edge Application -
Streaming Multiple Video Sources.
Don't forget on August 27th at 17:00 UTC, Pritha Srivastava will also be
presenting on this month's Ceph Te
We are seeking information on configuring Ceph to work with Noobaa and
NextCloud.
Randy
--
Randy Morgan
CSR
Department of Chemistry/BioChemistry
Brigham Young University
ran...@chem.byu.edu
Hi,
I have a 3-node Mimic cluster with 9 OSDs (3 OSDs on each node).
I use this cluster to test the integration of an application with the S3 API.
The problem is that after a few days all OSDs start filling up with
BlueStore logs and go down and out one by one!
I cannot stop the logs and I cannot fi
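Assuming it really is debug log output filling the disks (which is a guess),
a first step people usually try is turning the BlueStore-related debug
levels down:

ceph tell osd.* injectargs '--debug_bluestore 0/0 --debug_bluefs 0/0 --debug_rocksdb 0/0'

and making the same settings persistent in the [osd] section of ceph.conf.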
Hi all,
*** Short version ***
Is there a way to repair a rocksdb from errors "Encountered error while
reading data from compression dictionary block Corruption: block
checksum mismatch" and "_open_db erroring opening db" ?
*** Long version ***
We operate a Nautilus Ceph cluster (with 100 dis
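For the short version, the sequence usually tried first (offline, per OSD;
osd.12 and the path are placeholders) is roughly:

systemctl stop ceph-osd@12
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-12
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-12
# lower level, documented as a last resort that may lose data:
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 destructive-repair

Whether this can fix a genuine block checksum mismatch is doubtful; with
that error the OSD often ends up being redeployed.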
Hi, I thought I might ask here since I was unable to find anything similar
to my issue. Perhaps someone might have an idea.
In our org we are currently running a few Nautilus clusters (14.2.2,
14.2.4, 14.2.8). But strangely enough, the clusters with 14.2.8 are
reporting weird pool R/W metrics. To b
Hi Ceph folks,
I am relatively new to Ceph clusters and I hope I can quickly receive some
help here.
I would like to recover files from a CephFS data pool. Someone wrote that
inode linkage and file names are stored in the omap data of objects in the
metadata pool.
I can't find any information about the str
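For what it's worth, a hedged sketch of poking at that structure; the pool
name is an assumption and 0x1 is the root directory's inode:

# directory objects in the metadata pool are named <inode-hex>.<fragment>:
rados -p cephfs_metadata ls | head
rados -p cephfs_metadata listomapkeys 1.00000000
# each omap key is a dentry, typically of the form '<filename>_head'

For recovering from the data pool alone, the cephfs-data-scan tool in the
disaster-recovery docs is the thing to read up on.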
Hi,
I just installed a new cluster with cephadm
(https://docs.ceph.com/docs/master/cephadm/install/), all was working fine
until I disabled cephx using the following commands:
ceph config set global auth_cluster_required none
ceph config set global auth_service_required none
ceph config set glo
Configuring it to do what, with respect to these applications? What are you
trying to do? Do you have existing installations of any of these? We need a
little more detail about your requirements.
> On Apr 17, 2020, at 1:14 PM, Randy Morgan wrote:
>
> We are seeking information on configuring Ceph to w
Randy;
Nextcloud is easy; it has a "standard" S3 client capability, though it also
has Swift client capability. As an S3 client, it does look for the older path
style (host/bucket) rather than Amazon's newer DNS style (bucket.host).
You can find information on configuring Nextcloud's primary st
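In case that helps, a hedged sketch of the relevant config.php objectstore
block for S3 primary storage; the bucket, hostname, and credentials are
placeholders:

'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket' => 'nextcloud',
        'hostname' => 'rgw.example.com',
        'port' => 7480,
        'use_ssl' => false,
        'use_path_style' => true,
        'key' => '<access-key>',
        'secret' => '<secret-key>',
    ],
],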
I think I remember reading somewhere that every radosgw instance is required
to run with its own client ID. Is this still necessary? Or can I run
multiple instances of radosgw with the same client ID?
So I could have something like
rgw: 2 daemons active (rgw1, rgw1, rgw1)
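To illustrate what "their own clientid" usually means in ceph.conf terms,
the conventional layout is one section per instance (the names here are
made up):

[client.rgw.gw1]
rgw frontends = beast port=7480

[client.rgw.gw2]
rgw frontends = beast port=7480

which would then show up as (gw1, gw2) rather than the same ID repeated.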
Have you tried restarting the Ceph services? If those config options
are not in ceph.conf and you removed them live, the daemons may reconnect
after a restart. Alternatively, put the config into ceph.conf, restart the
services, and see if that helps to recover.
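That is, roughly (a sketch; adjust to your layout):

[global]
auth_cluster_required = none
auth_service_required = none
auth_client_required = none

in ceph.conf on every node, then systemctl restart ceph.target on each host.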
Quoting Tom Verhaeg:
Hi,
I just installe