Today I noticed that all ceph octopus packages are missing from
download.ceph.com. Is this intentional? Was this an accident? I'm unable to
find any announcement or existing issue tracking this...
Hi,
Recently I added one disk to the Ceph cluster using "ceph-volume lvm create
--data /dev/sdX", but the new OSD didn't start. After a while, the OSD
services on the other nodes also stopped, so I restarted all nodes in the
cluster.
Now, after the restart, the MON, MDS, MGR and OSD services are not starting. Could
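A first-diagnosis sketch, assuming a package-based (non-containerized)
deployment; the @<host> and @<id> instance names are placeholders for your
own daemon names:

    ceph-volume lvm list                       # confirm the new OSD's LVM metadata was created
    systemctl status ceph-mon@<host>           # check each failing unit individually
    journalctl -u ceph-osd@<id> --no-pager -e  # the actual failure reason is usually here
    ceph -s                                    # once the mons are up, overall cluster state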
Hello there,
I'm running Ceph 15.2.17 (Octopus) on Debian Buster and I'm starting an
upgrade but I'm seeing a problem and I wanted to ask how best to proceed
in case I make things worse by mucking with it without asking experts.
I've moved an rbd image to the trash without clearing the snapshots.
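If backing out is needed, one cautious sequence is to restore the image from
the trash, purge its snapshots, then trash it again. A sketch; <pool>,
<image-id> and <image> are placeholders:

    rbd trash ls --all <pool>                  # the image ID is the first column
    rbd trash restore --pool <pool> <image-id>
    rbd snap purge <pool>/<image>              # fails on protected/cloned snapshots
    rbd trash mv <pool>/<image>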
Cheers everybody,
I had this issue some time ago, and we thought it was fixed, but it seems to
be happening again.
We have files, uploaded by one of our customers, that are only available in
the index, but not in rados.
At first we thought this might be a bug
(https://tracker.ceph.com/issues/54528)
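A per-object way to confirm the index and rados really disagree (a sketch;
bucket, key and pool names are placeholders):

    # the index/metadata view, including the object's manifest
    radosgw-admin object stat --bucket=<bucket> --object=<key>
    # take the head/tail rados object names from that manifest, then:
    rados -p <data-pool> stat <rados-object-name>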
Hi everybody,
are there ways for rados objects to get removed other than "rados -p
POOL rm OBJECT"?
We have a customer who has objects in the bucket index but can't download
them. After checking, it seems like the rados object is gone.
Ceph cluster is running ceph octopus 15.2.16
"radosgw-a
Hi,
Looking to take our Octopus Ceph up to Pacific in the coming days.
All the machines (physical - osd, mon, admin, meta) are running Debian
'buster' and the setup was originally done with ceph-deploy (~2016).
Previously I've been able to upgrade the core OS, keeping the ceph
packages at the same version.
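For reference, a sketch of the usual package-based rolling-upgrade order (the
apt steps assume sources.list has already been pointed at a Pacific
repository; <host> is a placeholder):

    ceph osd set noout                    # avoid rebalancing while daemons restart
    apt update && apt dist-upgrade        # per node, after switching the repo
    systemctl restart ceph-mon@<host>     # mons first, one at a time
    systemctl restart ceph-mgr@<host>     # then mgrs, then OSDs, then MDS/RGW
    ceph osd require-osd-release pacific  # only after ALL OSDs run Pacific
    ceph osd unset noout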
Does the v15.2.15-20220216 container include backports published since the
release of v15.2.15-20211027 ?
I'm interested in BACKPORT #53392 https://tracker.ceph.com/issues/53392,
which was merged into the ceph:octopus branch on February 10th.
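One way to check is the git hash embedded in the version string, since
rebuilds of the same tag still report the exact commit they were built from.
A sketch; the quay.io image path is an assumption:

    docker run --rm quay.io/ceph/ceph:v15.2.15-20220216 ceph --version
    # compare the printed sha against the ceph octopus branch history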
I'm running an 11-node Ceph cluster on Octopus (15.2.8). I mainly run it
as an RGW cluster, so I had 8 RGW daemons on 8 nodes. Currently I have 1 PG
degraded and some misplaced objects, as I added a temporary node.
Today I tried to expand the RGW cluster from 8 to 10 daemons; this didn't work, as
one
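Under cephadm, the daemon count is normally raised by re-applying the service
spec. A sketch; the realm/zone names are placeholders:

    ceph orch apply rgw <realm> <zone> --placement="count:10"
    ceph orch ls rgw     # target vs running count
    ceph orch ps         # where the daemons actually landed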
Hello,
I have installed and bootstrapped a Ceph manager node via cephadm with the
options:
--initial-dashboard-user admin --initial-dashboard-password
[PASSWORD] --dashboard-password-noupdate
Everything works fine. I also have the Grafana board to monitor my
cluster. But the access to Grafana
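If it is the embedded Grafana panels that fail, the URL the dashboard hands
to the browser is configurable. A sketch; host and port are placeholders:

    ceph dashboard get-grafana-api-url
    ceph dashboard set-grafana-api-url https://<grafana-host>:3000
    ceph dashboard set-grafana-api-ssl-verify False  # only for self-signed test setups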
Hi,
Has something changed with 'rbd diff' in Octopus, or have I hit a bug? I am no
longer able to obtain the list of objects that have changed between two
snapshots of an image; it always lists all allocated regions of the RBD image.
This behaviour, however, only occurs when I add the '--whole-object' option.
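For reference, a sketch of the two invocations being compared
(pool/image/snap names are placeholders):

    # expected: only the extents touched between the two snapshots
    rbd diff --from-snap <snap1> <pool>/<image>@<snap2>
    # --whole-object rounds changed extents up to full object size, but should
    # still report only objects that actually changed
    rbd diff --from-snap <snap1> --whole-object <pool>/<image>@<snap2>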
I've been banging on my ceph octopus test cluster for a few days now.
8 nodes; each node has 2 SSDs and 8 HDDs.
They were all autoprovisioned so that each HDD gets an LVM slice of an SSD as a
db partition.
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:      # assumed completion: HDDs hold the data...
  rotational: 1
db_devices:        # ...with the db on the SSDs, per the description above
  rotational: 0
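Such a spec is normally applied and verified with (a sketch; the filename is
a placeholder):

    ceph orch apply osd -i osd_spec.yml
    ceph orch device ls   # confirm which devices were consumed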
I've inherited a Ceph Octopus cluster that seems like it needs urgent
maintenance before data loss begins to happen. I'm the guy with the most Ceph
experience on hand and that's not saying much. I'm experiencing most of the ops
and repair tasks for the first time here.
Ceph health output looks
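A standard first triage pass for an unfamiliar cluster (a sketch):

    ceph health detail   # exactly which health checks are firing
    ceph -s              # PG states, OSD up/in counts
    ceph osd df tree     # per-OSD utilization, to spot near-full disks
    ceph osd tree        # down/out OSDs and the failure-domain layout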
Hey all.
I was wondering if Ceph Octopus is capable of automating/managing snapshot
creation/retention and then replication? I've seen some notes about it, but
can't seem to find anything solid.
Open to suggestions as well. Appreciate any input!
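For RBD specifically, Octopus added snapshot-based mirroring with built-in
scheduling, which covers creation and replication. A sketch; pool/image names
are placeholders and the peer-bootstrap step between clusters is elided:

    rbd mirror pool enable <pool> image
    rbd mirror image enable <pool>/<image> snapshot
    rbd mirror snapshot schedule add --pool <pool> --image <image> 1h
    rbd mirror snapshot schedule ls --recursive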
I am running Nautilus on CentOS 7. Does Octopus run similarly to Nautilus,
i.e.:
- runs on el7/centos7
- runs without containers by default
- runs without cephadm by default
Hi,
I have installed a Ceph Octopus cluster using cephadm with a single network.
Now I want to add a second network and configure it as the cluster address.
How do I configure Ceph to use the second network as the cluster network?
Amudhan
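The cluster network is a config option, so a hedged sketch (the subnet is a
placeholder; OSDs need an interface in it and a restart to pick it up):

    ceph config set global cluster_network <subnet-cidr>
    ceph config get osd cluster_network   # verify what the OSDs will use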
Hello,
The MDS process crashed suddenly. After trying to restart it, it failed to
replay the journal and started restarting continually.
Just to summarize, here is what happened :
1/ The cluster is up and running with 3 nodes (mon and mds on the same nodes)
and 3 OSDs.
2/ After a few days, 2 (standby
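Before attempting any repair, inspecting and backing up the journal is the
usual first step. A sketch, assuming a single filesystem with rank 0:

    cephfs-journal-tool --rank=<fsname>:0 journal inspect
    cephfs-journal-tool --rank=<fsname>:0 journal export backup.bin
    # recovery steps (event recover_dentries, journal reset) only after the backup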
Hi,
I made a fresh install of Ceph Octopus 15.2.3 recently.
And after a few days, the 2 standby MDS suddenly crashed with a segmentation
fault error.
I tried to restart them, but they do not start.
Here is the error :
-20> 2020-07-17T13:50:27.888+0000 7fc8c6c51700 10 monclient: _renew_subs
-19> 2020
I have a seemingly strange situation. I have three OSDs that I created with
Ceph Octopus using the `ceph orch daemon add osd <host>:<device>` command. All
three were added and everything was great. Then I rebooted the host. Now the
daemons won't start via Docker. When I attempt to run the `docker` command
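When cephadm-managed daemons fail after a reboot, the per-daemon systemd
units and their logs are the first stop. A sketch; <fsid> and <id> are
placeholders:

    cephadm ls                             # daemons cephadm knows about on this host
    systemctl status ceph-<fsid>@osd.<id>  # the unit that launches the docker container
    cephadm logs --name osd.<id>           # journald output for that daemon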