What is your cluster status (ceph -s)? I assume that either your
cluster is not healthy or your crush rules don't cover an osd failure.
Sometimes it helps to fail the active mgr (ceph mgr fail). Can you
also share your 'ceph osd tree'? Do you use the default
replicated_rule or any additional rules?
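For reference, a quick sketch of the checks being asked for above; these are
all standard ceph CLI calls and nothing here is cluster-specific:

  # Overall health, recovery activity, stuck PGs
  ceph -s
  ceph health detail

  # OSD / CRUSH layout and the rules in use
  ceph osd tree
  ceph osd crush rule dump

  # Fail over the active mgr if it looks stuck
  ceph mgr fail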
Hi Stefan,
Thanks for your feedback!
On Thu, Sep 29, 2022 at 10:28 AM Stefan Kooman wrote:
> On 9/26/22 18:04, Gauvain Pocentek wrote:
>
> >
> >
> > We are running a Ceph Octopus (15.2.16) cluster with similar
> > configuration. We have *a lot* of slow ops when starting OSDs. Also
> >
I used to create Bluestore OSDs using commands such as this one:
ceph-volume lvm create --bluestore --data ceph-block-50/block-50 --block.db ceph-db-50-54/db-50
with the goal of having block.db and the WAL co-located on the same LV
(ceph-db-50-54/db-50 in my example, which is on an SSD device).
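For completeness, a sketch of that layout and how to double-check it
afterwards; the LV names follow the example above, and 'ceph-volume lvm list'
is the standard way to see where block, block.db (and a separate WAL, if any)
ended up:

  # DB on the SSD LV; the WAL follows the DB when no --block.wal is given
  ceph-volume lvm create --bluestore \
      --data ceph-block-50/block-50 \
      --block.db ceph-db-50-54/db-50

  # Show the resulting devices per OSD
  ceph-volume lvm list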
Is
Where is your ceph.conf file?
ceph_volume.exceptions.ConfigurationError: Unable to load expected
Ceph config at: /etc/ceph/ceph.conf
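If the file really is missing on that host, one minimal way to recreate it,
assuming you still have admin access to the cluster from another node, is:

  # Run on a node with an admin keyring, then copy the result to the failing host
  ceph config generate-minimal-conf > /etc/ceph/ceph.conf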
---
Alvaro Soto.
Note: My work hours may not be your work hours. Please do not feel the need
to respond during a time that is not convenient for you.
---
Bump! Any suggestions?
On Wed, Sep 28, 2022 at 4:26 PM Satish Patel wrote:
> Folks,
>
> I have 15 nodes for ceph and each node has a 160TB disk attached. I am
> using the cephadm quincy release; 14 of the nodes have been added, but the
> last one gives a very strange error when I try to add it.
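For context, the usual cephadm host-add flow looks roughly like the sketch
below (hostname and IP are placeholders); the error typically shows up at one
of these steps, so it helps to know which one fails:

  # Make sure the cluster's SSH key is on the new node
  ssh-copy-id -f -i /etc/ceph/ceph.pub root@node15

  # Add the host to the orchestrator and verify it is reachable
  ceph orch host add node15 10.0.0.15
  ceph orch host ls
  ceph cephadm check-host node15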
On Thu, Sep 29, 2022 at 17:57, Matt Vandermeulen wrote:
>
> I think you're likely to get a lot of mixed opinions and experiences
> with this question. I might suggest trying to grab a few samples from
> different vendors, and making sure they meet your needs (throw some
> workloads at them, qualify them), then make sure your vendors have a
> reasonable lead time.
On Mon, Sep 19, 2022 at 9:38 AM Yuri Weinstein wrote:
> Update:
>
> Remaining =>
> upgrade/octopus-x - Neha pls review/approve
>
Both the failures in
http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_16:33:35-upgrade:octopus-x-quincy-release-distro-default-smithi/
seem related to RGW. Casey,
I think you're likely to get a lot of mixed opinions and experiences
with this question. I might suggest trying to grab a few samples from
different vendors, and making sure they meet your needs (throw some
workloads at them, qualify them), then make sure your vendors have a
reasonable lead time.
Hello,
We had been using Intel SSD D3 S4610/20 SSDs, but Solidigm is... having
problems. Bottom line: they haven't shipped an order in a year.
Does anyone have any recommendations on SATA SSDs that have a fairly good mix
of performance/endurance/cost?
I know that they should all just work
Please see my original post/answer...
the missing ceph-volume.noarch package causes the problem!
Thanks,
Christoph
On Thu, Sep 29, 2022 at 16:37, Marc wrote:
> >
> > > Many thanks for any hint helping to get missing 7 OSDs up ASAP.
> >
> > Not sure if it "helps", but I would try "ceph-volume lvm activate --all"
Please see my original post/answer...
the missing ceph-volume.noarch package causes the problem!
Thanks,
Christoph
On Thu, Sep 29, 2022 at 16:36, Marc wrote:
> >
> > > Many thanks for any hint helping to get missing 7 OSDs up ASAP.
> >
> > Not sure if it "helps", but I would try "ceph-volume lvm activate --all"
Janne,
LVM looks fine so far. Please see below...
BUT. It seems that after the upgrade from Octopus to Quincy yesterday the
standalone package "ceph-volume.noarch" was not updated/installed. So after
re-installation of ceph-volume and activation, I got all the tmpfs mounts
under /var/lib/ceph again and wo
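A rough sketch of the recovery described here, assuming a dnf-based install on
Rocky and the package name mentioned in this thread:

  # Reinstall the split-out ceph-volume package, then bring the OSDs back up
  dnf install ceph-volume
  ceph-volume lvm activate --all
  systemctl status ceph-osd.target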
>
> > Many thanks for any hint helping to get missing 7 OSDs up ASAP.
>
> Not sure if it "helps", but I would try "ceph-volume lvm activate
> --all" if those were on lvm, I guess ceph-volume simple and raw might
> have similar command to search for and start everything that looks
> like a ceph OSD.
> Many thanks for any hint helping to get missing 7 OSDs up ASAP.
Not sure if it "helps", but I would try "ceph-volume lvm activate
--all" if those were on lvm, I guess ceph-volume simple and raw might
have similar command to search for and start everything that looks
like a ceph OSD.
Perhaps the
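For the non-LVM cases mentioned above, the equivalent entry points are (a
sketch, only relevant if any OSDs were prepared in "simple" or "raw" mode):

  # LVM-based OSDs
  ceph-volume lvm activate --all

  # "simple"-mode OSDs
  ceph-volume simple activate --all

  # raw-mode OSDs: list them first, then activate the ones shown
  ceph-volume raw list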
Dear All,
I'm testing Ceph Quincy and I have problems using the cephadm orchestrator
backend. When I try to use it to start/stop OSD daemons, nothing happens.
I have a "brand new" cluster deployed with cephadm. So far everything else that
I tried worked just like in Pacific, but the ceph or
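For what it's worth, a sketch of the commands I would expect to work here and
the usual first debugging steps (the daemon name osd.3 is a placeholder):

  # Stop/start a single daemon through the orchestrator
  ceph orch daemon stop osd.3
  ceph orch daemon start osd.3

  # If nothing happens, check the module state and the cephadm log,
  # and fail over the active mgr, which often unsticks the queue
  ceph orch status
  ceph log last cephadm
  ceph mgr fail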
On Thu, Sep 29, 2022 at 7:52 AM Dominique Ramaekers wrote:
> Is it possible these PRs aren't yet included in Quincy stable?
>
> I can't find a notice in my syslog about the mount syntax I use being
> deprecated.
Those PRs are in Quincy. However, there are no syslog warnings about
deprecating the mount syntax.
Hey guys,
we are using ceph-iscsi and want to update our configuration to serve iSCSI on
an additional network. I set up everything via the gwcli command. Originally
I created the gateway with "create gw-a 192.168.100.4". Now I want to add an
additional IP to the existing gateway, but I don't
Hello list member,
after upgrading from Octopus to Quincy yesterday, we now have a problem
starting OSDs on the newest Rocky 8.6 kernel, 4.18.0-372.26.1.el8_6.x86_64.
This is a non-cephadm cluster. All nodes run Rocky with kernel
4.18.0-372.19.1.el8_6.x86_64, except this one host (ceph1n012) I restarted
You understood my question correctly, thanks for the explanation.
Boris, I was able to force the traffic to go out only through the cluster
network by making the first machine hold only primary OSDs and the other
machines only replicas. It worked as intended for writes, but
reading on
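For reference, a sketch of one way to bias primaries onto a specific host with
the default replicated rule (the OSD IDs are placeholders): setting primary
affinity to 0 on the other hosts' OSDs keeps them from being chosen as primary.

  # Keep OSDs on the other machines from becoming primaries
  ceph osd primary-affinity osd.4 0
  ceph osd primary-affinity osd.5 0

  # Check which OSD is primary (UP_PRIMARY / ACTING_PRIMARY columns)
  ceph pg dump pgs_brief | head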
Hi,
Unfortunately it was a wrong track. The problem remains the same, with
the same error messages, on another host with only one network address
in the Ceph cluster public network. BTW, "cephadm shell --name rgw_daemon"
works, and from the shell I can use the radosgw-admin and ceph commands,
suggesting
Thanks Ken for the info.
Is it possible these PRs aren't yet included in Quincy stable?
I can't find a notice in my syslog about the mount syntax I use being
deprecated.
> -Original message-
> From: Ken Dreyer
> Sent: Wednesday, September 28, 2022 16:50
> To: Sagittarius-
On 9/26/22 18:04, Gauvain Pocentek wrote:
We are running a Ceph Octopus (15.2.16) cluster with similar
configuration. We have *a lot* of slow ops when starting OSDs. Also
during peering. When the OSDs start they consume 100% CPU for up to ~10
seconds, and after that consume
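In case it helps, the usual first places to look when OSDs spin at 100% CPU and
log slow ops right after startup (osd.12 is a placeholder; the daemon command
has to run on the host where that OSD lives):

  # Recent slow operations recorded by a specific OSD
  ceph daemon osd.12 dump_historic_slow_ops

  # Cluster-wide view of which OSDs the slow-op warnings point at
  ceph health detail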
Hi Murilo,
as far as I understand Ceph:
You connect via NFS to a radosgw. When sending data to the RGW instance
(uploading files via NFS), the RGW instance talks to the primary OSDs for
the required placement groups through the public network. The primary OSDs
talk to their replicas via the cluster network.
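A minimal sketch of how those two networks are declared (the subnets are
placeholders; the same options can also live in ceph.conf under [global]):

  # Clients and RGW reach the primary OSDs over the public network
  ceph config set global public_network 10.0.0.0/24

  # Primary-to-replica replication traffic uses the cluster network
  ceph config set global cluster_network 10.0.1.0/24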