Good morning,
I wanted to inquire about the status of the Ceph iSCSI gateway service. We
currently have several machines installed with this technology that are working
correctly,
although I have seen that it appears to be discontinued since 2022. My question
is whether to continue down th
I was reading on the Ceph site that iSCSI is no longer under active development
since November 2022. Why is that?
https://docs.ceph.com/en/latest/rbd/iscsi-overview/
-- Michael
Hi All,
Just successfully(?) completed a "live" update of the first node of a
Ceph Quincy cluster from RL8 to RL9. Everything "seems" to be working -
EXCEPT the iSCSI Gateway on that box.
During the update the ceph-iscsi package was removed (ie
`ceph-iscsi-3.6-2.g97f5b02.el8.noarch.rpm` - th
Hello guys,
We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows
clients.
We started noticing some unexpected performance issues with iSCSI. I mean,
an SSD pool is reaching 100 MB/s of write speed for an
Hello guys,
We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows
clients.
Recently, we had the need to add some VMWare clusters as clients for the
iSCSI GW and also Windows systems with the use of Clus
I am looking at using an iscsi gateway in front of a ceph setup. However
the warning in the docs is concerning:
The iSCSI gateway is in maintenance as of November 2022. This means that
it is no longer in active development and will not be updated to add new
features.
Does this mean I should
Hi, please see the output below.
ceph-iscsi-gw-1.ipa.pthl.hklocalhost.localdomain is the one that got
messed up with a wrong hostname. I want to delete it.
/iscsi-target...-igw/gateways> ls
o- gateways
...
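As a sketch of the removal step: recent ceph-iscsi releases let you delete a gateway from the same gwcli context, though the exact subcommand is an assumption here and differs between versions, so run `help` at that node first:

```shell
# At the gateways node of the target in gwcli; 'delete' as shown is an
# assumption -- confirm with 'help' in your ceph-iscsi version first
/iscsi-target...-igw/gateways> delete ceph-iscsi-gw-1.ipa.pthl.hklocalhost.localdomain
```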
Hi guys,
we are using ceph-iscsi to provide block storage for Microsoft Exchange and
VMware vSphere. The Ceph docs state that you need to configure the Windows
iSCSI Initiator for fail-over only, but there is no such guidance for VMware.
In my tcmu-runner logs on both ceph-iscsi gateways I see the followin
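For ESXi there is no initiator-side "fail-over only" checkbox like on Windows; the closest equivalent is the path selection policy set per LUN. A sketch, where the `naa.` identifier is a placeholder for your device:

```shell
# List the devices and their current path selection policy (PSP)
esxcli storage nmp device list

# Set Most Recently Used (effectively failover-only) on one device;
# the naa identifier below is a placeholder for your LUN
esxcli storage nmp device set --device naa.60014054xxxxxxxxxxxxx --psp VMW_PSP_MRU
```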
Hi Everybody (Hi Dr. Nick),
I'm attacking this issue from both ends (ie from the Ceph-end and from
the oVirt-end - I've posted questions on both mailing lists to ensure we
capture the required knowledge-bearer(s)).
We've got a Ceph Cluster set up with three iSCSI Gateways configured,
and we
Hi All,
I've followed the instructions on the CEPH Doco website on Configuring
the iSCSI Target. Everything went AOK up to the point where I try to
start the rbd-target-api service, which fails (the rbd-target-gw service
started OK).
A `systemctl status rbd-target-api` gives:
~~~
rbd-target
Hello team
I have a problem which I want the team to help me on.
I have a Ceph cluster with HEALTH_OK running in a testing environment,
with 3 nodes with 4 OSDs each, and 3 mons plus 2 managers, deployed using
ansible. The purpose of the cluster is to work as the backend of OpenStack as
storage an
Hi Guys.
I have a question about Ceph working with iSCSI Gateways.
Today we have a cluster with 10 OSD Nodes, 3 Monitors and 2 iSCSI Gateways. We
are planning to expand the gateways to 4 machines. We understood the process
to do this, but we would like to know if it's necessary to adjust some
c
Hello everyone,
I need some help with our Ceph 16.2.5 cluster as an iSCSI target with ESXi nodes.
Background info:
- we have built 3x OSD nodes with 60 BlueStore OSDs: 60x 6TB
spinning disks, 12 SSDs and 3 NVMe.
- OSD nodes have 32 cores and 256 GB RAM
- the OSD disks are connected to
Hi,
I have several clusters running Nautilus that are pending an upgrade to
Octopus.
I am now testing the upgrade steps for a Ceph cluster from Nautilus
to Octopus using cephadm adopt in a lab, following the link below:
- https://docs.ceph.com/en/octopus/cephadm/adoption/
Lab environment:
3 all-in-one node
Hi All,
I'd like to install Ceph Nautilus on Ubuntu 18.04 LTS and present the storage to
2 Windows servers via iSCSI. I chose Nautilus because of the deploy
function; I don't want another VM for cephadm. I can install Ceph and
it is working properly, but I can't set up the iSCSI gateway.
I have an issue with ceph-iscsi (Ubuntu 20 LTS and Ceph 15.2.6): after I
restart rbd-target-api, it fails and does not start again:
```
sudo systemctl status rbd-target-api.service
● rbd-target-api.service - Ceph iscsi target configuration API
Loaded: loaded (/lib/systemd/system/rbd-target-a
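When rbd-target-api dies like this, the systemd status summary usually truncates the real error; a sketch of the usual first steps (paths are the package defaults, adjust if yours differ):

```shell
# Pull the full Python traceback from the journal, not just the status line
journalctl -u rbd-target-api -n 100 --no-pager

# The API reads this config at startup; a typo here is a common failure cause
cat /etc/ceph/iscsi-gateway.cfg

# rbd-target-api depends on these, so check they are healthy too
systemctl status tcmu-runner rbd-target-gw
```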
All;
I've finally gotten around to setting up iSCSI gateways on my primary
production cluster, and performance is terrible.
We're talking 1/4 to 1/3 of our current solution.
I see no evidence of network congestion on any involved network link. I see no
evidence of CPU or memory being a problem o
Hi,
does anyone here use CEPH iSCSI with VMware ESXi? It seems that we are hitting
the 5 second timeout limit on software HBA in ESXi. It appears whenever there
is increased load on the cluster, like deep scrub or rebalance. Is it normal
behaviour in production? Or is there something special we
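The 5-second stall is the ESXi iSCSI initiator giving up on a path; the Ceph docs suggest raising RecoveryTimeout on the software HBA. A sketch, where `vmhba64` is a placeholder for your software iSCSI adapter name:

```shell
# Identify the software iSCSI adapter name first
esxcli iscsi adapter list

# Raise the recovery timeout; 25 is the value suggested in the Ceph
# iSCSI-for-ESXi docs, and vmhba64 is a placeholder
esxcli iscsi adapter param set --adapter=vmhba64 --key=RecoveryTimeout --value=25
```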
Dear All,
a week ago we had to reboot our ESXi nodes since our CEPH cluster suddenly
stopped serving all I/O. We have identified a VM (vCenter appliance) which was
swapping heavily and causing heavy load. However, since then we are
experiencing strange issues, as if the cluster cannot handle an
All;
We've used iSCSI to support virtualization for a while, and have used
multi-pathing almost the entire time. Now, I'm looking to move from our single
box iSCSI hosts to iSCSI on Ceph.
We have 2 independent, non-routed, subnets assigned to iSCSI (let's call them
192.168.250.0/24 and 192.16
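On a Linux initiator, two portals like these are typically paired with dm-multipath in failover mode; a fragment along the lines of the Ceph iSCSI initiator docs (settings approximate the documented example, verify against your docs version):

```shell
# /etc/multipath.conf fragment for LIO-backed Ceph iSCSI gateways
devices {
    device {
        vendor                 "LIO-ORG"
        product                "TCMU device"
        hardware_handler       "1 alua"
        path_grouping_policy   "failover"
        path_selector          "queue-length 0"
        prio                   "alua"
        path_checker           "tur"
        failback               60
        no_path_retry          12
    }
}
```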
Is it possible to create an EC backed RBD via ceph-iscsi tools (gwcli,
rbd-target-api)? It appears that a pre-existing RBD created with the rbd
command can be imported, but there is no means to directly create an EC
backed RBD. The API seems to expect a single pool field in the body to work
with.
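If the API indeed only accepts a single pool, a workaround is to create the image with the rbd CLI, putting data in the EC pool via `--data-pool`, and then attach the pre-existing image through gwcli; `iscsi-pool` and `ec-pool` are placeholder names:

```shell
# Image headers/omap live in the replicated pool; object data goes to the
# erasure-coded pool (placeholder pool and image names)
rbd create --size 1T --data-pool ec-pool iscsi-pool/disk_1

# The pre-existing image can then be attached under the /disks context in
# gwcli; the exact attach/import step depends on your ceph-iscsi version
```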
Hi,
In the Documentation on
https://docs.ceph.com/docs/nautilus/rbd/iscsi-target-cli/ it is stated
that you need at least CentOS 7.5 with at least kernel 4.16 and to
install tcmu-runner and ceph-iscsi "from your Linux distribution's
software repository".
CentOS does not know about tcmu-runner nor
22 matches