Hi Lucian,
> On 29 Jun 2021, at 17:02, Lucian Petrut
> wrote:
>
> It’s a compatibility issue, we’ll have to update the Windows Pacific build.
>
> Sorry for the delayed reply, hundreds of Ceph ML mails ended up in my spam
> box. Ironically, I’ll have to thank Office 365 for that :).
Can you p
Sorry, a point was left out yesterday. Currently, the .index pool with those
three OSDs (.18, .19, .29) is not in use and has almost no data.
renjianxinlover
renjianxinlo...@163.com
Hi Mahnoosh,
you might want to set bluefs_buffered_io to true for every OSD.
It looks like it's false by default in v15.2.12.
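Not from Igor's mail, just a minimal sketch of how that could be rolled out via
the centralized config store (the option name is real; the verification and
restart steps, and the non-cephadm systemd unit, are assumptions):

# set the option for all OSDs in the config database
ceph config set osd bluefs_buffered_io true
# check what a given OSD actually sees
ceph config show osd.0 bluefs_buffered_io
# restart OSDs (host by host) if the change is not picked up at runtime
systemctl restart ceph-osd.target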
Thanks,
Igor
On 7/18/2021 11:19 PM, mahnoosh shahidi wrote:
We have a ceph cluster with 408 osds, 3 mons and 3 rgws. We updated our
cluster from nautilus 14.2.14 to octopu
I just brought up a new Octopus cluster (because I want to run it on CentOS 7
for now).
Everything looks fairly nice on the ceph side.
Running FIO on a gateway pulls up some respectable IO/s on an rbd mapped image.
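(Not part of the original message; a hedged example of the kind of fio run
meant here, with pool, image and device names made up.)

# map the image and benchmark the resulting block device (read-only workload,
# so the image contents stay intact)
rbd map testpool/testimage
fio --name=rbdtest --filename=/dev/rbd0 --direct=1 \
    --ioengine=libaio --rw=randread --bs=4k --iodepth=32 \
    --numjobs=4 --runtime=60 --time_based --group_reporting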
I can use targetcli to iSCSI-share it out to a VMware node (can't use gwcli on
Dear Patrick,
Thanks a lot for pointing out the HSM ticket. We will see whether we have the
resources to do something with the ticket.
I am thinking of a temporary solution for HSM using cephfs client commands. The
following command
'setfattr -n ceph.dir.layout.pool -v NewPool Fold
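(The command above is cut off by the digest; as a hedged illustration only,
with made-up filesystem, path and file names. The directory layout xattr only
affects newly created files, so existing files have to be rewritten to actually
move their data.)

# NewPool must already be attached to the filesystem
ceph fs add_data_pool myfs NewPool
# new files under this directory will then be written to NewPool
setfattr -n ceph.dir.layout.pool -v NewPool /mnt/cephfs/archive
# existing files keep their old layout; rewriting a file migrates its data
cp -a /mnt/cephfs/archive/big.dat /mnt/cephfs/archive/big.dat.tmp && \
    mv /mnt/cephfs/archive/big.dat.tmp /mnt/cephfs/archive/big.dat
# verify where a file's data now lives
getfattr -n ceph.file.layout.pool /mnt/cephfs/archive/big.dat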
Hi,
we recently set up a new Pacific cluster with cephadm.
Deployed nfs on two hosts and ingress on two other hosts. (ceph orch
apply for nfs and ingress like on the docs page)
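(Not quoted from the docs page, just a hedged sketch of the kind of specs meant
here; service ids, hostnames, ports and the virtual IP are placeholders, and
the field names are from memory of the Pacific cephadm docs, so double-check
against your version.)

# write both service specs into one file and apply it
cat > nfs-ingress.yaml <<'EOF'
service_type: nfs
service_id: mynfs
placement:
  hosts: [nfshost1, nfshost2]
---
service_type: ingress
service_id: nfs.mynfs
placement:
  hosts: [ingresshost1, ingresshost2]
spec:
  backend_service: nfs.mynfs
  frontend_port: 2049
  monitor_port: 9049
  virtual_ip: 192.168.100.10/24
EOF
ceph orch apply -i nfs-ingress.yaml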
So far so good. ESXi with NFS41 connects, but the way ingress works
confuses me.
It distributes clients statically to
Hello samuel,
On Mon, Jul 19, 2021 at 2:28 PM huxia...@horebdata.cn
wrote:
>
> Dear Cepher,
>
> I have a requirement to use CephFS as a tiered file system, i.e. the data
> will be first stored onto an all-flash pool (using SSD OSDs), and then
> automatically moved to an EC coded pool (using HDD
Hi,
> On 21 Jul 2021, at 10:53, Burkhard Linke
> wrote:
>
> One client with special needs is openstack cinder. The database entries
> contain the mon list for volumes
Another question: do you know where this list is saved? I mean, how can I see
the current records via a cinder command?
Thanks,
The IO500 Foundation requests your help with determining the future
direction for the IO500 lists and data repositories. We ask that you complete
a short survey that will take less than 5 minutes.
The survey is here: https://forms.gle/cFMV4sA3iDUBuQ73A
Deadline for responses is 27 August 2021 to al
Hi Manuel,
I was the one that did Red Hat's IO500 CephFS submission. Feel free to
ask any questions you like. Generally speaking I could achieve 3GB/s
pretty easily per kernel client and up to about 8GB/s per client with
libcephfs directly (up to the aggregate cluster limits assuming enough
Hello,
no experience yet, but we are planning to do the same (although partly
NVME, partly spinning disks) for our upcoming cluster.
It's going to be rather focused on AI and ML applications that use
mainly GPUs, so the actual number of nodes is not going to be
overwhelming, probably around 40
Hi ceph users,
Can someone share some comments on the below query.
Regards
Ram.
On Mon, 12 Jul, 2021, 3:16 pm Ramanathan S,
wrote:
> Hi Abdelillah,
>
> We use the below link to install Ceph in containerized deployment using
> ansible.
>
>
> https://access.redhat.com/documentation/en-us/red_hat
On Tue, Jul 20, 2021 at 11:49 PM Robert W. Eckert wrote:
>
> The link in the Ceph documentation
> (https://docs.ceph.com/en/latest/install/windows-install/) is
> https://cloudbase.it/ceph-for-windows/; is https://cloudba.se/ceph-win-latest
> the same?
Yes. https://cloudba.se/ceph-win-latest
Crappy code continues to live on?
This issue has been automatically marked as stale because it has not had recent
activity. It will be closed in a week if no further activity occurs. Thank you
for your contributions.
Hi,
is it possible to limit a subuser's access so that they see (read, write)
only "their" bucket? And also to be able to create a bucket inside that bucket?
Kind regards,
Rok
Dear all,
we are looking towards setting up an all-NVME CephFS instance in our
high-performance compute system. Does anyone have any experience to share
in a HPC setup or an NVME setup mounted by dozens of nodes or more?
I've followed the impressive work done at CERN on Youtube but otherwise
ther
On 7/20/21 5:23 PM, [AR] Guillaume CephML wrote:
Hello,
On 20 Jul 2021, at 17:48, Daniel Gryniewicz wrote:
That's probably this one: https://tracker.ceph.com/issues/49892 Looks like we
forgot to mark it for backport. I've done that now, so it should be in the
next Pacific.
I’m not sure
Good evening,
On 7/21/21 10:44 AM, Lokendra Rathour wrote:
Hello Everyone,
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/operations_guide/index#handling-a-node-failure
* refer to the section * "Replacing the node, reinstalling the operating
system, and using t
‐‐‐ Original Message ‐‐‐
On Wednesday, July 21st, 2021 at 9:53 AM, Burkhard Linke
wrote:
> You need to ensure that TCP traffic is routeable between the networks
> for the migration. OSD only hosts are trivial, an OSD updates its IP
> information in the OSD map on boot. This should al
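(My own hedged sketch to illustrate that point, not from Burkhard's mail;
the subnet is a placeholder.)

# OSDs re-register their new address at startup; verify after the move with:
ceph osd dump | grep "^osd\."
# tell daemons which network they may bind to on the new segment
ceph config set global public_network 10.1.0.0/24
# the monitors are the part that needs explicit handling: their addresses are
# pinned in the monmap and in client ceph.conf/mon_host, so they are usually
# redeployed one by one on the new IPs (or the monmap is edited offline with
# monmaptool) rather than just rebooted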
Hello,
I need to relocate an Octopus (15.2.13) Ceph cluster of 8 nodes to another
internal network. This means that the IP address of each node as well as the
domain name will change. The hostname itself will stay the same.
What would be the best steps in order to achieve that from the ceph s
I'm running an 11-node Ceph cluster running Octopus (15.2.8). I mainly run this
as an RGW cluster, so I had 8 RGW daemons on 8 nodes. Currently I have 1 PG
degraded and some misplaced objects, as I added a temporary node.
Today I tried to expand the RGW cluster from 8 to 10; this didn't work as
one
Hello Everyone,
We have a Ceph-based three-node setup. In this setup, we want to test
complete node failover and reuse the old OSD disk from the failed node.
We are referring to the Red Hat-based document:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/operation
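(Not an answer from this thread, just a hedged sketch of the usual
re-activation step after the OS reinstall, assuming LVM-based ceph-volume OSDs
whose data devices survived, and that ceph.conf plus the keyrings have been
restored first.)

# show the OSDs that ceph-volume still finds on the old disks
ceph-volume lvm list
# recreate the systemd units and start those OSDs again
ceph-volume lvm activate --all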
Hi,
It’s only compatible with S3 and Swift. You could also use object storage with
rados, bypassing RGW.
But it’s not user-friendly and doesn’t provide the same features.
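(A hedged illustration of that bare-rados route, not from Étienne's mail; the
pool and object names are made up, and the pool must already exist.)

# store, fetch and list flat objects in a pool; no RGW, buckets or users involved
rados -p mypool put hello.txt ./hello.txt
rados -p mypool get hello.txt ./hello-copy.txt
rados -p mypool ls

Applications would normally go through the librados bindings (C/C++/Python)
rather than the CLI, but the data model is the same.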
Which kind of access/API were you expecting?
Étienne
> On 21 Jul 2021, at 09:43, Michel Niyoyita wrote:
>
> Dear Ceph users
Good morning everybody,
we've dug further into it but still don't know how this could happen.
What we ruled out for now:
* Orphan objects cleanup process.
** There is only one bucket with missing data (I checked all other
buckets yesterday)
** The "keep this files" list is generated by radosgw-adm
Hi,
On 7/21/21 9:40 AM, mabi wrote:
Hello,
I need to relocate an Octopus (15.2.13) Ceph cluster of 8 nodes to another
internal network. This means that the IP address of each node as well as the
domain name will change. The hostname itself will stay the same.
What would be the best steps in
Dear Ceph users,
I would like to ask if Ceph object storage (RGW) is compatible only with
Amazon S3 and OpenStack Swift. Is there any other way it can be used apart from
those two services?
Kindly help me to understand, because in training the offer is for S3 and
Swift only.
Best Regards