Hello Everyone,
We are going to install Ceph object storage with LDAP authentication.
We would like to know whether ACLs and quotas on objects and buckets work
correctly with LDAP users.
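From what I have read so far, the pieces involved would look roughly like
this (a sketch only; hostnames, DNs and user names below are made up):

  # ceph.conf on the rgw host: enable LDAP for S3 auth
  rgw_s3_auth_use_ldap = true
  rgw_ldap_uri = ldaps://ldap.example.com
  rgw_ldap_binddn = "uid=ceph,cn=users,dc=example,dc=com"
  rgw_ldap_secret = /etc/ceph/bindpass
  rgw_ldap_searchdn = "cn=users,dc=example,dc=com"
  rgw_ldap_dnattr = uid

  # quotas would then be set on the rgw user as usual
  radosgw-admin quota set --quota-scope=user --uid=testuser --max-size=10G
  radosgw-admin quota enable --quota-scope=user --uid=testuser

  # and bucket/object ACLs e.g. via s3cmd
  s3cmd setacl s3://mybucket --acl-grant=read:otheruser

My understanding is that LDAP-authenticated users show up as ordinary rgw
users, so quotas and ACLs should attach to the uid as above, but that is
exactly what we want to confirm.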
Thanks
Ignazio
___
Hello Arthur,
I meant:
Cluster B pool pool-cluster-B mirrored on cluster A
Cluster C pool pool-cluster-C mirrored on cluster A
So on cluster A I should have two rbd-mirror daemons.
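If it helps, this is roughly the per-pool peering I had in mind on cluster A
(a sketch; the client name is a placeholder, and it assumes cluster-B.conf
and cluster-C.conf are present on the host):

  rbd --cluster cluster-A mirror pool enable pool-cluster-B pool
  rbd --cluster cluster-A mirror pool peer add pool-cluster-B client.mirror@cluster-B
  rbd --cluster cluster-A mirror pool enable pool-cluster-C pool
  rbd --cluster cluster-A mirror pool peer add pool-cluster-C client.mirror@cluster-C

Whether one rbd-mirror daemon on A can serve both peers, or two daemons are
needed, is exactly my question.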
Ignazio
On Fri, Oct 1, 2021 at 13:35 Arthur Outhenin-Chalandre <
arthur.outhenin-chalan...@ce
ur example the third cluster
> would then require two mirror daemons which is not possible AFAIK. I
> can't tell if there's any development going on in that direction, so
> my answer would be "no, you can't do that".
>
> Regards,
> Eugen
>
>
> Quoting
Hello All,
I would like to know if it is possible for two clusters to mirror RBD to a
third cluster.
In other words, I have 3 separate Ceph clusters: A, B, C.
I would like clusters A and B to mirror some pools to cluster C.
Is it possible?
Thanks
Ignazio
___
ing ownership of
'/var/lib/cinder/mnt/b36168a69e993d67635862db6ab238d1/.snapshot': Read-only
file system
Of course it cannot do it.
Could this be the problem?
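If the chown against the NetApp .snapshot directory is really the trigger,
one thing I may try is relaxing the NFS driver's secure-file handling in
cinder.conf (only a guess on my side; the section name is whatever the
backend is called):

  [netapp-nfs]
  nas_secure_file_operations = False
  nas_secure_file_permissions = False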
Ignazio
On Mon, Jul 26, 2021 at 19:36 Ignazio Cassano <
ignaziocass...@gmail.com> wrote:
> Hello All,
>
Hello All,
I am playing with Kolla Wallaby on Ubuntu 20.04.
When I add a new backend type, the volume container stops working and keeps
restarting, and all instances are stopped.
I can only solve it by restarting one controller at a time.
This morning I had cinder configured for NFS NetApp with 24 inst
We solved our issue: we had a dirty LVM configuration and cleaned it.
Now it is working fine.
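In case someone else hits this, the stale volumes can usually be spotted and
wiped with something along these lines (/dev/sdX is a placeholder for the
affected device):

  ceph-volume lvm list        # what ceph-volume thinks it owns
  lvs ; vgs ; pvs             # look for leftover LVs/VGs
  ceph-volume lvm zap --destroy /dev/sdX   # wipe a stale device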
Ignazio
On Mon, Jul 26, 2021 at 13:25 Ignazio Cassano <
ignaziocass...@gmail.com> wrote:
> Hello, I want to add further information I found for the issue described
>
Hello, I want to add further information I found for the issue described
by Andrea:
cephadm.log:2021-07-26 13:07:11,281 DEBUG /usr/bin/docker: stderr Error: No
such object: ceph-be115adc-edf0-11eb-8509-c5c80111fd98-osd.11
cephadm.log:2021-07-26 13:07:11,654 DEBUG /usr/bin/docker: stderr Error: No
s
thout the underscore).
> And I'm pretty sure this applies to all config options.
>
> [1]
> https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/#id3
>
> Regards,
>
> Dimitri
>
> On Fri, Jul 23, 2021 at 12:03 PM Ignazio Cassano
> wrote:
>
>
Hello, I want to ask which is the correct form in ceph.conf for the cluster
network:
cluster network =
or
cluster_network =
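(From the network-config-ref docs, my understanding is that both are parsed
the same way; spaces and underscores in option names are interchangeable.
So, subnet made up, these two lines should be equivalent:

  [global]
  cluster network = 192.168.100.0/24
  # same as: cluster_network = 192.168.100.0/24
)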
Thanks
On Fri, Jul 23, 2021, 17:36 Dimitri Savineau wrote:
> Hi,
>
> This looks similar to https://tracker.ceph.com/issues/46687
>
> Since you want to use hdd devices to b
Yes, I noticed that more bandwidth is required with this kind of server. I
must reconsider my network infrastructure.
Many thanks
Ignazio
On Fri, Mar 12, 2021 at 09:26 Robert Sander <
r.san...@heinlein-support.de> wrote:
> Hi,
>
> On 10.03.21 at 17:43,
r workload long term.
>
> Reed
>
> > On Mar 10, 2021, at 1:12 PM, Stefan Kooman wrote:
> >
> > On 3/10/21 5:43 PM, Ignazio Cassano wrote:
> >> Hello, what do you think about a ceph cluster made up of 6 nodes each
> one
> >> with the following configu
Sorry, I forgot to mention that I will not use CephFS.
On Wed, Mar 10, 2021, 20:44 Ignazio Cassano wrote:
> Hello, mon and osd.
1 small SSD is for the operating system and 1 is for the mon.
I agree to increase the RAM.
As far as NVMe size goes, it is true that more OSDs on smaller disks
> On 3/10/21 8:12 PM, Stefan Kooman wrote:
> > On 3/10/21 5:43 PM, Ignazio Cassano wrote:
> >> Hello, what do you think about a ceph cluster made up of 6 nodes each
> >> one
> >> with the following configuration ?
>
> I forgot to ask: Are you planning on o
Hello, what do you think about a Ceph cluster made up of 6 nodes, each one
with the following configuration?
A+ Server 1113S-WN10RT
Barebone
Supermicro A+ Server 1113S-WN10RT - 1U - 10x U.2 NVMe - 2x M.2 - Dual
10-Gigabit LAN - 750W Redundant
Processor
AMD EPYC™ 7272 Processor 12-core 2.90GHz 64M
Hello, I am testing Ceph from croit and it works fine: a very easy web
interface for installing and managing Ceph, and very clear support pricing.
Ignazio
On Tue, Jun 2, 2020, 19:36 wrote:
> and there's
>
> https://croit.io/consulting
>
> best regards
> Kevin M
>
> - Original Message -
>
Many thanks, Janne
Ignazio
On Wed, May 20, 2020 at 12:32 Janne Johansson <
icepic...@gmail.com> wrote:
> On Wed, May 20, 2020 at 12:14, Ignazio Cassano <
> ignaziocass...@gmail.com>:
>
>> Hello Janne, so do you think we must move from 10Gbs to 40
Hello Janne, so do you think we must move from 10Gb/s to 40 or 100Gb/s to
make the most of NVMe?
Thanks
Ignazio
On Wed, May 20, 2020 at 12:06 Janne Johansson <
icepic...@gmail.com> wrote:
> On Wed, May 20, 2020 at 12:00, Ignazio Cassano <
> ignazioca
Hello All,
We have 6 servers.
Configuration for each server:
1 SSD for mon (only on three servers)
1 SSD 1.9 TB for DB/WAL
1 NVMe 1.6 TB for DB/WAL
10 SAS HDDs 3.6 TB for OSDs
We decided to create a pool of 30 OSDs (5x6) with DB/WAL on SSD and a pool
of 30 OSDs (5x6) with DB/WAL on NVMe.
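For the actual OSD creation I am assuming something like this per HDD, with
the shared SSD pre-split into one LV per OSD (device and LV names below are
placeholders; a 1.9 TB SSD across 10 OSDs gives roughly 190 GB of DB per
OSD):

  ceph-volume lvm create --bluestore --data /dev/sdc --block.db ceph-db/db-0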
S
Hello All,
I just installed the Ceph iSCSI target following the "manual installation"
guide at
https://docs.ceph.com/docs/nautilus/rbd/iscsi-target-cli-manual-install/
When the API server starts it seems to work, but in the service status it
reports:
apr 30 07:22:51 lab-ceph-01 rbd-target-api[58740]: Processing osd
Hello Casey,
I solved it by going to the primary site and executing:
radosgw-admin zone modify --rgw-zone nivolazonegroup-primarysite
--access-key=blablabla --secret=blablabla
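(If I remember correctly, a zone change like this normally also needs a
period commit to propagate to the other site:

  radosgw-admin period update --commit
)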
Ignazio
On Wed, Apr 15, 2020 at 19:45 Casey Bodley
wrote:
> On Wed, Apr 15, 2020 at 12:06 PM Ignazio Cass
...Must I do something on the master after creating
the secondary?
Thanks
Ignazio
On Wed, Apr 15, 2020, 19:45 Casey Bodley wrote:
> On Wed, Apr 15, 2020 at 12:06 PM Ignazio Cassano
> wrote:
> >
> > Hello All,
> > Reading the documentation I created a multisite with lum
Hello All,
Reading the documentation, I created a multisite setup with the Luminous
version.
I would like to know if it syncs in one way only.
Using s3cmd, if I put a file in a bucket on the primary zone I can see the
file in the same bucket on the secondary zone.
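(From either zone I can also check the replication state in both directions
with:

  radosgw-admin sync status
)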
If I put a file in the bucket on secondary zon
> remote site?
>
> Does the output of
>
> radosgw-admin zonegroup get
> radosgw-admin zone get
>
> reflect those changes?
>
> Regards,
> Eugen
>
>
> Quoting Ignazio Cassano:
>
> > Hello,
> > I have configured a multisite ceph.
>
Hello,
I have configured a multisite Ceph setup.
The master zone has not changed, but on the destination zone I had some
problems.
On the destination zone I cleaned and reinstalled the radosgw, but trying
to assign the same zone name it had before the reinstallation does not work
(radosgw does not start).
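What I would try is removing the stale zone definition before recreating it,
along these lines (the zone name is a placeholder for mine):

  radosgw-admin zone delete --rgw-zone=oldzone
  radosgw-admin period update --commit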
I c
I am sorry.
The problem was the http_proxy
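For anyone else: the pull works once the proxy variables are out of the way,
e.g.:

  unset http_proxy https_proxy
  radosgw-admin realm pull --rgw-realm=nivola \
    --url=http://10.102.184.190:8080 --access-key=access --secret=secret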
Ignazio
On Fri, Mar 27, 2020 at 11:24 Ignazio Cassano <
ignaziocass...@gmail.com> wrote:
> Hello, I am trying to initialize the secondary zone by pulling the realm
> defined in the primary zone:
>
> radosgw-admin real
Hello, I am trying to initialize the secondary zone by pulling the realm
defined in the primary zone:
radosgw-admin realm pull --rgw-realm=nivola --url=http://10.102.184.190:8080
--access-key=access --secret=secret
The following error appears:
request failed: (16) Device or resource busy
Could
Hello All,
I am going to test rbd mirroring and object storage multisite.
I would like to know which network is used by RBD mirror (the Ceph public
or the cluster network?).
Same question for object storage multisite.
What about firewalls?
What about bandwidth?
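From what I understand so far, both rbd-mirror and multisite sync act as
ordinary clients of the remote cluster, so they would only need to reach its
public network (multisite goes over the RGW HTTP endpoint). If that is
right, the firewall side would be roughly:

  firewall-cmd --permanent --add-port=3300/tcp --add-port=6789/tcp   # mons
  firewall-cmd --permanent --add-port=6800-7300/tcp                  # OSDs
  firewall-cmd --reload

Please correct me if this is wrong.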
Our sites are connected with 1Gbs networ
Thanks Konstantin. I presume the ownCloud solution can use Ceph as a
storage backend.
Ignazio
On Sat, Mar 21, 2020, 05:45 Konstantin Shalygin wrote:
> On 3/18/20 7:06 PM, Ignazio Cassano wrote:
> > Hello All,
> > I am looking for object storage freee/opensource client gui (linux
Hello, I have two OpenStack installations on different sites.
They do not share any services: each one has its own Keystone repository
and Ceph cluster with object storage and block storage.
I read about object storage multisite and I could modify my object storages
to enable multisite active-active.
Hello All,
I am looking for a free/open-source object storage client GUI (Linux and
Windows) for end users.
I tried SwiftStack but it is only for personal use.
Help, please?
Ignazio
___