Hi Gilles,
Yes, your configuration works with Netplan on Ubuntu 18 as well. However, this
would use only one of the physical interfaces (the current active interface for
the bond) for both networks.
The reason I want to create two bonds is to have enp179s0f0 as active for the
public network,
Thanks for the suggestion, Paul. I renamed “bond0” to “zbond0” but
unfortunately this did not solve the problem in our Ubuntu 18 environment.
There is still an issue during boot when adding the VLAN interfaces to the bond.
Regards,
James.
> On 31 Mar 2020, at 16:08, Paul Mezzanini wrote:
>
> We r
Thanks. I’m now trying to figure out how to get Proxmox to pass the “-o
mds_namespace=otherfs” option to its mounting of the filesystem, but that’s a
bit out of scope for this list (though if anyone has done this please let me
know!).
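For reference, with the kernel CephFS client the option goes on the mount itself; a minimal sketch of an ops fragment, where the monitor address, user name, and secretfile path are all illustrative placeholders:

```
# Mount a specific (non-default) CephFS by name with the kernel client.
# Monitor address, user, and secretfile path are placeholders.
mount -t ceph 192.0.2.1:6789:/ /mnt/otherfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=otherfs
```

How to get Proxmox to pass that option through remains the open question above.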
> On Mar 31, 2020, at 2:15 PM, Nathan Fish wrote:
>
> Yes,
Hi there
I have a fairly simple Ceph multisite configuration with two Ceph clusters
in two different datacenters in the same city.
The RGWs have this config for SSL:
rgw_frontends = civetweb port=7480+443s
ssl_certificate=/opt/ssl/ceph-bundle.pem
The certificate is a real issued certificate, not
It works well for me, been running a couple clusters for 1-2 years where all
OSD hosts (~200) have no system disks and instead netboot from PXE.
No NFS server involved, each host loads the same system image (Debian Live
squashfs) into memory on boot and runs independently from there on out. Take
Hi Victor,
that's true for Ceph releases prior to Octopus. The latter has some
improvements in this area.
There is a pending backport PR to fix that in Nautilus as well:
https://github.com/ceph/ceph/pull/33889
AFAIR this topic has been discussed in this mailing list multiple times.
Thanks,
Yes, standby (as opposed to standby-replay) MDSs form a shared pool
from which the mons will promote an MDS to the required role.
On Tue, Mar 31, 2020 at 12:52 PM Jarett DeAngelis wrote:
>
> So, for the record, this doesn’t appear to work in Nautilus.
>
>
>
> Does this mean that I should just co
So, for the record, this doesn’t appear to work in Nautilus.
Does this mean that I should just count on my standby MDS to “step in” when a
new FS is created?
> On Mar 31, 2020, at 3:19 AM, Eugen Block wrote:
>
>> This has changed in Octopus. The above config variables are removed.
>> Instea
Hello,
I don't use Netplan, and I'm still on Ubuntu 16.04.
But I use VLANs on the bond, not directly on the interfaces:
bond0 :
- enp179s0f0
- enp179s0f1
Then I use bond0.323 and bond0.324.
(I use a bridge on top to be more like my OpenStack cluster, and with more
friendly names: br-mgmt, br-sto
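For comparison, that layout under Netplan might look like the sketch below; this is an assumption-laden config fragment, with bond mode, VLAN IDs, and addresses purely illustrative:

```yaml
network:
  version: 2
  ethernets:
    enp179s0f0: {dhcp4: false}
    enp179s0f1: {dhcp4: false}
  bonds:
    bond0:
      interfaces: [enp179s0f0, enp179s0f1]
      parameters:
        mode: active-backup
        primary: enp179s0f0
  vlans:
    bond0.323:          # public network
      id: 323
      link: bond0
      addresses: [192.0.2.10/24]
    bond0.324:          # cluster network
      id: 324
      link: bond0
      addresses: [198.51.100.10/24]
```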
You can adjust the Primary Affinity down on the larger drives so they’ll get
less read load. In one test I’ve seen this result in a 10-15% increase in read
throughput, but it depends on your situation.
Optimal settings would require calculations that make my head hurt, maybe
someone has a too
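The knob itself is a one-liner per OSD; a minimal sketch, where the OSD id and the value are illustrative:

```
# Lower the chance osd.12 (a larger drive) is chosen as primary.
# Values range from 0 to 1; 1 is the default.
ceph osd primary-affinity osd.12 0.67
```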
Hello,
More info about this. I'm running Ubuntu 18.04. It looks like on all the
servers with OSDs that have the issue, there is a double entry for ceph-volume
in systemd. I don't know what caused the double entry on some servers.
Eg
/etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-30-4589d067-cdb7-44de-a484-497ca13cc32d
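One hedged way to inspect and clean this up is via systemctl; the unit name below is the one from the message (assuming the usual .service suffix), and it is an assumption that disable/enable will rebuild the .wants/ symlinks cleanly:

```
# Show every ceph-volume unit instance systemd knows about:
systemctl list-units 'ceph-volume@*' --all

# Disable and re-enable the affected instance so systemd recreates
# its .wants/ symlinks from scratch:
systemctl disable ceph-volume@lvm-30-4589d067-cdb7-44de-a484-497ca13cc32d.service
systemctl enable ceph-volume@lvm-30-4589d067-cdb7-44de-a484-497ca13cc32d.service
```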
> How to get rid of this logging??
>
> Mar 31 13:40:03 c01 ceph-mgr: 2020-03-31 13:40:03.521 7f554edc8700 0
> log_channel(cluster) log [DBG] : pgmap v672067: 384 pgs: 384
> active+clean;
Why?
>
> I already have the time logged, I do not need it a second time.
>
> Mar 31 13:39:59 c01 ceph-
Hi Andras,
El 31/3/20 a las 16:42, Andras Pataki escribió:
I'm looking for some advice on what to do about drives of different
sizes in the same cluster.
We have so far kept the drive sizes consistent on our main ceph
cluster (using 8TB drives). We're getting some new hardware with
larger,
Hi cephers,
I'm looking for some advice on what to do about drives of different
sizes in the same cluster.
We have so far kept the drive sizes consistent on our main ceph cluster
(using 8TB drives). We're getting some new hardware with larger, 12TB
drives next, and I'm pondering on how best
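As a back-of-the-envelope aid for the mixed-size question: PG count scales roughly with CRUSH weight (capacity), so equalizing the number of PGs where each drive is primary suggests a primary affinity inversely proportional to capacity. A sketch of that heuristic only, not Ceph's actual placement math; OSD names and sizes are illustrative:

```python
# Heuristic: scale primary affinity so the smallest drive keeps 1.0 and
# larger drives are proportionally less likely to be chosen as primary,
# roughly equalizing per-drive primary-PG counts (and thus read load).

def primary_affinities(capacities_tb):
    """Map {osd_name: capacity_TB} -> {osd_name: affinity in (0, 1]}."""
    smallest = min(capacities_tb.values())
    return {osd: round(smallest / cap, 3) for osd, cap in capacities_tb.items()}

affinities = primary_affinities({"osd.0": 8, "osd.1": 8, "osd.2": 12})
print(affinities)  # 8 TB drives keep 1.0; 12 TB drives drop to ~0.667
```

The values could then be applied with `ceph osd primary-affinity`.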
Hello,
I upgraded a Nautilus cluster to Octopus a few days ago. The cluster was
running OK, and even after the upgrade to Octopus everything was running OK.
The issue came when I rebooted the servers to update the kernel. On two
servers out of the six OSD servers, the OSDs can't start. No error is
reported in ceph-volume.l
We run this exact style of setup on our OSD ceph nodes (RH7 based).
The one really _really_ silly thing we noticed is that the network interfaces
tended to be brought up in alphabetical order no matter what. We needed our
bond interfaces (frontnet and backnet) to come up after the physical vlan
Hi,
I am currently building a 10-node Ceph cluster; each OSD node has 2x 25 Gbit/s
NICs, and I have 2 TOR switches (MLAG not supported).
enp179s0f0 -> sw1
enp179s0f1 -> sw2
vlan 323 is used for ‘public network’
vlan 324 is used for ‘cluster network’
My desired configuration is to create two bon
I already have the time logged, I do not need it a second time.
Mar 31 13:39:59 c01 ceph-mgr: 2020-03-31 13:39:59.518 7f554edc8700 0
log_channel(cluster) log [DBG] : pgmap v672065: 384 pgs: 384
active+clean;
I already have the time logged, I do not need it a second time.
Mar 31 13:39:59
How to get rid of this logging??
Mar 31 13:40:03 c01 ceph-mgr: 2020-03-31 13:40:03.521 7f554edc8700 0
log_channel(cluster) log [DBG] : pgmap v672067: 384 pgs: 384
active+clean;
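One likely lever, assuming these lines are the cluster log channel being written out at debug level (option name as in Nautilus and later):

```
# Raise the threshold for cluster-log lines written to the log file,
# so the pgmap [DBG] entries are dropped:
ceph config set global mon_cluster_log_file_level info
```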
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an
Hello Eugen, sorry, but I reinstalled radosgw on both zones.
In any case, when I faced the issue my situation was as follows:
- site A zonegroup master
- site B replicated
- removed zone, radosgw, zonegroup and rgw pools on site B
- installed rgw again on site B
- pulling from site A, has created
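For anyone retracing this, the usual sequence for re-attaching a secondary zone looks roughly like the following; URLs, keys, and zone/zonegroup names here are placeholders, not taken from the thread:

```
# On the rebuilt site-B gateway host:
radosgw-admin realm pull --url=http://site-a-rgw:8080 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET
radosgw-admin period pull --url=http://site-a-rgw:8080 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=site-b \
    --endpoints=http://site-b-rgw:8080 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET
radosgw-admin period update --commit
```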
Hi David,
Hi Jeff,
Indeed!
The files' and directories' group permissions are set to the “users” group. If
the primary group of the Samba user is set to the “users” group it works as
expected, otherwise not.
I’m using Ubuntu 18.04 on the Samba server with Samba from the distro repo.
Cheers, Marco
This has changed in Octopus. The above config variables are removed.
Instead, follow this procedure:
https://docs.ceph.com/docs/octopus/cephfs/standby/#configuring-mds-file-system-affinity
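Per those docs, the Octopus mechanism is the mds_join_fs setting; a minimal sketch, where the daemon name and file system name are illustrative:

```
# Prefer that daemon mds.b serves the file system named "otherfs":
ceph config set mds.b mds_join_fs otherfs
```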
Thanks for the clarification. IIRC I had trouble applying the
mds_standby settings in Nautilus already.
Hi,
could you share the exact steps you took to change the configuration?
Did you clean up the previous configuration before reconfiguring the
remote site?
Does the output of
radosgw-admin zonegroup get
radosgw-admin zone get
reflect those changes?
Regards,
Eugen
Zitat von Ignazio Cass
Thanks for this. Still on Nautilus here because this is a Proxmox cluster
but good for folks tracking master to know.
J
On Tue, Mar 31, 2020, 3:14 AM Patrick Donnelly wrote:
> On Mon, Mar 30, 2020 at 11:57 PM Eugen Block wrote:
> > For the standby daemon you have to be aware of this:
> >
> > >
On Mon, Mar 30, 2020 at 11:57 PM Eugen Block wrote:
> For the standby daemon you have to be aware of this:
>
> > By default, if none of these settings are used, all MDS daemons
> > which do not hold a rank will
> > be used as 'standbys' for any rank.
> > [...]
> > When a daemon has entered the sta