One strategy is to use VLANs.  It's been a while since I've had to measure 
client and replication traffic at host granularity, but you make a sound point.
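
If the two networks live on separate VLAN interfaces, one rough way to split them apart in Grafana is to chart per-interface counters from node_exporter. A sketch, assuming the public network rides vlan100 and the cluster network vlan200 (the interface names are placeholders, not anything from your setup):

  # client (public network) throughput per host, bytes/s
  rate(node_network_transmit_bytes_total{device="vlan100"}[5m])

  # replication/heartbeat (cluster network) throughput per host, bytes/s
  rate(node_network_transmit_bytes_total{device="vlan200"}[5m])

This only separates the traffic cleanly if the cluster_network actually sits on its own interface; on a single shared interface the counters are mixed together.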

> On Nov 14, 2025, at 9:23 AM, Joachim Kraftmayer 
> <[email protected]> wrote:
> 
> Hi,
> 
> Anthony and Robert, how can you monitor the OSD client and replication
> traffic separately, e.g. in Grafana?
> It has often helped me with error analysis.
> Joachim
> 
>   [email protected] <mailto:[email protected]>
>   www.clyso.com <http://www.clyso.com/>
>   Hohenzollernstr. 27, 80801 Munich
> Utting | HR: Augsburg | HRB: 25866 | USt. ID-Nr.: DE275430677
> 
> 
> 
> 
> On Fri, Nov 14, 2025 at 14:13, Anthony D'Atri
> <[email protected]> wrote:
>> 
>> 
>> > On Nov 14, 2025, at 4:37 AM, Robert Sander
>> > <[email protected]> wrote:
>> > 
>> > Hi,
>> > 
>> > On 14.11.25 at 9:44 AM, Alexander Leutz wrote:
>> > 
>> >> Ceph is my storage, so I refer to that communication as the storage network.
>> >> The Proxmox Ceph servers have 2 x 1 Gb management ports and 2 x 10 Gb ports
>> >> for the storage network.
>> >> I set the mon public and the osd cluster networks to the private IP range
>> >> 10.10.x.x:
>> >> ceph config set mon public_network 10.10.x.x/24
>> >> ceph config set osd cluster_network 10.10.x.x/24
>> > 
>> > In your case I would not set up a separate cluster network.
>> 
>> Indeed.  To be clear, Alexander, do you have those two 10GE ports bonded? If 
>> so, then I agree with Herr Sander that there is no benefit to defining the 
>> cluster_network, and indeed you may be confusing the code by doing so.  The 
>> cluster network should only be defined if it's *different* from the public 
>> network.  The public network is how clients communicate with the Ceph 
>> cluster.  If there's a cluster_network, it is only used for internal 
>> replication and heartbeating within Ceph. 
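>> 
>> If the goal is to run everything over the single bonded network, a sketch of
>> collapsing the config, assuming the options were set at the mon/osd scope as
>> shown above, might be:
>> 
>>   ceph config rm osd cluster_network
>>   # OSDs pick the change up after a restart, e.g. on a packaged (non-containerized) install:
>>   systemctl restart ceph-osd.target
>> 
>> With cluster_network unset, the OSDs simply use the public network for
>> replication and heartbeats, which is the default.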
>> 
>> > It is optional and only recommended if you have a network that is at least 
>> > twice as fast as the public network.
>> 
>> Well, say someone has two dual-port 10GE NICs in each system.  They could 
>> either bond all four together (which I *think* works but haven't tried) for 
>> a public network, or bond one port on each NIC together for the public, and 
>> one on each for the private network.
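>> 
>> As a rough sketch of the split-bond variant on a Proxmox node (ifupdown2
>> syntax; the interface names, addresses, and hash policy below are
>> assumptions, not a recommendation):
>> 
>>   auto bond0
>>   iface bond0 inet static
>>       address 10.10.1.11/24
>>       bond-slaves enp1s0f0 enp2s0f0   # one port from each NIC -> public network
>>       bond-mode 802.3ad
>>       bond-xmit-hash-policy layer3+4
>> 
>>   auto bond1
>>   iface bond1 inet static
>>       address 10.10.2.11/24
>>       bond-slaves enp1s0f1 enp2s0f1   # the other port from each NIC -> cluster network
>>       bond-mode 802.3ad
>>       bond-xmit-hash-policy layer3+4
>> 
>> Whether LACP across two physical NICs behaves well end to end also depends on
>> the switch side, so treat this purely as an illustration of the port split.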
>> 
>> I do usually recommend that people not bother with a cluster network if 
>> they're using 25GE or better, especially with very dense nodes.
>> 
>> 
>> 
>> > 
>> >> After this I only see this as a parameter in the GUI, under host / ceph /
>> >> configuration, in the section Configuration Database.
>> >> Under [global] I still see a second entry named public_network =
>> >> 192.168.x.x (this is the management network of the Proxmox host).
>> > 
>> > Changes in the config db do not get written to the ceph.conf file.
>> 
>> I think he's writing about the Dashboard, not ceph.conf
>> 
>> > 
>> > You can remove this setting from the ceph.conf file with an editor, since
>> > it is now in the config db.
>> 
>> If it's in ceph.conf, then absolutely.  All you need in there is the mon 
>> information, maybe the fsid.
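>> 
>> For illustration, a trimmed-down ceph.conf along those lines might contain
>> little more than the following (the fsid and mon addresses are placeholders):
>> 
>>   [global]
>>   fsid = 00000000-0000-0000-0000-000000000000
>>   mon_host = 10.10.x.1, 10.10.x.2, 10.10.x.3
>> 
>> Everything else, public_network and cluster_network included, can live in the
>> config database, where ceph config dump will show it.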
>> 
>> > 
>> > Regards
>> > -- 
>> > Robert Sander
>> > Linux Consultant
>> > 
>> > Heinlein Consulting GmbH
>> > Schwedter Str. 8/9b, 10119 Berlin
>> > 
>> > https://www.heinlein-support.de
>> > 
>> > Tel: +49 30 405051 - 0
>> > Fax: +49 30 405051 - 19
>> > 
>> > Amtsgericht Berlin-Charlottenburg - HRB 220009 B
>> > Geschäftsführer: Peer Heinlein - Sitz: Berlin

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
