Hi, Tom:
Model-driven technology has laid the foundation for operators and service 
providers to build their digital twin platforms. One concrete reference is 
RFC 8969, which looks not only into top-down service fulfillment but also 
into bottom-up service assurance and performance monitoring. In my 
understanding, a digital twin is more related to telemetry, service 
assurance, and network visibility; the reason I quote the IRTF work is to 
provide an example of what a digital twin will look like.

In the IETF, we have already seen many digital-twin-like models being 
standardized, e.g., RFC 8345, RFC 8346, RFC 8795, etc. Hope this clarifies.

-Qin
-----Original Message-----
From: tom petch [mailto:ie...@btconnect.com] 
Sent: 15 December 2022 20:30
To: Qin Wu <bill...@huawei.com>; Vasilenko Eduard 
<vasilenko.edu...@huawei.com>; Jan Lindblad <j...@tail-f.com>
Cc: opsawg@ietf.org; net...@ietf.org; Paolo Volpato <paolo.volp...@huawei.com>; 
Xipengxiao <xipengx...@huawei.com>
Subject: Re: [netmod] How many "digital twins" every single network should have? Who 
would map between "twins"?

From: netmod <netmod-boun...@ietf.org> on behalf of Qin Wu 
<bill.wu=40huawei....@dmarc.ietf.org>
Sent: 15 December 2022 08:07

Hi, Eduard:
Thanks for bringing this issue up. I think digital twins are more related to 
the network topology model and the network monitoring model, rather than the 
LxSM service delivery model or the LxNM network model.

<tp>
I was going to ask for an explanation of a digital twin and then saw that it is 
a research topic of the IRTF about which consensus has not been achieved.  As 
such, I think it is wrong for us to try to introduce it into YANG at this point 
in time.

Tom Petch


First, as described in RFC 8969 and RFC 8309, the LxSM service delivery model 
and the LxNM network model focus on service-level abstraction and resource-level 
abstraction respectively. We expect the service-level abstraction to be 
decomposed into a network/resource-level abstraction, and the resource-level 
abstraction to be translated into a collection of device-level models, but we 
do not expect the translation across levels to be standardized in the same way.
The service mapping process and the configuration translation process should 
play the key roles here; this provides flexibility and meets the requirements 
of many different use cases.
Another reason not to use augmentation of the LxSM model is that mapping 
service parameters into configuration parameters is not always a 1-to-1 
mapping; in many cases it is an N-to-M mapping, and the domain-specific 
controller needs to allocate additional resource-level parameters. Therefore I 
feel augmentation is more suitable at the device level, but is not suggested 
for top-down service mapping.
To reduce the mapping cost, what we considered in the LxNM model design was to 
reuse many data types that also apply to the LxSM. Please refer to RFC 9181.
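To make that reuse concrete, here is a minimal sketch of how a network model 
can import the RFC 9181 common module; the module name ietf-vpn-common and its 
prefix are from the RFC, but the grouping name is quoted from memory, so check 
RFC 9181 for the exact definitions:

```yang
// Minimal sketch (not normative): reusing RFC 9181 common VPN
// building blocks so that the same identifiers and types appear
// in both the service (LxSM) and network (LxNM) models.
module example-lxnm-sketch {
  yang-version 1.1;
  namespace "urn:example:lxnm-sketch";
  prefix lxnm-sk;

  import ietf-vpn-common {
    prefix vpn-common;   // RFC 9181
  }

  container vpn-services {
    list vpn-service {
      key "vpn-id";
      // vpn-description carries vpn-id, vpn-name, etc., shared
      // with the corresponding LxSM structures.
      uses vpn-common:vpn-description;
    }
  }
}
```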

Secondly, I disagree that the LxNM is designed for single-vendor scenarios. The 
reason to introduce a network-level model is to address the multi-vendor 
interoperability issue, since different vendors' controllers can be deployed in 
various different domains; sometimes it is an IP domain, sometimes an optical 
domain.

Third, for service mapping, there is some other work being developed in the 
TEAS working group, e.g., 
https://datatracker.ietf.org/doc/draft-ietf-teas-te-service-mapping-yang/
https://datatracker.ietf.org/doc/draft-dhody-teas-ietf-network-slice-mapping/
They solve different pieces of the composing/decomposing jigsaw, so you don't 
need to worry too much about the issue you raised.

Regarding how many "digital twins" every single network should have, I think 
you should look more into the network topology models developed by the IETF, 
e.g.,
- Network Topology model
- L3 Topology model
- L2 Topology model
- DC Fabric model
- TE Topology model
- L3 TE Topology model, etc.
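These models are layered by augmentation; for example, an L3 topology model in 
the spirit of RFC 8346 hangs its attributes onto the generic RFC 8345 topology 
roughly as follows (a simplified sketch with example names, not the normative 
module text):

```yang
// Simplified sketch: how a technology-specific topology model (here L3,
// in the spirit of RFC 8346) layers onto the generic RFC 8345 topology.
module example-l3-topology-sketch {
  yang-version 1.1;
  namespace "urn:example:l3-topology-sketch";
  prefix l3-sk;

  import ietf-network {
    prefix nw;   // RFC 8345 base: networks/network/node
  }

  // Add L3-specific attributes to every node of the generic topology.
  augment "/nw:networks/nw:network/nw:node" {
    container l3-node-attributes {
      leaf router-id { type string; }   // simplified; the RFC uses rt-types
    }
  }
}
```

The same pattern repeats for L2, DC fabric, and TE topologies, which is why the 
base model is a natural skeleton for a digital twin.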
To build a digital twin model, I believe these topology-related models are a 
good basis. In addition, you should get KPI data, log data, flow statistics, 
alarm data, and other inventory-related data, and correlate them with the 
topology data to build the digital twin network. In the IRTF NMRG, we have a 
digital twin network architecture that tries to build the whole picture and 
tackle the key challenges of building a digital twin network:
https://datatracker.ietf.org/doc/draft-irtf-nmrg-network-digital-twin-arch/
There is also much other digital-twin-related work in NMRG, which can be a 
good input to this architecture.
Hope this helps.

-Qin
From: netmod [mailto:netmod-boun...@ietf.org] On Behalf Of Vasilenko Eduard
Sent: 8 December 2022 2:42
To: Jan Lindblad <j...@tail-f.com>
Cc: opsawg@ietf.org; net...@ietf.org; Paolo Volpato <paolo.volp...@huawei.com>; 
Xipengxiao <xipengx...@huawei.com>
Subject: Re: [netmod] How many "digital twins" every single network should have? Who 
would map between "twins"?

Hi Jan,
Thanks for your answer.
Reading more, I have found more evidence that the "digital network twin" should 
be single/unified for a network of any size:
- I mentioned below OpenConfig, which merged configuration and operational 
data models many years ago;
- RFC 8342 asks for the same☺;
- RFC 8969 goes a step further and asks for cross-layer convergence into a 
unified YANG model.
It looks obvious: all relationships are better kept in a single consistent 
YANG data store.
It is slowly happening in the IETF (examples above).

Yet RFC 9182 has many disclaimers like:
"the L3NM is not defined as an augment to the L3SM, because a specific 
structure is required to meet network-oriented L3 needs" and
"initial design was interpreted as if the deployment of the L3NM depends on the 
L3SM, while this is not the case".
I see a couple of reasons for this:
- to minimize the disruption for the many available implementations;
- to trade off unification and interoperability for functionality and 
flexibility.

After the L3NM was disconnected from the L3SM, much of the mapping between 
these models has to be done in a proprietary way by coders.
Imagine that one product maps an event "A" in the L3NM to configuration XpathA 
in the L3SM, but a different product maps the same event "A" to configuration 
XpathB in the L3SM.
Then they would act inconsistently.
The other example is when provisioning something in the L3SM is mapped to 
different Xpaths in the L3NM by different products.
It is again a problem if L3NMs from different vendors should support the same 
VPN.

If humans (with really the best knowledge and intelligence, in the respective 
IETF WGs) were not capable of automatically mapping the L3SM to the L3NM, then 
there is no hope that an algorithm developed by coders could.
It would surely break cross-vendor "closed loop control".
In reality, it would create challenges even for a single-vendor solution, 
because the coders developing it may not be strong enough.
I suspect that this problem will have to be revisited after some additional 
years of unsatisfactory automation progress.
Not many would agree to single-vendor where it is not a critical roadblock.

I agree that for a single-vendor environment, just the L2NM and a completely 
disconnected L2SM are already big progress (teaching people how to develop big 
systems, a sort of educational value). It is much better than no 
recommendations at all.

When the design started top-down from the L3SM, it was right. But when the 
automatic mapping between the L3NM and the L3SM was lost, it was a mistake.
It broke the top-down design that is very much needed here.

Eduard
From: Jan Lindblad [mailto:j...@tail-f.com]
Sent: Wednesday, December 7, 2022 8:35 PM
To: Vasilenko Eduard <vasilenko.edu...@huawei.com>
Cc: opsawg@ietf.org; net...@ietf.org; Paolo Volpato 
<paolo.volp...@huawei.com>; Xipengxiao <xipengx...@huawei.com>
Subject: Re: [netmod] How many "digital twins" every single network should 
have? Who would map between "twins"?

Hi Eduard,

Hi Automation Gurus,
YANG modules may be treated like a "digital twin" of the network, with 
different resolution/accuracy (depending on module details).
It looks like RFC 8969 is discussing that the different YANG models (for 
different layers or functions) of the same network should be refinements of 
the same "digital twin".
Below are some excerpts from RFC 8969 that make me believe in a common Data 
Model, after all YANG module refinements, for the same network.

But comparing RFC 8299 (L3SM) with RFC 9182 (L3NM), I conclude that the "Data 
Models" are different (they could not be automatically mapped).
Yet they should describe/represent the same network.

That is right. There are multiple attempts at modeling the use cases at each 
level of the management stack. This is not unlike how standards develop in 
other areas. Initially, and sometimes even after a long time, there are often 
competing standards. Sometimes even from within the same SDO.

It is evident in this situation that a big job is needed from the vendor to 
*map* the Data Model of the L3SM to the Data Model of the L3NM.

I think you should be careful with the word "vendor" here, as we're talking 
about an entire vendor eco-system. It is not typical that a router product
would contain this mapping, but you are right that an NMS or OSS product from 
some vendor might. The mapping from network use cases to network device 
configuration is happening widely today, and a fair portion of all that is 
using YANG in some way.

It is not just a cost/time issue; it is also a big source of interoperability 
issues. Engineers from different vendors would never map it in the same way.
I could pose similar examples for the other RFCs (like L2SM and L2NM, and many 
more).

Of course. Just like two router vendors would not implement a given IETF 
routing YANG model the same way, NMS/OSS vendors and any service providers that 
choose to do this on their own, will have the same freedom of implementation at 
their level. This freedom does not remove the value of standardized service 
YANG models in any way.

Why is the IETF not following RFC 8969? It looks pretty evident. Why are the 
"Data Models" for the same network not automatically mapped?

How could they be automatically mapped? Such mappings necessarily depend on use 
case, network circumstances and operator traditions/preferences, so I can't see 
any one-size-fits-all mapping here. Sure, you can make one mapping and declare 
it the one and only. But others may not agree and prefer to go with their own 
mapping.

It was logical to initially define a top-level approximation of the network 
(the service model is probably the loosest one), then extend the Data Model 
("augment" in RFC 7950 terminology) to the network model and so on (continuing 
to clarify more details).
As rightfully stated in RFC 8969, only a top-down approach permits resolving 
the challenge of "closed loop control". I would add "in a multivendor 
environment".

If I understand right (I am not sure): the primary idea of OpenConfig was to 
have a common Data Model for configuration and assurance at every layer (a 
unified "digital twin" of the network).

The value of the hundreds of already developed YANG modules looks 
questionable, because the mapping done by different vendors between functional 
and layered YANG modules could produce m*n^2 permutations.
It may not permit interoperability in a multi-vendor environment.

We certainly experience the concrete value of the many thousands of device 
level YANG modules out there when implementing NMS/OSS type of functionality. 
Anyone in that business should come prepared to navigate combinatorial 
explosions, but I can't say I have seen any traces of the specific m*n^2 
permutations you speak of above, relating to combinations of device level YANGs 
and service level YANGs.

I could imagine some reasons why it may not be possible in some cases, but the 
general rule should be to always use "augment" on the parent YANG model.

I'm afraid I can't decipher this statement. Feel free to elaborate.

Best Regards,
/jan
_______________________________________________
OPSAWG mailing list
OPSAWG@ietf.org
https://www.ietf.org/mailman/listinfo/opsawg
