<?xml version='1.0' encoding='US-ASCII'?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
 <!ENTITY nbhy "&#8209;">
 <!ENTITY wj "&#8288;">
]>
<rfc submissionType="IETF" number="8313" category="bcp" seriesNo="213"
consensus="yes" ipr="trust200902">
<?rfc toc="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<?rfc sortrefs="no"?>
<?rfc symrefs="yes"?>
<?rfc strict="yes"?>
<front>
<title abbrev="Multicast for Inter-domain Peering Points">Use of Multicast across Inter-domain Peering Points</title>
<author role="editor" fullname="Percy S. Tarapore" initials="P.S." surname="Tarapore">
<organization>AT&amp;T</organization>
<address><phone>1-732-420-4172</phone><email>[email protected]</email>
</address>
</author>
<author fullname="Robert Sayko" initials="R." surname="Sayko">
<organization>AT&amp;T</organization>
<address><phone>1-732-420-3292</phone><email>[email protected]</email>
</address>
</author>
<author fullname="Greg Shepherd" initials="G." surname="Shepherd">
<organization>Cisco</organization>
<address><email>[email protected]</email>
</address>
</author>
<author role="editor" fullname="Toerless Eckert" initials="T."
surname="Eckert">
<organization abbrev="Huawei">Huawei USA - Futurewei Technologies Inc.</organization>
<address><email>[email protected]</email>
</address>
</author>
<author fullname="Ram Krishnan" initials="R." surname="Krishnan">
<organization>SupportVectors</organization>
<address><email>[email protected]</email>
</address>
</author>
<date month="December" year="2017"/>
<keyword>multicast security</keyword>
<keyword>multicast troubleshooting</keyword>
<keyword>multicast routing</keyword>
<keyword>multicast tunneling</keyword>
<keyword>PIM</keyword>
<keyword>PIM-SSM</keyword>
<keyword>SSM</keyword>
<keyword>Source Specific Multicast</keyword>
<keyword>AMT</keyword>
<keyword>GRE</keyword>
<keyword>Automatic Multicast Tunneling</keyword>
<keyword>BGP</keyword>
<keyword>MBGP</keyword>
<keyword>M-BGP</keyword>
<keyword>MP-BGP</keyword>
<keyword>exchange</keyword>
<keyword>exchange point</keyword>
<keyword>NNI</keyword>
<keyword>content distribution</keyword>
<keyword>video streaming</keyword>
<keyword>anycast</keyword>
<abstract><t>
This document examines the use of Source-Specific Multicast (SSM)
across inter-domain peering points for a specified set of deployment
scenarios. The objectives are to (1) describe the setup process for
multicast-based delivery across administrative domains for these
scenarios and (2) document supporting functionality to enable this
process.</t>
</abstract>
</front>
<middle>
<section title="Introduction" anchor="section-1"><t>
Content and data from several types of applications (e.g., live
video streaming, software downloads) are well suited for delivery
via multicast means. The use of multicast for delivering such
content or other data offers significant savings in terms of utilization of
resources in any given administrative domain. End User (EU) demand for such
content or other data is growing. Often, this requires transporting the
content or other data across administrative domains via inter&nbhy;domain
peering points.</t>
<t>The objectives of this document are twofold:
<list style="symbols">
<t>Describe the technical process and establish guidelines for
setting up multicast-based delivery of application content or other data
across inter&nbhy;domain peering points via a set of use cases
(where "Use Case 3.1" corresponds to <xref target="section-3.1"/>,
"Use Case 3.2" corresponds to <xref target="section-3.2"/>,
etc.).</t>
<t>Catalog all required exchanges of information between the
administrative domains to support multicast-based delivery.
This enables operators to initiate necessary processes to
support inter&nbhy;domain peering with multicast.</t>
</list></t>
<t>The scope and assumptions for this document are as follows:
<list style="symbols">
<t>Administrative Domain 1 (AD-1) sources content to one or more EUs in one or
more Administrative Domain 2 (AD&nbhy;2) entities. AD&nbhy;1 and AD&nbhy;2
want to use IP multicast to allow support for large and growing EU populations,
with a minimum amount of duplicated traffic to send across network links.
<list style="symbols">
<t>This document does not detail the case where EUs are originating
content. To support that additional service, it is recommended that
some method (outside the scope of this document) be used by which the
content from EUs is transmitted to the application in AD&nbhy;1 and
AD&nbhy;1 can send out the traffic as IP multicast. From that point on,
the descriptions in this document apply, except that they are not complete
because they do not cover the transport or operational aspects of the leg
from the EU to AD&nbhy;1.</t>
<t>This document does not detail the case where AD-1 and AD&nbhy;2 are
not directly connected to each other and are instead connected via
one or more other ADs (as opposed to a peering point) that serve as
transit providers. The cases described in this document where
tunnels are used between AD&nbhy;1 and AD&nbhy;2 can be applied to such
scenarios, but SLA ("Service Level Agreement") control, for example, would
be different. Additional issues will likely exist as well in such
scenarios. This topic is left for further study.</t>
</list></t>
<t>For the purposes of this document, the term "peering point"
refers to a network connection ("link") between two administrative
network domains over which traffic is exchanged between them. This is
also referred to as a Network-to-Network Interface (NNI). Unless
otherwise noted, it is assumed that the peering point is a
private peering point, where the network connection is a physically or
virtually isolated network connection solely between AD&nbhy;1 and
AD&nbhy;2. The other case is that of a broadcast peering point,
which is a common option in public Internet Exchange Points (IXPs).
See <xref target="section-4.2.4"/> for more details.</t>
<t>AD-1 is enabled with native multicast. A peering point exists between
AD&nbhy;1 and AD&nbhy;2.</t>
<t>It is understood that several protocols are available for this
purpose, including Protocol-Independent Multicast - Sparse Mode
(PIM&nbhy;SM) and Protocol-Independent Multicast -
Source&nbhy;Specific Multicast (PIM-SSM) <xref target="RFC7761"/>,
the Internet Group Management Protocol (IGMP) <xref target="RFC3376"/>,
and Multicast Listener Discovery (MLD) <xref target="RFC3810"/>.</t>
<t>As described in <xref target="section-2"/>, the source IP address of
the (so&nbhy;called "(S,G)") multicast stream in the originating AD
(AD&nbhy;1) is known. Under this condition, using PIM-SSM is beneficial,
as it allows the receiver's upstream router to send a join message
directly to the source without the need to invoke an intermediate
Rendezvous Point (RP). The use of SSM also presents an improved threat
mitigation profile against attack, as described in <xref
target="RFC4609"/>. Hence, in the case of inter&nbhy;domain peering, it
is recommended that only SSM protocols be used; the setup of
inter&nbhy;domain peering for ASM (Any&nbhy;Source Multicast) is
out of scope for this document.</t>
<t>The rest of this document assumes that PIM-SSM and BGP are used
across the peering point, plus Automatic Multicast Tunneling (AMT) <xref
target="RFC7450"/> and/or Generic Routing Encapsulation (GRE),
according to the scenario in question. The use of other protocols is
beyond the scope of this document.</t>
<t>AMT is set up at the peering point if either the peering point or
AD&nbhy;2 is not multicast enabled. It is assumed that an AMT relay will
be available to a client for multicast delivery. The selection of an
optimal AMT relay by a client is out of scope for this document. Note
that using AMT is necessary only when native multicast is unavailable
in the peering point (Use Case 3.3) or in the downstream
administrative domain (Use Cases 3.4 and 3.5).</t>
<t>It is assumed that the collection of billing data is done at the
application level and is not considered to be a networking
issue. The settlements process for EU billing and/or
inter&nbhy;provider billing is out of scope for this document.</t>
<t>Inter-domain network connectivity troubleshooting is only
considered within the context of a cooperative process between
the two domains.</t>
</list>
</t>
<t>
This document also attempts to identify ways by which the peering
process can be improved. Development of new methods for improvement
is beyond the scope of this document.</t>
</section>
<section title="Overview of Inter-domain Multicast Application Transport" anchor="section-2">
<t>A multicast-based application delivery scenario is as follows:
<list style="symbols">
<t>Two independent administrative domains are interconnected via a
peering point.</t>
<t>The peering point is either multicast enabled (end-to-end native
multicast across the two domains) or connected by one of two possible
tunnel types:
<list style="symbols">
<t>A GRE tunnel <xref target="RFC2784"/> allowing multicast
tunneling across the peering point, or</t>
<t>AMT <xref target="RFC7450"/>.</t>
</list>
</t>
<t>A service provider controls one or more application sources in
AD&nbhy;1 that will send multicast IP packets via one or more
(S,G)s (multicast traffic flows; see <xref target="section-4.2.1"/> if
you are unfamiliar with IP multicast). It is assumed that the service
being provided is suitable for delivery via multicast (e.g., live video
streaming of popular events, software downloads to many devices) and
that the packet streams will be carried by a suitable multicast
transport protocol.</t>
<t>An EU controls a device connected to AD-2, which runs an application
client compatible with the service provider's application source.</t>
<t>The application client joins appropriate (S,G)s in order to
receive the data necessary to provide the service to the EU.
The mechanisms by which the application client learns the
appropriate (S,G)s are an implementation detail of the
application and are out of scope for this document.</t>
</list>
</t>
<t>
The assumption here is that AD-1 has ultimate responsibility for
delivering the multicast-based service on behalf of the content
source(s). All relevant interactions between the two domains
described in this document are based on this assumption.</t>
<t>
Note that AD-2 may be an independent network domain (e.g., a Tier 1
network operator domain). Alternately, AD&nbhy;2 could also be an
enterprise network domain operated by a single customer of AD&nbhy;1. The
peering point architecture and requirements may have some unique aspects
associated with enterprise networks; see <xref target="section-3"/>.</t>
<t>
The use cases describing various architectural configurations for
multicast distribution, along with associated requirements, are
described in <xref target="section-3"/>. <xref target="section-4"/>
contains a comprehensive list of pertinent information that needs to
be exchanged between the two domains in order to support functions
to enable application transport.</t>
</section>
<section title="Inter-domain Peering Point Requirements for Multicast" anchor="section-3">
<t>
The transport of applications using multicast requires that the
inter&nbhy;domain peering point be enabled to support such a process.
This section presents five use cases for consideration.</t>
<section title="Native Multicast" anchor="section-3.1">
<t>
This use case involves end-to-end native multicast between the two
administrative domains, and the peering point is also native multicast
enabled. See <xref target="native-pic"/>.</t>
<figure anchor="native-pic" title="Content Distribution via End-to-End Native Multicast">
<artwork><![CDATA[
------------------- -------------------
/ AD-1 \ / AD-2 \
/ (Multicast Enabled) \ / (Multicast Enabled) \
/ \ / \
| +----+ | | |
| | | +------+ | | +------+ | +----+
| | AS |------>| BR |-|---------|->| BR |-------------|-->| EU |
| | | +------+ | I1 | +------+ |I2 +----+
\ +----+ / \ /
\ / \ /
\ / \ /
------------------- -------------------
AD = Administrative Domain (independent autonomous system)
AS = multicast (e.g., content) Application Source
BR = Border Router
I1 = AD-1 and AD-2 multicast interconnection (e.g., MP-BGP)
I2 = AD-2 and EU multicast connection
]]></artwork>
</figure>
<t>Advantages of this configuration:
<list style="symbols">
<t>Most efficient use of bandwidth in both domains.</t>
<t>Fewer devices in the path traversed by the multicast stream when
compared to an AMT-enabled peering point.</t>
</list>
</t>
<t>
From the perspective of AD-1, the one disadvantage associated with
native multicast to AD&nbhy;2 instead of individual unicast to every EU
in AD&nbhy;2 is that it does not have the ability to count the number of
EUs as well as the transmitted bytes delivered to them. This
information is relevant from the perspective of customer billing and
operational logs. It is assumed that such data will be collected by
the application layer. The application-layer mechanisms for
generating this information need to be robust enough so that all
pertinent requirements for the source provider and the AD operator
are satisfactorily met. The specifics of these methods are beyond
the scope of this document.</t>
<t>Architectural guidelines for this configuration are as follows:
<list style="letters">
<t>Dual homing for peering points between domains is recommended as a
way to ensure reliability with full BGP table visibility.</t>
<t>If the peering point between AD-1 and AD-2 is a controlled
network environment, then bandwidth can be allocated accordingly by the
two domains to permit the transit of non&nbhy;rate-adaptive multicast
traffic. If this is not the case, then the multicast traffic must
support congestion control via any of the mechanisms described in
Section 4.1 of <xref target="BCP145"/>.</t>
<t>The sending and receiving of multicast traffic between two
domains is typically determined by local policies associated
with each domain. For example, if AD&nbhy;1 is a service provider
and AD&nbhy;2 is an enterprise, then AD&nbhy;1 may support local policies
for traffic delivery to, but not traffic reception from, AD&nbhy;2.
Another example is the use of a policy by which AD&nbhy;1 delivers
specified content to AD&nbhy;2 only if such delivery has been
accepted by contract.</t>
<t>It is assumed that relevant information on multicast streams
delivered to EUs in AD&nbhy;2 is collected by available
capabilities in the application layer. The precise nature and
formats of the collected information will be determined by
directives from the source owner and the domain operators.</t>
</list>
</t>
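<t>
As a non-normative illustration of the multicast interconnection I1
(MP-BGP) referenced in <xref target="native-pic"/>, the following
fragment sketches a BGP (SAFI&nbhy;2) session on the AD&nbhy;1 BR in a
generic IOS&nbhy;like syntax. The AS numbers, prefixes, interface
names, and exact command forms are illustrative assumptions, not
requirements:</t>
<figure><artwork><![CDATA[
router bgp 64500                          ! AD-1 BR (example ASN)
 neighbor 192.0.2.2 remote-as 64501       ! AD-2 BR across I1
 address-family ipv4 multicast            ! SAFI-2: reachability used
  neighbor 192.0.2.2 activate             !  for multicast RPF checks
  network 198.51.100.0 mask 255.255.255.0 ! covers the AS source(s)
 exit-address-family
!
ip multicast-routing
interface GigabitEthernet0/0              ! peering interface (I1)
 ip pim sparse-mode                       ! carries PIM-SSM joins
]]></artwork></figure>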
</section>
<section title="Peering Point Enabled with GRE Tunnel" anchor="section-3.2">
<t>
The peering point is not native multicast enabled in this use case.
There is a GRE tunnel provisioned over the peering point. See
<xref target="gre-pic"/>.</t>
<figure anchor="gre-pic" title="Content Distribution via GRE Tunnel">
<artwork><![CDATA[
------------------- -------------------
/ AD-1 \ / AD-2 \
/ (Multicast Enabled) \ / (Multicast Enabled) \
/ \ / \
| +----+ +---+ | (I1) | +---+ |
| | | +--+ |uBR|-|--------|-|uBR| +--+ | +----+
| | AS |-->|BR| +---+-| | +---+ |BR| -------->|-->| EU |
| | | +--+<........|........|........>+--+ |I2 +----+
\ +----+ / I1 \ /
\ / GRE \ /
\ / Tunnel \ /
------------------- -------------------
AD = Administrative Domain (independent autonomous system)
AS = multicast (e.g., content) Application Source
uBR = unicast Border Router - not necessarily multicast enabled;
may be the same router as BR
BR = Border Router - for multicast
I1 = AD-1 and AD-2 multicast interconnection (e.g., MP-BGP)
I2 = AD-2 and EU multicast connection
]]></artwork>
</figure>
<t>In this case, interconnection I1 between AD-1 and AD&nbhy;2 in
<xref target="gre-pic"/> is multicast enabled via a GRE tunnel
<xref target="RFC2784"/> between the two BRs, which encapsulates
the multicast protocols across the peering point.</t>
<t>Normally, this approach is chosen if the uBR physically connected to the
peering link cannot or should not be enabled for IP multicast. This
approach may also be beneficial if the BR and uBR are the same device but
the peering link is a broadcast domain (IXP); see
<xref target="section-4.2.4"/>.</t>
<t>The routing configuration is basically unchanged: instead of
running BGP (SAFI&nbhy;2) ("SAFI" stands for "Subsequent Address Family
Identifier") across the native IP multicast link between AD&nbhy;1 and
AD&nbhy;2, BGP (SAFI&nbhy;2) is now run across the GRE tunnel.</t>
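<t>
A non-normative sketch of such a configuration on the AD&nbhy;1 BR
(the AD&nbhy;2 BR mirrors it) is shown below in a generic
IOS&nbhy;like syntax; the addresses, ASNs, and interface names are
illustrative assumptions only:</t>
<figure><artwork><![CDATA[
interface Tunnel0
 ip address 203.0.113.1 255.255.255.252  ! GRE tunnel endpoint
 ip pim sparse-mode                      ! PIM-SSM runs inside the GRE
 tunnel source 198.51.100.1              ! BR address toward the uBR
 tunnel destination 198.51.100.2         ! AD-2 BR, unicast-reachable
!                                        !  via the uBRs
router bgp 64500
 neighbor 203.0.113.2 remote-as 64501
 address-family ipv4 multicast           ! BGP (SAFI-2), now run
  neighbor 203.0.113.2 activate          !  across the tunnel
]]></artwork></figure>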
<t>Advantages of this configuration:
<list style="symbols">
<t>Highly efficient use of bandwidth in both domains, although not
as efficient as the fully native multicast use case
(<xref target="section-3.1"/>).</t>
<t>Fewer devices in the path traversed by the multicast stream
when compared to an AMT-enabled peering point.</t>
<t>Ability to support partial and/or incremental IP multicast deployments
in AD&nbhy;1 and/or AD&nbhy;2: only the path or paths between the
AS&wj;/BR (AD&nbhy;1) and the BR/EU (AD&nbhy;2) need to be multicast
enabled. The uBRs may not support IP multicast or enabling it could be
seen as operationally risky on that important edge node, whereas dedicated
BR nodes for IP multicast may (at least initially) be more acceptable. The
BR can also be located such that only parts of the domain may need to
support native IP multicast (e.g., only the core in AD&nbhy;1 but not
edge networks towards the uBR).</t>
<t>GRE is an existing technology and is relatively simple to implement.</t>
</list>
</t>
<t>Disadvantages of this configuration:
<list style="symbols">
<t>Per Use Case 3.1, current router technology cannot count the
number of EUs or the number of bytes transmitted.</t>
<t>The GRE tunnel requires manual configuration.</t>
<t>The GRE tunnel must be established prior to starting the stream.</t>
<t>The GRE tunnel is often left pinned up.</t>
</list>
</t>
<t>
Architectural guidelines for this configuration include the
following:</t>
<t>
Guidelines (a) through (d) are the same as those described in
Use Case 3.1. Two additional guidelines are as follows:</t>
<t><list hangIndent="4" style="hanging">
<t hangText="e.">
GRE tunnels are typically configured manually between peering
points to support multicast delivery between domains.</t>
<t hangText="f.">
It is recommended that the GRE tunnel (tunnel server) configuration in
the source network be such that it only advertises the routes to the
application sources and not to the entire network. This practice will
prevent unauthorized delivery of applications through the tunnel (for
example, if the application (e.g., content) is not part of an
agreed&nbhy;upon inter&nbhy;domain partnership).</t>
</list>
</t>
</section>
<section title="Peering Point Enabled with AMT - Both Domains Multicast Enabled" anchor="section-3.3">
<t>
It is assumed that both administrative domains in this use case are
native multicast enabled; however, the peering point is not.</t>
<t>
The peering point is enabled with AMT. The basic configuration is
depicted in <xref target="ref-amt-interconnection-between-ad-1-and-ad-2"/>.
</t>
<figure title="AMT Interconnection between AD-1 and AD-2"
anchor="ref-amt-interconnection-between-ad-1-and-ad-2">
<artwork><![CDATA[
------------------- -------------------
/ AD-1 \ / AD-2 \
/ (Multicast Enabled) \ / (Multicast Enabled) \
/ \ / \
| +----+ +---+ | I1 | +---+ |
| | | +--+ |uBR|-|--------|-|uBR| +--+ | +----+
| | AS |-->|AR| +---+-| | +---+ |AG| -------->|-->| EU |
| | | +--+<........|........|........>+--+ |I2 +----+
\ +----+ / AMT \ /
\ / Tunnel \ /
\ / \ /
------------------- -------------------
AD = Administrative Domain (independent autonomous system)
AS = multicast (e.g., content) Application Source
AR = AMT Relay
AG = AMT Gateway
uBR = unicast Border Router - not multicast enabled;
also, either AR = uBR (AD-1) or uBR = AG (AD-2)
I1 = AMT interconnection between AD-1 and AD-2
I2 = AD-2 and EU multicast connection
]]></artwork>
</figure>
<t>Advantages of this configuration:
<list style="symbols">
<t>Highly efficient use of bandwidth in AD-1.</t>
<t>AMT is an existing technology and is relatively simple to
implement. Attractive properties of AMT include the following:
<list style="symbols">
<t>Dynamic interconnection between the gateway-relay pair across the
peering point.</t>
<t>Ability to serve clients and servers with differing policies.</t>
</list></t>
</list>
</t>
<t>Disadvantages of this configuration:
<list style="symbols">
<t>Per Use Case 3.1 (AD-2 is native multicast), current router
technology cannot count the number of EUs or the number of bytes
transmitted to all EUs.</t>
<t>Additional devices (AMT gateway and relay pairs) may be
introduced into the path if these services are not incorporated
into the existing routing nodes.</t>
<t>Currently undefined mechanisms for the AG to automatically
select the optimal AR.</t>
</list>
</t>
<t>Architectural guidelines for this configuration are as follows:</t>
<t>
Guidelines (a) through (d) are the same as those described in
Use Case 3.1. In addition,</t>
<t><list hangIndent="4" style="hanging">
<t hangText="e.">
It is recommended that AMT relay and gateway pairs be configured at the
peering points to support multicast delivery between domains. AMT tunnels
will then configure dynamically across the peering points once the
gateway in AD&nbhy;2 receives the (S,G) information from the EU.</t>
</list>
</t>
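<t>
For reference, the dynamic setup described in guideline (e) follows
the AMT message exchange defined in <xref target="RFC7450"/>; the
tunnel is established only after the gateway in AD&nbhy;2 has
received (S,G) information to join:</t>
<figure><artwork><![CDATA[
AG (AD-2)                                AR (AD-1)
   |---- Relay Discovery ----------------->| (to relay/anycast addr)
   |<--- Relay Advertisement --------------| (relay unicast addr)
   |---- Request ------------------------->|
   |<--- Membership Query -----------------|
   |---- Membership Update --------------->| (IGMPv3/MLDv2 report
   |                                       |  carrying the (S,G))
   |<=== Multicast Data ===================| (UDP-encapsulated)
]]></artwork></figure>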
</section>
<section title="Peering Point Enabled with AMT - AD-2 Not Multicast Enabled" anchor="section-3.4">
<t>
In this AMT use case, AD-2 is not multicast enabled. Hence, the
interconnection between AD&nbhy;2 and the EU is also not multicast
enabled. This use case is depicted in <xref target="amt-pic"/>.</t>
<figure anchor="amt-pic" title="AMT Tunnel Connecting AD-1 AMT Relay and EU Gateway">
<artwork><![CDATA[
------------------- -------------------
/ AD-1 \ / AD-2 \
/ (Multicast Enabled) \ / (Not Multicast \
/ \ / Enabled) \ N(large)
| +----+ +---+ | | +---+ | # EUs
| | | +--+ |uBR|-|--------|-|uBR| | +----+
| | AS |-->|AR| +---+-| | +---+ ................>|EU/G|
| | | +--+<........|........|........... |I2 +----+
\ +----+ / N x AMT\ /
\ / Tunnel \ /
\ / \ /
------------------- -------------------
AS = multicast (e.g., content) Application Source
uBR = unicast Border Router - not multicast enabled;
otherwise, AR = uBR (in AD-1)
AR = AMT Relay
EU/G = Gateway client embedded in EU device
I2 = AMT tunnel connecting EU/G to AR in AD-1 through
non-multicast-enabled AD-2
]]></artwork>
</figure>
<t>This use case is equivalent to having unicast distribution of the
application through AD&nbhy;2. The total number of AMT tunnels would be
equal to the total number of EUs requesting the application.
The peering point thus needs to accommodate the total number of AMT
tunnels between the two domains. Each AMT tunnel can provide the
data usage associated with each EU.</t>
<t>Advantages of this configuration:
<list style="symbols">
<t>Efficient use of bandwidth in AD-1 (the closer the AR is to the uBR,
the more efficient).</t>
<t>Ability of AD-1 to introduce content delivery based on IP multicast,
without any support by network devices in AD&nbhy;2: only the application
side in the EU device needs to perform AMT gateway library functionality
to receive traffic from the AMT relay.</t>
<t>Allows AD-2 to "upgrade" to Use Case 3.5 (see
<xref target="section-3.5"/>) at a later time, without any change in
AD&nbhy;1 at that time.</t>
<t>AMT is an existing technology and is relatively simple to
implement. Attractive properties of AMT include the following:
<list style="symbols">
<t>Dynamic interconnection between the AMT gateway-relay pair across
the peering point.</t>
<t>Ability to serve clients and servers with differing policies.</t>
</list> </t>
<t>Each AMT tunnel serves as a count for each EU and is also able to
track data usage (bytes) delivered to the EU.</t>
</list>
</t>
<t>Disadvantages of this configuration:
<list style="symbols">
<t>Additional devices (AMT gateway and relay pairs) are introduced into
the transport path.</t>
<t>Assuming multiple peering points between the domains, the EU gateway
needs to be able to find the "correct" AMT relay in AD&nbhy;1.</t>
</list>
</t>
<t>Architectural guidelines for this configuration are as follows:</t>
<t>
Guidelines (a) through (c) are the same as those described in
Use Case 3.1. In addition,</t>
<t><list hangIndent="4" style="hanging">
<t hangText="d.">
It is necessary that proper procedures be implemented such that
the AMT gateway at the EU device is able to find the correct AMT relay
for each (S,G) content stream. Standard mechanisms for that selection
are still subject to ongoing work. This includes the use of anycast
gateway addresses, anycast DNS names, or explicit configuration that
maps (S,G) to a relay address; or letting the application in the
EU/G provide the relay address to the embedded AMT gateway function.</t>
<t hangText="e.">
The AMT tunnel's capabilities are expected to be sufficient for
the purpose of collecting relevant information on the multicast
streams delivered to EUs in AD&nbhy;2.</t>
</list>
</t>
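<t>
As one example of the anycast option in guideline (d),
<xref target="RFC7450"/> reserves well-known AMT relay anycast
prefixes (192.52.193.0/24 and 2001:3::/32). A gateway with no other
configuration can send its Relay Discovery message to an address in
these prefixes, and unicast routing delivers it to the topologically
nearest relay (the addresses shown below are examples from the
reserved prefixes):</t>
<figure><artwork><![CDATA[
EU/G: Relay Discovery --> 192.52.193.1 (IPv4 anycast)
                      or  2001:3::1    (IPv6 anycast)

  - Unicast routing forwards the message to the nearest AR
    advertising the anycast prefix.
  - The Relay Advertisement returns that relay's unicast address,
    which the EU/G then uses to build the AMT tunnel (I2).
]]></artwork></figure>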
</section>
<section title="AD-2 Not Multicast Enabled - Multiple AMT Tunnels through AD-2" anchor="section-3.5">
<t><xref target="amt-pic2"/> illustrates a variation of
Use Case 3.4:</t>
<figure anchor="amt-pic2" title="AMT Tunnel Connecting AMT Gateways and Relays">
<artwork><![CDATA[
------------------- -------------------
/ AD-1 \ / AD-2 \
/ (Multicast Enabled) \ / (Not Multicast \
/ +---+ \ (I1) / +---+ Enabled) \
| +----+ |uBR|-|--------|-|uBR| |
| | | +--+ +---+ | | +---+ +---+ | +----+
| | AS |-->|AR|<........|.... | +---+ |AG/|....>|EU/G|
| | | +--+ | ......|.|AG/|..........>|AR2| |I3 +----+
\ +----+ / I1 \ |AR1| I2 +---+ /
\ / Single \+---+ /
\ / AMT Tunnel \ /
------------------- -------------------
uBR = unicast Border Router - not multicast enabled;
also, either AR = uBR (AD-1) or uBR = AGAR1 (AD-2)
AS = multicast (e.g., content) Application Source
AR = AMT Relay in AD-1
AGAR1 = AMT Gateway/Relay node in AD-2 across peering point
I1 = AMT tunnel connecting AR in AD-1 to gateway in AGAR1 in AD-2
AGAR2 = AMT Gateway/Relay node at AD-2 network edge
I2 = AMT tunnel connecting relay in AGAR1 to gateway in AGAR2
EU/G = Gateway client embedded in EU device
I3 = AMT tunnel connecting EU/G to AR in AGAR2
]]></artwork>
</figure>
<t>
Use Case 3.4 results in several long AMT tunnels crossing the entire
network of AD&nbhy;2 linking the EU device and the AMT relay in AD&nbhy;1
through the peering point. Depending on the number of EUs, there is a
likelihood of an unacceptably high amount of traffic due to the large
number of AMT tunnels -- and unicast streams -- through the peering
point. This situation can be alleviated as follows:</t>
<t><list style="symbols">
<t>Provisioning of strategically located AMT nodes in AD-2. An
AMT node comprises co-location of an AMT gateway and an AMT relay. No
change is required by AD&nbhy;1, as compared to
Use Case 3.4. This can be done whenever AD&nbhy;2 sees fit
(e.g., too much traffic across the peering point).</t>
<t>One such node is on the AD-2 side of the peering point
(AMT node AGAR1 in <xref target="amt-pic2"/>).</t>
<t>A single AMT tunnel established across the peering point linking
the AMT relay in AD&nbhy;1 to the AMT gateway in AMT node AGAR1
in AD&nbhy;2.</t>
<t>AMT tunnels linking AMT node AGAR1 at the peering point in AD&nbhy;2
to other AMT nodes located at the edges of AD&nbhy;2: e.g., AMT
tunnel I2 linking the AMT relay in AGAR1 to the AMT gateway in
AMT node AGAR2 (<xref target="amt-pic2"/>).</t>
<t>AMT tunnels linking an EU device (via a gateway client embedded in
the device) and an AMT relay in an appropriate AMT node at the edge
of AD&nbhy;2: e.g., I3 linking the EU gateway in the device to
the AMT relay in AMT node AGAR2.</t>
<t>In the simplest option (not shown), AD-2 only deploys a single
AGAR1 node and lets the EU/G build AMT tunnels directly to it. This
setup already solves the problem of replicated traffic across
the peering point. When there is a need to support more
AMT tunnels to the EU/G, additional AGAR2 nodes can be deployed
by AD&nbhy;2.</t>
</list>
</t>
<t>
The advantage of such a chained set of AMT tunnels is that the total
number of unicast streams across AD&nbhy;2 is significantly reduced, thus
freeing up bandwidth. Additionally, there will be a single unicast stream
across the peering point instead of, possibly, an unacceptably large
number of such streams per Use Case 3.4. However, this implies
that several AMT tunnels will need to be dynamically configured by the
various AMT gateways, based solely on the (S,G) information received from
the application client at the EU device. A suitable mechanism for such
dynamic configurations is therefore critical.</t>
<t>Architectural guidelines for this configuration are as follows:</t>
<t>
Guidelines (a) through (c) are the same as those described in
Use Case 3.1. In addition,</t>
<t><list hangIndent="4" style="hanging">
<t hangText="d.">
It is necessary that proper procedures be implemented such that
the various AMT gateways (at the EU devices and the AMT nodes in
AD&nbhy;2) are able to find the correct AMT relay in other AMT nodes
as appropriate. Standard mechanisms for that selection are still
subject to ongoing work. This includes the use of anycast relay
addresses, anycast DNS names, or explicit configuration that maps
(S,G) to a relay address. On the EU/G, this mapping information
may come from the application.</t>
<t hangText="e.">
The AMT tunnel's capabilities are expected to be sufficient for
the purpose of collecting relevant information on the multicast
streams delivered to EUs in AD&nbhy;2.</t>
</list>
</t>
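Guideline (d) above can be illustrated with a minimal sketch of relay selection on an AMT gateway. The mapping table, relay addresses, and anycast fallback below are purely illustrative assumptions (documentation-range addresses per RFC 5737 / RFC 6676), not a standardized mechanism, since standard selection mechanisms are still subject to ongoing work.

```python
# Hedged sketch: (S,G)-to-AMT-relay selection on an AMT gateway.
# The mapping table and anycast fallback address are illustrative
# assumptions, not part of any standardized mechanism.

ANYCAST_RELAY = "192.0.2.1"  # assumed anycast relay discovery address

# Explicit (S,G) -> relay mappings, e.g. supplied by the application
# to the EU/G, or configured on AMT nodes in AD-2.
SG_TO_RELAY = {
    ("198.51.100.10", "232.1.1.1"): "192.0.2.10",  # relay in AMT node AGAR2
    ("198.51.100.10", "232.1.1.2"): "192.0.2.11",
}

def select_relay(source, group):
    """Return the AMT relay address to tunnel toward for a given (S,G).

    Falls back to the anycast relay address when no explicit mapping
    is configured (guideline (d))."""
    return SG_TO_RELAY.get((source, group), ANYCAST_RELAY)
```

A gateway would call `select_relay()` with the (S,G) received from the application client before establishing its tunnel; the anycast fallback mirrors the relay discovery behavior of AMT itself.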
</section>
</section>
<section title="Functional Guidelines" anchor="section-4">
<t>Supporting functions and related interfaces over the peering point
that enable the multicast transport of the application are listed in
this section. Critical information parameters that need to be
exchanged in support of these functions are enumerated, along with
guidelines as appropriate. Specific interface functions for
consideration are as follows.</t>
<section title="Network Interconnection Transport Guidelines" anchor="section-4.1">
<t>
The term "network interconnection transport" refers to the
interconnection points between the two administrative domains. The
following is a representative set of attributes that the two
administrative domains will need to agree on to support multicast
delivery.</t>
<t><list style="symbols">
<t>Number of peering points.</t>
<t>Peering point addresses and locations.</t>
<t>Connection type - Dedicated for multicast delivery or shared
with other services.</t>
<t>Connection mode - Direct connectivity between the two ADs or
via another ISP.</t>
<t>Peering point protocol support - Multicast protocols that will
be used for multicast delivery will need to be supported at
these points. Examples of such protocols include
External BGP (EBGP) <xref target="RFC4760"/> peering via
MP&nbhy;BGP (Multiprotocol BGP) SAFI&nbhy;2
<xref target="RFC4760"/>.</t>
<t>Bandwidth allocation - If shared with other services, then
there needs to be a determination of the share of bandwidth
reserved for multicast delivery. See
<xref target="section-4.1.1"/> below for more details.</t>
<t>QoS requirements - Delay and/or latency specifications that need to
be specified in an SLA.</t>
<t>AD roles and responsibilities - The role played by each AD for
provisioning and maintaining the set of peering points to support
multicast delivery.</t>
</list>
</t>
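The attribute set above can be captured as a simple record that both ADs populate when provisioning a peering point. The field names and example values here are assumptions for illustration only; there is no standardized schema for this agreement.

```python
# Hedged sketch: one record per peering point capturing the attributes
# the two ADs agree on. Field names and values are illustrative
# assumptions, not a standardized schema.

from dataclasses import dataclass

@dataclass
class MulticastPeeringPoint:
    address: str             # peering point address
    location: str            # peering point location
    dedicated: bool          # dedicated to multicast vs. shared with other services
    direct: bool             # direct AD-to-AD connectivity vs. via another ISP
    protocols: tuple         # e.g. ("EBGP", "MP-BGP SAFI-2")
    multicast_bw_mbps: int   # bandwidth share reserved for multicast if shared
    max_latency_ms: float    # delay/latency bound to be specified in the SLA

# Example entry with documentation-range values:
pp = MulticastPeeringPoint(
    address="203.0.113.1", location="IX-1", dedicated=False, direct=True,
    protocols=("EBGP", "MP-BGP SAFI-2"), multicast_bw_mbps=500,
    max_latency_ms=50.0)
```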
<section title="Bandwidth Management" anchor="section-4.1.1">
<t>Like IP unicast traffic, IP multicast traffic carried across
non&nbhy;controlled networks must comply with congestion control
principles as described in <xref target="BCP41"/> and as explained
in detail for UDP IP multicast in <xref target="BCP145"/>.</t>
<t>Non-controlled networks (such as the Internet) are networks where
there is no policy for managing bandwidth other than best effort with a
fair share of bandwidth under congestion. As a simplified rule of thumb,
complying with congestion control principles means reducing bandwidth
under congestion in a way that is fair to competing (typically TCP)
flows ("rate adaptive").</t>
<t>In many instances, multicast content delivery evolves from
intra&nbhy;domain deployments where it is handled as a controlled
network service and does not comply with congestion control
principles. It was given a reserved amount of bandwidth and admitted to
the network so that congestion never occurs. Therefore, the congestion
control issue should be given specific attention when evolving to an
inter&nbhy;domain peering deployment.</t>
<t>In the case where end-to-end IP multicast traffic passes across the
network of two ADs (and their subsidiaries/customers), both ADs must
agree on a consistent traffic-management policy. If, for example,
AD&nbhy;1 sources non&nbhy;congestion&nbhy;aware IP multicast traffic
and AD&nbhy;2 carries it as best&nbhy;effort traffic across links shared
with other Internet traffic (subject to congestion), this will not
work: under congestion, some amount of that traffic will be dropped,
often rendering the remaining packets as undecodable garbage clogging
up the network in AD&nbhy;2; because this traffic is not
congestion aware, the loss does not reduce its rate. Competing
traffic will not get its fair share under congestion, and EUs will be
frustrated by the extremely bad quality of both their IP multicast
traffic and other (e.g., TCP) traffic. Note that this is not an IP
multicast technology issue but is solely a
transport&nbhy;layer / application&nbhy;layer issue: the problem
would just as likely happen if AD&nbhy;1 were to send
non&nbhy;rate-adaptive unicast traffic -- for example, legacy IPTV
video&nbhy;on&nbhy;demand traffic, which is typically also
non&nbhy;congestion aware. Note that because rate adaptation in IP
unicast video is commonplace today due to the availability of ABR
(Adaptive Bitrate) video, it is very unlikely that this will happen in
reality with IP unicast.</t>
<t>While the rules for traffic management apply whether IP multicast
is tunneled or not, the one feature that can make AMT tunnels more
difficult is the unpredictability of bandwidth requirements across
underlying links because of the way they can be used: with native IP
multicast or GRE tunnels, the amount of bandwidth depends on the amount
of content -- not the number of EUs -- and is therefore easier to plan
for. AMT tunnels terminating in the EU/G, on the other hand, scale with
the number of EUs. In the vicinity of the AMT relay, they can introduce
a very large amount of replicated traffic, and it is not always feasible
to provision enough bandwidth for all possible EUs to get the highest
quality for all their content during peak utilization in such setups --
unless the AMT relays are very close to the EU edge. Therefore, it is
also recommended that IP multicast rate adaptation be used, even inside
controlled networks, when using AMT tunnels directly to the EU/G.</t>
<t>Note that rate-adaptive IP multicast traffic in general does not mean
that the sender is reducing the bitrate but rather that the EUs that
experience congestion are joining to a lower-bitrate (S,G) stream of the
content, similar to ABR streaming over TCP. Therefore, migration from a
non&nbhy;rate&nbhy;adaptive bitrate to a rate&nbhy;adaptive bitrate in
IP multicast will also change the dynamic (S,G) join behavior in the
network, resulting in potentially higher performance requirements for IP
multicast protocols (IGMP/PIM), especially on the last hops where
dynamic changes occur (including AMT gateways/relays): in
non&nbhy;rate-adaptive IP multicast, only "channel change" causes state
change, but in rate-adaptive multicast, congestion also causes state
change.</t>
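The receiver-driven adaptation described above can be sketched as a small tier-selection routine: under congestion the EU leaves its current (S,G) and joins a lower-bitrate (S,G) of the same content. The tier table, group addresses, and loss threshold are illustrative assumptions, not prescribed values.

```python
# Hedged sketch: EU-side rate adaptation for IP multicast. Under
# congestion the receiver switches to a lower-bitrate (S,G) of the same
# content, analogous to ABR over TCP. Tiers and threshold are assumed.

# Bitrate tiers for one content item, highest first:
# (bitrate_kbps, source, group)
TIERS = [
    (8000, "198.51.100.10", "232.1.1.1"),
    (4000, "198.51.100.10", "232.1.1.2"),
    (1500, "198.51.100.10", "232.1.1.3"),
]

LOSS_THRESHOLD = 0.02  # assumed policy: step down above 2% observed loss

def next_tier(current_index, observed_loss):
    """Pick the tier index to join next. Each step changes (S,G) state in
    the network, which is the extra IGMP/PIM churn noted in the text."""
    if observed_loss > LOSS_THRESHOLD and current_index < len(TIERS) - 1:
        return current_index + 1  # congestion: join lower-bitrate (S,G)
    if observed_loss == 0 and current_index > 0:
        return current_index - 1  # headroom: try higher-bitrate (S,G)
    return current_index
```

Note how, unlike non-rate-adaptive multicast, state changes here are driven by congestion measurements as well as by channel change.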
<t>Even though not fully specified in this document, peerings that rely
on GRE/AMT tunnels may be across one or more transit ADs instead of an
exclusive (non&nbhy;shared, L1/L2) path. Unless those transit ADs are
explicitly contracted to provide other than "best effort" transit
for the tunneled traffic, the tunneled IP multicast traffic must be
rate adaptive in order to not violate BCP 41 across those
transit ADs.</t>
</section>
</section>
<section title="Routing Aspects and Related Guidelines" anchor="section-4.2">
<t>
The main objective for multicast delivery routing is to ensure that
the EU receives the multicast stream from the "most optimal"
source <xref target="INF_ATIS_10"/>, which typically:</t>
<t><list style="symbols">
<t>Maximizes the multicast portion of the
transport and minimizes any unicast portion of the delivery, and</t>
<t>Minimizes the overall combined route distance of the network(s).</t>
</list>
</t>
<t>
This routing objective applies to both native multicast and AMT; the
actual methodology of the solution will be different for each. Regardless,
the routing solution is expected to:</t>
<t><list style="symbols"><t>Be scalable,</t>
<t>Avoid or minimize new protocol development or modifications,
and</t>
<t>Be robust enough to achieve high reliability and to
automatically adjust to changes and problems in the multicast
infrastructure.</t>
</list>
</t>
<t>
For both native and AMT environments, having a source as close as
possible to the EU network is most desirable; therefore, in some
cases, an AD may prefer to have multiple sources near different
peering points. However, that is entirely an implementation issue.</t>
<section title="Native Multicast Routing Aspects" anchor="section-4.2.1">
<t>
Native multicast simply requires that the administrative domains
coordinate and advertise the correct source address(es) at their
network interconnection peering points (i.e., BRs). An example of
multicast delivery via a native multicast process across
two administrative domains is as follows, assuming that the
interconnecting peering points are also multicast enabled:</t>
<t><list style="symbols">
<t>Appropriate information is obtained by
the EU client, who is a subscriber to AD&nbhy;2 (see
Use Case 3.1). This information is in the form of metadata,
and it contains instructions directing the EU client to launch an
appropriate application if necessary, as well as additional information
for the application about the source location and the group (or stream)
ID in the form of (S,G) data. The "S" portion provides the name or IP
address of the source of the multicast stream. The metadata may also
contain alternate delivery information, such as specifying the unicast
address of the stream.</t>
<t>The client uses the join message with (S,G) to join the multicast
stream <xref target="RFC4604"/>. To facilitate this process, the two
ADs need to do the following:
<list style="symbols">
<t>Advertise the source ID(s) over the peering points.</t>
<t>Exchange such relevant peering point information as capacity
and utilization.</t>
<t>Implement compatible multicast protocols to ensure proper
multicast delivery across the peering points.</t>
</list>
</t>
</list>
</t>
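The receiver-side step above — joining the (S,G) obtained from the metadata, per RFC 4604 — can be sketched with the source-specific multicast socket option. The sketch below only builds the `ip_mreq_source` argument (Linux field order: group, interface, source) for `IP_ADD_SOURCE_MEMBERSHIP`; the addresses are documentation-range assumptions, and the field order differs on some other platforms.

```python
# Hedged sketch: receiver-side SSM join from the metadata's (S,G).
# Addresses are illustrative; ip_mreq_source layout assumes Linux
# (group, interface, source), which differs on some platforms.

import socket

def ssm_membership_request(source, group, interface="0.0.0.0"):
    """Pack struct ip_mreq_source (Linux layout) for use with
    setsockopt(IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, ...)."""
    return (socket.inet_aton(group)
            + socket.inet_aton(interface)
            + socket.inet_aton(source))

def join_ssm(sock, source, group):
    """Issue the (S,G) join on an already-bound UDP socket."""
    # Fall back to the Linux numeric value if the constant is absent.
    opt = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)
    sock.setsockopt(socket.IPPROTO_IP, opt,
                    ssm_membership_request(source, group))
```

In a real client, the `source` and `group` arguments would come from the (S,G) data in the metadata described above.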
</section>
<section title="GRE Tunnel over Interconnecting Peering Point" anchor="section-4.2.2">
<t>
If the interconnecting peering point is not multicast enabled and
both ADs are multicast enabled, then a simple solution is to
provision a GRE tunnel between the two ADs; see Use Case 3.2
(<xref target="section-3.2"/>). The termination points of the tunnel will