From 14614c2f7f3fdccb539fd5742de253576fe5d013 Mon Sep 17 00:00:00 2001 From: Andras Varga Date: Thu, 16 May 2024 10:01:20 +0200 Subject: [PATCH] showcases: proofreading of rst files using llm --- .../emulation/videostreaming/doc/index.rst | 26 +-- showcases/emulation/voip/doc/mininet.rst | 2 +- showcases/emulation/webserver/doc/index.rst | 10 +- showcases/general/diffserv/doc/index.rst | 36 ++-- showcases/index.rst | 7 +- showcases/measurement/datarate/doc/index.rst | 8 +- .../measurement/endtoenddelay/doc/index.rst | 13 +- showcases/measurement/flow/doc/index.rst | 20 +-- showcases/measurement/jitter/doc/index.rst | 2 +- .../measurement/propagationtime/doc/index.rst | 6 +- .../measurement/queueingtime/doc/index.rst | 4 +- .../measurement/relationships/doc/index.rst | 6 +- .../measurement/residencetime/doc/index.rst | 4 +- .../measurement/throughput/doc/index.rst | 3 +- .../transmissiontime/doc/index.rst | 6 +- showcases/mobility/basic/doc/index.rst | 30 ++-- showcases/mobility/spatial/doc/index.rst | 20 +-- showcases/routing/manet/doc/index.rst | 56 +++--- .../frerandtas/doc/index.rst | 3 +- .../gptpandtas/doc/index.rst | 28 +-- .../combiningfeatures/invehicle/doc/index.rst | 10 +- .../tsn/cutthroughswitching/doc/index.rst | 12 +- showcases/tsn/framepreemption/doc/index.rst | 118 ++++++------ .../automaticfailureprotection/doc/index.rst | 6 +- .../doc/index.rst | 10 +- showcases/tsn/framereplication/index.rst | 4 +- .../manualconfiguration/doc/index.rst | 10 +- .../multicastfailureprotection/doc/index.rst | 5 +- .../tsn/gatescheduling/eager/doc/index.rst | 6 +- showcases/tsn/gatescheduling/index.rst | 2 +- showcases/tsn/index.rst | 8 +- .../streamfiltering/statistical/doc/index.rst | 11 +- .../streamfiltering/tokenbucket/doc/index.rst | 18 +- .../clockdrift/doc/index.rst | 52 +++--- .../asynchronousshaper/doc/index.rst | 54 +++--- .../creditbasedshaper/doc/index.rst | 6 +- .../trafficshaping/underthehood/doc/index.rst | 8 +- .../canvas/datalinkactivity/doc/index.rst | 96 +++++----- .../canvas/instrumentfigures/doc/index.rst | 2 +- .../canvas/packetdrop/doc/index.rst | 38 ++-- .../canvas/routingtable/doc/index.rst | 4 +- .../visualizer/canvas/spectrum/doc/index.rst | 44 ++--- .../canvas/transportconnection/doc/index.rst | 13 +- .../transportpathactivity/doc/index.rst | 18 +- showcases/visualizer/osg/earth/doc/index.rst | 16 +- .../visualizer/osg/environment/doc/index.rst | 8 +- showcases/wireless/aggregation/doc/index.rst | 16 +- showcases/wireless/analogmodel/doc/index.rst | 118 ++++++------ showcases/wireless/blockack/doc/index.rst | 64 +++---- showcases/wireless/coexistence/doc/index.rst | 2 +- showcases/wireless/crosstalk/doc/index.rst | 28 +-- .../directionalantennas/doc/index.rst | 8 +- showcases/wireless/errorrate/doc/index.rst | 34 ++-- showcases/wireless/handover/doc/index.rst | 2 +- showcases/wireless/hiddennode/doc/index.rst | 4 +- showcases/wireless/ieee802154/doc/index.rst | 6 +- showcases/wireless/multiradio/doc/index.rst | 2 +- showcases/wireless/power/doc/index.rst | 8 +- showcases/wireless/qos/doc/index.rst | 6 +- showcases/wireless/ratecontrol/doc/index.rst | 8 +- .../wireless/sensornetwork/doc/index.rst | 168 +++++++++--------- showcases/wireless/throughput/doc/index.rst | 10 +- showcases/wireless/txop/doc/index.rst | 13 +- 63 files changed, 677 insertions(+), 689 deletions(-) diff --git a/showcases/emulation/videostreaming/doc/index.rst b/showcases/emulation/videostreaming/doc/index.rst index d333e34a527..cdb09b294f6 100644 --- 
a/showcases/emulation/videostreaming/doc/index.rst +++ b/showcases/emulation/videostreaming/doc/index.rst @@ -8,7 +8,7 @@ This showcase demonstrates the use of real applications in a simulated network, of application behavior without having to set up a physical network. This technique is also known as software-in-the-loop (SIL). -.. TODO this paragraph is too generic, it barely mean anything +.. TODO this paragraph is too generic, it barely means anything The simulation can be easily configured for various topologies and behaviors to test a variety of cases. Using INET's emulation feature, the real world (host OS environment) is interfaced with the simulation. INET offers various modules that make this interfacing possible, and more information about these modules @@ -28,16 +28,16 @@ The simulation scenario is illustrated with the following diagram: :width: 50% :align: center -In this scenario, a VLC instance in a sender host streams a video file to another VLC instance -in a receiver host over the network. The hosts from the link-layer up are real; parts of the +In this scenario, a VLC instance on a sender host streams a video file to another VLC instance +on a receiver host over the network. The hosts from the link-layer up are real; parts of the link-layer, as well as the physical layer and the network are simulated. We'll use the :ned:`ExtUpperEthernetInterface` to connect the real and simulated parts of the scenario. -The lower part of this interface is present in the simulation, and uses TAP interfaces in the host OS +The lower part of this interface is present in the simulation and uses TAP interfaces in the host OS to send and receive packets to and from the upper layers of the host OS. Note that the real and simulated parts can be separated at other levels of the protocol stack, using -other, suitable EXT interface modules, such as at the transport layer (:ned:`ExtLowerUdp`), and the +other suitable EXT interface modules, such as at the transport layer (:ned:`ExtLowerUdp`) and the network layer (:ned:`ExtUpperIpv4`, :ned:`ExtLowerIpv4`). In fact, the real parts of the sender and receiver hosts are running on the same machine, as both use @@ -47,12 +47,12 @@ the protocol stack of the host OS (even though in this scenario logically they a :width: 50% :align: center -We'll use a VLC instance in the sender host to stream a video file. The packets created by VLC go +We'll use a VLC instance on the sender host to stream a video file. The packets created by VLC go down the host OS protocol stack and enter the simulation at the Ethernet interface. Then they traverse the simulated network, enter the receiver host's Ethernet interface, and are injected -into the host OS protocol stack, and go up to another VLC instance which plays the video. +into the host OS protocol stack and go up to another VLC instance, which plays the video. -The network for the simulation is the following: +The network for the simulation is as follows: .. figure:: media/Network2.png :width: 90% @@ -78,20 +78,20 @@ The ``teardown.sh`` script does the opposite; it destroys the TAP interfaces whe .. literalinclude:: ../teardown.sh :language: bash -The ``run.sh`` script starts the simulation, and both video applications: +The ``run.sh`` script starts the simulation and both video applications: .. 
literalinclude:: ../run.sh :language: bash In the configuration in omnetpp.ini, the scheduler class is set to ``RealTimeScheduler`` so that -the simulation can run in the real time of the host OS: +the simulation can run in real time, synchronized to the host OS clock: .. literalinclude:: ../omnetpp.ini :language: ini :end-at: sim-time-limit -The hosts are configured to have an :ned:`ExtUpperEthernetInterface`, and to use the TAP devices -which were created by the setup script. The setup script assigned IP addresses to the TAP interfaces; +The hosts are configured to have an :ned:`ExtUpperEthernetInterface` and to use the TAP devices +that were created by the setup script. The setup script assigned IP addresses to the TAP interfaces; the EXT interfaces are configured to copy the addresses from the TAP interfaces: .. literalinclude:: ../omnetpp.ini @@ -141,7 +141,7 @@ The received video stream is played by the other VLC instance. The received vide the original video file, because it's downscaled, and the bitrate is reduced, so that the playback is smooth. -.. note:: Emulating the network is CPU-intensive. The downscaling and bitrate settings were chosen to lead to smooth playback on the PC we tested the showcase on. However, it might be able to work in higher quality on a faster machine; the user can experiment with different encoding settings for the VLC streaming instance by editing them in the run script. +.. note:: Emulating the network is CPU-intensive. The downscaling and bitrate settings were chosen to lead to smooth playback on the PC we tested the showcase on. However, it might be able to work at higher quality on a faster machine; the user can experiment with different encoding settings for the VLC streaming instance by editing them in the run script. Here are some of the packets captured in Wireshark: diff --git a/showcases/emulation/voip/doc/mininet.rst index 5ce5975f38e..95aceb993eb 100644 --- a/showcases/emulation/voip/doc/mininet.rst +++ b/showcases/emulation/voip/doc/mininet.rst @@ -43,7 +43,7 @@ The Setup To recap, the goal of this showcase is to simulate transmission of a VoIP stream over a virtual network, where both the VoIP sender and receiver are simulations -that are run in realtime. We use Mininet to a similar, but slightly extended +that are run in real time. We use Mininet to set up a similar, but slightly extended network topology than in the previous showcase: two hosts connected via a switch. The simulations that run the VoIP sender and receiver are the same as in the previous showcase. diff --git a/showcases/emulation/webserver/doc/index.rst index d31389eea98..542ceda9736 100644 --- a/showcases/emulation/webserver/doc/index.rst +++ b/showcases/emulation/webserver/doc/index.rst @@ -7,7 +7,7 @@ Goals The goal of this showcase is to demonstrate the integration and operation of a real-world application within an OMNeT++/INET simulation environment. Specifically, it involves running a Python-based webserver and executing several -wget commands from host operating system environments. These are interfaced with +`wget` commands from host operating system environments. These are interfaced with simulated network components, illustrating the capabilities of OMNeT++ in hybrid simulations that involve both emulated and simulated network elements. 
@@ -31,7 +31,7 @@ Here is the NED definition of the network: This network consists of a server connected to an Ethernet switch, along with a configurable number of clients also connected to the switch. The server will run a real Python webserver, and the clients will perform HTTP GET requests using -wget. +`wget`. Configuration @@ -52,16 +52,16 @@ Results ------- Upon running the simulation, the webserver on the host OS will respond to HTTP -GET requests initiated by the wget commands from the simulated clients. The +GET requests initiated by the `wget` commands from the simulated clients. The interaction can be observed in the simulation's event log, and network performance metrics such as response time and throughput can be analyzed based on the simulation results. -The following terminal screenshot shows the running webserver and wget processes: +The following terminal screenshot shows the running webserver and `wget` processes: .. figure:: media/ps.png -The output of the processes appears in the Qtenv log window. Here is the output of one of the wget commands: +The output of the processes appears in the Qtenv log window. Here is the output of one of the `wget` commands: .. figure:: media/wget_module_log.png diff --git a/showcases/general/diffserv/doc/index.rst b/showcases/general/diffserv/doc/index.rst index b09f4b9041f..1935049b209 100644 --- a/showcases/general/diffserv/doc/index.rst +++ b/showcases/general/diffserv/doc/index.rst @@ -34,15 +34,15 @@ In theory, a network could have up to 64 (i.e. 2^6) different traffic classes using different DSCPs. In practice, however, most networks use the following commonly defined per-hop behaviors: -- *Default PHB* typically maps to best-effort traffic. -- *Expedited Forwarding (EF) PHB* is dedicated to low-loss, low-latency - traffic. -- *Assured Forwarding (AF) PHBs* give assurance of delivery under - prescribed conditions; there are four classes and three drop - probabilities, yielding twelve separate DSCP encodings from AF11 - through AF43. -- *Class Selector PHBs* provide backward compatibility with the - IP Precedence field. +- *Default PHB* typically maps to best-effort traffic. +- *Expedited Forwarding (EF) PHB* is dedicated to low-loss, low-latency + traffic. +- *Assured Forwarding (AF) PHBs* give assurance of delivery under + prescribed conditions; there are four classes and three drop + probabilities, yielding twelve separate DSCP encodings from AF11 + through AF43. +- *Class Selector PHBs* provide backward compatibility with the + IP Precedence field. As EF is often used for carrying VoIP traffic, we'll also configure our example network to do that. @@ -86,15 +86,15 @@ Configuration and Behavior The showcase contains three different configurations: -- ``VoIP_WithoutQoS``: The queue in the router's PPP interface is - overloaded and packets are dropped. -- ``VoIP_WithPolicing``: The VoIP traffic is classified as EF - traffic and others as AF. AF traffic is rate - limited using Token Bucket to 70% of the link's capacity. -- ``VoIP_WithPolicingAndQueuing``: This is the same as the previous - configuration, except the router's queue is configured so that EF - packets are prioritized over other packets, so lower delays are - expected. +- ``VoIP_WithoutQoS``: The queue in the router's PPP interface is + overloaded and packets are dropped. +- ``VoIP_WithPolicing``: The VoIP traffic is classified as EF + traffic and others as AF. AF traffic is rate + limited using Token Bucket to 70% of the link's capacity. 
+- ``VoIP_WithPolicingAndQueuing``: This is the same as the previous + configuration, except the router's queue is configured so that EF + packets are prioritized over other packets, so lower delays are + expected. The router's PPP interface contains the key elements of Differentiated Services in this network: a queue (``queue``) and a traffic conditioner (``egressTC``). diff --git a/showcases/index.rst b/showcases/index.rst index 29cebfb2048..4eda3b32965 100644 --- a/showcases/index.rst +++ b/showcases/index.rst @@ -1,15 +1,16 @@ Showcases ========= + INET Showcases are small simulation studies that show off various components -and features of the INET Framework. Each showcase consist of a fully +and features of the INET Framework. Each showcase consists of a fully configured simulation model and a web page that presents the goal of the study, -the simulation setup and the results. Although not expressly designed as tutorials, +the simulation setup, and the results. Although not expressly designed as tutorials, showcases were created in the hope that they will be directly useful for INET users doing related simulations. You can browse the showcase pages on this web site. The source code of -the simulations (NED, ini and other files) and the web pages are in the +the simulations (NED, ini, and other files) and the web pages are in the `showcases/` subdirectory of the `INET repository `_. diff --git a/showcases/measurement/datarate/doc/index.rst b/showcases/measurement/datarate/doc/index.rst index b526c7a3777..f05cc7c1241 100644 --- a/showcases/measurement/datarate/doc/index.rst +++ b/showcases/measurement/datarate/doc/index.rst @@ -4,7 +4,7 @@ Measuring Data Rate Goals ----- -In this example we explore the data rate statistics of application, queue, and +In this example, we explore the data rate statistics of application, queue, and filter modules inside network nodes. | INET version: ``4.4`` @@ -13,10 +13,10 @@ filter modules inside network nodes. The Model --------- -The data rate is measured by observing the packets as they are passing through +The data rate is measured by observing the packets as they pass through over time at a certain point in the node architecture. For example, an application -source module produces packets over time and this process has its own data rate. -Similarly, a queue module enqueues and dequeues packets over time and both of +source module produces packets over time, and this process has its own data rate. +Similarly, a queue module enqueues and dequeues packets over time, and both of these processes have their own data rate. These data rates are different, which in turn causes the queue length to increase or decrease over time. diff --git a/showcases/measurement/endtoenddelay/doc/index.rst b/showcases/measurement/endtoenddelay/doc/index.rst index e048a9fa6ef..91eb2dbd2a1 100644 --- a/showcases/measurement/endtoenddelay/doc/index.rst +++ b/showcases/measurement/endtoenddelay/doc/index.rst @@ -4,7 +4,7 @@ Measuring End-to-end Delay Goals ----- -In this example we explore the end-to-end delay statistics of applications. +In this example, we explore the end-to-end delay statistics of applications. | INET version: ``4.4`` | Source files location: `inet/showcases/measurement/endtoenddelay `__ @@ -16,7 +16,7 @@ The end-to-end delay is measured from the moment the packet leaves the source application to the moment the same packet arrives at the destination application. The end-to-end delay is measured by the ``meanBitLifeTimePerPacket`` statistic. 
-The statistic measures the lifetime of the packet, i.e. time from creation in the source application +The statistic measures the lifetime of the packet, i.e., time from creation in the source application to deletion in the destination application. .. note:: The `meanBit` part refers to the statistic being defined per bit, and the result is the mean of the per-bit values of all bits in the packet. @@ -27,7 +27,7 @@ The simulations use a network with two hosts (:ned:`StandardHost`) connected via .. figure:: media/Network.png :align: center -We configure the packet source in the source hosts' UDP app to generate 1200-Byte packets with a period of around 100us randomly. +We configure the packet source in the source host's UDP app to generate 1200-Byte packets with a period of around 100us randomly. This corresponds to about 96Mbps of traffic. Here is the configuration: .. literalinclude:: ../omnetpp.ini @@ -39,7 +39,7 @@ Results The traffic is around 96 Mbps, but the period is random. Thus, the traffic can be higher than the 100Mbps capacity of the Ethernet link. This might result in packets accumulating in the queue in the source host, and increased end-to-end delay (the queue length is unlimited by default). -We display the end-to-end delay, we plot the ``meanBitLifeTimePerPacket`` statistic in vector and histogram form: +To display the end-to-end delay, we plot the ``meanBitLifeTimePerPacket`` statistic in vector and histogram form: .. figure:: media/EndToEndDelayHistogram.png :align: center .. figure:: media/EndToEndDelayVector.png :align: center -.. **TODO** why the uptick ? +.. **TODO** why the uptick? The uptick towards the end of the simulation is due to packets accumulating in the queue. @@ -56,5 +56,4 @@ Sources: :download:`omnetpp.ini <../omnetpp.ini>`, :download:`EndToEndDelayMeasu Discussion ---------- -Use `this `__ page in the GitHub issue tracker for commenting on this showcase. - +Use `this `__ page in the GitHub issue tracker for commenting on this showcase. \ No newline at end of file diff --git a/showcases/measurement/flow/doc/index.rst index 496cff988bd..0542086e305 100644 --- a/showcases/measurement/flow/doc/index.rst +++ b/showcases/measurement/flow/doc/index.rst @@ -24,11 +24,11 @@ By default, statistics in INET are collected based on local events associated wi For timing measurements that are associated with events happening to a packet, `packet flows` can be defined. A packet flow is a logical classification of packets, identified by its name, in the whole network and over the whole duration of the simulation. A packet flow is defined by a label that is added to some packets. The label contains the flow name (the flow's identity), and some specified measurement requests. -Time measurement along packet flows is useful in situations when the timing data to be measured is associated with a packet, rather then the modules it passes through, processed or arrives at. For example, we might want to measure queueing time for a packet at all the modules it passes through, as opposed to measuring queueing time at a queue module for all packets that it processes. +Time measurement along packet flows is useful in situations when the timing data to be measured is associated with a packet, rather than the modules it passes through, is processed by, or arrives at. 
For example, we might want to measure the queueing time for a packet at all the modules it passes through, as opposed to measuring queueing time at a queue module for all packets that it processes. -A packet is associated with the flow by specialized modules called `measurement starters`. These modules attach a label to the packet, that indicates which flow it is part of and what measurements are requested. When the packet travels through the network, certain modules add the measured time (or other data) to the packet as meta-information. Other specialized modules (`measurement recorders`) record the attached data as a statistic associated with that particular flow. These modules can optionally remove the label from the packet (then the packet can continue along its route). +A packet is associated with the flow by specialized modules called `measurement starters`. These modules attach a label to the packet that indicates which flow it is part of and what measurements are requested. When the packet travels through the network, certain modules add the measured time (or other data) to the packet as meta-information. Other specialized modules (`measurement recorders`) record the attached data as a statistic associated with that particular flow. These modules can optionally remove the label from the packet (then the packet can continue along its route). -For example, a label is added to the packet in a measurement starter module specifying the flow name ``flow1``, and the queueing time measurement. As the packet is processed in various queue modules in the network, the queue modules attach the time spent in them to the packet as meta-information. The label is removed at a measurement recorder module, and the queueing time accumulated by the packet is recorded. The data can be found in the analysis tool's browse data tab as the ``flowname:statisticname`` result name of the measurement recorder module (for example, ``flow1:queueingTime:histogram``). By default, :ned:`FlowMeasurementRecorder` records the specified measurements as both vectors and historgrams. The statistics can be plotted, exported and analyzed as any other statistic. +For example, a label is added to the packet in a measurement starter module specifying the flow name ``flow1`` and the queueing time measurement. As the packet is processed in various queue modules in the network, the queue modules attach the time spent in them to the packet as meta-information. The label is removed at a measurement recorder module, and the queueing time accumulated by the packet is recorded. The data can be found in the analysis tool's browse data tab as the ``flowname:statisticname`` result name of the measurement recorder module (for example, ``flow1:queueingTime:histogram``). By default, :ned:`FlowMeasurementRecorder` records the specified measurements as both vectors and histograms. The statistics can be plotted, exported, and analyzed as any other statistic. .. in that module as the ``QueueingTime`` statistic associated with ``flow1``. @@ -38,7 +38,7 @@ For example, a label is added to the packet in a measurement starter module spec Any number of flow labels can be added to a packet (it can be part of multiple flows). Also, the same flow can have multiple start and end points. -.. note:: A practical problem is that different parts of a packet may have different history, due to fragmentation and reassembly, for example. Therefore, we need to keep track of the measurements for different regions of the packet. 
For this purpose, each bit in a packet can have its own meta-information associating it to a flow, called a `region tag`. For more information on region tags, check out the :ref:`dg:sec:packets:region-tagging` section of the INET Developer's Guide. +.. note:: A practical problem is that different parts of a packet may have different history, due to fragmentation and reassembly, for example. Therefore, we need to keep track of the measurements for different regions of the packet. For this purpose, each bit in a packet can have its own meta-information associating it with a flow, called a `region tag`. For more information on region tags, check out the :ref:`dg:sec:packets:region-tagging` section of the INET Developer's Guide. .. **TODO** for wip branch -> more about the available statistics in FlowMeasurementRecorder.ned ? @@ -49,13 +49,13 @@ The dedicated module responsible for adding flow labels and specifying measureme The :ned:`FlowMeasurementStarter` and :ned:`FlowMeasurementRecorder` modules have the same set of parameters that specify the flow name (:par:`flowName` parameter), the set of packets that enter or exit the flow (:par:`packetFilter` parameter), and the required measurements (:par:`measure` parameter). -By default, the filters match all packets (``packetFilter = 'true'``). The :par:`measure` parameter is a list containinig elements from the following set, separated by spaces: +By default, the filters match all packets (``packetFilter = 'true'``). The :par:`measure` parameter is a list containing elements from the following set, separated by spaces: - ``delayingTime``, ``queueingTime``, ``processingTime``, ``transmissionTime``, ``propagationTime``: Time for the different cases, on a *per-bit* basis - ``elapsedTime``: The total elapsed time for the packet being in the flow (see first note below) - ``packetEvent``: Record all events that happen to the packet (see second note below) -The :ned:`FlowMeasurementRecorder` module removes flow labels from packets by default. This is contolled by the :par:`endMeasurement` parameter (``true`` by default). +The :ned:`FlowMeasurementRecorder` module removes flow labels from packets by default. This is controlled by the :par:`endMeasurement` parameter (``true`` by default). .. note that this not necessarily the sum of all the above durations; see **TODO** the manual @@ -76,7 +76,7 @@ Adding Measurement Modules to the Network ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The FlowMeasurementStarter and FlowMeasurementRecorder modules can be inserted anywhere in the network (inside network nodes or protocol modules, etc.) in NED by editing the NED source of modules or extending them as a new type. -However, some modules such as the LayeredEthernetInterface, already have built-in :ned:`MeasurementLayer` submodules. This module contains a FlowMeasurementStarter and a FlowMeasurementRecorder, but they are disabled by default (the type is set to empty string). The modules can be enabled from the .INI file (e.g. ``*.host.eth[0].measurementLayer.typename = "MeasurementLayer"``). +However, some modules such as the LayeredEthernetInterface already have built-in :ned:`MeasurementLayer` submodules. This module contains a FlowMeasurementStarter and a FlowMeasurementRecorder, but they are disabled by default (the type is set to empty string). The modules can be enabled from the .INI file (e.g. ``*.host.eth[0].measurementLayer.typename = "MeasurementLayer"``). .. 
figure:: media/Default_MeasurementLayer.png :align: center @@ -86,14 +86,14 @@ However, some modules such as the LayeredEthernetInterface, already have built-i Associating Packets to Flows Based on Multiple Criteria ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -A measurement module can filter which packets to associate with a flow, using a packet filter and a packet data filter. Packets can be associated to multiple flows based on multiple criteria by using several measurement modules connected in series. The MultiMeasurementLayer module makes this convenient. It can be used in place of a MeasurementLayer module. It contains a variable number of FlowMeasurementStarter and FlowMeasurementRecorder modules; the number of modules is specified with its :par:`numMeasurementModules` parameter. For example, ``numMeasurementModules = 2``: +A measurement module can filter which packets to associate with a flow, using a packet filter and a packet data filter. Packets can be associated with multiple flows based on multiple criteria by using several measurement modules connected in series. The MultiMeasurementLayer module makes this convenient. It can be used in place of a MeasurementLayer module. It contains a variable number of FlowMeasurementStarter and FlowMeasurementRecorder modules; the number of modules is specified with its :par:`numMeasurementModules` parameter. For example, ``numMeasurementModules = 2``: The example simulations demonstrate both inserting measurement modules into specific locations (below the :ned:`Udp` module) and using built-in :ned:`MeasurementLayer` submodules. Limitations ~~~~~~~~~~~ -Support for fragmentation and frame aggregation is planned for a later release, currently these are not supported. +Support for fragmentation and frame aggregation is planned for a later release; currently, these are not supported. Visualizing Packet Flows ~~~~~~~~~~~~~~~~~~~~~~~~ @@ -110,7 +110,7 @@ This showcase contains two simulations. Both simulations use the following netwo Figure X. The network -The network contains hosts connected via switches (:ned:`EthernetSwitch`) in a dumbbell topology. Note that the host types are parametric (so that they can be configured from the ini file): +The network contains hosts connected via switches (:ned:`EthernetSwitch`) in a dumbbell topology. Note that the host types are parametric (so that they can be configured from the INI file): .. literalinclude:: ../FlowMeasurementShowcase.ned :start-at: client1 diff --git a/showcases/measurement/jitter/doc/index.rst b/showcases/measurement/jitter/doc/index.rst index dbc654618a0..601de28855e 100644 --- a/showcases/measurement/jitter/doc/index.rst +++ b/showcases/measurement/jitter/doc/index.rst @@ -4,7 +4,7 @@ Measuring Packet Delay Variation Goals ----- -In this example we explore the various packet delay variation (also known as packet jitter) statistics of application modules. +In this example, we explore the various packet delay variation (also known as packet jitter) statistics of application modules. 
| INET version: ``4.4`` | Source files location: `inet/showcases/measurement/jitter `__ diff --git a/showcases/measurement/propagationtime/doc/index.rst index 00a83fb4e67..ab761ad1c2d 100644 --- a/showcases/measurement/propagationtime/doc/index.rst +++ b/showcases/measurement/propagationtime/doc/index.rst @@ -4,8 +4,8 @@ Measuring Propagation Time Goals ----- -In this example we explore the channel propagation time statistics for wired and -wireless transmission mediums. +In this example, we explore the channel propagation time statistics for wired and +wireless transmission media. | INET version: ``4.4`` | Source files location: `inet/showcases/measurement/propagationtime `__ @@ -16,7 +16,7 @@ The Model The packet propagation time is measured from the moment the beginning of a physical signal encoding the packet leaves the transmitter network interface up to the moment the beginning of the same physical signal arrives at the receiver network interface. -This time usually equals with the same difference measured for the end of the physical +This time usually equals the same difference measured for the end of the physical signal. The exception would be when the receiver is moving relative to the transmitter with a relatively high speed compared to the propagation speed of the physical signal, but it is rarely the case in communication network simulation. diff --git a/showcases/measurement/queueingtime/doc/index.rst index 5556657d956..3e64c50fab5 100644 --- a/showcases/measurement/queueingtime/doc/index.rst +++ b/showcases/measurement/queueingtime/doc/index.rst @@ -4,7 +4,7 @@ Measuring Queueing Time Goals ----- -In this example we explore the queueing time statistics of queue modules of +In this example, we explore the queueing time statistics of queue modules of network interfaces. | INET version: ``4.4`` @@ -17,7 +17,7 @@ The queueing time is measured from the moment a packet is enqueued up to the moment the same packet is dequeued from the queue. Simple packet queue modules are also often used to build more complicated queues such as a priority queue or even traffic shapers. The queueing time statistics are automatically collected -for each of one these cases too. +for each of these cases too. Here is the network: diff --git a/showcases/measurement/relationships/doc/index.rst index b36b9faca72..84602105def 100644 --- a/showcases/measurement/relationships/doc/index.rst +++ b/showcases/measurement/relationships/doc/index.rst @@ -4,7 +4,7 @@ Understanding Measurement Relationships Goals ----- -In this example we explore the relationships between various measurements that +In this example, we explore the relationships between various measurements that are presented in the measurement showcases. | INET version: ``4.4`` @@ -14,9 +14,9 @@ The Model --------- The end-to-end delay measured between two applications can be thought of as a sum -of different time categories such as queueing time, processing time, transmission +of different time categories such as queueing time, processing time, transmission time, propagation time, and so on. Moreover, each one of these specific times can -be further split up between different network nodes, network interface or even +be further split up between different network nodes, network interfaces, or even smaller submodules. 
Here is the network: diff --git a/showcases/measurement/residencetime/doc/index.rst index 6c9d425ae4e..492951dc77f 100644 --- a/showcases/measurement/residencetime/doc/index.rst +++ b/showcases/measurement/residencetime/doc/index.rst @@ -4,7 +4,7 @@ Measuring Residence Time Goals ----- -In this example we explore the packet residence time statistics of network nodes. +In this example, we explore the packet residence time statistics of network nodes. | INET version: ``4.4`` | Source files location: `inet/showcases/measurement/residencetime `__ @@ -14,7 +14,7 @@ The Model The packet residence time is measured from the moment a packet enters a network node up to the moment the same packet leaves the network node. This statistic -is also collected for packets which are created and/or destroyed in network +is also collected for packets that are created and/or destroyed in network nodes. Here is the network: diff --git a/showcases/measurement/throughput/doc/index.rst index cd28e9ad176..c5caa2a0ba0 100644 --- a/showcases/measurement/throughput/doc/index.rst +++ b/showcases/measurement/throughput/doc/index.rst @@ -4,8 +4,7 @@ Measuring Channel Throughput Goals ----- -In this example we explore the channel throughput statistics of wired and wireless -transmission mediums. +In this example, we explore the channel throughput statistics of wired and wireless transmission mediums. | INET version: ``4.4`` | Source files location: `inet/showcases/measurement/throughput `__ diff --git a/showcases/measurement/transmissiontime/doc/index.rst index 99d831a0102..8e33c22dc0e 100644 --- a/showcases/measurement/transmissiontime/doc/index.rst +++ b/showcases/measurement/transmissiontime/doc/index.rst @@ -4,7 +4,7 @@ Measuring Transmission Time Goals ----- -In this example we explore the packet transmission time statistics of network +In this example, we explore the packet transmission time statistics of network interfaces for wired and wireless transmission mediums. | INET version: ``4.4`` @@ -16,7 +16,7 @@ The Model The packet transmission time is measured from the moment the beginning of the physical signal encoding the packet leaves the network interface up to the moment the end of the same physical signal leaves the same network interface. This time -usually equals with the packet reception time that is measured at the receiver +usually equals the packet reception time that is measured at the receiver network interface from the beginning to the end of the physical signal. The exception would be when the receiver is moving relative to the transmitter with a relatively high speed compared to the propagation speed of the physical signal, @@ -26,7 +26,7 @@ Packet transmission time is measured from the beginning of the physical signal encoding the packet leaves the network interface up to the moment the end of the same physical signal leaves the same network interface. -Packet transmission time is the time difference between the start and th end of the physical +Packet transmission time is the time difference between the start and the end of the physical signal transmission on the outgoing interface. 
Here is the network: diff --git a/showcases/mobility/basic/doc/index.rst b/showcases/mobility/basic/doc/index.rst index 8b780444361..ce348bba3bb 100644 --- a/showcases/mobility/basic/doc/index.rst +++ b/showcases/mobility/basic/doc/index.rst @@ -8,7 +8,7 @@ The positioning and mobility of nodes play a crucial role in many simulation scenarios, especially those that involve wireless communication. INET allows you to add mobility to nodes by using modules that represent different mobility models, such as linear motion or random waypoint. There is a wide variety of -mobility models available in INET and you can even combine them to create +mobility models available in INET, and you can even combine them to create complex motion patterns. This showcase provides a demonstration of some of the elementary mobility models @@ -34,7 +34,7 @@ package. This showcase presents an example for most of the frequently used mobil Mobility models can be categorized in numerous different ways (see User's Guide), but here we present them organized in two groups: ones describing proper -motion and/or dynamic orientation, and ones describing the placement and +motion and/or dynamic orientation and ones describing the placement and orientation of stationary nodes. For the interested reader, the `INET User’s Guide `__ @@ -47,13 +47,13 @@ The Model All simulations use the :ned:`BasicMobilityShowcase` network. The size of the scene is 400x400x0 meters. It contains a configurable number of hosts and an :ned:`IntegratedVisualizer` module to -visualize some aspects of mobility. The following image shows the layout of the network: +visualize aspects of mobility. The following image shows the layout of the network: .. figure:: media/scene.png :scale: 100% :align: center -The ``General`` configuration in the ``omnetpp.ini`` contains some configuration +The ``General`` configuration in the ``omnetpp.ini`` file contains some configuration keys common to all example simulations: .. literalinclude:: ../omnetpp.ini @@ -126,14 +126,14 @@ CircleMobility The :ned:`CircleMobility` module describes circular motion around a center. This example uses two hosts orbiting the same center (the center of the scene) -with different radii, directions and speeds: +with different radii, directions, and speeds: .. literalinclude:: ../omnetpp.ini :language: ini :start-at: *.host[*].mobility.typename = "CircleMobility" :end-at: *.host[1].mobility.startAngle = 270deg -You can see the result of the configuration on the following video: +You can see the result of the configuration in the following video: .. video:: media/CircleMobility.mp4 :width: 50% @@ -163,9 +163,9 @@ can see the XML script for ``host[0]``: The flexibility of :ned:`TurtleMobility` allows the implementation of the functionality of some of the other mobility models. As such, ``host[1]``'s XML script describes another mobility model, :ned:`MassMobility`. -This is a mobility model describing a random motion. The node is assumed to have a mass, and so it -can not turn abruptly. This kind of motion is achieved by allowing only -small time intervals for forward motion, and small turning angles: +This is a mobility model describing random motion. The node is assumed to have mass, and so it +cannot turn abruptly. This kind of motion is achieved by allowing only +small time intervals for forward motion and small turning angles: .. 
literalinclude:: ../config2.xml :language: xml @@ -197,7 +197,7 @@ Here is the configuration in omnetpp.ini: The mobility module is set to totally random motion, with a variance of 0.5. -The following video shows the resulted random motion: +The following video shows the resulting random motion: .. video:: media/GaussMarkovMobility.mp4 :width: 50% @@ -257,9 +257,9 @@ of triplets (2D) or quadruples (3D). Here you can see the configuration in the i :end-at: **.host[*].mobility.nodeId = -1 The :par:`nodeId` parameter selects the line in the trace file for the given mobility module. -The value -1 gets substituted to the parent module's index. +The value -1 means that the parent module's index is used. -The ``bonnmotion.movements`` contains the trace that we want the nodes to follow: +The ``bonnmotion.movements`` file contains the trace that we want the nodes to follow: .. literalinclude:: ../bonnmotion.movements :language: xml @@ -298,7 +298,7 @@ The configuration in omnetpp.ini is the following: :end-at: mobility.numHosts We specify only the :par:`numHosts` parameter; the other parameters of the -mobility are left on their defaults. Thus the layout conforms to the +mobility are left at their defaults. Thus the layout conforms to the available space: .. figure:: media/StaticGridMobility.png @@ -311,8 +311,8 @@ StationaryMobility The :ned:`StationaryMobility` model only sets position. It has :par:`initialX`, :par:`initialY`, :par:`initialZ` and :par:`initFromDisplayString` parameters. By default, the :par:`initFromDisplayString` parameter is true, and the initial coordinate parameters select a random value inside the constraint -area. Additionally, there are parameters to set the initial heading, elevation and bank -(i.e. orientation in 3D), all zero by default. Note that :ned:`StationaryMobility` is the +area. Additionally, there are parameters to set the initial heading, elevation, and bank +(i.e., orientation in 3D), all zero by default. Note that :ned:`StationaryMobility` is the default mobility model in :ned:`WirelessHost` and derivatives. The configuration for the example simulation in omnetpp.ini is the following: diff --git a/showcases/mobility/spatial/doc/index.rst index b88d77dff4e..310ee24d275 100644 --- a/showcases/mobility/spatial/doc/index.rst +++ b/showcases/mobility/spatial/doc/index.rst @@ -23,15 +23,15 @@ movement in three dimensions. One way to generate spatial movement is to use mobility models that support it out of the box, for example, :ned:`LinearMobility`, -:ned:`RandomWaypointMobility`, :ned:`MassMobility`, :ned:`TurtleMobility` +:ned:`RandomWaypointMobility`, :ned:`MassMobility`, :ned:`TurtleMobility`, or :ned:`BonnMotionMobility`. Spatial movement can also be produced using -superposition of several mobility models (where at least one of them +a superposition of several mobility models (where at least one of them must support movement in the Z axis). We show an example for both approaches. In these example simulations, we'll make use of 3D visualization based on OpenSceneGraph (OSG). To try these examples yourself, make sure that your OMNeT++ installation has been compiled with OSG support. If it is not, -you won't be able to switch to 3D view using the globe icon on the Qtenv toolbar. +you won't be able to switch to the 3D view using the globe icon on the Qtenv toolbar. .. 
figure:: media/QtenvToolbar.png :scale: 100% @@ -57,17 +57,17 @@ scene in 3D: By default, :ned:`IntegratedVisualizer` only contains an :ned:`IntegratedCanvasVisualizer` as submodule, but no OSG visualizer. To add it, we need to set the ``osgVisualizer`` submodule -type to :ned:`IntegratedOsgVisualizer`. We use the ``desert`` image as ground, -and set the background color (:par:`clearColor`) set to ``skyblue``. +type to :ned:`IntegratedOsgVisualizer`. We use the ``desert`` image as the ground, +and set the background color (:par:`clearColor`) to ``skyblue``. The coordinate axes can be displayed by setting the :par:`axisLength` parameter. Additional settings (not shown above) stretch the rendered scene a little larger than the constraint area of the mobility models to enhance visual appearance. Further settings enable various effects in the mobility visualization. Note, however, that -at the time of writing, not all features are implemented in :ned:`MobilityOsgVisualizer` +at the time of writing, not all features are implemented in the :ned:`MobilityOsgVisualizer` (practically, only trail visualization is). -When simulations are run, the scene looks like the following in 3D view: +When simulations are run, the scene looks like the following in the 3D view: .. figure:: media/3DPlayground.png :scale: 100% @@ -82,7 +82,7 @@ Spiral The first example simulation is run using only one host. The 3D model of the host can be set with the :par:`osgModel` parameter. In this example, we use ``glider.osgb``. -For better visibility, the glider is scaled up to 100 times of its original size. It +For better visibility, the glider is scaled up to 100 times its original size. It is also rotated by 180 degrees so that it faces forward as it moves. .. literalinclude:: ../omnetpp.ini @@ -110,9 +110,9 @@ The following video shows the resulting spiral motion: Drones ~~~~~~ -In this example we simulate the movement of 10 drones. The :par:`constraintAreaMinZ` +In this example, we simulate the movement of 10 drones. The :par:`constraintAreaMinZ` is set to 200 meters only because we do not want the drones to reach the ground. -In this example we use the ``drone.ive`` as the 3D OSG model of the hosts: +In this example, we use the ``drone.ive`` as the 3D OSG model of the hosts: .. literalinclude:: ../omnetpp.ini :language: ini diff --git a/showcases/routing/manet/doc/index.rst b/showcases/routing/manet/doc/index.rst index 5d738957b69..2a81c675018 100644 --- a/showcases/routing/manet/doc/index.rst +++ b/showcases/routing/manet/doc/index.rst @@ -26,7 +26,7 @@ mobile nature of the nodes, the network topology can change over time. The nodes create their own network infrastructure: each node also acts as a router, forwarding traffic in the network. MANET routing protocols need to adapt to changes in the network topology and maintain routing -information, so that packets can be forwarded to their destinations. Although +information so that packets can be forwarded to their destinations. Although MANET routing protocols are mainly for mobile networks, they can also be useful for networks of stationary nodes that lack network infrastructure. @@ -37,7 +37,7 @@ infrastructure. There are two main types of MANET routing protocols, reactive and proactive (although there are others which don't fit into either -category). ``Reactive`` or on-demand routing protocols update routing +category). Reactive or on-demand routing protocols update routing information when there is an immediate demand for it, i.e. 
one of the nodes wants to send a packet (and there is no working route to the destination). Then, they exchange route discovery messages and forward @@ -46,7 +46,7 @@ packet's forwarding, i.e. the packet cannot be forwarded anymore due to a change in the network topology. Examples of reactive MANET routing protocols include AODV, DSR, ABR, etc. -``Proactive`` or table-driven routing protocols continuously maintain +Proactive or table-driven routing protocols continuously maintain routing information so the routes in the network are always up to date. This update typically involves periodic routing maintenance messages exchanged throughout the network. These types of protocols use more maintenance @@ -69,9 +69,9 @@ features several routing protocols, for MANETs and other uses directory for the available routing protocols. The example simulations in this showcase feature the reactive protocol -``Ad hoc On-Demand Distance Vector routing`` (AODV), the proactive -protocol ``Destination-Sequenced Distance Vector routing`` (DSDV), and -the geo routing protocol ``Greedy Perimeter Stateless Routing`` (GPSR). +Ad hoc On-Demand Distance Vector routing (AODV), the proactive +protocol Destination-Sequenced Distance Vector routing (DSDV), and +the geo routing protocol Greedy Perimeter Stateless Routing (GPSR). The following section details these three protocols briefly. About AODV @@ -84,28 +84,28 @@ table with the next hop for reaching destinations. Routes time out after a while if not used (i.e. no packets are sent on them). AODV features the following routing message types: -- ``RREQ``: Route request -- ``RREP``: Route reply -- ``RERR``: Route error +- RREQ: Route request +- RREP: Route reply +- RERR: Route error When a node wants to send a packet, and it doesn't know the route to the -destination, it initiates route discovery, by sending an ``RREQ`` +destination, it initiates route discovery, by sending an RREQ multicast message. The neighboring nodes record where the message came from and forward it to their neighbors until the message gets to the -destination node. The destination node replies with an ``RREP``, which -gets back to the source on the reverse path along which the ``RREQ`` +destination node. The destination node replies with an RREP, which +gets back to the source on the reverse path along which the RREQ came. Forward routes are set up in the intermediate nodes as the -``RREP`` travels back to the source. An intermediate node can also send -an ``RREP`` in reply to a received ``RREQ`` if it knows the route to +RREP travels back to the source. An intermediate node can also send +an RREP in reply to a received RREQ if it knows the route to the destination, thus nodes can join an existing route. When the -``RREP`` arrives at the source, and the route is created, communication +RREP arrives at the source, and the route is created, communication can begin between the source and the destination. If a route no longer works due to link break, i.e. messages cannot be forwarded on it, a -``RERR`` message is broadcast by the node which detects the link break. -Other nodes re-broadcast the message. The ``RERR`` message indicates the +RERR message is broadcast by the node which detects the link break. +Other nodes re-broadcast the message. The RERR message indicates the destination which is unreachable. Nodes receiving the message make the -route inactive (and eventually the route is deleted). The next packet to -be sent triggers route discovery. 
As a reactive protocol, generally AODV +route inactive (and eventually, the route is deleted). The next packet to +be sent triggers route discovery. As a reactive protocol, generally, AODV has less overhead (less route maintenance messages) than proactive ones, but setting up new routes takes time while packets are waiting to be delivered. (Note that the routing protocol overhead depends on the @@ -121,18 +121,18 @@ turned off in INET's AODV implementation. About DSDV ~~~~~~~~~~ -DSDV is a proactive (or table driven) MANET routing protocol, so it +DSDV is a proactive (or table-driven) MANET routing protocol, so it makes sure routing information in the network is always up-to-date. Each node maintains a routing table with the best route to each destination. -The routing table contains routing entries to all possible destinations +The routing table contains routing entries for all possible destinations known either directly because it's a neighbor, or indirectly through neighbors. A routing entry contains the destination's IP address, last known sequence number, hop count required to reach the destination, and -the next hop. Routing information is frequently updated, so all nodes +the next hop. Routing information is frequently updated so all nodes have the best routes in the network. Routing information is updated in two ways: -- Nodes broadcast their entire routing tables periodically +- Nodes broadcast their entire routing table periodically (infrequently) - Nodes broadcast small updates when a change in their routing table occurs @@ -141,21 +141,21 @@ A node updates a routing table entry if it receives a better route. A better route is one that has a higher sequence number, or a lower hop count if the sequence number is the same. -In general, DSDV has more overhead than reactive routing protocols, +In general, DSDV has more overhead than reactive routing protocols because route maintenance messages are sent all the time. Since the routes are always up to date, DSDV has less delay in sending data. About GPSR ~~~~~~~~~~ -GPSR is stateless (regarding routes), geographic location based routing +GPSR is a stateless (regarding routes), geographic location-based routing protocol. Each node maintains the addresses and geographical -co-ordinates of its neighbors, i.e. other nodes in its communication +coordinates of its neighbors, i.e. other nodes in its communication range. Nodes advertise their locations periodically by sending beacons. When no beacons are received from a neighboring node for some time, the node is assumed to be out of range, and its table entry is deleted. A table entry for a node is also deleted after link failure. Nodes attach -their location data on all sent and forwarded packets as well. Each +their location data to all sent and forwarded packets as well. Each packet transmission resets the beacon timer, reducing the required protocol overhead in parts of the network with frequent packet traffic. The protocol is stateless in the context of routes. Nodes only have @@ -165,7 +165,7 @@ about node positions or routes in the network as a whole. Destination is designated by an IP address, but the destination's location is also appended to packets. Packets are routed towards the -destination's location specified with co-ordinates. IP addresses are +destination's location specified with coordinates. IP addresses are only used to determine whether a receiving node is the destination of a packet. 
The protocol operates in one of two modes: diff --git a/showcases/tsn/combiningfeatures/frerandtas/doc/index.rst b/showcases/tsn/combiningfeatures/frerandtas/doc/index.rst index 2895ef1049e..5ee9cf08ec7 100644 --- a/showcases/tsn/combiningfeatures/frerandtas/doc/index.rst +++ b/showcases/tsn/combiningfeatures/frerandtas/doc/index.rst @@ -4,7 +4,7 @@ Frame Replication with Time-Aware Shaping Goals ----- -In this example we demonstrate how to automatically configure time-aware shaping +In this example, we demonstrate how to automatically configure time-aware shaping in the presence of frame replication. | INET version: ``4.4`` @@ -53,3 +53,4 @@ Discussion Use `this `__ page in the GitHub issue tracker for commenting on this showcase. + diff --git a/showcases/tsn/combiningfeatures/gptpandtas/doc/index.rst b/showcases/tsn/combiningfeatures/gptpandtas/doc/index.rst index 82f3335460c..a3a0de8c6ba 100644 --- a/showcases/tsn/combiningfeatures/gptpandtas/doc/index.rst +++ b/showcases/tsn/combiningfeatures/gptpandtas/doc/index.rst @@ -4,7 +4,7 @@ Effects of Time Synchronization on Time-Aware Shaping Goals ----- -In this showcase we demonstrate how time synchronization affects end-to-end +In this showcase, we demonstrate how time synchronization affects end-to-end delay in a network that is using time-aware traffic shaping. .. note:: This showcase builds upon the :doc:`/showcases/tsn/timesynchronization/clockdrift/doc/index`, @@ -27,13 +27,13 @@ and recovery in time synchronization affect the delay. If time synchronization fails for some reason, such as the primary clock going offline, time-aware shaping cannot guarantee bounded delays any longer. Time -synchronization can continue and delay guarantees can be met however, if all +synchronization can continue, and delay guarantees can be met, however, if all network nodes switch over to a secondary master clock. To demonstrate this, we present three cases, with three configurations: - **Normal operation**: time synchronization works, and the delay is constant (this is the same case as in the last configuration in the :doc:`/showcases/tsn/timesynchronization/gptp/doc/index` showcase). -- **Failure of master clock**: the master clock disconnects from the network, and time is not synchronized anymore. +- **Failure of the master clock**: the master clock disconnects from the network, and time is not synchronized anymore. - **Failover to a secondary master clock**: the master clock disconnects from the network, but time synchronization can continue because network nodes switch to the secondary master clock. The Configuration @@ -41,7 +41,7 @@ The Configuration All simulations in the showcase use the same network as in the :ref:`sh:tsn:timesync:gptp:redundancy` section of the `Using gPTP` showcase. The -network constains :ned:`TsnDevice` and :ned:`TsnClock` modules connected to a +network contains :ned:`TsnDevice` and :ned:`TsnClock` modules connected to a ring of switches (:ned:`TsnSwitch`): .. figure:: media/Network.png @@ -56,7 +56,7 @@ Traffic +++++++ Traffic in the network consists of UDP packets sent between ``tsnDevice1`` and -``tsnDevice4``, and gPTP messages. sent by all nodes. To generate the UDP +``tsnDevice4``, and gPTP messages sent by all nodes. To generate the UDP application traffic, we configure ``tsnDevice1`` to send 10B UDP packets to ``tsnDevice2``: @@ -160,7 +160,7 @@ periodically synchronize their clock time and drift rate to the primary master node. 
However, due to the randomly changing drift rates, they diverge from the master clock after some time. -.. note:: The other three gPTP time domains are maintained simultaneously, but not used. +.. note:: The other three gPTP time domains are maintained simultaneously but not used. The next chart displays the clock drift for `gPTP time domain 2`, where timing information originates from the hot-standby master clock. All bridge and slave @@ -172,7 +172,7 @@ synchronizes to the primary master (dotted blue line) in another gPTP domain: .. figure:: media/NormalOperation_domain2.png :align: center -Note that bridge and slave nodes update their time when the hot-standby master node's clock has already drifted from the primary master somewhat. +Note that bridge and slave nodes update their time when the hot-standby master node's clock has already drifted from the primary master. The next chart displays the end-to-end delay of application traffic, which is mostly a constant low value: @@ -207,7 +207,7 @@ so the clocks in this time domain keep being synchronized. (the hot-standby mast .. figure:: media/LinkFailure_domain2.png :align: center -When the primary master node goes offline, the hot-standby master node cannot synchronize to it any more, +When the primary master node goes offline, the hot-standby master node cannot synchronize to it anymore, its clock drifts from the primary master's, shown by the orange and blue lines diverging. The bridge and slave nodes continue to synchronize to the hot-standby master node (shown by the other lines following the hot-standby master node). @@ -219,7 +219,7 @@ The next chart shows the delay: .. figure:: media/delay_linkfailure.png :align: center -After the clock divergence grows above a certain value the end-to-end delay +After the clock divergence grows above a certain value, the end-to-end delay suddenly increases dramatically. The reason is that frames often wait for the next gate scheduling cycle because they miss the allocated time slot due to improperly synchronized clocks. The delay increases from the nominal @@ -234,8 +234,8 @@ Failover to Hot-Standby Clock In this configuration, we take the primary master clock offline just as in the previous one, but we also switch the active clock in each node over to the one -that synchronizes to the hot-standby master (gPTP time domain 2 as mentioned -previously), so time synchronization can continue to keep the difference of +that synchronizes with the hot-standby master (gPTP time domain 2 as mentioned +previously), so time synchronization can continue to keep the difference in clocks in the network below the required limit. .. note:: There is no difference in time synchronization at all in the three configurations. The difference is in which clocks/domains are active. @@ -256,7 +256,7 @@ primary master node) are displayed on the following chart: .. figure:: media/Failover_domain0.png :align: center -The clocks begin to diverge from each other after the link break at 2s. +The clocks begin to diverge after the link break at 2s. The next chart displays the clock drifts in domain 2 (clock time of the hot-standby master node): @@ -264,10 +264,10 @@ hot-standby master node): .. figure:: media/Failover_domain2.png :align: center -After the link break, the clocks are synchronized to the hot-standby master's +After the link break, the clocks are synchronized with the hot-standby master's time. -.. 
note:: The two charts above are exactly the same as the charts for Time Domain 0 and 2 in the Link Failure of Master Clock section, because there is no difference between the two cases in time synchronization and the scheduled link break. The difference is in which one is the active time domain. +.. note:: The two charts above are exactly the same as the charts for Time Domain 0 and 2 in the Link Failure of Master Clock section because there is no difference between the two cases in time synchronization and the scheduled link break. The difference is in which one is the active time domain. The next chart displays the clock drift of the active clock in all nodes: diff --git a/showcases/tsn/combiningfeatures/invehicle/doc/index.rst b/showcases/tsn/combiningfeatures/invehicle/doc/index.rst index e2c5356eee9..8a6961fffa3 100644 --- a/showcases/tsn/combiningfeatures/invehicle/doc/index.rst +++ b/showcases/tsn/combiningfeatures/invehicle/doc/index.rst @@ -4,10 +4,10 @@ In-vehicle Network Goals ----- -In this example we demonstrate the combined features of Time-Sensitive Networking +In this example, we demonstrate the combined features of Time-Sensitive Networking in a complex in-vehicle network. The network utilizes time-aware shaping, automatic gate scheduling, clock drift, time synchronization, credit-based shaping, per-stream -filtering and policying, stream redundancy, unicast and multicast streams, link +filtering and policing, stream redundancy, unicast and multicast streams, link failure protection, frame preemption, and cut-through switching. | INET version: ``4.4`` @@ -16,7 +16,7 @@ failure protection, frame preemption, and cut-through switching. The Model --------- -In this showcase we model the communication network inside a vehicle. The network +In this showcase, we model the communication network inside a vehicle. The network consists of several Ethernet switches connected in a redundant way and multiple end devices. There are several data flows between the end device applications. @@ -34,13 +34,13 @@ Here is the configuration: Standard Ethernet ----------------- -In this configuration we use only standard Ethernet features to have a baseline +In this configuration, we use only standard Ethernet features to have a baseline of statistical results. Time-Sensitive Networking ------------------------- -In this configuration we use advanced Time-Sensitive Networking features to +In this configuration, we use advanced Time-Sensitive Networking features to evaluate their performance. Results diff --git a/showcases/tsn/cutthroughswitching/doc/index.rst b/showcases/tsn/cutthroughswitching/doc/index.rst index e75ab423f08..870fa0b644e 100644 --- a/showcases/tsn/cutthroughswitching/doc/index.rst +++ b/showcases/tsn/cutthroughswitching/doc/index.rst @@ -19,14 +19,14 @@ showcase, we will demonstrate cut-through switching and compare it to store-and- The Model --------- -Cut-through switching reduces switching delay, but skips the FCS check in the switch. The FCS -is at the end of the Ethernet frame; the FCS check is performed in destination host. +Cut-through switching reduces the switching delay but skips the FCS check in the switch. The FCS +is at the end of the Ethernet frame; the FCS check is performed in the destination host. (This is because by the time the FCS check could happen, the frame is almost completely transmitted, so it makes no sense). 
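+In INET's TSN models, cut-through forwarding is typically switched on per node. The
+fragment below is only a hedged sketch: the parameter and submodule names are
+assumptions based on :ned:`TsnSwitch` and :ned:`LayeredEthernetInterface`, and the
+settings actually used here are shown in the configuration section below.
+
+.. code-block:: ini
+
+   # assumed parameter name: enable cut-through forwarding in the switches
+   *.switch*.hasCutthroughSwitching = true
+
+   # the end stations' PHY must support packet streaming so that frames can be
+   # forwarded while they are still being received (node names are placeholders)
+   *.device*.eth[*].phyLayer.typename = "EthernetStreamingPhyLayer"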
The delay reduction is more substantial if the packet goes through multiple switches (as one packet transmission duration can be saved at each switch). -Cut-through switching makes use of intra-node packet streaming in INET's modular +Cut-through switching makes use of intranode packet streaming in INET's modular Ethernet model. Packet streaming is required because the frame needs to be processed as a stream (as opposed to as a whole packet) in order for the switch to be able to start forwarding it before the whole packet is received. @@ -34,7 +34,7 @@ start forwarding it before the whole packet is received. .. note:: The default is store-and-forward behavior in hosts such as :ned:`StandardHost`. The example simulation contains two :ned:`TsnDevice` nodes connected by two -:ned:`TsnSwitch`' nodes (all connections are 1 Gbps): +:ned:`TsnSwitch` nodes (all connections are 1 Gbps): .. .. figure:: media/Network.png :align: center @@ -60,7 +60,7 @@ parameter to ``true``. In addition, all necessary components in the switch need to support packet streaming. The cut-through interface in the switches supports packet streaming by default; the default PHY layer in hosts need to be replaced with -:ned:`EthernetStreamingPhyLayer`, which support packet streaming. +:ned:`EthernetStreamingPhyLayer`, which supports packet streaming. Results ------- @@ -95,7 +95,7 @@ vs cut-through switching: :align: center :width: 100% -We can verify that result analytically. In case of store-and-forward, the end-to-end duration +We can verify that result analytically. In the case of store-and-forward, the end-to-end duration is ``3 * (transmission time + propagation time)``, around 25.296 ms. In the case of cut-through, the duration is ``1 * transmission time + 3 propagation time + 2 * cut-through delay``, around 8.432 ms. diff --git a/showcases/tsn/framepreemption/doc/index.rst b/showcases/tsn/framepreemption/doc/index.rst index 91631d9eaac..f905702a327 100644 --- a/showcases/tsn/framepreemption/doc/index.rst +++ b/showcases/tsn/framepreemption/doc/index.rst @@ -5,13 +5,13 @@ Goals ----- Ethernet frame preemption is a feature specified in the 802.1Qbu standard that -allows higher priority frames to interrupt the transmission of lower priority +allows higher priority frames to interrupt the transmission of lower-priority frames at the Media Access Control (MAC) layer of an Ethernet network. This can be useful for time-critical applications that require low latency for high-priority frames. For example, in a Time-Sensitive Networking (TSN) application, high-priority frames may contain time-critical data that must be delivered with minimal delay. Frame preemption can help ensure that these -high-priority frames are given priority over lower priority frames, reducing the +high-priority frames are given priority over lower-priority frames, reducing the latency of their transmission. In this showcase, we will demonstrate Ethernet frame preemption and examine the @@ -29,14 +29,14 @@ Overview ~~~~~~~~ In time-sensitive networking applications, Ethernet preemption can significantly reduce latency. 
-When a high-priority frame becomes available for transmission during the transmission of a low priority frame, -the Ethernet MAC can interrupt the transmission of the low priority frame, and start sending the +When a high-priority frame becomes available for transmission during the transmission of a low-priority frame, +the Ethernet MAC can interrupt the transmission of the low-priority frame and start sending the high-priority frame immediately. When the high-priority frame finishes, the MAC can continue -transmission of the low priority frame from where it left off, eventually sending the low priority +the transmission of the low-priority frame from where it left off, eventually sending the low-priority frame in two (or more) fragments. Preemption is a feature of INET's composable Ethernet model. It uses INET's packet streaming API, -so that packet transmission is represented as an interruptable stream. Preemption requires the +so that packet transmission is represented as an interruptible stream. Preemption requires the :ned:`LayeredEthernetInterface`, which contains a MAC and a PHY layer, displayed below: .. figure:: media/LayeredEthernetInterface2.png @@ -53,8 +53,8 @@ MAC layers, a preemptable (:ned:`EthernetFragmentingMacLayer`) and an express MA :align: center The :ned:`EthernetPreemptingMacLayer` uses intra-node packet streaming. Discrete packets -enter the MAC module from the higher layers, but leave the sub-MAC-layers (express and preemptable) -as packet streams. Packets exit the MAC layer as a stream, and are represented as such through +enter the MAC module from the higher layers but leave the sub-MAC-layers (express and preemptable) +as packet streams. Packets exit the MAC layer as a stream and are represented as such through the PHY layer and the link. In the case of preemption, packets initially stream from the preemptable sub-MAC-layer. @@ -77,18 +77,18 @@ The simulation uses the following network: .. figure:: media/network.png :align: center -It contains two :ned:`StandardHost`'s connected with 100Mbps Ethernet, and also a :ned:`PcapRecorder` +It contains two :ned:`StandardHost`'s connected with 100Mbps Ethernet and also a :ned:`PcapRecorder` to record PCAP traces; ``host1`` periodically generates packets for ``host2``. Primarily, we want to compare the end-to-end delay, so we run simulations with the same packet length for the low and high-priority traffic in the following three configurations in omnetpp.ini: -- ``FifoQueueing``: The baseline configuration; doesn't use priority queue or preemption. -- ``PriorityQueueing``: Uses priority queue in the Ethernet MAC to lower the delay of high-priority frames. +- ``FifoQueueing``: The baseline configuration; doesn't use a priority queue or preemption. +- ``PriorityQueueing``: Uses a priority queue in the Ethernet MAC to lower the delay of high-priority frames. - ``FramePreemption``: Uses preemption for high-priority frames for a very low delay with a guaranteed upper bound. -Additionally, we demonstrate the use of priority queue and preemption with more realistic traffic: -longer and more frequent low priority frames and shorter, less frequent high-priority frames. +Additionally, we demonstrate the use of a priority queue and preemption with more realistic traffic: +longer and more frequent low-priority frames and shorter, less frequent high-priority frames. These simulations are the extension of the three configurations mentioned above, and are defined in the ini file as the configurations with the ``Realistic`` prefix. 
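+In omnetpp.ini, such variants are naturally expressed with configuration inheritance.
+The fragment below is only an illustrative sketch of that pattern; the parameter
+values are placeholders, and the real traffic settings are in the showcase's ini file:
+
+.. code-block:: ini
+
+   # a "Realistic" variant reuses its base configuration and only overrides the
+   # traffic: long, frequent background frames and short, rare high-priority
+   # frames (placeholder values)
+   [Config RealisticFramePreemption]
+   extends = FramePreemption
+   *.host1.app[0].source.packetLength = 1200B
+   *.host1.app[1].source.packetLength = 120B
+   *.host1.app[1].source.productionInterval = 10ms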
@@ -100,8 +100,8 @@ instead of the default, which must be disabled: :end-at: LayeredEthernetInterface :language: ini -We also want to record a PCAP trace, so we can examine the traffic in Wireshark. We enable PCAP recording, -and set the PCAP recorder to dump Ethernet PHY frames, because preemption is visible in the PHY header: +We also want to record a PCAP trace, so we can examine the traffic in Wireshark. We enable PCAP recording +and set the PCAP recorder to dump Ethernet PHY frames because preemption is visible in the PHY header: .. literalinclude:: ../omnetpp.ini :start-at: recordPcap @@ -116,11 +116,11 @@ Here is the configuration of traffic generation in ``host1``: :language: ini There are two :ned:`UdpApp`'s in ``host1``, one is generating background traffic (low priority) -and the other, high-priority traffic. The UDP apps put VLAN tags on the packets, and the Ethernet +and the other high-priority traffic. The UDP apps put VLAN tags on the packets, and the Ethernet MAC uses the VLAN ID contained in the tags to classify the traffic into high and low priorities. We set up a high-bitrate background traffic (96 Mbps) and a lower-bitrate high-priority traffic -(9.6 Mbps); both with 1200B packets. Their sum is intentionally higher than the 100 Mbps link capacity +(9.6 Mbps), both with 1200B packets. Their sum is intentionally higher than the 100 Mbps link capacity (we want non-empty queues); excess packets will be dropped. .. literalinclude:: ../omnetpp.ini @@ -128,21 +128,21 @@ We set up a high-bitrate background traffic (96 Mbps) and a lower-bitrate high-p :end-at: app[1].source.productionInterval :language: ini -The ``FifoQueueing`` configuration uses no preemption or priority queue, the configuration just limits +The ``FifoQueueing`` configuration uses no preemption or priority queue. The configuration just limits the :ned:`EthernetMac`'s queue length to 4. In all three cases, the queues need to be short to decrease the queueing time's effect on the measured delay. However, if they are too short, they might be empty too often, which renders the priority queue useless (it cannot prioritize if it contains just one packet, for example). The queue length of 4 is an arbitrary choice. The queue type is set to :ned:`DropTailQueue` -so that it can drop packets if the queue is full: +so that it can drop packets if the queue is full. .. literalinclude:: ../omnetpp.ini :start-at: Config FifoQueueing :end-before: Config :language: ini -In the ``PriorityQueueing`` configuration, we change the queue type in the Mac layer from the +In the ``PriorityQueueing`` configuration, we change the queue type in the MAC layer from the default :ned:`PacketQueue` to :ned:`PriorityQueue`: .. literalinclude:: ../omnetpp.ini @@ -150,9 +150,9 @@ default :ned:`PacketQueue` to :ned:`PriorityQueue`: :end-before: Config :language: ini -The priority queue utilizes two internal queues, for the two traffic categories. To limit +The priority queue utilizes two internal queues for the two traffic categories. To limit the queueing time's effect on the measured end-to-end delay, we also limit the length of -internal queues to 4. We also disable the shared buffer, and set the queue type to +internal queues to 4. We also disable the shared buffer and set the queue type to :ned:`DropTailQueue`. We use the priority queue's classifier to put packets into the two traffic categories. @@ -166,13 +166,13 @@ which support preemption. 
:end-at: DropTailQueue :language: ini -There is no priority queue in this configuration, the two MAC submodules both have their own queues. -We also limit the queue length to 4, and configure the queue type to be :ned:`DropTailQueue`. +There is no priority queue in this configuration. The two MAC submodules both have their own queues. +We also limit the queue length to 4 and configure the queue type to be :ned:`DropTailQueue`. .. note:: We could also have just one shared priority queue in the EthernetPreemptableMac module, but this is not covered here. -We use the following traffic for the ``RealisticFifoQueueing``, ``RealisticPriorityQueueing`` +We use the following traffic for the ``RealisticFifoQueueing``, ``RealisticPriorityQueueing``, and ``RealisticFramePreemption`` configurations: .. literalinclude:: ../omnetpp.ini @@ -180,40 +180,40 @@ and ``RealisticFramePreemption`` configurations: :end-before: Config RealisticFifoQueueing :language: ini -In this traffic configuration, high-priority packets are 100 times less frequent, +In this traffic configuration, high-priority packets are 100 times less frequent and are 1/10th the size of low-priority packets. Transmission on the Wire ~~~~~~~~~~~~~~~~~~~~~~~~ In order to make sense of how frame preemptions are represented in the OMNeT++ GUI -(in Qtenv's animation and packet log, and in the Sequence Chart in the IDE), it is +(in Qtenv's animation and packet log and in the Sequence Chart in the IDE), it is necessary to understand how packet transmissions are modeled in OMNeT++. Traditionally, transmitting a frame on a link is represented in OMNeT++ by sending a "packet". -The "packet" is a C++ object (i.e. data structure) which is of, or is subclassed from, the -OMNeT++ class ``cPacket``. The sending time corresponds to the start of the transmission. -The packet data structure contains the length of the frame in bytes, and also the (more +The "packet" is a C++ object (i.e., a data structure) which is of or is subclassed from +the OMNeT++ class ``cPacket``. The sending time corresponds to the start of the transmission. +The packet data structure contains the length of the frame in bytes and also the (more or less abstracted) frame content. The end of the transmission is implicit: it is -computed as *start time* + *duration*, where *duration* is either explicit, or derived -from the frame size and the link bitrate. This approach in vanilla form is of course -not suitable for Ethernet frame preemption, because it is not known in advance whether -or not a frame transmission will be preempted, and at which point. +computed as *start time* + *duration*, where *duration* is either explicit or derived +from the frame size and the link bitrate. This approach in vanilla form is, of course, +not suitable for Ethernet frame preemption because it is not known in advance whether +or not a frame transmission will be preempted and at which point. -Instead, in OMNeT++ 6.0 the above approach was modified to accommodate new use cases. +Instead, in OMNeT++ 6.0, the above approach was modified to accommodate new use cases. In the new approach, the original packet sending remains, but its interpretation changes slightly. -It now represents a *prediction*: "this is a frame whose transmission will go through, unless +It now represents a *prediction*: "this is a frame whose transmission will go through unless we say otherwise". 
Namely, while the transmission is ongoing, it is possible to send -*transmission updates*, which modifies the prediction about the remaining part of the transmission. +*transmission updates*, which modify the prediction about the remaining part of the transmission. A *transmission update* packet essentially says "ignore what I said previously about the total frame size/content and transmission time, here's how much time the remaining transmission is going to take according to the current state of affairs, and here's the updated frame length/content". -A transmission update may truncate, shorten or extend a transmission (and the frame). +A transmission update may truncate, shorten, or extend a transmission (and the frame). For technical reasons, the transmission update packet carries the full frame size and content (not just the remaining part), but it must be crafted by the sender in a way that it is consistent with what has already been transmitted (it cannot alter the past). -For example, truncation is done by indicating zero remaining time, and setting the frame +For example, truncation is done by indicating zero remaining time and setting the frame content to what has been transmitted up to that point. An updated transmission may be further modified by subsequent transmission updates. The end of the transmission is still implicit (it finishes according to the last transmission update), but it is @@ -228,7 +228,7 @@ are all packets. - The first one is the original packet, which contains the full frame size/content and carries the prediction that the frame transmission will go through uninterrupted. - The second one is sent at the time the decision is made inside the node that the frame is going to be preempted. At that time, the node computes the truncated frame and the remaining transmission time, taking into account that at least the current octet and FCS need to be transmitted, and there is a minimum frame size requirement as well. The packet represents the size/content of the truncated frame, including FCS. -- In the current implementation, the Ethernet model also sends an explicit end-transmission update, with zero remaining transmission duration and identical frame size/content as the previous one. This would not be strictly necessary, and may change in future INET releases. +- In the current implementation, the Ethernet model also sends an explicit end-transmission update with zero remaining transmission duration and identical frame size/content as the previous one. This would not be strictly necessary and may change in future INET releases. The above packets are distinguished using name suffixes: ``:progress`` and ``:end`` are appended to the original packet name for transmission updates and for the explicit end-transmission, @@ -238,8 +238,6 @@ a frame called ``background3`` may be followed by ``background3-frag0:progress`` ``background3-frag0:end``. After the intervening express frame has also completed transmission, ``background3-frag1`` will follow (see video in the next section). - - Results ------- @@ -253,7 +251,7 @@ Here is a video of the frame preemption behavior: :align: center The Ethernet MAC in ``host1`` starts transmitting ``background-3``. During the transmission, -a high-priority frame (``ts-1``) arrives at the MAC. The MAC interrupts the transmission of +a high-priority frame (``ts-1``) arrives at the MAC. 
The MAC interrupts the transmission of ``background-3``; in the animation, ``background-3`` is first displayed as a whole frame, then changes to ``background-3-frag0:progress`` when the high-priority frame is available. After transmitting the high-priority frame, the remaining fragment of ``background-3-frag1`` @@ -269,7 +267,7 @@ As mentioned in the previous section, a preempted frame appears multiple times in the packet log, as updates to the frame are logged. At first, ``background-3`` is logged as an uninterrupted frame. When the high-priority frame becomes available, the frame name changes to ``background-3-frag0``, and it is logged separately. -Actually, only one frame named ``background-3-frag0`` was sent before ``ts-1``, +Actually, only one frame named ``background-3-frag0`` was sent before ``ts-1``, but with three separate packet updates. The same frame sequence is displayed on a sequence chart on the following images, @@ -283,7 +281,7 @@ the timeline is non-linear: Just as in the packet log, the sequence chart contains the originally intended, uninterrupted ``background-3`` frame as it is logged when its transmission is started. -.. note:: You can think of it as there are actually two time dimensions present on the sequence chart: the events and messages as they happen at the moment, and what the modules "think" about the future, i.e. how long will a transmission take. In reality, the transmission might be interrupted and so both the original (``background-3``) and the "updated" (``background-3-frag0``) is present on the chart. +.. note:: You can think of it as there are actually two time dimensions present on the sequence chart: the events and messages as they happen at the moment, and what the modules "think" about the future, i.e., how long will a transmission take. In reality, the transmission might be interrupted, and so both the original (``background-3``) and the "updated" (``background-3-frag0``) is present on the chart. Here is the frame sequence on a linear timeline, with the ``background-3-frag0`` frame highlighted: @@ -315,7 +313,7 @@ Here is ``background-3-frag1`` displayed in Qtenv's packet inspector: :align: center :width: 100% -This fragment does not contain a MAC header, because it is the second part of the original Ethernet frame. +This fragment does not contain a MAC header because it is the second part of the original Ethernet frame. .. **TODO** without highlight @@ -328,7 +326,7 @@ The paths which the high and low priority (express and preemptable) packets take .. figure:: media/express2.png :align: center -Analyzing End-to-end Delay +Analyzing End-to-End Delay ~~~~~~~~~~~~~~~~~~~~~~~~~~ Simulation Results @@ -344,12 +342,12 @@ configuration is distinguished using different line styles and the traffic categ The chart shows that in the case of the default configuration, the delay for the two traffic categories is about the same. The use of the priority queue significantly -decreases the delay for the high priority frames, and marginally increases the +decreases the delay for the high-priority frames and marginally increases the delay of the background frames compared to the baseline default configuration. -Preemption causes an even greater decrease for high priority frames at the cost +Preemption causes an even greater decrease for high-priority frames at the cost of a slight increase for background frames. 
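+A useful reference figure for the estimates in the next section is the nominal
+transmission duration of a single 1200B frame on 100Mbps Ethernet (the Ethernet
+framing overhead of a few percent is ignored in this back-of-the-envelope value):
+
+``txDuration ~= 1200 B * 8 bit/B / 100 Mbps = 96 us ~= 0.1 ms``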
-Estimating the End-to-end delay +Estimating the End-to-End Delay +++++++++++++++++++++++++++++++ In the next section, we will examine the credibility of these results by doing some @@ -366,10 +364,10 @@ transmission time of 4 frames. Looking at the queue length statistics (see anf f we can see that the average queue length is ~2.6, so packets suffer an average queueing delay of 2.6 frame transmission durations. -The end-to-end delay is rougly the transmission duration of a frame + queueing delay + interframe gap. +The end-to-end delay is roughly the transmission duration of a frame + queueing delay + interframe gap. The transmission duration for a 1200B frame on 100Mbps Ethernet is about 0.1ms. On average, there are two frames in the queue so frames wait two frame transmission -durations in the queue. The interframe gap for 100Mbps Ethernet is 0.96us, so we assume it negligable: +durations in the queue. The interframe gap for 100Mbps Ethernet is 0.96us, so we assume it negligible: ``delay ~= txDuration + 2.6 * txDuration + IFG = 3.6 * txDuration = 0.36ms`` @@ -379,10 +377,10 @@ PriorityQueueing Configuration For the ``PriorityQueueing`` configuration, high-priority frames have their own sub-queue in the PriorityQueue module in the MAC. When a high-priority frame arrives at the queue, the MAC will finish the ongoing low-priority transmission (if there is any) before beginning -the transmission of the high-priority frame. Thus high-priority frames can be delayed, +the transmission of the high-priority frame. Thus high-priority frames can be delayed, as the transmission of the current frame needs to be finished first. Still, using a priority queue decreases the delay of the high-priority frames and increases that of the background -frames, compared to just using one queue for all frames. +frames compared to just using one queue for all frames. Due to high background traffic, a frame is always present in the background queue. A high-priority frame needs to wait until the current background frame transmission @@ -395,7 +393,7 @@ FramePreemption Configuration ***************************** For the ``FramePreemption`` configuration, the high-priority frames have their own queue in the MAC. -When a high priority frame becomes available, the current background frame transmission +When a high-priority frame becomes available, the current background frame transmission is almost immediately stopped. The delay is roughly the duration of an FCS + transmission duration + interframe gap. @@ -416,7 +414,7 @@ The mean end-to-end delay for the realistic traffic case is plotted on the follo :width: 80% The range indicated by the rectangle on the chart above is shown zoomed in on the chart below, -so that its more visible: +so that it's more visible: .. figure:: media/realisticdelay_zoomed.png :align: center @@ -424,12 +422,12 @@ so that its more visible: As described above, the end-to-end delay of high-priority frames when using preemption is independent of the length of background frames. The delay is approximately the -transmission duration of a high-priority frame (apperent in the case of both the +transmission duration of a high-priority frame (apparent in the case of both the realistic and the comparable length traffic results). -In case of realistic traffic, the delay of the background frames is not affected by -either the use of a priority queue or preemption. 
The delay of the high-priority
-frames is reduced significantly, because the traffic is different (originally,
+In the case of realistic traffic, the delay of the background frames is not affected
+by either the use of a priority queue or preemption. The delay of the high-priority
+frames is reduced significantly because the traffic is different (originally,
both the background and high-priority packets had the same length, so they could
be compared for better demonstration).

diff --git a/showcases/tsn/framereplication/automaticfailureprotection/doc/index.rst b/showcases/tsn/framereplication/automaticfailureprotection/doc/index.rst
index 52d042e4e3e..c04ef24d422 100644
--- a/showcases/tsn/framereplication/automaticfailureprotection/doc/index.rst
+++ b/showcases/tsn/framereplication/automaticfailureprotection/doc/index.rst
@@ -4,7 +4,7 @@ Automatic Stream Configuration with Failure Protection
Goals
-----

-In this example we demonstrate the automatic stream redundancy configuration based
+In this example, we demonstrate the automatic stream redundancy configuration based
on the link and node failure protection requirements.

| INET version: ``4.4``
@@ -13,8 +13,8 @@ on the link and node failure protection requirements.
The Model
---------

-In this case we use a different automatic stream redundancy configurator that
-takes the link and node failure protection requirements for each redundany stream
+In this case, we use a different automatic stream redundancy configurator that
+takes the link and node failure protection requirements for each redundant stream
as an argument. The automatic configurator computes the different paths that each
stream must take in order to be protected against any of the listed failures so
that at least one working path remains.
diff --git a/showcases/tsn/framereplication/automaticmultipathconfiguration/doc/index.rst b/showcases/tsn/framereplication/automaticmultipathconfiguration/doc/index.rst
index ee3fae757ee..2c0e9fc7cff 100644
--- a/showcases/tsn/framereplication/automaticmultipathconfiguration/doc/index.rst
+++ b/showcases/tsn/framereplication/automaticmultipathconfiguration/doc/index.rst
@@ -4,8 +4,8 @@ Automatic Multipath Stream Configuration
Goals
-----

-In this example we demonstrate the automatic stream redundancy configuration based
-on multiple paths from source to destination.
+In this example, we demonstrate the automatic stream redundancy configuration based
+on multiple paths from the source to the destination.

| INET version: ``4.4``
| Source files location: `inet/showcases/tsn/framereplication/automaticmultipathconfiguration `__
@@ -13,10 +13,10 @@ on multiple paths from the source to the destination.
The Model
---------

-In this case we use an automatic stream redundancy configurator that takes the
+In this case, we use an automatic stream redundancy configurator that takes the
different paths for each redundant stream as an argument. The automatic configurator
sets the parameters of all stream identification, stream merging, stream splitting,
-string encoding and stream decoding components of all network nodes.
+stream encoding, and stream decoding components of all network nodes.

Here is the network:

@@ -31,7 +31,7 @@ Here is the configuration:

Results
-------

-Here is the number of received and sent packets:
+Here are the numbers of received and sent packets:

..
figure:: media/packetsreceivedsent.svg :align: center diff --git a/showcases/tsn/framereplication/index.rst b/showcases/tsn/framereplication/index.rst index 4f07c543159..4a4a007425a 100644 --- a/showcases/tsn/framereplication/index.rst +++ b/showcases/tsn/framereplication/index.rst @@ -1,9 +1,7 @@ Frame Replication and Elimination for Reliability ================================================= -Frame replication provides fault tolerance without failover by sending duplicate -frames on multiple different paths. Elimination of duplicate frames makes frame -replication transparent for higher protocol layers. +Frame replication provides fault tolerance without failover by sending duplicate frames on multiple different paths. Elimination of duplicate frames makes frame replication transparent for higher protocol layers. The following showcases demonstrate topics related to frame replication: diff --git a/showcases/tsn/framereplication/manualconfiguration/doc/index.rst b/showcases/tsn/framereplication/manualconfiguration/doc/index.rst index feaad80e39d..1c80b9bcce3 100644 --- a/showcases/tsn/framereplication/manualconfiguration/doc/index.rst +++ b/showcases/tsn/framereplication/manualconfiguration/doc/index.rst @@ -4,8 +4,8 @@ Manual Stream Configuration Goals ----- -In this example we demonstrate manual configuration of stream identification, -stream splitting, stream merging, stream encoding and stream decoding to achieve +In this example, we demonstrate manual configuration of stream identification, +stream splitting, stream merging, stream encoding, and stream decoding to achieve the desired stream redundancy. | INET version: ``4.4`` @@ -14,8 +14,8 @@ the desired stream redundancy. The Model --------- -In this configuration we replicate a network topology that is presented in the -IEEE 802.1 CB amendment. The network contains one source and on destination nodes, +In this configuration, we replicate a network topology that is presented in the +IEEE 802.1 CB amendment. The network contains one source and one destination node, where the source sends a redundant data stream through five switches. The stream is duplicated in three of the switches and merged in two of them. @@ -44,7 +44,7 @@ Here is the ratio of received and sent packets: :align: center The expected number of successfully received packets relative to the number of -sent packets is verified by the python scripts. The expected result is around 0.657. +sent packets is verified by the Python scripts. The expected result is around 0.657. .. The following video shows the behavior in Qtenv: diff --git a/showcases/tsn/framereplication/multicastfailureprotection/doc/index.rst b/showcases/tsn/framereplication/multicastfailureprotection/doc/index.rst index 4580ffc9be7..e51888f977a 100644 --- a/showcases/tsn/framereplication/multicastfailureprotection/doc/index.rst +++ b/showcases/tsn/framereplication/multicastfailureprotection/doc/index.rst @@ -4,7 +4,7 @@ Multicast Streams with Failure Protection Goals ----- -In this example we replicate the multicast stream example from the IEEE 802.1 CB standard. +In this example, we replicate the multicast stream example from the IEEE 802.1 CB standard. | INET version: ``4.4`` | Source files location: `inet/showcases/tsn/framereplication/multicastfailureprotection `__ @@ -12,7 +12,7 @@ In this example we replicate the multicast stream example from the IEEE 802.1 CB The Model --------- -In this configuration we a use a network of TSN switches. 
A multicast stream is +In this configuration, we use a network of TSN switches. A multicast stream is sent through the network from one of the switches to all other switches. Here is the network: @@ -40,7 +40,6 @@ Results :align: center :width: 100% - Sources: :download:`omnetpp.ini <../omnetpp.ini>`, :download:`MulticastFailureProtectionShowcase.ned <../MulticastFailureProtectionShowcase.ned>` Discussion diff --git a/showcases/tsn/gatescheduling/eager/doc/index.rst b/showcases/tsn/gatescheduling/eager/doc/index.rst index c5148208146..d183b454213 100644 --- a/showcases/tsn/gatescheduling/eager/doc/index.rst +++ b/showcases/tsn/gatescheduling/eager/doc/index.rst @@ -16,7 +16,7 @@ The Model can be set by hand (in the periodic gates in the time aware shapers) but in complex networks this is hard but can be automated by configurators (dont repeat everything on the intro page) -.. Q what is eager gate scheduling? this is done by the eager gate schedule configurator which is a simple configurator that sets schedules eagerly +.. Q what is eager gate scheduling? This is done by the eager gate schedule configurator, which is a simple configurator that sets schedules eagerly. The simulation uses the following network: @@ -31,7 +31,7 @@ Here is the configuration: Results ------- -A gate cycle duration of 1ms is displayed on the following sequence chart. Note how time efficient the flow of packets from the sources to the sinks are: +A gate cycle duration of 1ms is displayed on the following sequence chart. Note how time efficient the flow of packets from the sources to the sinks is: .. figure:: media/seqchart.png :align: center @@ -46,7 +46,7 @@ The following chart displays the delay for individual packets of the different t .. figure:: media/delay.png :align: center -All delay is within the specified constraints. +All delays are within the specified constraints. .. note:: Both video streams and the ``client2 best effort`` stream have two cluster points. This is due to these traffic classes having multiple packets per gate cycle. As the different flows interact, some packets have increased delay. diff --git a/showcases/tsn/gatescheduling/index.rst b/showcases/tsn/gatescheduling/index.rst index b09cb475c44..d5fdcb77c55 100644 --- a/showcases/tsn/gatescheduling/index.rst +++ b/showcases/tsn/gatescheduling/index.rst @@ -2,7 +2,7 @@ Automatic Gate Schedule Configuration ===================================== In time-aware shaping, gate schedules (i.e. when the gates corresponding to different traffic categories are open or closed) can be specified manually. This might be sufficient in some simple cases. -However, in complex cases, manually calculating gate schedules may be impossible, thus automation may be required. Gate schedule configurators can be used for this purpose. +However, in complex cases, manual calculation of gate schedules may be impossible, thus automation may be required. Gate schedule configurators can be used for this purpose. One needs to specify constraints for the different traffic categories, such as maximum delay, and the configurator automatically calculates and configures the gate schedules that satisfy diff --git a/showcases/tsn/index.rst b/showcases/tsn/index.rst index 112feff9db5..5b488690670 100644 --- a/showcases/tsn/index.rst +++ b/showcases/tsn/index.rst @@ -2,12 +2,12 @@ Time-Sensitive Networking ========================= Time-Sensitive Networking (TSN) refers to a set of IEEE 802 standards that make -Ethernet deterministic by default. 
The TSN extensions in particular address the -transmission with bounded low latency and high availability. Applications include +Ethernet deterministic by default. The TSN extensions, in particular, address +the transmission with bounded low latency and high availability. Applications include converged networks with real-time audio/video streaming and real-time control -streams which are used in automotive or industrial control facilities. +streams used in automotive or industrial control facilities. -The following group of showcases demonstrate the features of Time-Sensitive +The following group of showcases demonstrates the features of Time-Sensitive Networking: .. toctree:: diff --git a/showcases/tsn/streamfiltering/statistical/doc/index.rst b/showcases/tsn/streamfiltering/statistical/doc/index.rst index b446222dd65..9e881cff672 100644 --- a/showcases/tsn/streamfiltering/statistical/doc/index.rst +++ b/showcases/tsn/streamfiltering/statistical/doc/index.rst @@ -4,19 +4,18 @@ Statistical Policing Goals ----- -In this example we combine a sliding window rate meter with a probabilistic packet -dropper to achieve a simple statistical policing. +In this example, we combine a sliding window rate meter with a probabilistic packet +dropper to achieve simple statistical policing. | INET version: ``4.4`` | Source files location: `inet/showcases/tsn/streamfiltering/statistical `__ The Model --------- - -In this configuration we use a sliding window rate meter in combination with a -statistical rate limiter. The former measures the thruput by summing up the +In this configuration, we use a sliding window rate meter in combination with a +statistical rate limiter. The former measures the throughput by summing up the packet bytes over the time window, the latter drops packets in a probabilistic -way by comparing the measured datarate to the maximum allowed datarate. +way by comparing the measured data rate to the maximum allowed data rate. Here is the network: diff --git a/showcases/tsn/streamfiltering/tokenbucket/doc/index.rst b/showcases/tsn/streamfiltering/tokenbucket/doc/index.rst index d6d5292faf2..0b7fb0c9b78 100644 --- a/showcases/tsn/streamfiltering/tokenbucket/doc/index.rst +++ b/showcases/tsn/streamfiltering/tokenbucket/doc/index.rst @@ -4,7 +4,7 @@ Token Bucket based Policing Goals ----- -In this example we demonstrate per-stream policing using chained token buckets +In this example, we demonstrate per-stream policing using chained token buckets which allows specifying committed/excess information rates and burst sizes. | INET version: ``4.4`` @@ -22,7 +22,7 @@ links between them use 100 Mbps :ned:`EthernetLink` channels. There are four applications in the network creating two independent data streams between the client and the server. The average data rates are 40 Mbps and 20 Mbps -but both varies over time using a sinusoid packet interval. +but both vary over time using a sinusoid packet interval. .. literalinclude:: ../omnetpp.ini :start-at: client applications @@ -47,7 +47,7 @@ separate metering and filter paths. :end-before: SingleRateTwoColorMeter :language: ini -We use a single rate two color meter for both streams. This meter contains a +We use a single rate two-color meter for both streams. This meter contains a single token bucket and has two parameters: committed information rate and committed burst size. Packets are labeled green or red by the meter, and red packets are dropped by the filter. @@ -59,22 +59,22 @@ packets are dropped by the filter. 
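+To make the meter's two parameters concrete, the fragment below is a hedged sketch
+only; the module path and the values are assumptions, and the actual settings are in
+the showcase's omnetpp.ini:
+
+.. code-block:: ini
+
+   # tokens accumulate at the committed information rate, up to the committed
+   # burst size; packets that find enough tokens in the bucket are labeled
+   # green, the rest are labeled red and dropped by the filter
+   *.switch.bridging.streamFilter.ingress.meter[*].typename = "SingleRateTwoColorMeter"
+   *.switch.bridging.streamFilter.ingress.meter[*].committedInformationRate = 40Mbps
+   *.switch.bridging.streamFilter.ingress.meter[*].committedBurstSize = 10kB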
Results ------- -The first diagram shows the data rate of the application level outgoing traffic +The first diagram shows the data rate of the application-level outgoing traffic in the client. The data rate varies over time for both traffic classes using a sinusoid packet interval. .. figure:: media/ClientApplicationTraffic.png :align: center -The next diagram shows the operation of the per-stream filter for the best effort -traffic class. The outgoing data rate equals with the sum of the incoming data rate +The next diagram shows the operation of the per-stream filter for the best-effort +traffic class. The outgoing data rate equals the sum of the incoming data rate and the dropped data rate. .. figure:: media/BestEffortTrafficClass.png :align: center The next diagram shows the operation of the per-stream filter for the video traffic -class. The outgoing data rate equals with the sum of the incoming data rate and +class. The outgoing data rate equals the sum of the incoming data rate and the dropped data rate. .. figure:: media/VideoTrafficClass.png @@ -82,12 +82,12 @@ the dropped data rate. The next diagram shows the number of tokens in the token bucket for both streams. The filled areas mean that the number of tokens changes quickly as packets pass -through. The data rate is at maximum when the line is near the minimum. +through. The data rate is at its maximum when the line is near the minimum. .. figure:: media/TokenBuckets.png :align: center -The last diagram shows the data rate of the application level incoming traffic +The last diagram shows the data rate of the application-level incoming traffic in the server. The data rate is somewhat lower than the data rate of the outgoing traffic of the corresponding per-stream filter. The reason is that they are measured at different protocol layers. diff --git a/showcases/tsn/timesynchronization/clockdrift/doc/index.rst b/showcases/tsn/timesynchronization/clockdrift/doc/index.rst index 7ba0a050ca5..36d55a5f0bb 100644 --- a/showcases/tsn/timesynchronization/clockdrift/doc/index.rst +++ b/showcases/tsn/timesynchronization/clockdrift/doc/index.rst @@ -11,7 +11,7 @@ refers to this gradual deviation. To address the issue of clock drift, time synchronization mechanisms can be used to periodically adjust the clocks of network devices to ensure that they remain adequately in sync with each other. -The operation of applications and protocols across the network are often very +The operation of applications and protocols across the network is often very sensitive to the accuracy of this local time. Time synchronization is important in TSN, for example, as accurate time-keeping is crucial in these networks. @@ -98,7 +98,7 @@ The example configurations are the following: In the ``General`` configuration, ``source1`` is configured to send UDP packets to ``sink1``, and ``source2`` to ``sink2``. -.. note:: To demonstrate the effects of drifting clocks on the network traffic, we configure the Ethernet MAC layer in ``switch1`` to alternate between forwarding frames from ``source1`` and ``source2`` every 10 us, by using a TSN gating mechamism in ``switch1``. This does not affect the simulation results in the next few sections but becomes important in the `Effects of Clock Drift on End-to-end Delay`_ section. More details about this part of the configuration are provided there. +.. 
note:: To demonstrate the effects of drifting clocks on the network traffic, we configure the Ethernet MAC layer in ``switch1`` to alternate between forwarding frames from ``source1`` and ``source2`` every 10 us, by using a TSN gating mechanism in ``switch1``. This does not affect the simulation results in the next few sections but becomes important in the `Effects of Clock Drift on End-to-end Delay` section. More details about this part of the configuration are provided there. In the next few sections, we present the above examples. In the simulations featuring constant clock drift, ``switch1`` always has the same clock drift @@ -142,7 +142,7 @@ By setting different drift rates for the different clocks, we can control how they diverge over time. Note that the drift rate is defined as compared to simulation time. Also, we need to explicitly tell the relevant modules (here, the UDP apps and ``switch1``'s queue) to use the clock module in the host, -otherwise they would use the global simulation time by default. +otherwise, they would use the global simulation time by default. Here are the drifts (time differences) over time: @@ -154,7 +154,7 @@ drift of ``source1`` and ``source2`` compared to ``switch1`` are different as well, i.e. ``source1``'s clock is early and ``source2``'s clock is late compared to ``switch1``'s. -.. note:: A `clock time difference to simulation time` chart can be easily produced by plotting the ``timeChanged:vector`` statistic, and applying a linear trend operation with -1 as argument. +.. note:: A `clock time difference to the simulation time` chart can be easily produced by plotting the ``timeChanged:vector`` statistic and applying a linear trend operation with -1 as the argument. Example: Out-of-Band Synchronization of Clocks, Constant Drift ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -197,7 +197,7 @@ Let's see the time differences: The clock of ``switch1`` has a constant drift rate compared to simulation time. Since the clock drift rate in all clocks is constant, the drift rate differences -are compensated for after the first synchronization event, by setting the +are compensated for after the first synchronization event by setting the oscillator compensation in the synchronized clocks. After that, all clocks have the same drift rate as the clock of ``switch1``. Let's zoom in on the beginning of the above chart: @@ -235,7 +235,7 @@ Example: Out-of-Band Synchronization of Clocks, Random Drift ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This configuration extends the previous one with a periodic out-of-band -synchronization mechanism (using cross network node C++ function calls), defined +synchronization mechanism (using cross-network node C++ function calls), defined in the ``OutOfBandSyncBase`` configuration: .. literalinclude:: ../omnetpp.ini @@ -243,14 +243,14 @@ in the ``OutOfBandSyncBase`` configuration: :start-at: RandomClockDriftOutOfBandSync :end-before: RandomClockDriftGptpSync -As with the constant drift rate + out-of-band synchonization case, -we specify a small random clock time synchornization error, but no +As with the constant drift rate + out-of-band synchronization case, +we specify a small random clock time synchronization error, but no drift rate synchronization error. .. figure:: media/OutOfBandSyncRandom.png :align: center -The clock of switch1 keeps drifting, but the clocks of the sources are +The clock of ``switch1`` keeps drifting, but the clocks of the sources are synchronized to it. 
Here is the same chart, but zoomed in: .. figure:: media/OutOfBandSyncRandomZoomed.png @@ -306,7 +306,7 @@ precision in time synchronization: :align: center When the clocks are synchronized, the drift rate differences are also -compensated for, by setting the oscillator compensation in clocks. We can +compensated for by setting the oscillator compensation in clocks. We can observe this on the following zoomed-in image: .. figure:: media/GptpSync_RateAccuracy.png @@ -324,7 +324,7 @@ compensation errors. .. note:: - When configuring the :ned:`SimpleClockSynchronizer` with a :par:`synchronizationClockTimeError` of 0, the synchronized time perfectly matches the reference. - When configuring the :ned:`SimpleClockSynchronizer` with a :par:`synchronizationOscillatorCompensationError` of 0, the compensated clock drift rate perfectly matches the reference. Otherwise, the error can be specified in PPM. - - When using any of the synchonization methods, the clock time difference between the clocks is very small, in the order of microseconds. + - When using any of the synchronization methods, the clock time difference between the clocks is very small, in the order of microseconds. Effects of Clock Drift on End-to-end Delay ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -350,7 +350,7 @@ to contain a GatingPriorityQueue, with two inner queues: :language: ini The inner queues in the GatingPriorityQueue each have their own gate. The gates -connect to a PriorityScheduler, so the gating piority queue prioritizes packets +connect to a PriorityScheduler, so the gating priority queue prioritizes packets from the first inner queue. Here is a gating priority queue with two inner queues: @@ -362,7 +362,7 @@ packets from ``source1`` to the first queue and those from source2 to the second, thus the gating priority queue prioritizes packets from ``source1``. The gates are configured to open and close every 10us, with the second gate offset by a 10us period (so they alternate being open). Furthermore, we align the gate -schedules with the traffic generation by offsetting the both gate schedules with +schedules with the traffic generation by offsetting both gate schedules with 3.118us, the time it takes for a packet to be transmitted from a source to ``switch1``. Here is the rest of the gating priority queue configuration: @@ -386,7 +386,7 @@ every 20us, with ``source2`` offset from ``source1`` by 10us: Note that only one data packet fits into the send window. However, gPTP packets are small and are sent in the same send windows as data packets. -We measure the end-to-end delay of packets from from the source applications to +We measure the end-to-end delay of packets from the source applications to the corresponding sink applications. Let's examine the results below. .. First, we take a look at the out-of-band synchronization cases. The delay in the @@ -396,7 +396,7 @@ the corresponding sink applications. Let's examine the results below. First, we take a look at the out-of-band synchronization cases. In the case of no clock drift, packet generation is perfectly aligned in time with gate schedules, thus packets always find the gates open. End-to-end delay is -constant, as it stems from transmission time only (no queueing delay due to +constant, as it stems from transmission time only (no queuing delay due to closed gates). This delay value is displayed on the charts as a baseline: .. figure:: media/delay_outofbandsync.png @@ -413,7 +413,7 @@ cases have periods where the delay is at the baseline level. .. 
**TODO** not sure the first one is needed -.. note:: Traffic generation and gate opening and closing times doesn't need to be perfectly in sync for the data points to be at the baseline, +.. note:: Traffic generation and gate opening and closing times don't need to be perfectly in sync for the data points to be at the baseline, because the gates are open for 10us, and a packet transmission takes ~6.4us. The following chart shows the same data zoomed in: @@ -424,12 +424,12 @@ The following chart shows the same data zoomed in: In the case of the constant clock drift, the drift rate difference is compensated perfectly at the first synchronization event, thus the line sections are completely horizontal. However, we specified a random error for the time -difference synchronization, thus the values change at every syhcnonization +difference synchronization, thus the values change at every synchronization event, every 0.5ms. In the case of the random clock drift, the drift rate is compensated with no -error at every synchronization event, but the drift rate of the clocks keep -changing randomly even between synchonization events. This results in +error at every synchronization event, but the drift rate of the clocks keeps +changing randomly even between synchronization events. This results in fluctuating delay. Let's see the case where a random clock drift rate oscillator is used with gPTP: @@ -437,7 +437,7 @@ Let's see the case where a random clock drift rate oscillator is used with gPTP: .. figure:: media/delay_gptp.png :align: center -The delay distribution is similar to the out-of-band synchonization case, but +The delay distribution is similar to the out-of-band synchronization case, but there are outliers. gPTP needs to send packets over the network for time synchronization, as opposed to using an out-of-band mechanism. These gPTP messages can sometimes cause delays for packets from ``source1``, causing them @@ -445,14 +445,14 @@ to wait in the queue. .. note:: The outliers can be eliminated by giving gPTP packets priority over the source application packets. Ideally, they can have allocated time in the gate schedule as well. -The following chart displays out-of-band synchonization and gPTP, so they can be compared: +The following chart displays out-of-band synchronization and gPTP, so they can be compared: .. figure:: media/delay_outofbandsync_gptp.png :align: center In all these cases, the applications send packets in sync with the opening of the gates in the queue in ``switch1``. In the no clock drift case, the delay -depends only on the bitrate and packet length. In the case of +depends only on the bit rate and packet length. In the case of ``OutOfBandSynchronization`` and ``GptpSynchronization``, the clocks drift but the drift is periodically eliminated by synchronization, so the delay remains bounded. @@ -465,7 +465,7 @@ Let's see what happens to delay when there is no synchronization: The delay keeps changing substantially compared to the other cases. What's the reason behind these graphs? When there is no clock drift (or it is -eliminated by synchronization), the end-to-end delay is bounded, because the +eliminated by synchronization), the end-to-end delay is bounded because the packets are generated in the sources in sync with the opening of the corresponding gates in ``switch1`` (the send windows). 
In the constant clock drift case, the delay's characteristics depend on the magnitude and direction of
@@ -496,7 +496,7 @@ the packet generation, so the packets arrive just after the gate is closed, and
they have to wait for a full cycle in the queue before being sent.

If the packet stream is denser (blue graph), there are more packets to send on
-average than there are send windows in a given amount of time, so packets
+average than the number of send windows in a given amount of time, so packets
eventually accumulate in the queue. This causes the delay to keep increasing
indefinitely.

@@ -507,7 +507,7 @@ indefinitely.

Thus, if constant clock drift is not eliminated, the network can no longer
guarantee any bounded delay for the packets. The constant clock drift has a
-predictable repeated pattern, but it still has a huge effect on delay.
+predictable repeated pattern but still has a significant effect on delay.

Let's examine the random clock drift case:

@@ -521,7 +521,7 @@ The following chart compares the constant and random clock drift rate cases:

.. figure:: media/delay_constant_random.png
   :align: center

-The clocks in the similar plots (e.g. constant drift/sink1 and random
+The clocks in the similar plots (e.g., constant drift/sink1 and random
drift/sink2) drift in the same direction.

Sources: :download:`omnetpp.ini <../omnetpp.ini>`, :download:`ClockDriftShowcase.ned <../ClockDriftShowcase.ned>`
diff --git a/showcases/tsn/trafficshaping/asynchronousshaper/doc/index.rst b/showcases/tsn/trafficshaping/asynchronousshaper/doc/index.rst
index 6cda4676847..1142f9f41c7 100644
--- a/showcases/tsn/trafficshaping/asynchronousshaper/doc/index.rst
+++ b/showcases/tsn/trafficshaping/asynchronousshaper/doc/index.rst
@@ -31,12 +31,12 @@ eligibility time can be calculated independently for multiple streams. However,
since packets from these multiple streams may share the same queue, their
respective transmission times can be affected by one another.

-The transmission eligibility time is calculated by the asyncronous shaper
+The transmission eligibility time is calculated by the asynchronous shaper
algorithm. The shaper has two parameters that can be specified: the `committed
information rate`, and the `committed burst rate`. The committed information
rate is similar to the idle slope parameter of the credit-based shaper in that
it specifies an average outgoing data rate that the traffic is limited to. The
-committed burst rate allows to temporary increase the data rate above the limit.
+committed burst rate allows the data rate to temporarily exceed this limit.
Additionally, a `max residence time` value can be specified. The shaper ensures
that packets wait less than this time value in the queue, by dropping packets
that would exceed it.
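To make these knobs concrete, a per-stream meter could be parameterized roughly as follows. This is a hedged sketch only: the parameter names (``committedInformationRate``, ``committedBurstSize``, ``maxResidenceTime``) and the values are assumptions for illustration; the showcase's actual settings are included from its INI file later on this page.

.. code-block:: ini

   # Illustrative sketch; parameter names and values are assumptions.
   *.switch.**.meter[*].committedInformationRate = 40Mbps  # long-term rate limit
   *.switch.**.meter[*].committedBurstSize = 10kB          # how much may temporarily exceed that rate
   *.switch.**.meter[*].maxResidenceTime = 10ms            # packets expected to wait longer are dropped

With such values, the drop rule effectively caps the per-stream backlog at about ``committedInformationRate * maxResidenceTime``, i.e. roughly 50 kB in this sketch.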
@@ -47,7 +47,7 @@ each having its place in the TSN node architecture: - :ned:`EligibilityTimeMeter`: calculates transmission eligibility time (in the ingress filter of the bridging layer) - :ned:`EligibilityTimeFilter`: filters out packets that would wait for too long in the queue (in the ingress filter of the bridging layer) - :ned:`EligibilityTimeQueue`: stores packets ordered by transmission eligibility time (in the network interface) -- :ned:`EligibilityTimeGate`: prevents packets to pass through the gate before the transmission eligibility time (in the network interface) +- :ned:`EligibilityTimeGate`: prevents packets from passing through the gate before the transmission eligibility time (in the network interface) For context, here are the meter and filter modules in the bridging layer (``bridging.streamFilter.ingress``): @@ -61,16 +61,16 @@ Here are the queue and gate modules in the network interface (``eth[*].macLayer. To enable asynchronous traffic shaping in a TSN switch, the following is required: -- Enable ingress traffic filtering in the switch (this adds a :ned:`StreamFilterLayer` to the bridging layer): +- Enable ingress traffic filtering in the switch (this adds a :ned:`StreamFilterLayer` to the bridging layer): ``*.switch.hasIngressTrafficFiltering = true`` - + - Set the type of the meter and filter submodules in ``streaminglayer.ingressfilter``: ``*.switch.bridging.streamFilter.ingress.meter[*].typename = "EligibilityTimeMeter"`` ``*.switch.bridging.streamFilter.ingress.filter[*].typename = "EligibilityTimeFilter"`` -- Enable egress traffic shaping in the switch (this adds a :ned:`Ieee8021qTimeAwareShaper` to all network interfaces): +- Enable egress traffic shaping in the switch (this adds a :ned:`Ieee8021qTimeAwareShaper` to all network interfaces): ``*.switch.hasEgressTrafficShaping = true`` @@ -96,7 +96,7 @@ Network +++++++ The network contains three network nodes. The client and the server (:ned:`TsnDevice`) are -connected through the switch (:ned:`TsnSwitch`), with 100Mbps :ned:`EthernetLink` channels: +connected through the switch (:ned:`TsnSwitch`), with 100Mbps :ned:`EthernetLink` channels: .. figure:: media/Network.png :align: center @@ -120,7 +120,7 @@ we want to observe only the effect of the asynchronous shaper on the traffic. Thus our goal is for the traffic to only get altered in the traffic shaper, and avoid any unintended traffic shaping effect in other parts of the network. -Traffic configuration is the same to the +Traffic configuration is the same as the :doc:`/showcases/tsn/trafficshaping/creditbasedshaper/doc/index` showcase. We configure two traffic source applications in the client, creating two independent data streams between the client and the server. The data rate of the @@ -144,8 +144,8 @@ summarize: In the client: -- We enable IEEE 802.1 stream identification and stream encoding. -- We configure the stream identifier module in the bridging layer to assign outgoing packets to named streams by UDP destination port. +- We enable IEEE 802.1 stream identification and stream encoding. +- We configure the stream identifier module in the bridging layer to assign outgoing packets to named streams by UDP destination port. - We configure the stream encoder to set the PCP number according to the assigned stream name. In the switch: @@ -172,12 +172,12 @@ and configure them: layer has an ingress filter (:ned:`SimpleIeee8021qFilter`) submodule that we configure to contain the eligibility-time meters and filters). 
- As we want per-stream filtering, we configure two traffic streams in the ingress filter.
-- Configure the mapping in the classifier (:ned:`StreamClassifier`) in the ingress filter. This tells the classifier to send ``best effort`` streams
-  to gate 0, and video streams to gate 1.
+- Configure the mapping in the classifier (:ned:`StreamClassifier`) in the ingress filter. This tells the classifier to send "best effort" streams
+  to gate 0, and video streams to gate 1.
- Override the type of the ``meter`` submodules with :ned:`EligibilityTimeMeter`, and configure the committed information rate and committed burst size
-  parameters. Also, we set a max residence time of 10ms in the meter; this ensures that packets waiting more that 10ms in the switch are dropped by the filter
+  parameters. Also, we set a max residence time of 10ms in the meter; this ensures that packets waiting more than 10ms in the switch are dropped by the filter
  submodule that we configure next.
-- Override the type of the ``meter`` submodules with :ned:`EligibilityTimeFilter`.
+- Override the type of the ``filter`` submodules with :ned:`EligibilityTimeFilter`.

Here is the configuration doing the above:

@@ -221,15 +221,15 @@ and the incoming traffic of the shaper's filter module, per-stream:

.. figure:: media/client_filter.png
   :align: center

-The data rate of the client is sinusolidal for both traffic classes, with the
+The data rate of the client is sinusoidal for both traffic classes, with the
average values of 42 and 21 Mbps. For each stream, the client application
traffic and the incoming traffic in the shaper's filter module is similar. The
data rate is higher in the filter because it already includes protocol
overhead, such as the Ethernet header.

-The next chart compares the incoming, outgoing and dropped traffic in the
-filter, so we can observe how the traffic changes. The commited information rate
-(configured in the meter modules) is displayed with the two dashdotted lines:
+The next chart compares the incoming, outgoing, and dropped traffic in the
+filter, so we can observe how the traffic changes. The committed information rate
+(configured in the meter modules) is displayed with the two dash-dotted lines:

.. figure:: media/filter_all.png
   :align: center
@@ -240,9 +240,9 @@ traffic. This is due to the filter, which drops packets that would exceed the
configured maximum residence time while waiting in the queue for transmission.

This filtering mechanism effectively establishes a virtual queue length limit,
-as it imposes an upper bound on the queueing time. When the queue length
+as it imposes an upper bound on the queuing time. When the queue length
approaches this virtual limit, any additional packets are discarded to prevent
-excessive wait times. In this case, the filter outgoing data rate equals to the
+excessive wait times. In this case, the filter outgoing data rate equals the
committed information rate minus some protocol headers.

The next chart displays the queue incoming and outgoing (already shaped) traffic:
@@ -250,8 +250,8 @@

.. figure:: media/queue_both.png
   :align: center

-The shaper allows some bursts, but in general limits the outgoing traffic
-to the committed information rate using the transmission eligibility time.
+The shaper allows some bursts, but in general, limits the outgoing traffic
+to the committed information rate using the transmission eligibility time.
The next chart displays the shaper outgoing and the server application traffic data rate: @@ -259,7 +259,7 @@ The next chart displays the shaper outgoing and the server application traffic d :align: center The traffic doesn't change significantly in this part of the network. Again, the -shaper data rate is slighly higher due to protocol overhead. Thus, as per our +shaper data rate is slightly higher due to protocol overhead. Thus, as per our goal, traffic is only altered significantly in the shaper components (filter and queue). @@ -288,7 +288,7 @@ starts limiting the outgoing data rate to the committed information rate. Meanwhile, the excess incoming traffic is being stored in the queue. As described previously, the queue has a virtual limit, as packets that would wait more than the configured max residence time are dropped by the filter. When the -queue is saturated (i.e. it reaches this virtual limit), traffic can only flow +queue is saturated (i.e., it reaches this virtual limit), traffic can only flow into the queue at the same rate as it flows out. Outgoing traffic is limited to the committed information rate by traffic shaping, and incoming traffic is limited to this same value by the filter dropping excess traffic. When the @@ -309,10 +309,10 @@ Here is the same chart zoomed in: When the line is above the X-axis, the queue is blocked. When a line crosses the X-axis from above, the first packet in the queue becomes eligible for transmission. When the line goes below the X-axis, the first packet waits more -than what is absolutely necessary. This can happen do to a higher priority +than what is absolutely necessary. This can happen due to a higher priority traffic class using the channel, as is the case for every other best effort -packet in the right side of the chart. It can also happen for higher-priority -packets ocassionally, because there is no frame preemption. +packet on the right side of the chart. It can also happen for higher-priority +packets occasionally, because there is no frame preemption. The following chart connects all the statistics presented above: diff --git a/showcases/tsn/trafficshaping/creditbasedshaper/doc/index.rst b/showcases/tsn/trafficshaping/creditbasedshaper/doc/index.rst index 8bdc935b0e8..a9383ce717c 100644 --- a/showcases/tsn/trafficshaping/creditbasedshaper/doc/index.rst +++ b/showcases/tsn/trafficshaping/creditbasedshaper/doc/index.rst @@ -22,7 +22,7 @@ Credit-Based Shaping Overview The Credit Based Shaping (CBS) is an algorithm designed for network traffic management. Its core function is to limit the bandwidth a Traffic Class queue can transmit, ensuring optimal bandwidth distribution. -It helps smoothing out bursts by delaying the transmission of successive +It helps smooth out bursts by delaying the transmission of successive frames. CBS helps in mitigating network congestion in bridges and enhancing overall network performance. @@ -87,7 +87,7 @@ The Model The Network +++++++++++ -We demonstrate the operation of CBS using a network containing a client, a server and a switch. +We demonstrate the operation of CBS using a network containing a client, a server, and a switch. The client and the server (:ned:`TsnDevice`) are connected through the switch (:ned:`TsnSwitch`), with 100Mbps Ethernet links: @@ -162,7 +162,7 @@ traffic for the two traffic categories: .. figure:: media/client_shaper.png :align: center -The client application and shaper incoming traffic is quite similar, but not identical. 
The shaper's incoming traffic +The client application and shaper incoming traffic are quite similar, but not identical. The shaper's incoming traffic has a slightly higher data rate because of additional protocol overhead that wasn't present in the application. Also, the two streams of packets are combined in the client's network interface, which can cause some packets to be diff --git a/showcases/tsn/trafficshaping/underthehood/doc/index.rst b/showcases/tsn/trafficshaping/underthehood/doc/index.rst index d192ebc567a..6f6aeb05258 100644 --- a/showcases/tsn/trafficshaping/underthehood/doc/index.rst +++ b/showcases/tsn/trafficshaping/underthehood/doc/index.rst @@ -12,7 +12,7 @@ comprehensive network setup. In this showcase, we demonstrate the creation of a fully operational Asynchronous Traffic Shaper (ATS) by directly interconnecting its individual -components. Next, we construct a straightforward queueing network by linking the +components. Next, we construct a straightforward queuing network by linking the ATS to traffic sources and traffic sinks. The key highlight is the observation of traffic shaping within the network, achieved by plotting the generated traffic both before and after the shaping process. @@ -31,7 +31,7 @@ four essential modules: - :ned:`EligibilityTimeMeter`: Calculates the transmission eligibility time, determining when a packet becomes eligible for transmission. - :ned:`EligibilityTimeFilter`: Filters out expired packets, those that would wait excessively before becoming eligible for transmission. - :ned:`EligibilityTimeQueue`: Stores packets in order of their transmission eligibility time. -- :ned:`EligibilityTimeGate`: Opens at the transmission eligibility time for the next packet +- :ned:`EligibilityTimeGate`: Opens at the transmission eligibility time for the next packet. In a complete network setup, these modules are typically distributed across various components of a network node, such as an Ethernet switch. Specifically, @@ -67,7 +67,7 @@ used outside INET network nodes. The network is depicted in the following image: .. figure:: media/network3.png :align: center -The queueing network includes three independent packet sources, each linked to an +The queuing network includes three independent packet sources, each linked to an :ned:`EligibilityTimeMeter`, ensuring individual data rate metering for each packet stream. These meters are then connected to a single :ned:`EligibilityTimeFilter` module that drops expired packets. @@ -194,7 +194,7 @@ The maximum number of packets that can accumulate in the queue from each stream is 20, calculated as follows: `max residence time / average production interval = 10ms / 0.5ms = 20`. Since the traffic pattern is different for the three streams, the 20 packets/stream limit is reached at different times for each of -them. We can observe on the chart as the queue becomes satured for each stream +them. We can observe on the chart as the queue becomes saturated for each stream individually. The maximum of the queue length is 60, when all streams reach their maximum packet count in the queue. diff --git a/showcases/visualizer/canvas/datalinkactivity/doc/index.rst b/showcases/visualizer/canvas/datalinkactivity/doc/index.rst index 1e6e4b22977..dedde5666a1 100644 --- a/showcases/visualizer/canvas/datalinkactivity/doc/index.rst +++ b/showcases/visualizer/canvas/datalinkactivity/doc/index.rst @@ -6,7 +6,7 @@ Goals Visualizing network traffic is an important aspect of running simulations. 
INET provides various visualizers to help you understand the network activity, -including :ned:`DataLinkVisualizer` which focuses on the data link level. This +including the :ned:`DataLinkVisualizer`, which focuses on the data link level. This module allows you to see the visual representation of the data link level traffic in the form of arrows that fade as the traffic ceases. @@ -26,28 +26,28 @@ contains a :ned:`DataLinkVisualizer` module. Data link visualization is disabled by default; it can be enabled by setting the visualizer's :par:`displayLinks` parameter to true. -:ned:`DataLinkVisualizer` can observe packets at *service*, *peer* +:ned:`DataLinkVisualizer` can observe packets at the *service*, *peer*, and *protocol* level. The level where packets are observed can be set by the :par:`activityLevel` parameter. -- At *service* level, those packets are displayed which pass through - the data link layer (i.e. carry data from/to higher layers). -- At *peer* level, the visualization is triggered by those packets - which are processed inside the link layer in the source node and - processed inside the link layer in the destination node. -- At *protocol* level, :ned:`DataLinkVisualizer` displays those packets - which are going out at the bottom of the link layer in the source node - and going in at the bottom of the link layer in the destination node. +- At the *service* level, those packets are displayed which pass through + the data link layer (i.e. carry data from/to higher layers). +- At the *peer* level, the visualization is triggered by those packets + which are processed inside the link layer in the source node and + processed inside the link layer in the destination node. +- At the *protocol* level, :ned:`DataLinkVisualizer` displays those packets + which are going out at the bottom of the link layer in the source node + and going in at the bottom of the link layer in the destination node. The activity between two nodes is represented visually by an arrow that points from the sender node to the receiver node. The arrow appears after the first packet has been received, then gradually fades out -unless it is refreshed by further packets. The style, color, fading time +unless it is refreshed by further packets. The style, color, fading time, and other graphical properties can be changed with parameters of the visualizer. By default, all packets, interfaces, and nodes are considered for the -visualization. This selection can be narrowed to certain packets and/or +visualization. This selection can be narrowed down to certain packets and/or nodes with the visualizer's :par:`packetFilter`, :par:`interfaceFilter`, and :par:`nodeFilter` parameters. @@ -55,12 +55,12 @@ Enabling Visualization of Data Link Activity -------------------------------------------- The following example shows how to enable the visualization of data link -activity, and what the visualization looks like. In the first example, +activity and what the visualization looks like. In the first example, we configure a simulation for a wired network. The simulation can be run by choosing the ``EnablingVisualizationWired`` configuration from the ini file. -The wired network contains two :ned:`StandardHost`'s, ``wiredSource`` and +The wired network contains two :ned:`StandardHost`s, ``wiredSource`` and ``wiredDestination``. The ``linkVisualizer`` module's type is :ned:`DataLinkVisualizer`. @@ -87,13 +87,13 @@ OMNeT++ animation for packet transmissions and has nothing to do with :ned:`DataLinkVisualizer`. 
When the packet is received in whole by ``wiredDestination`` (the red strip disappears), a dark cyan arrow is added by :ned:`DataLinkVisualizer` between the two hosts, indicating data -link activity. The packet's name is also displayed on the arrow. The -arrow fades out quickly because the :par:`fadeOutTime` parameter of the +link activity. The packet's name is also displayed on the arrow. The arrow +quickly fades out because the :par:`fadeOutTime` parameter of the visualizer is set to a small value. Visualization in a wireless network is very similar. Our next example is the wireless variant of the above simulation. In this network, we use two -:ned:`AdhocHost`'s, ``wirelessSource`` and ``wirelessDestination``. The +:ned:`AdhocHost`s, ``wirelessSource`` and ``wirelessDestination``. The traffic and the visualization settings are the same as the configuration of the wired example. The simulation can be run by choosing the ``EnablingVisualizationWireless`` configuration from the ini file. @@ -108,7 +108,7 @@ The following animation depicts what happens when the simulation is run. This animation is similar to the video of the wired example (apart from an extra blue dotted line which can be ignored, as it is also part of -the standard OMNeT++ packet animation.) Note, however, that the ACK +the standard OMNeT++ packet animation). Note, however, that the ACK frame does not activate the visualization because ACK frames do not pass through the data link layer. @@ -129,14 +129,14 @@ We use the following network for this showcase. This network consists of four switches (``etherSwitch1..etherSwitch4``) and six endpoints: two source hosts (``source1``, ``source2``), two -destination hosts (``destination1``, ``destination2``) and two other +destination hosts (``destination1``, ``destination2``), and two other hosts (``host1``, ``host2``) which are inactive in this simulation. ``Source1`` pings ``destination1``, and ``source2`` pings ``destination2``. For this network, the visualizer's type is :ned:`IntegratedVisualizer`. Data link activity visualization is filtered to display only ping -messages. The other packets, e.g. ARP packets, are not visualized by +messages. The other packets, e.g., ARP packets, are not visualized by :ned:`DataLinkVisualizer`. We adjust the ``fadeOutMode`` and the :par:`fadeOutTime` parameters so that the activity arrows do not fade out completely before the next ping messages are sent. @@ -150,17 +150,17 @@ We use the following configuration for the visualization. The following animation shows what happens when we start the simulation. You can see that although there is both ARP and ping traffic in the -network, :ned:`DataLinkVisualizer` only takes the latter into account, due +network, :ned:`DataLinkVisualizer` only takes the latter into account due to the presence of the :par:`packetFilter` parameter. .. video:: media/Filtering_v0613.m4v :width: 100% -It also is possible to filter for network nodes. For the following +It is also possible to filter for network nodes. For the following example, let's assume we want to display traffic between the hosts ``source1`` and ``destination1`` only, along the path ``etherSwitch1``, ``etherSwitch4``, and ``etherSwitch2``. To this end, we set the -visualizer's :par:`nodeFilter` parameter by using the following line (note +visualizer's :par:`nodeFilter` parameter using the following line (note the curly brace syntax used for specifying numeric substrings). .. 
literalinclude:: ../omnetpp.ini @@ -168,22 +168,22 @@ the curly brace syntax used for specifying numeric substrings). :end-at: dataLinkVisualizer.nodeFilter :language: ini -It looks like the following when we run the simulation: +It looks like the following when we run the simulation. .. video:: media/Filtering2_v0613.m4v :width: 100% As you can see, visualization allows us to follow the ping packets between ``source1`` and ``destination1``. Note, however, that ping -traffic between the two other hosts, ``source2`` and ``destination2``, +traffic between the two other hosts ``source2`` and ``destination2`` also activates the visualization on the link between ``etherSwitch1`` and ``etherSwitch4``. Displaying Data Link Activity at Different Levels ------------------------------------------------- -The following example demonstrates, how to visualize data link activity -at *protocol*, *peer* and *service* level. This simulation can be run by +The following example demonstrates how to visualize data link activity +at the *protocol*, *peer*, and *service* levels. This simulation can be run by selecting the ``ActivityLevel`` configuration from the ini file. We use the following wireless network for this example. @@ -203,8 +203,8 @@ submodules can be specified with parameters for each visualizer submodule. In this example, data link activity will be displayed at three different -levels. To achieve this, three :ned:`DataLinkVisualizer` will be -configured, observing packets at *service*, *peer* and *protocol* level. +levels. To achieve this, three :ned:`DataLinkVisualizer` modules will be +configured, observing packets at *service*, *peer*, and *protocol* level. They are marked with different colors. The ``visualizer`` module is configured as follows. @@ -213,22 +213,22 @@ configured as follows. :end-at: dataLinkVisualizer[2].labelColor :language: ini -By using the :par:`numDataLinkVisualizers` parameter, we set three +By using the :par:`numDataLinkVisualizers` parameter, we set up three :ned:`DataLinkVisualizer` modules. In this example, we are interested in *video* packets. To highlight them, we use the :par:`packetFilter` parameter. The :par:`fadeOutMode` parameter specifies that inactive links -fade out in animation time. The :par:`holdAnimationTime` parameter stops +fade out during animation time. The :par:`holdAnimationTime` parameter stops the animation for a while, delaying the fading of the data link activity -arrows. The ``activityLevel``, ``lineColor`` and ``labelColor`` -parameters are different at each :ned:`DataLinkVisualizer` to make data +arrows. The ``activityLevel``, ``lineColor``, and ``labelColor`` +parameters are different for each :ned:`DataLinkVisualizer` to make data link activity levels easy to distinguish: -- ``dataLinkVisualizer[0]`` is configured to display \ *protocol* level - activity with purple arrows. -- ``dataLinkVisualizer[1]`` is configured to display \ *peer* level - activity with blue arrows, -- ``dataLinkVisualizer[2]`` is configured to display \ *service* level - activity with green arrows, +- ``dataLinkVisualizer[0]`` is configured to display *protocol* level + activity with purple arrows. +- ``dataLinkVisualizer[1]`` is configured to display *peer* level + activity with blue arrows. +- ``dataLinkVisualizer[2]`` is configured to display *service* level + activity with green arrows. The following video shows what happens when the simulation is running. @@ -238,19 +238,19 @@ The following video shows what happens when the simulation is running. 
At the beginning of the video, ``person1`` sends a ``VideoStrmReq`` packet, requesting the video stream. In response to this, ``videoServer`` starts to send video stream packet fragments to -``person1``. The packets are fragmented because their size is greater +``person1``. The packets are fragmented because their size is larger than the Maximum Transmission Unit. The first packet fragment, -``VideoStrmPk-frag0`` causes data link activity only at *protocol* level -and at *peer* level, because other packet fragments are required to +``VideoStrmPk-frag0``, causes data link activity only at the *protocol* level +and at the *peer* level because other packet fragments are required to allow the packet to be forwarded to higher layers. When ``VideoStrmPk-frag1`` is received by ``person1``, the packet is -reassembled in and is sent to the upper layers. As a result of this, a +reassembled and is sent to the upper layers. As a result, a green arrow is displayed between ``videoServer`` and ``person1``, -representing data link activity at *service* level. +representing data link activity at the *service* level. -Another phenomenon can also be observed in the video. There is +Another phenomenon that can be observed in the video is the *protocol*-level data link activity between ``person2`` and the other -nodes. This activity is because frames are also received in the physical layer +nodes. This activity is because frames are also received at the physical layer of ``person2``, but they are dropped at the data link layer level because they are not addressed to ``person2``. @@ -267,7 +267,7 @@ We use the following network for this simulation: .. figure:: media/DataLinkVisualizerDynamic.png :width: 100% -Nodes are of the type :ned:`AodvRouter`, and are placed randomly on the +Nodes are of the type :ned:`AodvRouter` and are placed randomly on the scene. The communication range of the nodes is chosen so that the network is connected, but nodes can typically only communicate by using multi-hop paths. The nodes will also randomly roam within predefined @@ -302,7 +302,7 @@ As AODV operates with two message types, we'll use two *.rrepVisualizer.*.lineColor = "blue" *.rrepVisualizer.*.labelColor = "blue" -The following video has been captured from the simulation, and allows us +The following video has been captured from the simulation and allows us to observe the AODV protocol in action. The dark cyan arrows indicate RREQ packets which flood the network. When an RREQ message reaches ``destination``, ``destination`` sends an RREP message (blue arrow) back diff --git a/showcases/visualizer/canvas/instrumentfigures/doc/index.rst b/showcases/visualizer/canvas/instrumentfigures/doc/index.rst index f475c4e1526..450e032eb7e 100644 --- a/showcases/visualizer/canvas/instrumentfigures/doc/index.rst +++ b/showcases/visualizer/canvas/instrumentfigures/doc/index.rst @@ -96,7 +96,7 @@ We would like the following statistics to be displayed using instrument figures: displayed by an ``indexedImage`` figure. IDLE means nothing to send, DEFER means the channel is in use, IFS\_AND\_BACKOFF means the channel is free and contending to acquire channel; -- Download progress should be displayed by a ``progessMeter`` figure; +- Download progress should be displayed by a ``progressMeter`` figure; - The number of socket data transfers to the client application should be displayed by a ``counter`` figure. 
diff --git a/showcases/visualizer/canvas/packetdrop/doc/index.rst b/showcases/visualizer/canvas/packetdrop/doc/index.rst index d0f3a01147e..bc7d0e09679 100644 --- a/showcases/visualizer/canvas/packetdrop/doc/index.rst +++ b/showcases/visualizer/canvas/packetdrop/doc/index.rst @@ -27,9 +27,9 @@ the node icon and fades away. The visualization of packet drops can be enabled with the visualizer's :par:`displayPacketDrops` parameter. By default, all packet drops at all nodes are visualized. This selection can be narrowed with the :par:`nodeFilter`, -:par:`interfaceFilter` and :par:`packetFilter` parameters, which match for node, interface and packet. +:par:`interfaceFilter`, and :par:`packetFilter` parameters, which match for node, interface, and packet. (The :par:`packetFilter` can filter for packet properties, such as name, fields, chunks, protocol headers, etc.) -Additionally, and the :par:`detailsFilter` parameter to filter for packet drop reason. +Additionally, use the :par:`detailsFilter` parameter to filter for packet drop reasons. The :par:`packetFormat` parameter is a format string and specifies the text displayed with the packet drop animation. By default, the dropped packet's name is displayed. @@ -39,9 +39,9 @@ The format string can contain the following directives: - `%c`: packet class - `%r`: drop reason -Packets can be dropped for the following reasons, for example: +Packets can be dropped for the following reasons: -.. For example, some of the reasons packets are dropped for are the following: +.. For example, packets can be dropped for the following reasons: - queue overflow - retry limit exceeded @@ -50,7 +50,7 @@ Packets can be dropped for the following reasons, for example: - interface down One can click on the packet drop icon to display information about the -packet drop in the inspector panel, such as the packet drop reason, +packet drop in the inspector panel, such as the packet drop reason or the module where the packet was dropped: .. figure:: media/inspector2.png @@ -59,16 +59,16 @@ or the module where the packet was dropped: .. todo:: - The color of the icon indicates the reason for the packet drop - There is a list of reasons, identified by a number + The color of the icon indicates the reason for the packet drop. + There is a list of reasons, identified by a number. -The following sections demonstrate some reasons for dropped packets, with example simulations. +The following sections demonstrate some reasons for dropped packets with example simulations. In the simulations, the :ned:`PacketDropVisualizer` is configured to indicate the packet name and the drop reason. Queue overflow -------------- -In this section, we present an example for demonstrating packet drops due +In this section, we present an example demonstrating packet drops due to queue overflow. This simulation can be run by choosing the ``QueueFull`` configuration from the ini file. The network contains a bottleneck link where packets will be dropped due to an overflowing @@ -83,12 +83,12 @@ It contains a :ned:`StandardHost` named ``source``, an :ned:`EthernetSwitch`, a ``destination``. The ``source`` is configured to send a stream of UDP packets to ``destination``. The packet stream starts at two seconds after ``destination`` got associated with the access point. 
The -``source`` is connected to the ``etherSwitch`` via a high speed, 100 +``source`` is connected to the ``etherSwitch`` via a high-speed, 100 Gbit/s ethernet cable, while the ``etherSwitch`` and the ``router`` are -connected with a low speed, 10 MBit/s cable. This low speed cable creates a bottleneck +connected with a low-speed, 10 Mbit/s cable. This low-speed cable creates a bottleneck in the network, between the switch and the router. The source host is configured to generate more UDP traffic than the 10Mbit/s channel can -carry. The cause of packet drops, in this case, is that the queue in +carry. The cause of packet drops in this case is that the queue in ``etherSwitch`` fills up. The queue types in the switch's Ethernet interfaces are set to @@ -98,7 +98,7 @@ queue of the switch. The visualization is activated with the ``displayPacketDrops`` parameter. The visualizer is configured to display the packet name -and the drop reason, by setting the :par:`labelFormat` parameter. +and the drop reason by setting the :par:`labelFormat` parameter. Also, the fade-out time is set to three seconds, so that the packet drop animation is more visible. @@ -165,11 +165,11 @@ file. The configuration uses the following network: .. figure:: media/maclimitnetwork.png :align: center -It contains two :ned:`AdhocHost`'s, named ``source`` and ``destination``. -The hosts' communication ranges are set up so they are out of range of +It contains two :ned:`AdhocHost`'s named ``source`` and ``destination``. +The hosts' communication ranges are set up, so they are out of range of each other. The source host is configured to ping the destination host. -The reason for packet drops, in this case, is that the hosts are not in -range, thus they can't reach each other. The ``source`` transmits the +The reason for packet drops in this case is that the hosts are not in +range, so they can't reach each other. The ``source`` transmits the ping packets, but it doesn't receive any ACK in reply. The ``source's`` MAC module drops the packets after the retry limit has been reached. This scenario is illustrated in the following animation: @@ -177,7 +177,7 @@ This scenario is illustrated in the following animation: .. video:: media/retry.mp4 :width: 512 -These events looks like the following in the logs: +These events look like the following in the logs: .. figure:: media/log_macretrylimit_2.png :width: 100% @@ -197,7 +197,7 @@ It contains two connected :ned:`StandardHost`'s. The :ned:`Ipv4NetworkConfigurator` is instructed not to add any static routes, and ``host1`` is configured to ping ``host2``. -The ping packets can't be routed, thus the IP module drops them. This scenario is +The ping packets can't be routed, so the IP module drops them. This scenario is illustrated in the following video: .. video:: media/noroute.mp4 diff --git a/showcases/visualizer/canvas/routingtable/doc/index.rst b/showcases/visualizer/canvas/routingtable/doc/index.rst index 98302933981..d1f289eea69 100644 --- a/showcases/visualizer/canvas/routingtable/doc/index.rst +++ b/showcases/visualizer/canvas/routingtable/doc/index.rst @@ -46,7 +46,7 @@ parameter. Filtering is also possible. The :par:`nodeFilter` parameter controls which nodes' routing tables should be visualized (by default, all nodes), and the :par:`destinationFilter` parameter selects the set of destination nodes -to consider (again, by default all nodes.) +to consider (again, by default all nodes). The visualizer reacts to changes. 
For example, when a routing protocol changes a routing entry, or an IP address gets assigned to an interface @@ -81,7 +81,7 @@ When the simulation is run, the network looks like this: :width: 80% :align: center -Note that IP addresses are displayed above the nodes. These annotations has nothing +Note that IP addresses are displayed above the nodes. These annotations have nothing to do with the :ned:`RoutingTableVisualizer`; they are displayed because we configured it in :ned:`InterfaceTableVisualizer` to improve clarity. diff --git a/showcases/visualizer/canvas/spectrum/doc/index.rst b/showcases/visualizer/canvas/spectrum/doc/index.rst index 2bbdbf961ba..e6e4f3f9701 100644 --- a/showcases/visualizer/canvas/spectrum/doc/index.rst +++ b/showcases/visualizer/canvas/spectrum/doc/index.rst @@ -9,7 +9,7 @@ in the temporal, spectral, and spatial dimensions in various forms. This visuali can help you understand, for example, why a particular signal was received or not received successfully in the presence of interference from other signals or noise. -The visualization can take the form of spectrums, spectrograms and heatmaps representing +The visualization can take the form of spectrums, spectrograms, and heatmaps representing spatial power density. This showcase demonstrates such visualizations with three example simulations. @@ -56,7 +56,7 @@ and the power density is displayed in units of `dBmWpMHz`, i.e. dBmW/MHz. The vi is enabled with the visualizer's :par:`displaySpectrums` parameter. The spectrum plots are color coded. By default, the total power density of all signals is -shown at the given position (blue curve), except for the transmitting and receiveing nodes, +shown at the given position (blue curve), except for the transmitting and receiving nodes, where the transmitted/received signal (green curve) is displayed separately from interfering signals/noises (red curve). @@ -74,13 +74,13 @@ The visualizer's :par:`spectrumMode` parameter specifies what to display in the - ``signal``: display the power density of the transmitted or received signal in green (only visualizes signals at the transmitting and receiving nodes) - ``auto`` (default): display the transmitted/received signals in green and interfering signals/noise in red if there is an ongoing transmission or reception at the given node, otherwise display the total power density in blue (visualizes signals at every node) -All spectrum figures, spectrograms and power density maps in the network share the scales -of power density, frequency and time axes, so the visualizations can be compared. The scales +All spectrum figures, spectrograms, and power density maps in the network share the scales +of power density, frequency, and time axes, so the visualizations can be compared. The scales are automatic by default, and they're determined by the signals which have been seen in the -network so far. Therefore automatic scales can extend over time as new signals are transmitted, +network so far. Therefore, automatic scales can extend over time as new signals are transmitted, but they don't contract. -The scales of power density, frequency and time axes can also be specified manually by a set +The scales of power density, frequency, and time axes can also be specified manually by a set of parameters. There are three parameters for each axis: a minimum and a maximum value, and a switch for auto configuration. For the complete list of parameters shared between the figures, see the NED documentation of :ned:`MediumVisualizerBase`. 
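For reference, turning on the spectrum figures typically takes only a couple of INI lines. The sketch below is illustrative: the ``displaySpectrums`` and ``spectrumMode`` parameters are the ones described above, but the exact path of the medium visualizer submodule is an assumption.

.. code-block:: ini

   # Hedged sketch; the visualizer submodule path is an assumption.
   *.visualizer.*.mediumVisualizer.displaySpectrums = true
   *.visualizer.*.mediumVisualizer.spectrumMode = "auto"   # or "total" / "signal"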
@@ -96,10 +96,10 @@ the following network: The analog model needs to be dimensional to properly represent the spectral components of signals, thus the radio medium module is :ned:`Ieee80211DimensionalRadioMedium`. -In the ``General`` configuration the background noise (:ned:`IsotropicDimensionalBackgroundNoise` +In the ``General`` configuration, the background noise (:ned:`IsotropicDimensionalBackgroundNoise` by default due to using the dimensional radio medium) is set up. By default, this module uses the power parameter. However, directly specifying power can only be used when all signals have -the same center frequency and bandwidth; otherwise the noise power density parameter needs to +the same center frequency and bandwidth; otherwise, the noise power density parameter needs to be specified. .. literalinclude:: ../omnetpp.ini @@ -107,7 +107,7 @@ be specified. :end-at: backgroundNoise.powerSpectralDensity :language: ini -The following sets the radio type to dimensional, and configures a more realistic signal +The following sets the radio type to dimensional and configures a more realistic signal shape in frequency (the spectral mask in the 802.11 standard): .. figure:: media/spectralmask_wifi.png @@ -122,7 +122,7 @@ shape in frequency (the spectral mask in the 802.11 standard): .. note:: We're using linear interpolation instead of log-linear when defining the signal shape, so the specified signal shape is not exactly the same as the spectral mask. -These lines configure the Wifi channels (the host-pairs are on different, but interfering channels): +These lines configure the Wifi channels (the host-pairs are on different but interfering channels): .. literalinclude:: ../omnetpp.ini :start-at: channelNumber = 0 @@ -154,7 +154,7 @@ spectrum figures display the isotropic background noise at the bottom of the plo is dragged around, the signal of the host it is closer to appears as the stronger signal on the spectrum figure. Note that the power density axis scale keeps changing as the transmitted signals reach the other hosts. The scale also depends on where the probe is dragged; -when its dragged further, smaller signal power density is encountered, and the lower boundary +when it's dragged further, smaller signal power density is encountered, and the lower boundary of the scale extends. Spectrogram @@ -177,7 +177,7 @@ The visualization is enabled with the visualizer's :par:`displaySpectrograms` pa +---------------------+---------------------+ Similarly to the spectrum figure, the :par:`spectrogramMode` parameter specifies what to -display (``total``, ``signal`` or the default ``auto``). The pixmap's resolution can be +display (``total``, ``signal``, or the default ``auto``). The pixmap's resolution can be tuned by additional parameters. Interference is visualized on the spectrogram as a mix of green and red colors; as such, @@ -209,8 +209,8 @@ Here is a video of the simulation: :align: center In the video, the two hosts start transmitting simultaneously. Similarly to the previous -simulation, the spectrograms of the hosts displays the transmitted/received signal separately -from the interference. The signal and interference is displayed as the green and red color +simulation, the spectrograms of the hosts display the transmitted/received signal separately +from the interference. The signal and interference are displayed as the green and red color channel of the spectrogram. The probe displays the total power density in blue. 
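The spectrogram can be switched on in the same way as the spectrum figure; again, this is a hedged sketch in which the submodule path is an assumption and only the parameters mentioned above are used.

.. code-block:: ini

   # Hedged sketch; the visualizer submodule path is an assumption.
   *.visualizer.*.mediumVisualizer.displaySpectrograms = true
   *.visualizer.*.mediumVisualizer.spectrogramMode = "auto"   # or "total" / "signal"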
@@ -218,11 +218,11 @@ The playback speed is lowered a bit before the probe is dragged (the signal bloc upwards slower on the spectrogram). Note the change in saturation on the different sides of the spectrogram as the probe is dragged closer to each transmitting host. -The blue signal blocks on the spectrograms of ``host2`` and ``host4`` represents both -transmitted signals before being received. When ``host2`` and ``host4`` starts receiving -their respective signal, the blocks turns to a combination of green and red. +The blue signal blocks on the spectrograms of ``host2`` and ``host4`` represent both +transmitted signals before being received. When ``host2`` and ``host4`` start receiving +their respective signal, the blocks turn to a combination of green and red. At ``host2``, the right side of the signal block is green, the left is red; at ``host4``, -it's the other way around, because the two hosts are receiving different signals. +it's the other way around because the two hosts are receiving different signals. When the transmissions are over, the signal blocks change from green/red to blue at the transmitting hosts, as the signals are no longer transmitted. When the end of the signal @@ -230,7 +230,7 @@ reaches the receiving hosts, their blocks change to blue as well. It is apparent that the ACK is much shorter than the data frame. Also, the data frame is received at a much lower power than the ACK is transmitted. At ``host2`` and ``host4``, -the signal block representing the ACK has a more saturated color, and appears wider; +the signal block representing the ACK has a more saturated color and appears wider; in fact, the ACK has the same bandwidth as the data frame, but since the data frame's power is lower at the receiving node, its block fades into white at the sides. @@ -253,12 +253,12 @@ saturation represents the magnitude. This feature has two distinct visualization which are enabled and configured independently: - **Main power density map**: displays the total power density of the wireless medium within the network's boundaries (or in some limited but fixed boundary). -- **Per-node power density map**: displays the power density in the vicinity of network nodes; similarly to the other figures, the :par:`powerDensityMapMode` parameter specifies what to display (``total``, ``signal`` or the default ``auto``). +- **Per-node power density map**: displays the power density in the vicinity of network nodes; similarly to the other figures, the :par:`powerDensityMapMode` parameter specifies what to display (``total``, ``signal``, or the default ``auto``). The power density map displays the power density at a specific frequency, which is the center of the frequency axis by default. The frequency can also be specified manually be the :par:`powerDensityMapCenterFrequency` parameter. This parameter (and some others) -can be changed from Qtenv and they take effect immediately (whether the simulation is running or stopped). +can be changed from Qtenv, and they take effect immediately (whether the simulation is running or stopped). The power density map uses the same color coding as the other figures (green/red and blue). As with the spectrogram, the saturation of the heatmap's pixels depends on the scale of @@ -285,7 +285,7 @@ Mean is the slowest, but the most accurate. Note that the :par:`powerDensityMapP pertains both to the main and the per-node figures; ``mean`` by default. Similarly, the spectrogram figure has the :par:`SpectrogramPixelMode` parameter. -.. 
note:: The power density map feature is very CPU-intensive but the visualization can use multiple CPU cores. For multi-core support, INET needs to be compiled with OpenMP. +.. note:: The power density map feature is very CPU-intensive, but the visualization can use multiple CPU cores. For multi-core support, INET needs to be compiled with OpenMP. The ``PowerDensityMap`` configuration uses a similar network as the previous examples, but without a probe module. The network also contains physical objects which represent diff --git a/showcases/visualizer/canvas/transportconnection/doc/index.rst b/showcases/visualizer/canvas/transportconnection/doc/index.rst index cc5cf4cf247..3d85c6c03ab 100644 --- a/showcases/visualizer/canvas/transportconnection/doc/index.rst +++ b/showcases/visualizer/canvas/transportconnection/doc/index.rst @@ -3,8 +3,7 @@ Visualizing Transport Connections Goals ----- - -In a complex network with many applications and large number of nodes communicating, +In a complex network with many applications and a large number of nodes communicating, it can be challenging to keep track of all the active transport layer connections. Transport connection visualization makes it easier to identify the two endpoints of each connection by displaying a marker above the nodes. The markers appear in @@ -26,7 +25,7 @@ The icons will appear when the connection is established and disappear when it is closed. Naturally, there can be multiple connections open at a node, thus there can be multiple icons. Icons have the same color at both ends of the connection. In addition to colors, letter codes (A, B, -AA, ...) may also be displayed to help in identifying connections. Note +AA, etc.) may also be displayed to help in identifying connections. Note that this visualizer does not display the paths the packets take. If you are interested in that, take a look at :ned:`TransportRouteVisualizer`, covered in the `Visualizing Transport Path @@ -41,7 +40,7 @@ all connections are included. Filtering by hosts and port numbers can be achieved by setting the :par:`sourcePortFilter`, :par:`destinationPortFilter`, :par:`sourceNodeFilter`, and :par:`destinationNodeFilter` parameters. -The icon, colors and other visual properties can be configured by +The icon, colors, and other visual properties can be configured by setting the visualizer's parameters. Enabling the visualization of transport connections @@ -58,7 +57,7 @@ following network: The network contains two :ned:`StandardHost`'s connected to each other, each containing a TCP application. IP addresses and routing tables are -configured by a :ned:`Ipv4NetworkConfigurator` module. The visualizer +configured by an :ned:`Ipv4NetworkConfigurator` module. The visualizer module is a :ned:`TransportConnectionVisualizer`. The application in ``host1`` is configured to open a TCP connection to ``host2`` and send data to it. The visualization of transport connections is enabled with @@ -90,7 +89,7 @@ configuration from the ini file. It uses the following network: There are two :ned:`StandardHost`'s connected to a switch, which is connected via a router to the server, another :ned:`StandardHost`. IP -addresses and routing tables are configured by a +addresses and routing tables are configured by an :ned:`Ipv4NetworkConfigurator` module. The visualizer module is an :ned:`IntegratedVisualizer`. 
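Besides the port-based filtering shown below, the selection can also be narrowed by the endpoints themselves. A hedged sketch, in which the visualizer submodule path, the ``displayTransportConnections`` parameter name, and the node name are assumptions, while the filter parameters are the ones listed earlier:

.. code-block:: ini

   # Hedged sketch; the submodule path, the enabling parameter name, and the node name are assumptions.
   *.visualizer.*.transportConnectionVisualizer.displayTransportConnections = true
   *.visualizer.*.transportConnectionVisualizer.destinationNodeFilter = "server"   # only mark connections terminating at the server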
@@ -102,7 +101,7 @@ The hosts are configured to open TCP connections to the server: 22 The visualizer is instructed to only visualize connections with -destination port 80: +the destination port 80: .. code-block:: none diff --git a/showcases/visualizer/canvas/transportpathactivity/doc/index.rst b/showcases/visualizer/canvas/transportpathactivity/doc/index.rst index f007573beec..fa54efa8841 100644 --- a/showcases/visualizer/canvas/transportpathactivity/doc/index.rst +++ b/showcases/visualizer/canvas/transportpathactivity/doc/index.rst @@ -6,8 +6,8 @@ Goals INET offers a range of network traffic visualizers that operate at different levels of the network stack. In this showcase, we will -focus on :ned:`TransportRouteVisualizer` that provides graphical representation of -transport layer traffic between two endpoints by displaying polyline arrow along +focus on :ned:`TransportRouteVisualizer` that provides a graphical representation of +transport layer traffic between two endpoints by displaying a polyline arrow along the path that fades as the traffic ceases. This showcase contains two simulation models, each highlighting various aspects @@ -24,19 +24,19 @@ In INET, transport path activity can be visualized by including a :ned:`TransportRouteVisualizer` module in the simulation. Adding an :ned:`IntegratedVisualizer` is also an option because it also contains a :ned:`TransportRouteVisualizer`. Transport path activity visualization is -disabled by default; it can be enabled by setting the visualizer's +disabled by default, and it can be enabled by setting the visualizer's :par:`displayRoutes` parameter to true. :ned:`TransportRouteVisualizer` observes packets that pass through the -transport layer, i.e. carry data from/to higher layers. +transport layer, i.e., carry data from/to higher layers. The activity between two nodes is represented visually by a polyline arrow which points from the source node to the destination node. :ned:`TransportRouteVisualizer` follows packets throughout their path so -that the polyline goes through all nodes which are the part of the path +that the polyline goes through all nodes that are part of the path of packets. The arrow appears after the first packet has been received, then gradually fades out unless it is reinforced by further packets. -Color, fading time and other graphical properties can be changed with +Color, fading time, and other graphical properties can be changed with parameters of the visualizer. By default, all packets and nodes are considered for the visualization. @@ -59,10 +59,10 @@ The wired network contains two connected :ned:`StandardHost` type nodes: :width: 60% :align: center -The ``source`` node will be continuously sending UDP packets to the +The ``source`` node will continuously send UDP packets to the ``destination`` node by using a :ned:`UdpBasicApp` application. -In this simulation, ``pathVisualizer's`` type is +In this simulation, the ``pathVisualizer's`` type is :ned:`TransportRouteVisualizer`. It is enabled by setting the :par:`displayRoutes` parameter to true. @@ -156,7 +156,7 @@ what happens if :par:`packetFilter` is set. .. video:: media/Filtering_PacketFilter_v0615.m4v :width: 100% -You can see that although there are both video stream and +You can see that although there is both video stream and ``UDPBasicAppData`` traffic in the network, :ned:`TransportRouteVisualizer` displays only the latter, due to the presence of the :par:`packetFilter` parameter. 
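The filtering used in this example comes down to two settings. Here is a sketch, assuming the visualizer is the ``pathVisualizer`` module of the earlier wired example and that the filter matches on the packet name (the exact pattern below is an assumption):

.. code-block:: ini

   # Hedged sketch; the module name and the name pattern are assumptions.
   *.pathVisualizer.displayRoutes = true
   *.pathVisualizer.packetFilter = "UDPBasicAppData*"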
diff --git a/showcases/visualizer/osg/earth/doc/index.rst b/showcases/visualizer/osg/earth/doc/index.rst index a0422c4fb1e..c57e42a9de8 100644 --- a/showcases/visualizer/osg/earth/doc/index.rst +++ b/showcases/visualizer/osg/earth/doc/index.rst @@ -5,10 +5,10 @@ Goals ----- This showcase demonstrates adding a map in the simulation. Displaying a map in a -network simulation provides a real-world context and can help to improve the -visual appeal of the simulation. It also helps to place network nodes in a +network simulation provides a real-world context and can help improve the +visual appeal of the simulation. It also helps place network nodes in a geographic context and allows the addition of objects, such as buildings. The -map doesn't have any effect on the simulation, it only alters the visuals of the +map doesn't have any effect on the simulation; it only alters the visuals of the network. It contains three example configurations of increasing complexity, each @@ -51,19 +51,19 @@ By default, the type of the scene visualizer module in :ned:`IntegratedVisualizer` is :ned:`SceneOsgVisualizer`. Inserting the map requires the :ned:`SceneOsgEarthVisualizer` module, thus, the default OSG scene visualizer is replaced. The :ned:`SceneOsgEarthVisualizer` -provides the same functionality as :ned:`SceneOsgVisualizer`, and adds +provides the same functionality as :ned:`SceneOsgVisualizer` and adds support for the osgEarth map. To display the map, the visualizer requires a ``.earth`` file. This is an -XML file that specifies how the source data is turned into a map, and +XML file that specifies how the source data is turned into a map and how to fetch the necessary data from the Internet. In this configuration, we use the ``boston.earth`` file that contains `OpenStreetMap `__ configured as map data source. More ``.earth`` files can be found at -`osgearth.org `__, and there are also instructions +`osgearth.org `__ and there are also instructions there on how to create ``.earth`` files. -Locations on the map are identified with geographical coordinates, i.e. +Locations on the map are identified with geographical coordinates, i.e., longitude and latitude. In INET, locations of nodes and objects are represented internally by Cartesian coordinates relative to the simulation scene's origin, and the @@ -106,7 +106,7 @@ floor is not visible. Adding Physical Objects ----------------------- -The map doesn't affect simulations in any way, it just gives a real-world +The map doesn't affect simulations in any way; it just gives a real-world context to them. For network nodes to interact with their environment, physical objects have to be added. The example configuration for this section can be run by selecting the ``PhysicalObjects`` configuration diff --git a/showcases/visualizer/osg/environment/doc/index.rst b/showcases/visualizer/osg/environment/doc/index.rst index afff63fada4..5f3258ef9e6 100644 --- a/showcases/visualizer/osg/environment/doc/index.rst +++ b/showcases/visualizer/osg/environment/doc/index.rst @@ -5,11 +5,11 @@ Goals ----- The physical environment has a profound effect on the communication of wireless -devices. For example, physical objects like walls inside buildings constraint +devices. For example, physical objects like walls inside buildings constrain mobility. They also obstruct radio signals, often resulting in packet loss. It is difficult to make sense of the simulation without actually seeing where physical objects are. 
INET provides the visualization of these objects to -help understanding the simulation better. This feature is available both in +help understand the simulation better. This feature is available both in 2D and 3D visualization. This showcase demonstrates the visualization of physical objects through displaying @@ -33,7 +33,7 @@ properties are defined in the XML configuration of the The two-dimensional projection of physical objects is determined by the :ned:`SceneCanvasVisualizer` module. (This is because the projection is also needed by other visualizers, for example, :ned:`MobilityVisualizer`.) -The default view is top view (z-axis), but you can also configure side +The default view is the top view (z-axis), but you can also configure side view (x and y axes), or isometric or orthographic projection. The default view @@ -54,7 +54,7 @@ The isometric view ------------------ In this example configuration (``IsometricView`` in the ini file), the -view is set to isometric projection, by setting the +view is set to isometric projection by setting the :par:`viewAngle` parameter in :ned:`SceneVisualizer`: .. literalinclude:: ../omnetpp.ini diff --git a/showcases/wireless/aggregation/doc/index.rst b/showcases/wireless/aggregation/doc/index.rst index 4232879c3f1..997b83c08e8 100644 --- a/showcases/wireless/aggregation/doc/index.rst +++ b/showcases/wireless/aggregation/doc/index.rst @@ -31,7 +31,7 @@ spaces (and contention periods, if not in a TXOP) is also reduced. There are two kinds of frame aggregation in 802.11: - MAC Service Data Unit (MSDU) aggregation: the packets received by the MAC from the upper layer are MSDUs. Each packet gets an MSDU subframe header. Two or more subframes are bundled together and put in an 802.11 MAC frame (header + trailer). The resulting frame is an aggregate-MSDU (a-MSDU). The a-MSDUs are transmitted with a single PHY header by the radio. -- MAC Protocol Data Unit (MPDU) aggregation: MPDUs are frames passed from the MAC to the PHY layer. Each MPDU has a MAC header and trailer. Multiple MPDU-s are bundled together to create an aggregate MPDU (a-MPDU), which is transmitted with a PHY header by the radio. +- MAC Protocol Data Unit (MPDU) aggregation: MPDUs are frames passed from the MAC to the PHY layer. Each MPDU has a MAC header and trailer. Multiple MPDUs are bundled together to create an aggregate MPDU (a-MPDU), which is transmitted with a PHY header by the radio. .. figure:: media/dataunits3.png :width: 100% @@ -53,7 +53,7 @@ This way, the individual data packets sent in the a-MPDU can be acknowledged with a block acknowledgment frame, making MPDU aggregation more useful in a high error rate environment (the performance gain from using a-MSDUs might be negated by -the higher error rate in a such an environment). +the higher error rate in such an environment). Because a-MSDUs lack a global FCS, they can only be acknowledged with block ACKs. Here is an a-MSDU of three subframes displayed in qtenv's packet view: @@ -73,14 +73,14 @@ Aggregation can be controlled by the following parameters of :ned:`BasicMsduAggr - :par:`subframeNumThreshold`: Specifies the minimum number of frames needed to create an a-MSDU. By default, the number of frames is not checked. Thus, the MAC creates an a-MSDU from any number of packets present in the queue when the MAC decides to transmit. - :par:`aggregationLengthThreshold`: Specifies the minimum length for an aggregated payload needed to create an a-MSDU. By default, the length is not checked. 
-- :par:`maxAMsduSize`: Specifies the maximum size for an a-MSDU. By default, its 4065 bytes. +- :par:`maxAMsduSize`: Specifies the maximum size for an a-MSDU. By default, it's 4065 bytes. When the MAC creates an a-MSDU, the resulting a-MSDU complies with all three of these parameters (if it didn't comply with any of them, then no a-MSDU would be created, and the frames would be sent without aggregation). To summarize, by default, the MAC aggregates frames when using HCF -(``qosStation = true`` in the MAC). It aggregates any number of packets +(``qosStation=true`` in the MAC). It aggregates any number of packets present in the queue at the time it decides to transmit, making sure the aggregated frame's length doesn't exceed 4065 bytes. By default, the MAC doesn't aggregate when using DCF. @@ -91,7 +91,7 @@ The Model In the example simulation for this showcase, a host sends small UDP packets to another host via 802.11. We will run the simulation with and without aggregation (and also with and without the use of TXOP), and examine the number -of received packets, the application-level throughput and the end-to-end delay. +of received packets, the application-level throughput, and the end-to-end delay. The Network ~~~~~~~~~~~ @@ -128,7 +128,7 @@ Here are the 802.11 settings in the ``General`` configuration in :language: ini We set the mode to 802.11g ERP mode and 54 Mbps to increase the data transfer rate. -Note that the ``qosStation = true`` key makes the MAC use HCF, but a classifier +Note that the ``qosStation=true`` key makes the MAC use HCF, but a classifier is also needed in order for the MAC to create QoS frames (required for aggregation). There are three scenarios, defined in the following configurations, @@ -206,10 +206,10 @@ but the whole aggregate frame has just one MAC header and MAC trailer, and PHY h so overhead is smaller. Also, there is no contention between the subframes (the MAC still has to contend for channel access before transmitting the aggregate frame). There is also a SIFS period and an ACK after each packet transmission (though there -are less of these compared to the previous case, as ten packets are acknowledged with an ACK). +are fewer of these compared to the previous case, as ten packets are acknowledged with an ACK). In the ``VoicePriorityAggregation``, throughput increases by another 190% -(compared to the ``Aggregation`` case), to about 21Mbps. The MAC doesn't have +(compared to the ``Aggregation`` case), to about 21 Mbps. The MAC doesn't have to contend for channel access during the TXOP, which decreases overhead even more. The following image shows frame exchanges for the three configurations on a sequence chart, diff --git a/showcases/wireless/analogmodel/doc/index.rst b/showcases/wireless/analogmodel/doc/index.rst index c6db2916ba5..0743e53d8d7 100644 --- a/showcases/wireless/analogmodel/doc/index.rst +++ b/showcases/wireless/analogmodel/doc/index.rst @@ -23,40 +23,40 @@ About Analog Models ------------------- The analog signal representation is a model of the signal as a physical phenomenon. Several -modules take part in simulating the transmission, propagation and reception of signals, +modules take part in simulating the transmission, propagation, and reception of signals, according to the chosen analog signal representation model. 
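As a quick illustration of how the analog signal representation is selected in practice (a sketch only, assuming a network that declares its radio medium submodule with a parametric type, as the showcase networks typically do), matching radio and radio medium module types are chosen together:

.. code-block:: ini

   # the radio and the radio medium must use the same analog signal representation
   *.radioMedium.typename = "Ieee80211ScalarRadioMedium"
   *.host*.wlan[0].radio.typename = "Ieee80211ScalarRadio"
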
-The transmission, propagation and reception process is as follows: +The transmission, propagation, and reception process is as follows: - The transmitter module creates an analog description of the signal -- The analog model submodule of the radio medium module applies attenuation (potentially in a space, time and frequency dependent way). +- The analog model submodule of the radio medium module applies attenuation (potentially in a space, time, and frequency-dependent way). - The receiver module gets a physical representation of the signal and the calculated signal-to-noise-and-interference-ratio (SNIR) from the radio medium module. -There are distinct types of transmitter, receiver and radio medium modules, for each +There are distinct types of transmitters, receivers, and radio medium modules for each analog signal representation. INET contains the following analog model types, presented in the order of increasing complexity: -- **Unit disk**: Simple model featuring communication, interference and detection ranges as +- **Unit disk**: Simple model featuring communication, interference, and detection ranges as parameters. Suitable for simulations where the emphasis is not on the relative power of signals, e.g. testing routing protocols. - **Scalar**: Signal power is represented with a scalar value. Transmissions have a center - frequency and bandwidth, but modeled as flat signals in frequency and time. Numerically - calculated attenuation is simulated, and a SNIR value for reception is calculated and used - by error models to calculate reception probability. + frequency and bandwidth but are modeled as flat signals in frequency and time. Numerically + calculated attenuation is simulated, and an SNIR value for reception is calculated and used + by error models to calculate the reception probability. - **Dimensional**: Signal power density is represented as a 2-dimensional function of frequency - and time; arbitrary signal shapes can be specified. Simulates time and frequency dependent + and time; arbitrary signal shapes can be specified. Simulates time and frequency-dependent attenuation and SNIR. Suitable for simulating interference of signals with complex spectral - and temporal characteristics, and cross-technology interference + and temporal characteristics and cross-technology interference (see also :doc:`/showcases/wireless/coexistence/doc/index`). More complex models are more accurate but more computationally intensive. INET contains a version of radio and radio medium module for each type and technology, e.g. :ned:`Ieee80211UnitDiskRadio`/:ned:`UnitDiskRadioMedium`, :ned:`ApskScalarRadio`/:ned:`ScalarRadioMedium`, :ned:`Ieee802154NarrowbandDimensionalRadio`/ :ned:`Ieee802154NarrowbandDimensionalRadioMedium`, etc. -These models use the appropriate analog signal representation (i.e. in the receiver, the transmitter, +These models use the appropriate analog signal representation (i.e., in the receiver, the transmitter, and the radio medium) Unit Disk Model @@ -76,13 +76,13 @@ configurable with parameters: :align: center :width: 60% -In general, the signals using any analog model might carry protocol related meta-information, +In general, the signals using any analog model might carry protocol-related meta-information, configurable by parameters of the transmitter, such as bitrate, header length, modulation, -channel number, etc. The protocol related meta-information can be used by the simulation -model even with unit disk, e.g. 
a unit disk Wifi transmission might not be correctly receivable +channel number, etc. The protocol-related meta-information can be used by the simulation +model even with a unit disk, e.g. a unit disk Wifi transmission might not be correctly receivable because the transmission's modulation doesn't match the receiver's settings. -.. note:: The simulated level of detail, i.e. packet, bit, or symbol-level, is independent of the used analog model representation, so as the protocol related meta-infos on signals. +.. note:: The simulated level of detail, i.e., packet, bit, or symbol-level, is independent of the used analog model representation, so as the protocol-related meta-infos on signals. The unit disk receiver's :par:`ignoreInterference` parameter configures whether interfering signals ruin receptions (``false`` by default). @@ -93,7 +93,7 @@ and obstacles completely block signals. There is no probabilistic error modeling The unit disk analog model is suitable for wireless simulations in which the details of the physical layer is not important, such as testing routing protocols (no detailed physical layer knowledge required). -The unit disk model produces the physical phenomena relevant to routing protocols, i.e. varying connectivity; +The unit disk model produces the physical phenomena relevant to routing protocols, i.e., varying connectivity; nodes have a range, transmissions interfere, and not all packets get delivered and not directly. In this case, it is an adequate abstraction for physical layer behavior. @@ -101,7 +101,7 @@ The following modules use the unit disk analog model: - :ned:`UnitDiskRadioMedium`: the only radio medium using the unit disk analog model; to be used with all unit disk radio types - :ned:`GenericUnitDiskRadio`: generic radio using the unit disk analog model; contains :ned:`GenericTransmitter` and :ned:`GenericReceiver` -- :ned:`Ieee80211UnitDiskRadio`: unit disk version of Wifi; contains :ned:`Ieee80211Transmitter`, :ned:`Ieee80211Receiver` and :ned:`Ieee80211Mac` +- :ned:`Ieee80211UnitDiskRadio`: unit disk version of Wifi; contains :ned:`Ieee80211Transmitter`, :ned:`Ieee80211Receiver`, and :ned:`Ieee80211Mac` Example: Testing Routing Protocols ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -118,7 +118,7 @@ The source and the destination hosts are stationary, the other hosts move around The hosts use :ned:`Ieee80211UnitDiskRadio`, and the communication ranges are displayed as blue circles; the interference ranges are not displayed, but they are large enough so that all concurrent transmissions interfere. All hosts use the Ad hoc On-Demand Distance Vector Routing (AODV) protocol to maintain routes -as the topology changes, so that they are able to relay the ping messages between the source and the +as the topology changes so that they relay the ping messages between the source and the destination hosts. Here is the configuration in :download:`omnetpp.ini <../omnetpp.ini>`: @@ -129,14 +129,14 @@ Here is the configuration in :download:`omnetpp.ini <../omnetpp.ini>`: :language: ini Here is a video of the simulation running (successful ping message sends between the source and -destination hosts are indicated with colored arrows; routes to destination are indicated with black arrows): +destination hosts are indicated with colored arrows; routes to the destination are indicated with black arrows): .. video:: media/unitdisk2.mp4 :width: 80% The source and destination hosts are connected intermittently. 
If the intermediate nodes -move out of range before the routes can be built then there is no connectivity. This can -happen if the nodes move too fast, as route formation takes time due to the AODV protocol +move out of range before the routes can be built, then there is no connectivity. This can +happen if the nodes move too fast since route formation takes time due to the AODV protocol overhead. There is no communication outside of the communication range. Hosts contend for channel access, @@ -149,13 +149,13 @@ Scalar Model Overview ~~~~~~~~ -The scalar analog model represents signals with a scalar signal power, a center frequency and a bandwidth. -It also models attenuation, and calculates a signal-to-noise-interference ratio (SNIR) value at reception. -Error models calculate bit error rate and packet error rate of receptions from the SNIR, center frequency, +The scalar analog model represents signals with a scalar signal power, a center frequency, and a bandwidth. +It also models attenuation and calculates a signal-to-noise-interference ratio (SNIR) value at reception. +Error models calculate the bit error rate and packet error rate of receptions from the SNIR, center frequency, bandwidth, and modulation. In the scalar model, signals are represented with a boxcar shape in frequency and time. The model can -simulate interference when the interfering signals have the same center frequency and bandwidth, and +simulate interference when the interfering signals have the same center frequency and bandwidth and spectrally independent transmissions when the spectra don't overlap at all; partially overlapping spectra are not supported by this model (and result in an error). @@ -165,29 +165,29 @@ spectra are not supported by this model (and result in an error). INET contains scalar versions of wireless technologies, such as IEEE 802.11 and 802.15.4; it also contains the scalar version of :ned:`ApskRadio`, which is a generic radio featuring different modulations -such as BPSK, 16-QAM, and 64-QAM. Each of these technologies have a scalar radio module, and a +such as BPSK, 16-QAM, and 64-QAM. Each of these technologies has a scalar radio module and a corresponding scalar radio medium module (they have ``Scalar`` in their module names; the corresponding radio and radio medium modules should be used together). -The scalar model is more realistic than unit disk, but also more computationally intensive. +The scalar model is more realistic than the unit disk model but also more computationally intensive. It can't simulate partially overlapping spectra, only completely overlapping or not overlapping at all. It should be used when power level, attenuation, path and obstacle loss, SNIR, and realistic error modeling is needed. .. note:: In showcases and tutorials, the scalar model is the most commonly used, it's a kind of arbitrary default. When a less complex model is adequate in a showcase or tutorial, the unit disk model is used; when a more complex one is needed, the dimensional is chosen. -Example: SNIR and Packet Error Rate vs Distance +Example: SNIR and Packet Error Rate vs. Distance ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In the example simulation, an :ned:`AdhocHost` sends UDP packets to another. The source host is stationary, the destination host moves away from the source host. As the distance increases between them, -the SNIR decreases and packet error rate increases, so as the number of successfully received transmissions. 
+the SNIR decreases, and the packet error rate increases, as well as the number of successfully received transmissions. -.. **TODO** switch to bitlevel -> per vs snir adott technologiara jellemzo -> try +.. **TODO** switch to bitlevel -> per vs. SNIR adott technologiara jellemzo -> try apsklayeredscalarradio - ha nem bitlevel, akkor a result az error modellben levo fuggveny (per vs snir) + ha nem bitlevel, akkor a result az error modellben levo fuggveny (per vs. SNIR) ha bitlevel, akkor bonyolultabb Here is the configuration in :download:`omnetpp.ini <../omnetpp.ini>`: @@ -197,7 +197,7 @@ Here is the configuration in :download:`omnetpp.ini <../omnetpp.ini>`: :end-at: channelAccess :language: ini -The source host is configured to use the default 802.11g mode, and 54Mbps data rate. +The source host is configured to use the default 802.11g mode and 54Mbps data rate. Here is a video of the simulation (successful link-layer transmissions are indicated with arrows; incorrectly received packets are indicated with packet drop animations): @@ -207,13 +207,13 @@ incorrectly received packets are indicated with packet drop animations): .. normal run, playback speed 1, animation speed 0.1, run until 6.5s -.. **TODO** packet error rate vs snir gorbe (ez nem jott volna ki a unit diskbol) +.. **TODO** packet error rate vs. SNIR gorbe (ez nem jott volna ki a unit diskbol) -.. **TODO** not SNIR is displayed, and its doesnt increase +.. **TODO** not SNIR is displayed, and its doesn't increase -As the distance increases between the two hosts, SNIR decreases and the packet error rate increases, +As the distance increases between the two hosts, SNIR decreases, and the packet error rate increases, and packets are dropped. Note that the communication range of the source host is indicated with a -blue circle. Beyond the circle, transmissions cannot be received correctly, and signal strength falls +blue circle. Beyond the circle, transmissions cannot be received correctly, and the signal strength falls below the SNIR threshold of the receiver. As an optimization, the reception is not even attempted, thus there are no packet drop animations. @@ -226,27 +226,27 @@ Overview The dimensional analog model represents signal power as a 2-dimensional function of time and frequency. It can: - Model arbitrary signal shapes -- Simulate complex signal interactions, i.e. multiple arbitrary signal shapes can overlap to any degree +- Simulate complex signal interactions, i.e., multiple arbitrary signal shapes can overlap to any degree - Simulate interference of different wireless technologies (cross-technology interference) It is the most accurate analog signal representation, but it is computationally more expensive than the scalar model. In contrast to the unit disk and scalar models, the signal spectra of the dimensional analog model can also be -visualized with spectrum figures, spectrograms and power density maps (see :doc:`/showcases/visualizer/canvas/spectrum/doc/index`). +visualized with spectrum figures, spectrograms, and power density maps (see :doc:`/showcases/visualizer/canvas/spectrum/doc/index`). .. figure:: media/dimensional3d.png :align: center :width: 70% - An example for a signal with a complex spectrum but constant in time + An example of a signal with a complex spectrum but constant in time -.. note:: When the dimensional model is used in a way equivalent to the scalar model (i.e. boxcar signal shape in frequency and time), its performance is comparable to the scalar's. +.. 
note:: When the dimensional model is used in a way equivalent to the scalar model (i.e., boxcar signal shape in frequency and time), its performance is comparable to the scalar's. -The dimensional analog model uses an efficient generic purpose multi-dimensional mathematical +The dimensional analog model uses an efficient generic-purpose multi-dimensional mathematical function API. The analog model represents signal spectral power density [W/Hz] as a 2-dimensional function of time [s] and frequency [Hz]. -The API provides primitive functions (e.g. constant function), function compositions -(e.g. function addition), and allows creating new functions, either by implementing +The API provides primitive functions (e.g., constant function), function compositions +(e.g., function addition), and allows creating new functions, either by implementing the required C++ interface or by combining existing implementations. Here are some example functions provided by the above API: @@ -257,7 +257,7 @@ Primitive functions: - Multi-dimensional bilinear function, linear in 2 dimensions, constant in the others - Multi-dimensional unireciprocal function, reciprocal in 1 dimension, constant in the others - Boxcar function in 1D and 2D, being non-zero in a specific range and zero everywhere else -- Standard gauss function in 1D +- Standard Gauss function in 1D - Sawtooth function in 1D that allows creating chirp signals - Interpolated function with samples on a grid in 1D and 2D - Generic interpolated function with arbitrary samples and interpolations between them @@ -267,8 +267,8 @@ Function compositions: - Algebraic operations: addition, subtraction, multiplication, division - Limiting function domain, shifting function domain, modulating function domain - Combination of two 1D functions into a 2D function -- Approximation in selected dimension -- Integration in selected dimension +- Approximation in the selected dimension +- Integration in the selected dimension Interpolators (between two points): @@ -280,16 +280,16 @@ The dimensional transmitters in INET use the API to create transmissions. For ex - The 2D boxcar function is used to create a flat signal with a specific bandwidth and duration. - Transmission power is applied with the multiplication function. -- The domain shifting function is used to position the signal on the frequency spectrum, and to the appropriate point in time. -- Attenuation is also applied with the multiplication function (it's more complicated, as attenuation is space, time and frequency dependent, and takes obstacles into account). +- The domain shifting function is used to position the signal on the frequency spectrum and to set it to the appropriate point in time. +- Attenuation is also applied with the multiplication function (it's more complicated, as attenuation is space, time, and frequency-dependent and takes obstacles into account). - Interfering signals are summed with the addition function. -- SNIR is calculated by dividing the received signal with interfering signals. +- SNIR is calculated by dividing the received signal by interfering signals. -.. note:: The above function composition can get really complex. For example, the medium visualizer uses a 5 dimensional function to describe the transmission medium total power spectral density over space (the whole scene), time (the whole duration of the simulation), and frequency (the whole spectrum). 
Similarly, the API can be used to create a space, time, and frequency-dependent background noise module (not provided in INET currently). +.. note:: The above function composition can get really complex. For example, the medium visualizer uses a 5-dimensional function to describe the transmission medium total power spectral density over space (the whole scene), time (the whole duration of the simulation), and frequency (the whole spectrum). Similarly, the API can be used to create a space, time, and frequency-dependent background noise module (not provided in INET currently). -.. note:: The dimensional transmitters in INET select the most optimal representation for the signal, depending on the gains parameters (described later). For example, if the parameters describe a flat signal, they'll use a boxcar function (in 1D or 2D, whether the signal is flat in one or two dimensions). If the gains parameters describe a complex function, they'll use the generic interpolated function; the gains parameter string actually maps to the samples and the types of interplation between them. +.. note:: The dimensional transmitters in INET select the most optimal representation for the signal, depending on the gains parameters (described later). For example, if the parameters describe a flat signal, they'll use a boxcar function (in 1D or 2D, whether the signal is flat in one or two dimensions). If the gains parameters describe a complex function, they'll use the generic interpolated function; the gains parameter string actually maps to the samples and the types of interpolation between them. -INET contains dimensional versions of IEEE 802.11, narrowband and ultra-wideband 802.15.4, +INET contains dimensional versions of IEEE 802.11, narrowband, and ultra-wideband 802.15.4, and APSK radio (the 802.15.4 ultra-wideband version is only available in dimensional form). The signal shapes in frequency and time can be defined with the :par:`frequencyGains` @@ -320,10 +320,10 @@ The normalization parameters specify whether to normalize the gain parameters in before applying the transmission power. The parameter values: - ``""``: No normalization -- ``"integral"``: the area below the signal function equals to 1 -- ``"maximum"``: the maximum of the signal function equals to 1 +- ``"integral"``: the area below the signal function equals 1 +- ``"maximum"``: the maximum of the signal function equals 1 -Then the signal is multiplied by the transmission power. +Then, the signal is multiplied by the transmission power. Example: Interference of Signals with Complex Spectra ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -335,8 +335,8 @@ which overlaps with all transmissions. The :ned:`NoiseSource` module creates dimensional transmissions, which interfere with other signals. It contains a transmitter module (:ned:`NoiseTransmitter`), -an antenna module (:ned:`IsotropicAntenna` by default), mobility module (so that it -has a position, and can optinally move around): +an antenna module (:ned:`IsotropicAntenna` by default), a mobility module (so that it +has a position and can optionally move around): .. figure:: media/noisesource.png :width: 25% @@ -361,9 +361,9 @@ Here is the spectrum displayed on a spectrum figure: :align: center :width: 50% -.. note:: In current error models, bit error rate is based on the minimum or mean of the SNIR, as opposed to per symbol SNIR. +.. note:: In the current error models, the bit error rate is based on the minimum or mean of the SNIR, as opposed to per symbol SNIR. 
-Background noise is specified as power density, instead of power (specifying signal +Background noise is specified as power density, instead of power (specifying the signal strength with power is only relevant when the frequency of the signal is defined in a range). .. **TODO** network + legend (spectrum figure, spectrogram) diff --git a/showcases/wireless/blockack/doc/index.rst b/showcases/wireless/blockack/doc/index.rst index 64060e14510..505c111811e 100644 --- a/showcases/wireless/blockack/doc/index.rst +++ b/showcases/wireless/blockack/doc/index.rst @@ -34,10 +34,10 @@ the reverse direction. Block ack sessions are terminated by the originator sendi a `DELBA` frame. The originator sends multiple data frames, then sends a `block ack request` frame. -The recipient replies with a `block ack` frame, acknowledging the correctly received -frames from the previous block. The block ack request frame and block ack frame are +The recipient replies with a `block ack` frame, acknowledging the reception of the previous block's +frames. The block ack request frame and block ack frame are acked if *delayed block ack* is used, and not acked if *immediate block ack* is used. -The frames or fragments of frames from the previous block which are not acked are +The frames or fragments of frames from the previous block that were not acked are retransmitted in the next block by the originator. There are several kinds of block acks, such as `normal`, `compressed`, and `multi-tid`. @@ -64,14 +64,14 @@ They have the same set of parameters. The parameters of :ned:`OriginatorBlockAck (at ``mac.hcf.recipientBlockAckAgreementPolicy``) are the following: - :par:`aMsduSupported`: If ``true``, aMSDUs are block acked; otherwise they are normal acked (default). -- :par:`maximumAllowedBufferSize`: Sets the buffer size, i.e. how many MSDUs can be acked with a block ack, 64 by default. +- :par:`maximumAllowedBufferSize`: Sets the buffer size, i.e. how many MSDUs can be acked with a block ack, default is 64. `Ack policy modules` control how packets are acked. There are two module types, one for each direction (originator and recipient), but settings are configured in the originator module. Parameters of :ned:`OriginatorQosAckPolicy` (at ``mac.hcf.originatorAckPolicy``) are the following: -- :par:`blockAckReqThreshold`: The originator sends a block ack request after sending this many packets -- :par:`maxBlockAckPolicyFrameLength`: Only packets below this length are block acked (others are normal acked) +- :par:`blockAckReqThreshold`: The originator sends a block ack request after sending this many packets (default is 5). +- :par:`maxBlockAckPolicyFrameLength`: Only packets below this length are block acked (others are normal acked, default is 1000). Currently, only immediate block ack is implemented. Also, block ack sessions cannot be enabled in one direction and disabled in the other. @@ -92,19 +92,19 @@ UDP packets to ``host2``. :align: center We'll use three configurations to demonstrate various aspects of the block ack mechanism. -The configurations in :download:`omnetpp.ini <../omnetpp.ini>` are the following: +The configurations in :download:`omnetpp.ini <../omnetpp.ini>` contain the following setups: - ``NoFragmentation``: Packets are sent without fragmentation or aggregation. Block ack requests are sent after five packets (default). -- ``Fragmentation``: Packets are fragmented to 16 pieces, block ack requests are sent after 16 fragments. 
-- ``MixedTraffic``: Two traffic sources in ``host1`` generate packets below and above the :par:`maxBlockAckPolicyFrameLength` threshold. Packets with size below the threshold are block acked; those above aren't. +- ``Fragmentation``: Packets are fragmented into 16 pieces, and block ack requests are sent after 16 fragments. +- ``MixedTraffic``: Two traffic sources in ``host1`` generate packets below and above the :par:`maxBlockAckPolicyFrameLength` threshold. Packets with a size below the threshold are block acked; those above are not. .. TODO when implemented - ``OneWayBlockAck``: Both ``host1`` and ``host2`` send packets to the other. A one-way block ack session is established, i.e. only packets going in one direction are block acked. -Parameter settings common to the three configurations are defined in the ``General`` configuration +The parameters common to the three configurations are defined in the ``General`` configuration in :download:`omnetpp.ini <../omnetpp.ini>`. The rest of this section explains the most important settings. -``host1`` is configured to send 700B UDP packets to ``host2`` with about 10Mbps: +``host1`` is configured to send 700B UDP packets to ``host2`` at about 10Mbps: .. literalinclude:: ../omnetpp.ini :start-at: host1.numApps @@ -125,16 +125,16 @@ Block ack support is enabled in both hosts: :end-at: isBlockAckSupported :language: ini -The maximum block ack frame length is left on default, 1000B. +The maximum block ack frame length is left as default, 1000B. -Fragmentation and aggregation is disabled by raising the thresholds: +Fragmentation and aggregation are disabled by raising the thresholds: .. literalinclude:: ../omnetpp.ini :start-at: fragmentationThreshold :end-at: aggregationLengthThreshold :language: ini -The transmitter power is lowered to make the communication more noisy, in order to +Transmitter power is lowered to make the communication more noisy, in order to get some lost packets: .. literalinclude:: ../omnetpp.ini @@ -149,7 +149,7 @@ No Fragmentation In this simulation, packets are sent unfragmented, and ``host1`` sends a block ack request after sending five frames (the default). The configuration in :download:`omnetpp.ini <../omnetpp.ini>` (``NoFragmentation``) -is empty: +has no specific settings: .. literalinclude:: ../omnetpp.ini :start-at: Config NoFragmentation @@ -163,7 +163,7 @@ Qtenv's packet traffic view: :width: 100% :align: center -First, ``host1`` sends an UDP packet, then an ADDBA request frame. ``host2`` replies +First, ``host1`` sends a UDP packet, then an ADDBA request frame. ``host2`` replies with an ADDBA response frame (both frames are acked). This frame exchange establishes the block ack session, and ``host2`` doesn't normal ack the frames from this point forward but waits for a block ack request. @@ -176,8 +176,8 @@ Here are the ADDBA request and ADDBA response frames displayed in the packet tra It indicates that aggregate MSDUs (aMSDUs) are supported, and the buffer size in both hosts is 64. -After sending five UDP packets, ``host1`` sends a block ack request, ``host2`` replies -with a block ack. The block ack frame acks the five previous frames. Here is the block ack +After sending five UDP packets, ``host1`` sends a block ack request, and ``host2`` replies +with a block ack. The block ack frame acknowledges the five previous frames. Here is the block ack request frame displayed in Qtenv's packet inspector: .. figure:: media/blockackreq.png @@ -191,18 +191,18 @@ first packet to be acked. 
Here is the block ack response frame: :width: 80% :align: center -It contains the starting sequence number as well, and the bitmap which specifies +The block ack response frame also contains the starting sequence number and the bitmap, which specifies which packets were received correctly. Here is the bitmap: .. figure:: media/bitmap.png :width: 80% :align: center -The first five entries are used. It acks the five packets, starting from sequence number 1. +The first five entries are used. It acknowledges the five packets, starting from sequence number 1. -.. note:: When there is no fragmentation, only the first bit of the 16-bit entry is used to ack the frame. Here, the first three entries are all ones because the MAC already passed those packets to the higher layers, and has no information about the number of fragments, but still it indicates that the packets were successfully received. (This is an implementation detail.) +.. note:: When there is no fragmentation, only the first bit of the 16-bit entry is used to ack the frame. Here, the first three entries are all ones because the MAC has already passed those packets to the higher layers and has no information about the number of fragments, but it still indicates that the packets were successfully received. (This is an implementation detail.) -It is indicated in the block ack when some of the frames in a block were not received correctly. +The block ack also indicates when some of the frames in a block were not received correctly. The MAC retransmits those in the next block. Here is a retransmission in the packet traffic view: .. figure:: media/retransmission.png @@ -210,20 +210,20 @@ The MAC retransmits those in the next block. Here is a retransmission in the pac :align: center First, ``host1`` sends five packets, ``Data-16`` to ``Data-20``. Two of the frames, ``Data-17`` -and ``Data-18`` are not received correctly, indicated in the block ack's bitmap (starting +and ``Data-18``, were not received correctly, as indicated in the block ack's bitmap (starting sequence number is 16, corresponding to ``Data-16``): .. figure:: media/retxblockack2.png :width: 80% :align: center -The next block starts with ``host1`` retransmitting these two frames, then transmitting +The next block starts with ``host1`` retransmitting these two frames, and then transmitting three new ones. Fragmentation ~~~~~~~~~~~~~ -In this simulation, 1080B packets are fragmented to 16 pieces, each 100B. Block ack requests +In this simulation, 1080B packets are fragmented into 16 pieces, each 100B. Block ack requests are sent after 16 frames. It is defined in the ``Fragmentation`` configuration in :download:`omnetpp.ini <../omnetpp.ini>`: .. literalinclude:: ../omnetpp.ini @@ -238,14 +238,14 @@ Here is how the simulation starts displayed in the packet traffic view: :align: center After sending a fragment, the block ack session is negotiated. After that, ``host1`` transmits -the remaining fragments of ``Data-0``, which are normal acked. However, the fragments of the +the remaining fragments of ``Data-0``, which are normally acked. However, the fragments of the next packet, ``Data-1``, are block acked: .. figure:: media/frag_blockack_messageview_.png :width: 100% :align: center -In the next block, two fragments of ``Data-1`` are retransmitted, then several fragments of ``Data-2`` +In the next block, two fragments of ``Data-1`` are retransmitted, and then several fragments of ``Data-2`` are sent: .. 
figure:: media/frag_retx_messageview.png @@ -260,7 +260,7 @@ Here is the block ack bitmap for this block: The first entry of the bitmap is all ones, indicating that ``Data-1`` was completely received (all fragments), even though only two fragments of it were sent in this block. The second -entry has three zeros. The first zero indicates a lost fragment of ``Data-2``; the last two correspond to fragments that haven't been sent yet. +entry has three zeros. The first zero indicates a lost fragment of ``Data-2``, and the last two correspond to fragments that haven't been sent yet. Mixed Traffic ~~~~~~~~~~~~~ @@ -277,7 +277,7 @@ defined in the ``MixedTraffic`` configuration in :download:`omnetpp.ini <../omne The smaller and larger packets are created at the same rate, so ``host1`` sends them alternately. However, only the smaller ones are considered when sending block ack requests (sent after five -of the 700B packets). The larger packets are normal acked immediately: +of the 700B packets). The larger packets are normally acked immediately: .. figure:: media/mixed_messageview.png :width: 100% @@ -285,15 +285,15 @@ of the 700B packets). The larger packets are normal acked immediately: Here, the block starts with ``Data-6``. ``Data-6`` to ``Data-10``, and also some large packets, are sent. ``Large-8`` and ``Large-9``, for example, are sent twice because they weren't correctly received -for the first time (there was no ack). The 802.11 MAC sequence numbers are contiguous between -all the packets sent by the MAC, so the block ack bitmap contains (the already acked) large +the first time (there was no ack). The 802.11 MAC sequence numbers are contiguous between +all the packets sent by the MAC, so the block ack bitmap contains the (already acked) large packets as well: .. figure:: media/mixed_blockack.png :width: 100% :align: center -Also, the block ack is lost, so the not-yet-acked packets in the previous block (``Data-6`` +Furthermore, the block ack is lost, so the not-yet-acked packets in the previous block (``Data-6`` to ``Data-10``) are retransmitted. .. note:: Instead of retransmitting the data frames, the originator could send the block ack request again if it didn't receive a block ack. This is currently a limitation of the implementation. diff --git a/showcases/wireless/coexistence/doc/index.rst b/showcases/wireless/coexistence/doc/index.rst index b4b0f2a028e..9ce9ad3e3b8 100644 --- a/showcases/wireless/coexistence/doc/index.rst +++ b/showcases/wireless/coexistence/doc/index.rst @@ -370,7 +370,7 @@ if the signal power for one of the transmissions were significantly higher than the other, the lower power transmission might not be correctly receivable.) In the video, ``wifiHost1`` and ``wpanHost1`` transmit concurrently. The Wifi -transmission is correctly received and successfuly ACKED. Then, ``wifiHost2`` +transmission is correctly received and successfully ACKED. Then, ``wifiHost2`` senses the ongoing WPAN transmission, and defers from transmitting. The WPAN tranmission is correctly received by ``wpanHost2``. When the transmission is over, ``wifiHost1`` sends its next frame; since ACKs are not protected by CCA, diff --git a/showcases/wireless/crosstalk/doc/index.rst b/showcases/wireless/crosstalk/doc/index.rst index 083e0efdfaa..d79399ad91d 100644 --- a/showcases/wireless/crosstalk/doc/index.rst +++ b/showcases/wireless/crosstalk/doc/index.rst @@ -40,22 +40,22 @@ The model --------- The 2.4 GHz frequency range in 802.11g, for example, can use a limited -number of channels (13 in the EU.) 
The bandwidth of transmissions in +number of channels (13 in the EU). The bandwidth of transmissions in 802.11g is 20MHz, and channels are spaced 5MHz apart. Thus adjacent channels overlap, and transmissions on these channels can interfere. There can be a few independent channels, where there is no cross-channel -interference, e.g. Channels 1, 6, and 11, as illustrated below. +interference, e.g., Channels 1, 6, and 11, as illustrated below. .. figure:: media/channels.png :width: 100% :align: center -In INET, the scalar analog model represents signals with a scalar signal power, -and a constant center frequency and bandwidth. The scalar model can only handle -situations when the spectra of two concurrent signals are identical or don’t -overlap at all. When using the dimensional analog model, signal power can change -in both time and frequency; more realistic signal shapes can be specified. -This model is also able to calculate the interference of signals whose spectra +In INET, the scalar analog model represents signals with a scalar signal power, +and a constant center frequency and bandwidth. The scalar model can only handle +situations when the spectra of two concurrent signals are identical or don't +overlap at all. When using the dimensional analog model, signal power can change +in both time and frequency; more realistic signal shapes can be specified. +This model is also able to calculate the interference of signals whose spectra partially overlap. There are example simulations for the three cases outlined in the Goals @@ -100,9 +100,9 @@ of the ``General`` configuration, so it's empty: :language: ini Since the frequency and bandwidth of transmissions for all -hosts is exactly the same, inferring which transmissions interfere is +hosts are exactly the same, inferring which transmissions interfere is obvious (all of them). In this case, a scalar analog model is sufficient. -The following video shows the node-pairs communicating, the number of +The following video shows the node-pairs communicating; the number of sent/received packets is displayed above the nodes, as well as the state of the contention modules of the transmitting hosts. @@ -112,7 +112,7 @@ of the contention modules of the transmitting hosts. -At first the two source nodes, ``host1`` and ``host3``, start +At first, the two source nodes, ``host1`` and ``host3``, start transmitting at the same time. The transmissions collide, and neither destination host is able to receive any of them correctly. The collision avoidance mechanism takes effect, and ``host3`` wins channel access. @@ -122,7 +122,7 @@ Independent Frequency Bands ~~~~~~~~~~~~~~~~~~~~~~~~~~~ In this case, we are modeling host-pairs that are communicating on -different, non-overlapping Wifi channels (e.g. Channels 1 and 6.) Since +different, non-overlapping Wifi channels (e.g., Channels 1 and 6). Since the channels are independent, it is obvious that there won't be any interference. The scalar analog model is sufficient for this case. @@ -137,7 +137,7 @@ to use the non-overlapping Channels 1 and 6: :end-at: 6 :language: ini -.. note:: The channel numbers are set to 0 and 5 because in INET’s 802.11 model, the channels are numbered from 0, so that this setting corresponds to Wifi Channels 1 and 6. +.. note:: The channel numbers are set to 0 and 5 because in INET's 802.11 model, the channels are numbered from 0, so that this setting corresponds to Wifi Channels 1 and 6. 
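For illustration, such a channel assignment might look like the following sketch (the host names are indicative only and may not match the showcase's network):

.. code-block:: ini

   # INET numbers 802.11 channels from 0, so 0 and 5 correspond to Wifi Channels 1 and 6
   *.host1.wlan[0].radio.channelNumber = 0
   *.host2.wlan[0].radio.channelNumber = 0
   *.host3.wlan[0].radio.channelNumber = 5
   *.host4.wlan[0].radio.channelNumber = 5
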
The video below shows the hosts communicating: @@ -231,7 +231,7 @@ Even though they are on different channels, the transmissions interfere. In the beginning, ``host1`` and ``host3`` transmit simultaneously, and neither transmission can be successfully received. Due to the collision avoidance mechanism, one of the transmitting hosts -defer from transmitting, and the subsequent transmissions are successful. +defers from transmitting, and the subsequent transmissions are successful. Sources: :download:`omnetpp.ini <../omnetpp.ini>`, :download:`Crosstalk.ned <../Crosstalk.ned>` diff --git a/showcases/wireless/directionalantennas/doc/index.rst b/showcases/wireless/directionalantennas/doc/index.rst index a09d832a6be..b70c39d4aa2 100644 --- a/showcases/wireless/directionalantennas/doc/index.rst +++ b/showcases/wireless/directionalantennas/doc/index.rst @@ -18,7 +18,7 @@ and provide an example simulation that demonstrates the directionality of five different antenna models. Four of these models represent well-known antenna patterns, while the fifth model is a universal antenna that can be used to model any rotationally symmetrical antenna pattern. By the end of this showcase, you -will understand the different antenna models that are available in INET and how +will understand the different antenna models available in INET and how they can be used to simulate the directionality of antennas. | INET version: ``4.1`` @@ -69,7 +69,7 @@ Visualizing Antenna Directionality The :ned:`RadioVisualizer` module can visualize antenna directional characteristics, using its antenna lobe visualization feature. For example, the radiation patterns of -an isotropic and a directional antenna looks like the following: +an isotropic and a directional antenna look like the following: .. figure:: media/antennalobe.png :width: 100% @@ -102,7 +102,7 @@ the ``DirectionalAntennasShowcase`` network: :width: 60% :align: center -The network contains two :ned:`AdhocHost`\ s, named ``source`` and +The network contains two :ned:`AdhocHost` s, named ``source`` and ``destination``. There is also an :ned:`Ipv4NetworkConfigurator`, an :ned:`IntegratedVisualizer`, and an :ned:`Ieee80211ScalarRadioMedium` module. @@ -257,7 +257,7 @@ Here is the reception power vs. direction plot: Cosine Antenna ~~~~~~~~~~~~~~ -The :ned:`CosineAntenna` module models a hypotethical antenna with a cosine-based +The :ned:`CosineAntenna` module models a hypothetical antenna with a cosine-based radiation pattern. This antenna model is commonly used in the real world to approximate various directional antennas. The module has two parameters, :par:`maxGain` and :par:`beamWidth`. The configuration in :download:`omnetpp.ini <../omnetpp.ini>` is the diff --git a/showcases/wireless/errorrate/doc/index.rst b/showcases/wireless/errorrate/doc/index.rst index a761331ef25..e99f232724f 100644 --- a/showcases/wireless/errorrate/doc/index.rst +++ b/showcases/wireless/errorrate/doc/index.rst @@ -4,7 +4,7 @@ Packet Loss vs. Distance Using Various WiFi Bitrates Goals ----- -In this showcase, we perform a parameter study where we examine how packet error +In this showcase, we perform a parameter study where we examine how the packet error rate changes as a function of distance in an 802.11g wireless network. The packet error rate is measured at various WiFi bitrates, providing insights into the impact of different bitrates on the @@ -17,11 +17,11 @@ The Model --------- The network contains two hosts operating in 802.11g ad-hoc mode at 10 mW -transmission power. 
One of the hosts acts as traffic source, the other -as traffic sink. We will perform a parameter study with the distance and +transmission power. One of the hosts acts as a traffic source, the other +as a traffic sink. We will perform a parameter study with the distance and the bitrate as parameters. The distance will run between 10 and 550 meters, in 2-meter steps. The bitrate will take the ERP modes in -802.11g: 6, 9, 12, 18, 24, 36, 48 and 54 Mbps. This results in about +802.11g: 6, 9, 12, 18, 24, 36, 48, and 54 Mbps. This results in about 2100 simulation runs. To make the model more realistic, we will simulate multipath propagation @@ -32,16 +32,16 @@ path loss models in INET, including :ned:`FreeSpacePathLoss`, requires a ground model, which is configured in the ``physicalEnvironment`` module to be :ned:`FlatGround`. The heights of the hosts above the ground are set to 1.5 meters. We assume isotropic -background noise of -86 dBm (:ned:`IsotropicScalarBackgroundNoise`.) +background noise of -86 dBm (:ned:`IsotropicScalarBackgroundNoise`). -We will use :ned:`Ieee80211NistErrorModel` to compute bit errors. The +We will use the :ned:`Ieee80211NistErrorModel` to compute bit errors. The :ned:`Ieee80211NistErrorModel` is based on the Nist error rate model. In each simulation run, the source host will send a single UDP packet (56 bytes of UDP data, resulting in a 120-byte frame) to the destination -host as a probe. At packet reception, the error model will compute bit +host as a probe. At packet reception, the error model will compute the bit error rate (BER) from the signal-to-noise-plus-interference-ratio -(SNIR). Packet error rate (PER) will be computed from BER. The +(SNIR). The packet error rate (PER) will be computed from the BER. The simulation records SNIR, BER, and PER. Note that in this simulation model, the physical layer simulation is entirely deterministic, hence there is no need for Monte Carlo. @@ -62,18 +62,18 @@ Results SNIR, BER, and PER are recorded from the simulation runs. SNIR is measured at the receiver and depends on the power of the noise and the -power of the signal. The signal power decreases with distance, so SNIR +power of the signal. The signal power decreases with distance, so the SNIR does as well. SNIR is independent of modulations and coding schemes, so -it is the same for all bitrates. This can be seen on the following plot, +it is the same for all bitrates. This can be seen in the following plot, which displays SNIR against distance. .. figure:: media/SNIR_distance_v2.png :width: 100% -The next plot shows how packet error rate decreases with SNIR. Slower +The next plot shows how the packet error rate decreases with SNIR. Slower bitrates use simpler modulation like binary phase shift keying, which is more tolerant to noise than more complex modulations used by faster -bitrates, hence the difference on the graph between the different +bitrates. Hence, there is a difference in the graph between the different bitrates. The various modulations and coding rates of 802.11g ERP modes are the @@ -91,7 +91,7 @@ receiver. .. figure:: media/PER_SNIR_v3.png :width: 100% -The following plot shows the packet error rate vs distance. Again, +The following plot shows the packet error rate vs. distance. Again, slower bitrates show fewer packet errors as the distance increases because of the simpler modulation. @@ -101,12 +101,12 @@ because of the simpler modulation. We also compute the effective bitrate, which is the gross bitrate decreased by packet errors. 
It is computed with the following formula: -``effective bitrate = (1-PER) * nominal bitrate`` +``effective bitrate = (1 - PER) * nominal bitrate`` It is equal to the nominal data bitrate unless it is decreased because of bit errors as the distance increases. -Effective bitrate vs distance is shown on the next plot. Higher bitrates +Effective bitrate vs. distance is shown on the next plot. Higher bitrates are more sensitive to increases in distance, as the effective bitrate drops rapidly after a critical distance. This critical distance is farther for slower bitrates, and the decrease is not as rapid. @@ -114,7 +114,7 @@ farther for slower bitrates, and the decrease is not as rapid. .. figure:: media/throughput_distance3.png :width: 100% -802.11 ranges depend on many variables, e.g. transmission power, +802.11 ranges depend on many variables, e.g., transmission power, receiver sensitivity, antenna gains and directionality, and background noise levels. The above ranges correspond to arbitrary values for the variables. In reality, ranges can vary significantly. @@ -122,7 +122,7 @@ variables. In reality, ranges can vary significantly. Conclusion ---------- -Packet error rate increases quickly as the distance approaches the +The packet error rate increases quickly as the distance approaches the critical point. Slower bitrates are less sensitive to increasing distance because they use simpler modulation. Faster bitrate modes are advantageous in short distances because of the increased throughput, but diff --git a/showcases/wireless/handover/doc/index.rst b/showcases/wireless/handover/doc/index.rst index c698d78c2ea..efd008ba0f4 100644 --- a/showcases/wireless/handover/doc/index.rst +++ b/showcases/wireless/handover/doc/index.rst @@ -103,7 +103,7 @@ indicated by the dotted arrow, which only goes from the AP to the host. The host remains associated with AP1 as long as it is within communication range, even though it gets into the communication range of AP2 after a while (when it enters the area where the two APs' communication range -circles overlap.) As it leaves AP1's range, the host detects that it no +circles overlap). As it leaves AP1's range, the host detects that it no longer receives AP1's beacon frames. A text bubble appears at the host indicating that it has lost the beacon. The scanning process is restarted by the host's agent module. This is triggered when several diff --git a/showcases/wireless/hiddennode/doc/index.rst b/showcases/wireless/hiddennode/doc/index.rst index d19ea528e52..cd07def8e31 100644 --- a/showcases/wireless/hiddennode/doc/index.rst +++ b/showcases/wireless/hiddennode/doc/index.rst @@ -7,7 +7,7 @@ Goals The hidden node problem occurs when two nodes in a wireless network can communicate with a third node, but cannot communicate with each other directly due to obstacles or being out of range. This can lead to collisions at the third node -when both nodes transmit simultaneously. +when both nodes transmit simultaneously. The RTS/CTS mechanism is used to address this problem by allowing nodes to reserve the channel before transmitting. @@ -167,7 +167,7 @@ used, the number of packets received correctly at Host B is approximately the same regardless of the presence of the wall. 
The number of received packets at Host B (wall removed, RTS/CTS off): -**1966**\ The number of received packets at Host B (wall removed, +**1966**\ The number of received packets at Host B (wall removed, RTS/CTS on): **1987** Sources: :download:`omnetpp.ini <../omnetpp.ini>`, :download:`HiddenNodeShowcase.ned <../HiddenNodeShowcase.ned>` diff --git a/showcases/wireless/ieee802154/doc/index.rst b/showcases/wireless/ieee802154/doc/index.rst index 4e4adfcd918..a7c806762b5 100644 --- a/showcases/wireless/ieee802154/doc/index.rst +++ b/showcases/wireless/ieee802154/doc/index.rst @@ -23,8 +23,8 @@ networks, which can be used for creating wireless sensor networks (WSNs), Internet-of-things applications, etc. It defines multiple physical layer specifications (PHYs), based on different modulations, such as Direct Sequence Spread Spectrum (DSSS), Chirp Spread Spectrum -(CSS), Ultra-wideband (UWB). It defines CSMA-CA and ALOHA MAC-layer -protocols as well. +(CSS), and Ultra-wideband (UWB). It also defines CSMA-CA and ALOHA MAC-layer +protocols. The INET implementation ~~~~~~~~~~~~~~~~~~~~~~~ @@ -41,7 +41,7 @@ DSSS-OQPSK modulation and operates at 2.45 GHz. By default, signals are transmitted with a bandwidth of 2.8 MHz using 2.24 mW transmission power. The model uses the scalar analog model. -The :ned:`Ieee802154NarrowbandInterface` module contains a +The :ned:`Ieee802154NarrowbandInterface` module contains an :ned:`Ieee802154NarrowbandScalarRadio` and the corresponding :ned:`Ieee802154NarrowbandMac`. The radio medium module required by the radio is :ned:`Ieee802154NarrowbandScalarRadioMedium`. As per the name, the diff --git a/showcases/wireless/multiradio/doc/index.rst b/showcases/wireless/multiradio/doc/index.rst index 94bdfe26bdb..24ef38d2a89 100644 --- a/showcases/wireless/multiradio/doc/index.rst +++ b/showcases/wireless/multiradio/doc/index.rst @@ -68,7 +68,7 @@ The following video has been captured from the simulation. Note how ``host1`` is pinging ``host2`` through ``accessPoint``. The radio signals are visualized as disks, and successful transmissions between nodes' data link layers are visualized by arrows. The transmissions for the two different networks (both disks and arrows) are colored -differently (red for wlan2.4 and blue for wlan5.) +differently (red for wlan2.4 and blue for wlan5). .. video:: media/ping3.mp4 :width: 100% diff --git a/showcases/wireless/power/doc/index.rst b/showcases/wireless/power/doc/index.rst index 34c8b876ccc..8381dd97b6f 100644 --- a/showcases/wireless/power/doc/index.rst +++ b/showcases/wireless/power/doc/index.rst @@ -42,7 +42,7 @@ will look like when the simulation is run: Configuration and behavior ~~~~~~~~~~~~~~~~~~~~~~~~~~ -All hosts are configured ping ``host[0]`` every second. ``host[0]`` +All hosts are configured to ping ``host[0]`` every second. ``host[0]`` doesn't send ping requests, just replies to the requests that it receives. To reduce the probability of collisions, the ping application's start time is chosen randomly for each host as a value @@ -92,7 +92,7 @@ Radio modes and states In the :ned:`Ieee80211ScalarRadio` model used in this simulation (and in other radio models), there are different modes in which radios operate, -such as off, sleep, receiver, transmitter. The mode is set by the model +such as off, sleep, receiver, and transmitter. The mode is set by the model and does not depend on external effects. In addition to mode, radios have states, which depend on what they are doing in the given mode -- i.e. 
listening, receiving a transmission, or transmitting. The state depends @@ -107,7 +107,7 @@ Radios in the simulation are configured to contain a is based on power consumption values for various radio modes and states, and the time the radio spends in these states. For example, radios consume a small amount of power when they are idle in receive mode, i.e. -when they are listening for transmissions. They consume more when they +when they are listening for transmissions. They consume more power when they are receiving a transmission, and even more when they are transmitting. @@ -199,7 +199,7 @@ The following plot shows a ping request-ping reply exchange (with the associated ACKs) between hosts 0 and 3 on the sequence chart and the corresponding changes in energy levels of ``host[0]``. Note that ``host[0]`` consumes less energy receiving than transmitting. In the -intervals between the transmissions, the curve is increasing, because +intervals between the transmissions, the curve is increasing because the generator is charging ``host[0]``. This image shows that hosts indeed consume more power when transmitting than the generator generates. However, transmissions are very short and very rare, so one diff --git a/showcases/wireless/qos/doc/index.rst b/showcases/wireless/qos/doc/index.rst index 4ced3737663..d8720062a68 100644 --- a/showcases/wireless/qos/doc/index.rst +++ b/showcases/wireless/qos/doc/index.rst @@ -20,7 +20,7 @@ About 802.11 QoS ---------------- When QoS is enabled in 802.11, the MAC uses a technique called `enhanced distributed channel access` (EDCA) -to provide different treatment to various packets classes. EDCA is part of the hybrid coordination function (HCF). +to provide different treatment to various packet classes. EDCA is part of the hybrid coordination function (HCF). In EDCA, packets are classified into four access categories, each category having a different priority. The categories from lowest to highest priority are the following: @@ -237,7 +237,7 @@ The following table summarizes the average jitter for the different access categ As with the delay, the jitter is improved in the QoS case for video and voice, and worse for background and best effort. -Let us spend a minute on explaning an artifact in the QoS jitter plot, namely +Let us spend a minute on explaining an artifact in the QoS jitter plot, namely some data points forming horizontal lines. Here is a relevant part of the plot, zoomed in: .. figure:: media/jitter_qos_zoomed.png @@ -247,7 +247,7 @@ Some jitter data points of best effort and video form two horizontal lines. The best effort line is at -0.25 ms; the video line is at -1 ms. The reason for these data points is that sometimes, packets belonging to the same access category are sent consecutively. The packets are generated every 0.25 and 1 ms, -but it takes a few microseconds to transmit a frame, thus the consequtive packets +but it takes a few microseconds to transmit a frame, thus the consecutive packets arrive at the receiver at the same time (a few microseconds apart). This results in the -0.25 ms and -1 ms jitter. diff --git a/showcases/wireless/ratecontrol/doc/index.rst b/showcases/wireless/ratecontrol/doc/index.rst index e6a6c85aea8..5a3d93f4caf 100644 --- a/showcases/wireless/ratecontrol/doc/index.rst +++ b/showcases/wireless/ratecontrol/doc/index.rst @@ -33,10 +33,10 @@ About rate control The physical layers of IEEE 802.11 devices are capable of transmitting at several different rates. 
The different rates can use different channel access methods, like orthogonal frequency division multiplexing -(OFDM) or directs sequence spread spectrum (DSSS), and different +(OFDM) or direct sequence spread spectrum (DSSS), and different modulation schemes like binary phase shift keying (BPSK) or types of quadrature amplitude modulation (QAM). Each of these has different -tolerances of effects like fading, attenuation, and interference from +tolerances for effects like fading, attenuation, and interference from other radio sources. In varying channel conditions, using the fastest rate might not be optimal for performance. Rate control algorithms adapt the transmission rate dynamically to the changing channel conditions, so @@ -54,7 +54,7 @@ Rate control algorithms ~~~~~~~~~~~~~~~~~~~~~~~ Some rate control algorithms change the transmission rate according to -packet loss. When too many packets are lost (ie. the ACK for them +packet loss. When too many packets are lost (i.e. the ACK for them doesn't arrive), the transmission rate is lowered. When a number of packets are sent without loss, the rate is increased. @@ -144,7 +144,7 @@ Sources: :download:`omnetpp.ini <../omnetpp.ini>`, :download:`RateControlShowcas Conclusion ~~~~~~~~~~ -These results shows that rate control is effective in increasing the performance +These results show that rate control is effective in increasing the performance of the wireless network, as it increases throughput during varying channel conditions. Also, throughput is not zero in situations when it would be if rate control weren't used. diff --git a/showcases/wireless/sensornetwork/doc/index.rst b/showcases/wireless/sensornetwork/doc/index.rst index 631ee988926..17d9ee8e326 100644 --- a/showcases/wireless/sensornetwork/doc/index.rst +++ b/showcases/wireless/sensornetwork/doc/index.rst @@ -20,7 +20,7 @@ Part 1: Demonstrating the MAC protocols @@ -32,7 +32,7 @@ how the MAC manages when certain nodes can communicate on the channel: their time slot, thus eliminating contention. Examples of this kind of MAC protocols include LMAC, TRAMA, etc. - ``Carrier-sense multiple access (CSMA) based``: These protocols use - carrier sensing and backoffs to avoid collisions, similarly to IEEE + carrier sensing and backoffs to avoid collisions, similar to IEEE 802.11. Examples include B-MAC, SMAC, TMAC, X-MAC. This showcase demonstrates the WSN MAC protocols available in INET: @@ -42,46 +42,44 @@ briefly. B-MAC ~~~~~ -B-MAC (short for Berkeley MAC) is a widely used WSN MAC protocol; it is -part of TinyOS. It employs low-power listening (LPL) to minimize power +B-MAC (short for Berkeley MAC) is a widely used WSN MAC protocol; it +is part of TinyOS. It employs low-power listening (LPL) to minimize power consumption due to idle listening. Nodes have a sleep period, after which they wake up and sense the medium for preambles (clear channel assessment - CCA.) If none is detected, the nodes go back to sleep. If there is a preamble, the nodes stay awake and receive the data packet after the preamble. If a node wants to send a message, it first sends a -preamble for at least the sleep period in order for all nodes to detect +preamble for at least the sleep period to allow all nodes to detect it. After the preamble, it sends the data packet. There are optional acknowledgments as well. After the data packet (or data packet + ACK) exchange, the nodes go back to sleep. Note that the preamble doesn't contain addressing information. 
Since the recipient's address is contained in the data packet, all nodes receive the preamble and the -data packet in the sender's communication range (not just the intended -recipient of the data packet.) +data packet within the sender's communication range (not just the +intended recipient of the data packet.) X-MAC ~~~~~ X-MAC is a development on B-MAC and aims to improve on some of B-MAC's shortcomings. In B-MAC, the entire preamble is transmitted, regardless of -whether the destination node awoke at the beginning of the preamble or +whether the destination node woke up at the beginning of the preamble or the end. Furthermore, with B-MAC, all nodes receive both the preamble and the data packet. X-MAC employs a strobed preamble, i.e. sending the -same length preamble as B-MAC, but as shorter bursts, with pauses in -between. The pauses are long enough that the destination node can send -an acknowledgment if it is already awake. When the sender receives the -acknowledgment, it stops sending preambles and sends the data packet. -This mechanism can save time because potentially, the sender doesn't have to send -the whole length preamble. Also, the preamble contains the address of the -destination node. Nodes can wake up, receive the preamble, and go back +preamble in shorter bursts with pauses in between. The pauses are long enough +that the destination node can send an acknowledgment if it is already awake. +When the sender receives the acknowledgment, it stops sending preambles and +sends the data packet. This mechanism can save time because potentially, the sender +doesn't have to send the whole length of the preamble. Also, the preamble contains the +address of the destination node. Nodes can wake up, receive the preamble, and go back to sleep if the packet is not addressed to them. These features improve -B-MAC's power efficiency by decreasing nodes' time spent in idle -listening. +B-MAC's power efficiency by reducing the nodes' time spent in idle listening. LMAC ~~~~ -LMAC (short for lightweight MAC) is a TDMA-based MAC protocol. There are -data transfer timeframes, which are divided into time slots. The number +LMAC (short for lightweight MAC) is a TDMA-based MAC protocol. It uses +timeframes divided into time slots for data transmission. The number of time slots in a timeframe is configurable according to the number of nodes in the network. Each node has its own time slot, in which only that particular node can transmit. This feature saves power, as there are no @@ -91,9 +89,9 @@ the data, the length of the data unit, and information about which time slots are occupied. All nodes wake up at the beginning of each time slot. If there is no transmission, the time slot is assumed to be empty (not owned by any nodes), and the nodes go back to sleep. If there is a -transmission, after receiving the control message, nodes that are not -the recipient go back to sleep. The recipient node and the sender node -goes back to sleep after receiving/sending the transmission. Only one +transmission, upon receiving the control message, nodes that are not +the intended recipient go back to sleep. The recipient node and the sender node +go back to sleep after receiving/sending the transmission. Only one message can be sent in each time slot. In the first five timeframes, the network is set up and no data packets are sent. The network is set up by nodes claiming a time slot. 
They send a control message in the time slot @@ -109,42 +107,42 @@ The three MACs are implemented in INET as the :ned:`BMac`, :ned:`XMac`, and :ned:`LMac` modules. They have parameters to adapt the MAC protocol to the size of the network and the traffic intensity, such as slot time, clear channel assessment duration, bitrate, etc. The parameters have default -values, thus the MAC modules can be used without setting any of their +values, so the MAC modules can be used without setting any of their parameters. Check the NED files of the MAC modules (``BMac.ned``, ``XMac.ned``, and ``LMac.ned``) to see all parameters. The MACs don't have corresponding physical layer models. They can be used with existing generic radio models in INET, such as -:ned:`GenericRadio` or :ned:`ApskRadio`. We're using :ned:`ApskRadio` in this -showcase because it is more realistic than :ned:`GenericRadio`. +:ned:`GenericRadio` or :ned:`ApskRadio`. This showcase uses :ned:`ApskRadio` +because it is more realistic than :ned:`GenericRadio`. INET doesn't have WSN routing protocol models (such as Collection Tree -Protocol), so we're using Ipv4 and static routing. +Protocol), so IPv4 and static routing are used in this showcase. Configuration ~~~~~~~~~~~~~ -The showcase contains three example simulations, which demonstrate the +The showcase contains three example simulations that demonstrate the three MACs in a wireless sensor network. The scenario is that there are wireless sensor nodes in a refrigerated warehouse, monitoring the temperature at their location. They periodically transmit temperature data wirelessly to a gateway node, which forwards the data to a server via a wired connection. -Note that in WSN terminology, the gateway would be called sink. Ideally, +Note that in WSN terminology, the gateway would be called a sink. Ideally, there should be a specific application in the gateway node called -``sink``, which would receive the data from the WSN, and send it to the -server over IP. Thus the node would act as a gateway between the WSN and -the external IP network. In the example simulations, the gateway just +``sink``, which receives the data from the WSN and sends it to the +server over IP. Thus the node acts as a gateway between the WSN and +the external IP network. In the example simulations, the gateway only forwards the data packets over IP. -To run the example simulations, choose the :ned:`BMac`, :ned:`LMac` and +To run the example simulations, choose the :ned:`BMac`, :ned:`LMac`, and :ned:`XMac` configurations from :download:`omnetpp.ini <../omnetpp.ini>`. -Most of the configuration keys in the ini file are shared between the +Most of the configuration keys in the .ini file are shared between the three simulations (they are defined in the ``General`` configuration), except for the MAC protocol-specific settings. All three simulations will use the same network, :ned:`SensorNetworkShowcaseA`, defined in -:download:`SensorNetworkShowcase.ned <../SensorNetworkShowcase.ned>`: +:download:`SensorNetworkShowcase.ned <../SensorNetworkShowcase.ned>`. .. figure:: media/network.png :width: 100% @@ -157,13 +155,13 @@ and an :ned:`ScalarRadioMedium` module. The nodes are placed against the backdrop of a warehouse floorplan. The scene size is 60x30 meters. The warehouse is just a background image providing context. Obstacle loss is not modeled, so the background image doesn't affect -the simulation in any way. +the simulation. 
The wireless interface in the sensor nodes and the gateway is specified in :download:`omnetpp.ini <../omnetpp.ini>` to be the generic -:ned:`WirelessInterface` (instead of the Ieee802154 specific -:ned:`Ieee802154NarrowbandInterface`, which is the default wlan interface -in :ned:`SensorNode`). The radio type is set to :ned:`ApskScalarRadio`: +:ned:`WirelessInterface`(instead of the IEEE 802.15.4 specific +:ned:`Ieee802154NarrowbandInterface`, which is the default WLAN interface +in :ned:`SensorNode`). The radio type is set to :ned:`ApskScalarRadio`. .. literalinclude:: ../omnetpp.ini :language: ini @@ -172,18 +170,17 @@ in :ned:`SensorNode`). The radio type is set to :ned:`ApskScalarRadio`: Note that the wireless interface module's name is ``wlan`` in all host types that have a wireless interface. The term doesn't imply that it's -Wifi but stands for wireless LAN. +WiFi but stands for wireless LAN. We are using :ned:`ApskScalarRadio` here because it is a relatively simple, generic radio. It uses amplitude and phase-shift keying modulations (e.g. BPSK, QAM-16 or QAM-64, BPSK by default), without additional features such as forward error correction, interleaving or spreading. We set the bitrate in -:download:`omnetpp.ini <../omnetpp.ini>` to 19200 bps, to match the -default for the MAC bitrates (we'll use the default bitrate in the MACs, -which is 19200 bps for all three MAC types.) The :par:`preambleDuration` is -set to be very short for better compatibility with the MACs. We also set -some other parameters of the radio to arbitrary values: +:download:`omnetpp.ini <../omnetpp.ini>` to 19200 bps to match the +default MAC bitrates (which is 19200 bps for all three MAC types). +The :par:`preambleDuration` is set to be very short for better compatibility with the MACs. +We also set some other parameters of the radio to arbitrary values. .. literalinclude:: ../omnetpp.ini :language: ini @@ -193,19 +190,19 @@ some other parameters of the radio to arbitrary values: Routes are set up according to a star topology, with the gateway at the center. This is achieved by dumping the full configuration of :ned:`Ipv4NetworkConfigurator` (which was generated with the configurator's -default settings), and then modifying it. The modified configuration is +default settings) and then modifying it. The modified configuration is in the :download:`config.xml <../config.xml>` file. The following -image shows the routes: +image shows the routes. .. figure:: media/routes.png :width: 100% -Each sensor node will send an UDP packet with a 10-byte payload +Each sensor node will send a UDP packet with a 10-byte payload ("temperature data") every second to the server, with a random start time around 1s. The packets will have an 8-byte UDP header and a 20-byte -Ipv4 header, so they will be 38 bytes at the MAC level. The packets will +IPv4 header, so they will be 38 bytes at the MAC level. The packets will be routed via the gateway. Here are the application settings in -:download:`omnetpp.ini <../omnetpp.ini>`: +:download:`omnetpp.ini <../omnetpp.ini>`. .. literalinclude:: ../omnetpp.ini :language: ini @@ -217,9 +214,9 @@ individual MACs. For B-MAC, the wireless interface's :par:`macType` parameter is set to :ned:`BMac`. Also, the :par:`slotDuration` parameter is set to 0.025s (an -arbitrary value.) This parameter is essentially the nodes' sleep +arbitrary value). This parameter is essentially the nodes' sleep duration. 
Here is the configuration in -:download:`omnetpp.ini <../omnetpp.ini>`: +:download:`omnetpp.ini <../omnetpp.ini>`. .. literalinclude:: ../omnetpp.ini :language: ini @@ -235,9 +232,8 @@ the destination node. The design of X-MAC allows setting different sleep intervals for different nodes, as long as the sender node's sleep interval is greater than the receiver's. We set the slot duration of the gateway to a shorter value because it has to receive and relay -data from all sensors, thus it has more traffic. Here is the -configuration in :download:`omnetpp.ini <../omnetpp.ini>`: - +data from all sensors, so it has more traffic. Here is the +configuration in :download:`omnetpp.ini <../omnetpp.ini>`. .. literalinclude:: ../omnetpp.ini :language: ini @@ -246,15 +242,15 @@ configuration in :download:`omnetpp.ini <../omnetpp.ini>`: For LMAC, the wireless interface's :par:`macType` parameter is set to :ned:`LMac`. The :par:`numSlots` parameter is set to 8, as it is sufficient -(there are only five nodes in the wireless sensor network.) The +(there are only five nodes in the wireless sensor network). The :par:`reservedMobileSlots` parameter reserves some of the slots for mobile nodes; these slots are not chosen by any of the nodes during network -setup. The parameter's default value is 2, but it is set to 0. The +setup. The parameter's default value is 2, but here it is set to 0. The :par:`slotDuration` parameter's default value is 100ms, but we set it to 50ms to decrease the network setup time. The duration of a timeframe -will be 400ms (number of slots \* slot duration.) The network is set up -in the first five frames, i.e. in the first 2 seconds. Here is the -configuration in :download:`omnetpp.ini <../omnetpp.ini>`: +will be 400ms (number of slots \* slot duration). The network is set up +in the first five frames, i.e. the first 2 seconds. Here is the +configuration in :download:`omnetpp.ini <../omnetpp.ini>`. .. literalinclude:: ../omnetpp.ini :language: ini @@ -266,7 +262,7 @@ The next sections demonstrate the three simulations. B-MAC ~~~~~ -The following video shows sensor nodes sending data to the server: +The following video shows sensor nodes sending data to the server. .. video:: media/BMac2.mp4 :width: 100% @@ -275,17 +271,17 @@ The following video shows sensor nodes sending data to the server: :ned:`BMac` actually sends multiple shorter preambles instead of a long one, so that waking nodes can receive the one that starts after they -woke up. ``sensor3`` starts sending preambles, while the other nodes are +wake up. ``sensor3`` starts sending preambles while the other nodes are asleep. All of them wake up before the end of the preamble transmission. -When the nodes are awake, they receive the preamble, and receive the -data packet as well, at the physical layer (the mac discards it if it is -not for them.) Then the gateway sends it to the server. Note that all -nodes receive the preambles and the data packet as well. +When the nodes are awake, they receive the preamble and the +data packet at the physical layer (the MAC discards it if it is not +intended for them). Then the gateway sends it to the server. Note that all +nodes receive the preambles and the data packet. X-MAC ~~~~~ -In the following video, the sensors send data to the server: +In the following video, the sensors send data to the server. .. video:: media/XMac2.mp4 :width: 100% @@ -303,7 +299,7 @@ server. 
LMAC ~~~~ -In the following video, sensor nodes send data to the server: +In the following video, sensor nodes send data to the server. .. video:: media/LMac5.mp4 :width: 100% @@ -321,15 +317,15 @@ Part 2: Optimizing for packet loss and comparing power consumption In this section, we'll compare the three MAC protocols in terms of a few statistics, such as the number of UDP packets carried by the network, and power consumption. In order to compare the three protocols, we want -to find the parameter values for each MAC, which lead to the best +to find the parameter values for each MAC that lead to the best performance of the network in a particular scenario. We'll optimize for the number of packets received by the server, i.e. we want to minimize packet loss. -The scenario will be the same as in the :ned:`BMac`, :ned:`XMac` and :ned:`LMac` +The scenario will be the same as in the :ned:`BMac`, :ned:`XMac`, and :ned:`LMac` configurations (each sensor sending data every second to the server), except that it will use a similar, but more generic network layout -instead of the warehouse network: +instead of the warehouse network. .. figure:: media/statisticnetwork.png :width: 60% @@ -351,7 +347,7 @@ server. The parameter study configurations for the three MAC protocols will extend the ``StatisticBase`` config (as well as the ``General`` -configuration): +configuration). .. literalinclude:: ../omnetpp.ini :language: ini @@ -370,7 +366,7 @@ Optimizing B-MAC The goal is to optimize :ned:`BMac`'s :par:`slotTime` parameter for the number of packets received by the server. The configuration in :download:`omnetpp.ini <../omnetpp.ini>` for this is -``StatisticBMac``. Here is the configuration: +``StatisticBMac``. Here is the configuration. .. literalinclude:: ../omnetpp.ini :language: ini @@ -378,16 +374,16 @@ of packets received by the server. The configuration in :end-at: slotDuration In the study, :par:`slotDuration` will run from 10ms to 1s in 10ms -increments (the default of :par:`slotDuration` is 100ms.) The number of +increments (the default of :par:`slotDuration` is 100ms). The number of packets received by the server for each :par:`slotDuration` value is shown -on the following image (time in seconds): +on the following image (time in seconds). .. figure:: media/bmac.png :width: 100% The sensors send 25 packets each during the 25s, thus -100 packets total. It is apparent from the results that the network -cannot carry all traffic in this scenario. The results also outline a +100 packets in total. It is apparent from the results that the network +cannot carry all the traffic in this scenario. The results also outline a smooth curve. The best performing value for :par:`slotDuration` is 0.19s. @@ -398,7 +394,7 @@ Again, we optimize the :par:`slotTime` parameter for the number of packets received by the server. As in the :ned:`XMac` configuration, the ``slotTime`` for the gateway will be shorter than for the sensors. The configuration in :download:`omnetpp.ini <../omnetpp.ini>` for this is -``StatisticXMac``. Here is the configuration: +``StatisticXMac``. Here is the configuration. .. literalinclude:: ../omnetpp.ini :language: ini @@ -408,8 +404,8 @@ configuration in :download:`omnetpp.ini <../omnetpp.ini>` for this is The default of :par:`slotDuration` for :ned:`XMac` is 100ms. In the study, the gateway's :par:`slotDuration` will run from 10ms to 1s in 10ms increments, similarly to the parameter study for B-MAC. 
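A sweep like this is normally expressed with an ini iteration variable. The following is a rough sketch only; the configuration name and module paths are assumptions rather than excerpts from the showcase's files, and the sensor/gateway ratio is the one described next.

.. code-block:: ini

   # Hypothetical sketch of the X-MAC slot-duration study.
   # The gateway's slot duration is swept from 10 ms to 1 s in 10 ms steps;
   # each sensor uses 2.5 times the gateway's value.
   [Config StatisticXMacSketch]
   *.gateway.wlan[0].mac.slotDuration = ${gwSlot=0.01..1 step 0.01}s
   *.sensor*.wlan[0].mac.slotDuration = 2.5 * ${gwSlot} * 1s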
The :par:`slotDuration` for the -sensors will be 2.5 times that of the gateway (an arbitrary value.) Here -are the results (time in seconds): +sensors will be 2.5 times that of the gateway (an arbitrary value). Here +are the results (time in seconds). .. figure:: media/xmac.png :width: 100% @@ -423,7 +419,7 @@ Optimizing LMAC We'll optimize the :par:`slotDuration` parameter for the number of packets received by the server. The configuration for this study in :download:`omnetpp.ini <../omnetpp.ini>` is ``StatisticLMac``. -Here is the configuration: +Here is the configuration. .. literalinclude:: ../omnetpp.ini :language: ini @@ -432,15 +428,15 @@ Here is the configuration: We set :par:`reservedMobileSlots` to 0, and :par:`numSlots` to 8. The :par:`slotDuration` parameter will run from 10ms to 1s in 10ms steps. The -number of received packets are displayed on the following image (time in -seconds): +number of received packets is displayed on the following image (time in +seconds). .. figure:: media/lmac.png :width: 100% It is apparent from the results that the network can carry almost all the traffic in this scenario (as opposed to the :ned:`XMac` and :ned:`LMac` -results.) The best performing value for :par:`slotDuration` is 40ms. Note +results). The best performing value for :par:`slotDuration` is 40ms. Note that the lowest :par:`slotDuration` values up until 120ms yield approximately the same results (around 100 packets), with the 40ms value performing marginally better. Choosing the higher :par:`slotDuration` @@ -573,16 +569,16 @@ statistics: packets. As there are 100 packets, this value is also the successful packet reception in percent, and indirectly, packet loss. - ``Network total power consumption``: The sum of the power consumption - of the four sensors and the gateway (values in Joules.) + of the four sensors and the gateway (values in Joules). - ``Power consumption per packet``: Network total power consumption / Total number of packets received, thus power consumption per packet - in the entire network (values in Joules.) + in the entire network (values in Joules). .. - ``Packet loss``: Total number of packets received / total number of packets sent, thus how many packets from the 100 sent are lost. **TODO** not sure its needed Note that the values for the ``residualEnergyCapacity`` statistic are -negative, so it is inverted in the anf file. Here are the results: +negative, so they are inverted in the graph. Here are the results. .. figure:: media/packetsreceived.png :width: 100% diff --git a/showcases/wireless/throughput/doc/index.rst b/showcases/wireless/throughput/doc/index.rst index f68dbdd451b..b81ccbb6e42 100644 --- a/showcases/wireless/throughput/doc/index.rst +++ b/showcases/wireless/throughput/doc/index.rst @@ -26,7 +26,7 @@ Configuration ~~~~~~~~~~~~~ The network contains two :ned:`WirelessHost`'s, at a distance of 1 meter, -one of them acting as traffic source, the other one as traffic sink. The +one of them acting as a traffic source, the other one as a traffic sink. The source host sends a UDP stream to the destination host in ad-hoc mode. The simulation is run with a small packet size of 100 bytes, 1000 bytes, and the default maximum unfragmented packet size in 802.11, 2236 bytes. (The @@ -35,7 +35,7 @@ corresponds to 2236 bytes of application data.) The simulation will be run several times, with different bitrates. The UDP application in the source host is configured to saturate the channel at all bitrates and packet sizes. 
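The saturating source can be approximated with a :ned:`UdpBasicApp` whose send interval is much shorter than the time needed to serve one frame. The sketch below is hypothetical; the host names, port number, and parameter paths are assumptions and are not copied from the showcase's ini file.

.. code-block:: ini

   # Hypothetical sketch of the saturating UDP stream and the parameter study.
   [Config ThroughputSketch]
   *.sourceHost.numApps = 1
   *.sourceHost.app[0].typename = "UdpBasicApp"
   *.sourceHost.app[0].destAddresses = "destinationHost"
   *.sourceHost.app[0].destPort = 5000
   *.sourceHost.app[0].messageLength = ${payload=100B, 1000B, 2236B}
   # well below the per-frame service time at any of the bitrates -> saturation
   *.sourceHost.app[0].sendInterval = 100us

   *.destinationHost.numApps = 1
   *.destinationHost.app[0].typename = "UdpSink"
   *.destinationHost.app[0].localPort = 5000

   # the eight 802.11g OFDM bitrates
   *.*Host.wlan[0].bitrate = ${bitrate=6, 9, 12, 18, 24, 36, 48, 54}Mbps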
There will be no packets lost in the physical layer -because the hosts are close to each other, and background noise is +because the hosts are close to each other, and the background noise is configured to be very low. The parameter study iterates over the following 802.11g bitrates: 6, 9, @@ -58,7 +58,7 @@ Results --> Throughput measured in the simulation is compared to analytically -obtained values. The application level throughput can be calculated from +obtained values. The application-level throughput can be calculated from the nominal bitrate and the payload size, for example, using the excel sheet `here `__.) It takes the DIFS, data frame duration, SIFS, ACK duration, and backoff @@ -82,7 +82,7 @@ the simulation for all bitrates and both packet sizes: The two curves match almost exactly. The curves are not linear: throughput doesn't increase linearly with the bitrate, especially at higher bitrates. The curve for the 2268-byte packets is nearly -linear, while the curve for the 100-byte packets is not linear, because +linear, while the curve for the 100-byte packets is not linear because the 100-byte packets have relatively more overhead due to various protocol headers, such as UDP header and 802.11 MAC header. Also, faster bitrates have more overhead. For example, with 1000-byte packets, at 6 @@ -103,7 +103,7 @@ the frame exchange as bitrates increase. The following sequence chart illustrates the relative sizes of the preamble, physical header, and data part of a 54 Mbps frame exchange. -The preamble and the physical header has the same duration regardless of +The preamble and the physical header have the same duration regardless of the bitrate, further increasing overhead at higher bitrates. .. figure:: media/seqchart5.png diff --git a/showcases/wireless/txop/doc/index.rst b/showcases/wireless/txop/doc/index.rst index 82f872176ec..11866da2b85 100644 --- a/showcases/wireless/txop/doc/index.rst +++ b/showcases/wireless/txop/doc/index.rst @@ -16,17 +16,16 @@ About TXOP TXOP is available in QoS mode as part of EDCA (Enhanced Distributed Channel Access), and it is a limited time period of contention-free channel access available to the -channel-owning station. During such a period the station can send multiple frames +channel-owning station. During such a period, the station can send multiple frames that belong to a particular access category. -The benefit of TXOP is that it increases throughput and reduces delay of QoS data -frames via eliminating contention periods between transmissions. TXOP can be used +The benefit of TXOP is that it increases throughput and reduces the delay of QoS data +frames by eliminating contention periods between transmissions. TXOP can be used in combination with aggregation and block acknowledgement to further increase throughput. More precisely, access categories have different channel access parameters, such as AIFS (Arbitration Interframe Spacing), duration, contention window size, -and TXOP limit. In the default EDCA OFDM parameter -set in the 802.11 standard, these values are set so that higher priority packets are +and TXOP limit. In the default EDCA OFDM parameter set in the 802.11 standard, these values are set so that higher priority packets are favored (the MAC waits less before sending them, the contention window is smaller, and they can be sent in a TXOP). The default parameter set specifies a TXOP limit of approximately 3 ms for the video category, and 1.5 ms for the voice category. 
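As a back-of-the-envelope illustration of what the roughly 3 ms video TXOP allows, assume typical OFDM values of about 20 µs for the PHY preamble and header, a 16 µs SIFS, some 60 bytes of UDP/IP/MAC overhead, and a 25 µs ACK (these numbers are illustrative and not taken from the showcase). A single 1200-byte frame exchange at 24 Mbps then occupies the channel for about

.. math::

   t_{exch} \approx 20\,\mathrm{\mu s} + \frac{(1200 + 60) \cdot 8\ \mathrm{bit}}{24\ \mathrm{Mbit/s}} + 16\,\mathrm{\mu s} + 25\,\mathrm{\mu s} \approx 0.48\ \mathrm{ms},

so on the order of six such data frames and their ACKs fit into one video TXOP, and more fit at higher PHY rates, since the TXOP limit does not depend on the PHY rate.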
@@ -75,7 +74,7 @@ such as the transmission of QoS data frames, aggregate frames, RTS and CTS frame acks, and block acks. In the example simulation, one host is configured to send video priority UDP packets -to the other. The host sends 1200B, 3400B and 3500B packets. The RTS, aggregation +to the other. The host sends 1200B, 3400B, and 3500B packets. The RTS, aggregation, and block ack thresholds are configured appropriately so that we get the following frame exchanges for demonstration: @@ -160,7 +159,7 @@ sequence chart: The gap between the frames is a SIFS; the TXOP frame exchange is preceded and followed by a much longer contention period. -This frame exchange sequence was recorded with 24 Mbps PHY rate. When using higher rates, +This frame exchange sequence was recorded with a 24 Mbps PHY rate. When using higher rates, more frames could fit in a TXOP, as the TXOP duration is independent of the PHY rate. This frame sequence is just an example. Various combinations of frames can be sent during