Table Of Contents

Combining Time-Aware and Credit-Based Shaping

Goals

INET allows multiple traffic shapers to be used in the same traffic stream. This showcase demonstrates this option by showing a simple network where credit-based and time-aware shaping are combined.

Note

You might be interested in looking at another showcase, in which multiple traffic shapers are used in different traffic streams: Using Different Traffic Shapers for Different Traffic Classes.

INET version: 4.4

The Model

Time-aware shapers (TAS) and credit-based shapers (CBS) can be combined by inserting an Ieee8021qTimeAwareShaper module into an interface, and setting the queue type to Ieee8021qCreditBasedShaper. The number of credits in the CBS only changes when the corresponding gate of the TAS is open.
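The key interaction, "credits only change while the gate is open", can be illustrated with a toy model. This is plain Python, not INET code, and it deliberately omits the credit clamping rules of the IEEE 802.1Qav algorithm; all names and the step-based timing are illustrative assumptions:

```python
def credit_trace(idle_slope, send_slope, gate_open, busy, credit=0.0, dt=1.0):
    """Toy credit-based shaper: credit grows at idle_slope while waiting and
    drains at send_slope while transmitting, but is frozen whenever the
    time-aware gate is closed (one gate_open/busy flag per time step)."""
    trace = []
    for is_open, transmitting in zip(gate_open, busy):
        if is_open:  # credits only change while the TAS gate is open
            rate = send_slope if transmitting else idle_slope
            credit += rate * dt
        trace.append(credit)
    return trace

# Gate open for 4 steps, closed for 2, mirroring the 4 ms / 2 ms schedule
# used later in this showcase.
gate = [True] * 4 + [False] * 2
busy = [False, True, True, False, False, False]
print(credit_trace(idle_slope=2.0, send_slope=-3.0, gate_open=gate, busy=busy))
# -> [2.0, -1.0, -4.0, -2.0, -2.0, -2.0]
```

Note how the credit value stays flat during the last two (closed-gate) steps instead of recovering, which is exactly why the effective rate of the CBS is reduced by the gate's duty cycle.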

The Network

In this demonstration, similarly to the Credit-Based Shaping showcase, we employ an Ieee8021qTimeAwareShaper module with two traffic classes. Both traffic classes are shaped by a CBS and a TAS.

There are three network nodes in the network. The client and the server are TsnDevice modules, and the switch is a TsnSwitch module. The nodes are connected by 100 Mbps Ethernet links. The client generates two traffic streams and transmits them to the switch, where the streams undergo traffic shaping before being forwarded to the server. In the Results section, we plot the traffic in the switch before and after the shapers to see the effects of traffic shaping.

../../../../../_images/Network26.png

Traffic

We create two traffic streams with sinusoidally varying data rates (called best effort and video) in the client, with average data rates of 40 Mbps and 20 Mbps, respectively. The two streams terminate in two packet sinks in the server:

# client applications
*.client.numApps = 2
*.client.app[*].typename = "UdpSourceApp"
*.client.app[0].display-name = "best effort"
*.client.app[1].display-name = "video"
*.client.app[*].io.destAddress = "server"
*.client.app[0].io.destPort = 1000
*.client.app[1].io.destPort = 1001
*.client.app[*].source.packetLength = 1000B # + 54B overhead = 8B (UDP) + 20B (IP) + 14B (ETH MAC) + 4B (ETH FCS) + 8B (ETH PHY)
*.client.app[0].source.productionInterval = replaceUnit(1 / (sin(dropUnit(simTime()) * 3) + 4.5), "ms")
*.client.app[1].source.productionInterval = replaceUnit(1 / (sin(dropUnit(simTime() * 1)) + sin(dropUnit(simTime() * 8)) + 2), "ms")

# server applications
*.server.numApps = 2
*.server.app[*].typename = "UdpSinkApp"
*.server.app[0].display-name = "best effort"
*.server.app[1].display-name = "video"
*.server.app[0].io.localPort = 1000
*.server.app[1].io.localPort = 1001
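As a rough sanity check (plain Python, not part of the showcase sources), the configured production intervals reproduce the quoted average rates. Since the mean of the sine terms is zero, the average packet rate is just the constant term of each rate expression; the interframe gap is ignored here, so the figures come out slightly below the quoted ~40 and ~20 Mbps:

```python
# Per-frame size: 1000 B UDP payload plus protocol overhead
# (UDP + IP + MAC + FCS + PHY preamble = 54 B, as noted in the ini comment).
frame_bits = (1000 + 8 + 20 + 14 + 4 + 8) * 8

# productionInterval = 1 / rate(t) with rate in packets per ms,
# so the mean rate is the constant term of each expression.
best_effort_pps = 4.5 * 1000   # mean of sin(3t) + 4.5, per second
video_pps = 2.0 * 1000         # mean of sin(t) + sin(8t) + 2, per second

best_effort_mbps = best_effort_pps * frame_bits / 1e6
video_mbps = video_pps * frame_bits / 1e6
print(round(best_effort_mbps, 1), round(video_mbps, 1))  # -> 37.9 16.9
```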

Stream Identification and Encoding

The two streams have two different traffic classes: best effort and video. The bridging layer in the client identifies the outgoing packets by their UDP destination port. The client encodes and the switch decodes the streams using the IEEE 802.1Q PCP field.

# enable outgoing streams
*.client.hasOutgoingStreams = true

# enable incoming streams
*.server.hasIncomingStreams = true

# client stream identification
*.client.bridging.streamIdentifier.identifier.mapping = [{stream: "best effort", packetFilter: expr(udp.destPort == 1000)},
                                                         {stream: "video", packetFilter: expr(udp.destPort == 1001)}]

# client stream encoding
*.client.bridging.streamCoder.encoder.mapping = [{stream: "best effort", pcp: 0},
                                                 {stream: "video", pcp: 4}]

# switch stream decoding
*.switch.bridging.streamCoder.decoder.mapping = [{pcp: 0, stream: "best effort"},
                                                 {pcp: 4, stream: "video"}]

Traffic Shaping

The traffic shaping takes place in the outgoing network interface of the switch, through which both streams pass. We configure the CBS to limit the data rate of the best effort stream to ~40 Mbps and the video stream to ~20 Mbps. In the time-aware shaper, we configure the gates with a 6 ms cycle in which the best effort gate is open for 4 ms and the video gate for 2 ms.

Here is the egress traffic shaping configuration:

# enable egress traffic shaping
*.switch.hasEgressTrafficShaping = true

# credit-based and asynchronous traffic shaping
*.switch.eth[*].macLayer.queue.numTrafficClasses = 2
*.switch.eth[*].macLayer.queue.*[0].display-name = "best effort"
*.switch.eth[*].macLayer.queue.*[1].display-name = "video"
*.switch.eth[*].macLayer.queue.transmissionSelectionAlgorithm[0].typename = "Ieee8021qCreditBasedShaper"
*.switch.eth[*].macLayer.queue.transmissionSelectionAlgorithm[0].idleSlope = 63.96Mbps		# gate open 2/3 of the cycle -> effective rate ~40 Mbps
*.switch.eth[*].macLayer.queue.transmissionSelectionAlgorithm[1].typename = "Ieee8021qCreditBasedShaper"
*.switch.eth[*].macLayer.queue.transmissionSelectionAlgorithm[1].idleSlope = 63.96Mbps		# gate open 1/3 of the cycle -> effective rate ~20 Mbps
*.switch.eth[*].macLayer.queue.transmissionGate[0].initiallyOpen = true
*.switch.eth[*].macLayer.queue.transmissionGate[1].initiallyOpen = false
*.switch.eth[*].macLayer.queue.transmissionGate[*].durations = [4ms, 2ms]

Note that the actual committed information rate of each CBS is only a fraction of the idle slope configured here, because the corresponding gate is only open for part of the cycle: 2/3 of the time for best effort and 1/3 for video, yielding effective rates of ~40 Mbps and ~20 Mbps.
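The effective rates can be verified with a quick calculation (plain Python, not part of the showcase sources):

```python
# Effective committed information rate = idle slope x gate-open fraction.
idle_slope_mbps = 63.96
cycle_ms = 4 + 2                                    # TAS gate schedule: [4 ms, 2 ms]

cir_best_effort = idle_slope_mbps * 4 / cycle_ms    # gate open 2/3 of the cycle
cir_video = idle_slope_mbps * 2 / cycle_ms          # gate open 1/3 of the cycle
print(round(cir_best_effort, 2), round(cir_video, 2))  # -> 42.64 21.32
```

Both values land slightly above the nominal 40 and 20 Mbps targets, leaving headroom over the average offered load of the two streams.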

Packets that are held up by the shapers are stored in the MAC layer subqueues of the corresponding traffic class.

Results

The following chart displays the incoming and outgoing data rate in the credit-based shapers:

../../../../../_images/shaper_both1.png

The data rate measurement produces a data point after every 100 packet transmissions, i.e. roughly 10 ms of transmission time. Spread over the 6 ms gating cycle (including the closed-gate periods), this corresponds to about 2.5 open-gate periods for best effort and about 5 for video. The fluctuation therefore depends on how many closed-gate (idle) periods fall into each measurement interval, which is why the measured data rate alternates between two distinct values.

The following sequence chart displays packet transmissions for both traffic classes (blue for best effort, orange for video). The time-aware shaper's gate schedules are clearly visible:

../../../../../_images/seqchart21.png

Sources: omnetpp.ini

Discussion

Use this page in the GitHub issue tracker for commenting on this showcase.