Token Bucket based Policing

Goals

In this example, we demonstrate per-stream policing using chained token buckets, which allow specifying committed/excess information rates and burst sizes.

INET version: 4.4

The Model

The network contains three nodes: the client and the server are TsnDevice modules, and the switch is a TsnSwitch module. The links between them are 100 Mbps EthernetLink channels.

../../../../../_images/Network21.png

Four applications in the network create two independent data streams between the client and the server. The average data rates are ~40 Mbps and ~20 Mbps, but both vary over time because the packet production interval is modulated sinusoidally. (A quick sanity check of these rates follows the configuration below.)

# client applications
*.client.numApps = 2
*.client.app[*].typename = "UdpSourceApp"
*.client.app[0].display-name = "best effort"
*.client.app[1].display-name = "video"
*.client.app[*].io.destAddress = "server"
*.client.app[0].io.destPort = 1000
*.client.app[1].io.destPort = 1001

# best-effort stream ~40Mbps
*.client.app[0].source.packetLength = 1000B
*.client.app[0].source.productionInterval = 200us + replaceUnit(sin(dropUnit(simTime() * 10)), "ms") / 20

# video stream ~20Mbps
*.client.app[1].source.packetLength = 500B
*.client.app[1].source.productionInterval = 200us + replaceUnit(sin(dropUnit(simTime() * 20)), "ms") / 10

# server applications
*.server.numApps = 2
*.server.app[*].typename = "UdpSinkApp"
*.server.app[0].io.localPort = 1000
*.server.app[1].io.localPort = 1001
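As a quick sanity check, the nominal average data rates follow directly from the packet lengths and the 200 us mean production interval. The sinusoidal term averages to zero, although, since the rate is the reciprocal of the interval, the long-run averages are only approximately 40 and 20 Mbps. A minimal Python sketch of the arithmetic (not part of the showcase):

# Back-of-the-envelope check of the nominal average data rates.
# The sinusoidal term in the production interval averages to zero,
# so the mean interval is 200 us for both streams.

def average_rate_mbps(packet_length_bytes, mean_interval_s):
    """Application-level data rate, excluding protocol headers."""
    return packet_length_bytes * 8 / mean_interval_s / 1e6

print(average_rate_mbps(1000, 200e-6))  # best effort: 40.0 Mbps
print(average_rate_mbps(500, 200e-6))   # video: 20.0 Mbps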

The two streams belong to different traffic classes: best effort and video. The client's bridging layer identifies outgoing packets by their UDP destination port. The streams are encoded by the client and decoded by the switch using the PCP field of the IEEE 802.1Q header.

# enable outgoing streams
*.client.hasOutgoingStreams = true

# client stream identification
*.client.bridging.streamIdentifier.identifier.mapping = [{stream: "best effort", packetFilter: expr(udp.destPort == 1000)},
                                                         {stream: "video", packetFilter: expr(udp.destPort == 1001)}]

# client stream encoding
*.client.bridging.streamCoder.encoder.mapping = [{stream: "best effort", pcp: 0},
                                                 {stream: "video", pcp: 4}]

# disable forwarding IEEE 802.1Q C-Tag
*.switch.bridging.directionReverser.reverser.excludeEncapsulationProtocols = ["ieee8021qctag"]

# stream decoding
*.switch.bridging.streamCoder.decoder.mapping = [{pcp: 0, stream: "best effort"},
                                                 {pcp: 4, stream: "video"}]
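To make the identification/encoding/decoding chain concrete, here is a minimal Python sketch of the mapping logic. The dictionaries mirror the INI mappings above; the function names and the dict-based packet representation are made up for illustration and are not INET APIs.

# Sketch of the stream identification -> PCP encoding -> PCP decoding chain.
# The packet is modeled as a plain dict; names are illustrative only.

STREAM_BY_DEST_PORT = {1000: "best effort", 1001: "video"}  # client streamIdentifier
PCP_BY_STREAM = {"best effort": 0, "video": 4}              # client encoder
STREAM_BY_PCP = {v: k for k, v in PCP_BY_STREAM.items()}    # switch decoder

def client_encode(packet):
    """Client bridging layer: identify the stream by UDP destination port,
    then tag the packet with the corresponding IEEE 802.1Q PCP value."""
    stream = STREAM_BY_DEST_PORT[packet["udp_dest_port"]]
    packet["pcp"] = PCP_BY_STREAM[stream]
    return packet

def switch_decode(packet):
    """Switch bridging layer: recover the stream name from the PCP field."""
    return STREAM_BY_PCP[packet["pcp"]]

pkt = client_encode({"udp_dest_port": 1001})
assert switch_decode(pkt) == "video"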

Per-stream ingress filtering dispatches the two traffic classes to separate metering and filtering paths.

# enable ingress per-stream filtering
*.switch.hasIngressTrafficFiltering = true

# per-stream filtering
*.switch.bridging.streamFilter.ingress.numStreams = 2
*.switch.bridging.streamFilter.ingress.classifier.mapping = {"best effort": 0, "video": 1}
*.switch.bridging.streamFilter.ingress.meter[0].display-name = "best effort"
*.switch.bridging.streamFilter.ingress.meter[1].display-name = "video"

We use a single-rate two-color meter for each stream. This meter contains a single token bucket and has two parameters: committed information rate and committed burst size. The meter labels packets green or red, and the filter drops the red packets. (A sketch of the algorithm follows the configuration below.)

*.switch.bridging.streamFilter.ingress.meter[*].typename = "SingleRateTwoColorMeter"
*.switch.bridging.streamFilter.ingress.meter[0].committedInformationRate = 40Mbps
*.switch.bridging.streamFilter.ingress.meter[1].committedInformationRate = 20Mbps
*.switch.bridging.streamFilter.ingress.meter[0].committedBurstSize = 10kB
*.switch.bridging.streamFilter.ingress.meter[1].committedBurstSize = 5kB
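The following Python sketch illustrates the token bucket algorithm behind such a meter. It is a simplified model, not INET's SingleRateTwoColorMeter implementation: tokens accumulate at the committed information rate up to the committed burst size; a packet is labeled green if the bucket holds at least as many tokens as the packet's length, and red otherwise.

# Simplified single-rate two-color token bucket meter (illustration only,
# not the INET implementation).

class TokenBucketMeter:
    def __init__(self, cir_bps, cbs_bytes):
        self.cir = cir_bps        # committed information rate, bits/second
        self.cbs = cbs_bytes * 8  # committed burst size, in bits
        self.tokens = self.cbs    # the bucket starts full
        self.last_time = 0.0      # time of the last update, seconds

    def color(self, now, packet_bits):
        # Replenish tokens at CIR, capped at the bucket size (CBS).
        self.tokens = min(self.cbs, self.tokens + self.cir * (now - self.last_time))
        self.last_time = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return "green"  # conforming; the filter forwards the packet
        return "red"        # non-conforming; the filter drops the packet

# One meter per stream, selected by the ingress classifier mapping above.
meters = {
    "best effort": TokenBucketMeter(cir_bps=40e6, cbs_bytes=10_000),
    "video":       TokenBucketMeter(cir_bps=20e6, cbs_bytes=5_000),
}
print(meters["video"].color(now=0.001, packet_bits=500 * 8))  # -> green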

Results

The first diagram shows the data rate of the application-level outgoing traffic in the client. The data rate of both traffic classes varies over time due to the sinusoidally modulated packet production interval.

../../../../../_images/ClientApplicationTraffic1.png

The next diagram shows the operation of the per-stream filter for the best-effort traffic class. The incoming data rate equals the sum of the outgoing data rate and the dropped data rate.

../../../../../_images/BestEffortTrafficClass1.png

The next diagram shows the operation of the per-stream filter for the video traffic class. The incoming data rate equals the sum of the outgoing data rate and the dropped data rate.

../../../../../_images/VideoTrafficClass1.png

The next diagram shows the number of tokens in the token buckets of both meters. The filled areas indicate that the number of tokens changes rapidly as packets pass through the meters. A stream's data rate is at its maximum when the token level is near its minimum. Note that with the configured parameters, an empty bucket refills in CBS / CIR = 2 ms for both streams (80 kb / 40 Mbps and 40 kb / 20 Mbps).

../../../../../_images/TokenBuckets.png

The last diagram shows the data rate of the application-level incoming traffic in the server. This data rate is somewhat lower than the outgoing data rate of the corresponding per-stream filter because the two are measured at different protocol layers: the application-level measurement excludes protocol headers.

../../../../../_images/ServerApplicationTraffic1.png

Sources: omnetpp.ini

Discussion

Use this page in the GitHub issue tracker for commenting on this showcase.