Issues in Measuring User Experience

Traditional methods of assessing network performance were inherited from the analysis of circuit-switched networks and are a poor fit for data networks such as 4G and 5G. Although these methods can detect certain types of failures, they cannot express how satisfied users are with the network.

The two main reasons why these methods cannot properly evaluate customer experience in a data network are:

  1. Data throttling: an intrinsic characteristic of data networks, a consequence of the protocols they use. A data network must limit user traffic to the total traffic it is capable of carrying; a user may want an application to run much faster, but the network has to restrict the amount of data transmitted during a given period because it cannot transport any additional traffic.
  2. Self-healing: a built-in capability of new-generation wireless networks (4G and 5G), made possible by the multiple modulation and coding schemes (MCS) available. Each scheme trades off Signal-to-Noise Ratio (SNR) requirements against spectral efficiency: the lower the required SNR, the more resources are consumed (for example, through extra bits for error correction). This balancing act allows the network to compensate for interference by sacrificing traffic capacity. It has a potentially problematic consequence, however: as resource usage increases (whether from actual traffic or from error correction), interference with surrounding cells also increases, lowering their SNR and creating a cascade effect, as the sketch after this list illustrates.
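
To make the trade-off concrete, here is a minimal Python sketch of adaptive MCS selection. The scheme names, SNR thresholds, and spectral-efficiency values are illustrative placeholders chosen for the example, not real 3GPP parameters.

```python
# Illustrative sketch of adaptive MCS selection. The SNR thresholds and
# spectral efficiencies below are hypothetical, chosen only to show the
# trade-off described above, not real 3GPP values.

from dataclasses import dataclass

@dataclass
class Mcs:
    name: str
    min_snr_db: float          # lowest SNR at which this scheme is usable
    bits_per_symbol: float     # spectral efficiency (higher = fewer resources)

# Ordered from most robust (low SNR, low efficiency) to least robust.
MCS_TABLE = [
    Mcs("QPSK 1/2",  min_snr_db=2.0,  bits_per_symbol=1.0),
    Mcs("16QAM 1/2", min_snr_db=8.0,  bits_per_symbol=2.0),
    Mcs("64QAM 3/4", min_snr_db=15.0, bits_per_symbol=4.5),
]

def select_mcs(snr_db: float) -> Mcs:
    """Pick the most efficient scheme the current SNR can support."""
    usable = [m for m in MCS_TABLE if snr_db >= m.min_snr_db]
    return max(usable, key=lambda m: m.bits_per_symbol) if usable else MCS_TABLE[0]

def resources_needed(payload_bits: float, mcs: Mcs) -> float:
    """Resource units consumed: lower efficiency means more resources,
    which in turn raises interference toward neighbouring cells."""
    return payload_bits / mcs.bits_per_symbol

# As SNR drops, the same payload costs more resources (the cascade risk):
for snr in (16.0, 9.0, 3.0):
    m = select_mcs(snr)
    print(f"SNR {snr:>4} dB -> {m.name:10s} -> {resources_needed(1e6, m):,.0f} resource units")
```

Note how the same one-megabit payload consumes 4.5 times more radio resources at low SNR than at high SNR; those extra resources are exactly what can raise interference in neighbouring cells.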

The following figure shows how these two characteristics affect the total traffic of a wireless data network. An IP data network limits user traffic to the available capacity, as illustrated by the red and yellow bars. Application and TCP/IP overhead are added to the user traffic, resulting in the actual data demand (throughput). In a wireless network, this final demand grows even further due to adaptive error correction and additional protocols (green bar).
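
As a rough illustration of that layering, the sketch below stacks overhead on top of the user's traffic. The percentages are hypothetical placeholders chosen for the example, not measured values.

```python
# A minimal arithmetic sketch of the layering in the figure. The overhead
# fractions are hypothetical placeholders, not measured values.

user_traffic_mbps = 10.0    # what the user actually wants to move
app_overhead      = 0.05    # application-layer framing (assumed 5%)
tcp_ip_overhead   = 0.04    # TCP/IP headers (assumed 4%)
radio_overhead    = 0.15    # error correction + air-interface protocols (assumed 15%)

ip_demand    = user_traffic_mbps * (1 + app_overhead + tcp_ip_overhead)
radio_demand = ip_demand * (1 + radio_overhead)

print(f"User traffic:        {user_traffic_mbps:.2f} Mbps")
print(f"Demand at IP layer:  {ip_demand:.2f} Mbps")
print(f"Demand over the air: {radio_demand:.2f} Mbps")
```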

Testing Network Speed (Speed Test)

Users today expect the internet on their cellphones to perform as well as it does at home or in the office; when it does not, user satisfaction with the network suffers. One of the biggest problems is that the traffic the user generates is not the only thing the network transmits: overhead from the application and from TCP/IP, as well as adaptive error correction and additional protocols from the air interface, is added to the initial user demand. The users themselves are not aware of this; all they see is how fast the network transmits the data they are trying to send or receive.

One of the worst-represented parameters today is internet performance (data transmission) on cellphones (wireless networks). When internet performance was first measured, engineers realized that the results varied with the size of the packet being transmitted. Each protocol had an ideal packet size, and since many different protocols are used in the networks, no consensus could be reached on a single packet size for the measurements.

It was then decided that network performance should be measured at the packet size that gave the best result. Finding this size required a progressive search until the point of maximum performance was located; the network with the highest value at that point was considered the fastest.
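
A minimal sketch of such a progressive search follows. The measure_throughput function is a hypothetical stand-in for a real probe against a test server; here it fakes a curve that peaks at some ideal packet size, as the text describes.

```python
# Hedged sketch of the "progressive search" described above: sweep packet
# sizes, measure throughput for each, and keep the best.

def measure_throughput(packet_size: int) -> float:
    """Placeholder: in a real test this would send packets of the given
    size to a server and return the measured rate in Mbps. Here we fake
    a curve that peaks at a hypothetical 'ideal' size."""
    ideal = 1400
    return max(0.0, 100.0 - 0.0001 * (packet_size - ideal) ** 2)

def find_best_packet_size(sizes: range) -> tuple[int, float]:
    best_size, best_rate = sizes[0], 0.0
    for size in sizes:
        rate = measure_throughput(size)
        if rate > best_rate:
            best_size, best_rate = size, rate
    return best_size, best_rate

size, rate = find_best_packet_size(range(200, 1600, 100))
print(f"Best result at {size}-byte packets: {rate:.1f} Mbps")
# The flaw the text points out: this reports the network's best case,
# not its performance for the packet sizes real applications use.
```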

This may have been valid many years ago, when the only activity on these networks was file transfer, which allowed the packet size to be pre-defined. This evaluation criterion, however, has persisted to this day and is commonly used in the speed tests performed by hundreds of applications. The problem with this type of measurement is that it is misleading: it does not represent network performance for the packet sizes actually in use.

In 2011, the IETF (Internet Engineering Task Force) released RFC 6349; this framework first determines the path's Maximum Transmission Unit (MTU), which is the largest quantity of data that can be transmitted in a TCP/IP packet without fragmentation.

The Round-Trip Time (RTT) is then measured for this MTU and, based on these two parameters, the maximum TCP throughput is obtained. The industry soon realized that this type of measurement also fails to represent network performance from the user's perspective because, among other reasons, user applications transmit packets much smaller than the MTU. In fact, the Internet industry had already defined, in 2004, the iMIX, which represents the average mix of packet sizes carried over the Internet and is used to test routers and other equipment.
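
The sketch below illustrates both ideas. It assumes the basic window-limited relation (throughput = window / RTT) behind RFC 6349 and the commonly cited "simple IMIX" weights; the 64 KiB window and the 40 bytes of IPv4+TCP headers are assumptions for the example.

```python
# A sketch of the two ideas above. The RFC 6349 relation used here is the
# basic window-limited bound (throughput = window / RTT); the iMIX weights
# are the commonly cited "simple IMIX" mix, shown only as an illustration.

def max_tcp_throughput_mbps(mtu_bytes: int, rtt_s: float) -> float:
    """Ideal TCP throughput for a given MTU and RTT.
    MSS = MTU minus 20 bytes IPv4 header and 20 bytes TCP header."""
    mss = mtu_bytes - 40
    window_bytes = 65535                       # one classic 64 KiB window, assumed
    window_bytes = (window_bytes // mss) * mss  # whole number of full-size segments
    return window_bytes * 8 / rtt_s / 1e6

print(f"{max_tcp_throughput_mbps(1500, 0.020):.1f} Mbps")  # 1500-byte MTU, 20 ms RTT

# Simple IMIX: average packet size weighted by how often each size appears.
imix = [(40, 7), (576, 4), (1500, 1)]          # (bytes, parts)
avg = sum(size * parts for size, parts in imix) / sum(parts for _, parts in imix)
print(f"Average IMIX packet: {avg:.0f} bytes")  # about 340 bytes
```

The iMIX average of roughly 340 bytes, far below the 1460-byte MSS of a 1500-byte MTU, is precisely why an MTU-sized measurement overstates what a typical application experiences.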
