Beyond the technical challenge of building a functional system, a system designed to be used by humans must satisfy their needs and create a positive user experience. To be competitive under these conditions, it is essential for service providers to offer their customers an optimal Quality of Experience (QoE), defined as the degree of delight or annoyance of the user. This can be achieved through proper planning of network infrastructures and system configurations based on planning models, as well as through continuous and automated monitoring of the users' QoE to enable optimized resource allocation and quality control.
To build such models, subjective user studies are typically conducted in highly controlled environments, i.e., laboratory rooms. Here, Quality of Service (QoS) parameters such as network delay, packet loss, or application parameters are simulated and controlled. This allows testing the various conditions that might occur in real system usage. After being confronted with the service, which within the ACCORDION project covers the three use-case scenarios, participants in these studies express their experience using psychometrically validated and reliable rating tools such as questionnaires, or via physiological measurements. These tools target the concepts relevant for the specific service, e.g., the smoothness of a video or the responsiveness of a system.

The goal of a QoE model is to find a mapping between the simulated system parameters such as delay, also called quality factors, and the measured QoE aspects, also called quality features. Typically, statistical methods from the machine learning domain are used for this. For the ACCORDION project, in particular for use case #2, such a model has already been developed. It has a modular structure, since it is based on so-called impairment factors, which describe the degradation of the measured quality features as a function of the network parameters (input parameters). An implementation and demo of this model, realized as a Python script, can be seen in the QoE Model video. To evaluate whether a model performs well, which is also the target of the training process used to derive the model in the first place, the subjective ratings are compared with the predicted QoE scores, e.g., in terms of correlation and root-mean-square error (RMSE). For a fair performance evaluation, an independent dataset (the validation dataset) is used that was not part of the training process.
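To illustrate the modular, impairment-factor structure described above, the following is a minimal sketch in Python. The functional forms, coefficients, and parameter names are hypothetical placeholders, not the actual ACCORDION use case #2 model: each impairment function maps one network parameter to a degradation term, and the predicted score is the best possible score minus the sum of the impairments.

```python
# Illustrative sketch of an impairment-factor-based QoE model.
# All coefficients and functional forms are hypothetical examples,
# not the model developed in the ACCORDION project.
import math

def delay_impairment(delay_ms: float) -> float:
    """Hypothetical impairment growing logarithmically with network delay."""
    return 0.8 * math.log1p(delay_ms / 50.0)

def loss_impairment(loss_pct: float) -> float:
    """Hypothetical impairment growing linearly with packet loss."""
    return 0.5 * loss_pct

def predict_qoe(delay_ms: float, loss_pct: float) -> float:
    """Predicted QoE on a 1-5 mean opinion score (MOS) scale:
    start from the best score and subtract each impairment factor."""
    mos = 5.0 - delay_impairment(delay_ms) - loss_impairment(loss_pct)
    return max(1.0, min(5.0, mos))  # clamp to the valid MOS range

print(predict_qoe(0.0, 0.0))    # ideal network conditions
print(predict_qoe(200.0, 1.0))  # degraded network conditions
```

The modular structure pays off in practice: a new quality factor (e.g., jitter) can be supported by adding one further impairment function without retraining the rest of the model.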
In a recent paper and in the QoE Model video, we present some initial results, which show that the developed QoE model performs well.
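The performance evaluation described above can be sketched as follows. The ratings and predictions here are made-up example values, not results from the ACCORDION studies; the sketch only shows the two metrics mentioned in the text, Pearson correlation and RMSE, computed on a held-out validation set.

```python
# Sketch of the model-evaluation step: compare subjective ratings with
# predicted QoE scores on a validation set that was not used for training.
# The data values below are invented for illustration.
import math

subjective = [4.5, 3.8, 3.1, 2.4, 1.9]  # hypothetical mean opinion scores
predicted  = [4.4, 3.9, 3.0, 2.6, 1.7]  # hypothetical model predictions

def rmse(y_true, y_pred):
    """Root-mean-square error between ratings and predictions."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"RMSE: {rmse(subjective, predicted):.3f}")
print(f"Pearson r: {pearson(subjective, predicted):.3f}")
```

A low RMSE and a correlation close to 1 indicate that the model reproduces the subjective ratings well; computing both on an independent validation set guards against an overly optimistic estimate caused by overfitting to the training data.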