

Questionnaires rating repartition
Questionnaires number CDF plot
QoE rating repartition among reported problems
QoE repartition among reported environments
QoE rating repartition in "Not well covered" environment
Questionnaires repartition among reported applications

Experimentation analysis

As stated earlier, involvement varied a lot between our users. Here, we plotted the number of questionnaires answered by each user, together with that user's average phone QoE. We also used per-user histograms to display the composition of their ratings. By ordering the average QoE series, we can highlight the interesting percentiles of the data:
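The per-user aggregation described above could be sketched as follows. This is not the original analysis code: the column names (`user`, `qoe`) and the sample data are illustrative assumptions.

```python
# Sketch of the per-user view: questionnaire counts and average phone QoE,
# ordered so that percentiles of the averages can be read off.
# Data and column names are made up for illustration.
import pandas as pd

answers = pd.DataFrame({
    "user": ["u1", "u1", "u2", "u2", "u2", "u3"],
    "qoe":  [5, 4, 3, 4, 4, 5],
})

# One row per user: number of questionnaires and average QoE rating
per_user = answers.groupby("user")["qoe"].agg(count="count", avg_qoe="mean")
per_user = per_user.sort_values("avg_qoe")  # order the average QoE series

# Percentiles of the per-user averages (e.g. the lower quartile)
quartiles = per_user["avg_qoe"].quantile([0.25, 0.5, 0.75])
print(per_user)
print(quartiles)
```

Sorting by the per-user average makes quartile positions visible directly on the plot, which is what the ordered series in the figure relies on.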

We also notice that only a quarter of the users have a low average rating, telling us that most of them did not encounter many problems during the experimentation. Regarding the user with an average of 5, he indicated in the first questionnaire that he usually has a good user experience with his phone, and he was a bit more critical in his application ratings (4.7 average). Finally, more than half of the users mostly gave a rating of 5 for their phone's global QoE, whereas for the other users the most frequent rating is 4, or 3 for one outlier.

Regarding the distribution of questionnaires among our users, the smallest number filled in by a single user is 33 and the largest is 255. In the end, this collection campaign was successful: 40 users filled in more than 100 questionnaires and half of our users filled in more than 120.
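An empirical CDF of the questionnaire counts, as in the plot above, could be computed along these lines. The per-user counts here are made up; only the 33–255 range comes from the text.

```python
# Sketch of the questionnaire-count CDF on illustrative per-user counts.
import numpy as np

counts = np.array([33, 80, 105, 120, 130, 255])  # illustrative counts per user
counts.sort()
cdf = np.arange(1, len(counts) + 1) / len(counts)  # empirical CDF values

# Fraction of users that filled in more than 100 questionnaires
frac_over_100 = (counts > 100).mean()
print(frac_over_100)
```

Plotting `counts` against `cdf` as a step function gives the CDF figure; thresholds such as "more than 100 questionnaires" are then horizontal cuts through it.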

Here we looked at the most reported problems and plotted the distribution of ratings for each of them. Surprisingly, for every problem, users gave the rating 5 at least once. Furthermore, since the most used ratings are 4 and 3, we can hypothesize that these problems did not impact the QoE that much. This kind of plot also allows us to identify which problems have the most impact. For example, "No connection" seems to be the worst problem, having a median of 2 and the largest number of 1 ratings.
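The per-problem comparison above boils down to a grouped median, which could be sketched as below. Problem labels and ratings are illustrative, not the experiment's data.

```python
# Sketch of the per-problem rating comparison: a low median flags a problem
# that hurts QoE the most. Data is made up for illustration.
import pandas as pd

reports = pd.DataFrame({
    "problem": ["No connection", "No connection", "No connection",
                "Slow loading", "Slow loading", "Slow loading"],
    "qoe":     [1, 2, 5, 3, 4, 5],
})

medians = reports.groupby("problem")["qoe"].median().sort_values()
print(medians)
```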

Here, we tried to see whether the environment had an influence on the MOS rating. The majority of the questionnaires were filled in at home and, probably because of the lockdown, few were filled in the metro or on the train.

The first thing that stands out is the maximum average rating: it belongs to the "Not well covered area" category, which is not what we expected. We therefore looked into the applications that were rated in this type of environment to deduce the actual usage of the phone. YouTube and Chrome, which are often very demanding data-wise, are the most reported apps, and since the majority of their ratings are 5, it seems that this category was misused, probably because its name is a bit vague and therefore open to subjective interpretation, unlike the other environments.

This category aside, home is, as expected, the environment with the best average QoE for our users. And, as we could have guessed, the metro and rural areas have the worst average QoE, even if the metro does not have many ratings.
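The per-environment ranking could be reproduced with a grouped mean, keeping the sample size alongside it since a category like the metro has few ratings. Environment labels and values are illustrative.

```python
# Sketch of the per-environment average QoE, with sample sizes kept in view.
# Data is made up for illustration.
import pandas as pd

env = pd.DataFrame({
    "environment": ["Home", "Home", "Metro", "Rural", "Rural"],
    "qoe":         [5, 4, 2, 3, 2],
})

avg = env.groupby("environment")["qoe"].mean().sort_values(ascending=False)
n = env.groupby("environment")["qoe"].size()  # how many ratings back each mean
print(avg)
print(n)
```

Reporting `n` next to each mean is what lets us discount the metro's average, which rests on few ratings.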

In a similar fashion to the environments, we plotted the number of ratings for the nine most rated applications, together with the average QoE rating for each application. Having been asked to do so, the users rated YouTube the most and granted it the best QoE rating among the most rated apps. We also notice that the users followed the instructions relatively well, mostly rating browsers and video or audio applications.
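The "nine most rated applications" view combines a count and a mean per app, which could be sketched as follows. App names, ratings, and the `top_n` cutoff are illustrative assumptions.

```python
# Sketch of the top-N applications by number of ratings, with their average
# QoE. Data and top_n are made up for illustration (the text uses the top 9).
import pandas as pd

apps = pd.DataFrame({
    "app": ["YouTube", "YouTube", "YouTube", "Chrome", "Chrome", "Spotify"],
    "qoe": [5, 5, 4, 4, 3, 5],
})

top_n = 2
stats = apps.groupby("app")["qoe"].agg(count="count", avg_qoe="mean")
top = stats.sort_values("count", ascending=False).head(top_n)
print(top)
```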