As the de facto industry standard, our benchmarking methodology focuses on customer-perceived network quality and covers a wide range of mobile services. It allows a technical analysis that is unprecedented in its level of detail.
Drivetests and Walktests
The network tests covered inner-city areas, outer metropolitan and suburban areas. Measurements are also taken in smaller towns and cities along connecting highways. The combination of test areas is selected to provide representative test results across the population. The test routes and all visited cities and towns are shown in every report. Our drive-test cars can be equipped with the latest smartphones for the simultaneous measurement of voice and data services in 4G and 5G.
One smartphone per operator in each car is used for the voice tests, setting up test calls from one car to another. The walktest team also carries one smartphone per operator for the voice tests. The audio quality of the transmitted speech samples is evaluated using the HD-voice-capable, ITU-standardized POLQA wideband algorithm. To account for typical smartphone-use scenarios during the voice tests, background data traffic is generated in a controlled way by injecting 100 KB of data traffic (HTTP downloads). Contact us for more information.
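The controlled background-traffic injection can be sketched as follows. This is a minimal illustration, not the actual test harness: it downloads a 100 KB payload in a background thread while a voice call would be in progress, using a local HTTP server as a stand-in for the real (unspecified) test server.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAYLOAD = b"x" * (100 * 1024)  # 100 KB of background traffic, as in the test setup

class PayloadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAYLOAD)))
        self.end_headers()
        self.wfile.write(PAYLOAD)

    def log_message(self, *args):  # silence per-request logging
        pass

def inject_background_traffic(url):
    """Fetch the payload once, in a background thread, so the download
    overlaps the ongoing voice call instead of blocking it."""
    result = {}
    def worker():
        with urllib.request.urlopen(url) as resp:
            result["bytes"] = len(resp.read())
    t = threading.Thread(target=worker)
    t.start()
    return t, result

# Local stand-in for the real test server (hypothetical endpoint path).
server = HTTPServer(("127.0.0.1", 0), PayloadHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/background"

thread, result = inject_background_traffic(url)
thread.join()
server.shutdown()
print(result["bytes"])  # 102400
```

In the real measurement setup, the injection runs against a dedicated test server rather than a loopback server, but the pattern of overlapping a fixed-size HTTP download with the call is the same.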
As a new KPI in 2019, we also evaluate so-called Multirab (Multi Radio Access Bearer) Connectivity. This value indicates whether data connectivity is available during phone calls.
Data performance is measured using one smartphone per operator in each car. For the web tests, the phones access web pages according to the widely recognized Alexa ranking. In addition, the static “Kepler” test web page as specified by ETSI (European Telecommunications Standards Institute) is used. To test data service performance, files of 5 MB (download) and 2.5 MB (upload) are transferred from or to a test server located in the cloud. In addition, peak data performance is tested in the uplink and downlink directions by assessing the amount of data transferred within a seven-second period. The evaluation of YouTube playback takes into account that YouTube dynamically adapts the video resolution to the available bandwidth. So, in addition to success ratios and start times, the measurements also determine the average video resolution. All tests are conducted with the best-performing mobile plan available from each operator.
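The seven-second peak assessment amounts to finding the largest amount of data moved in any seven-second span of the transfer. A minimal sketch, assuming the transfer is logged as timestamped chunk records (the record format is illustrative, not the actual tool's):

```python
def peak_throughput(samples, window=7.0):
    """samples: list of (timestamp_s, bytes_transferred) chunk records.
    Returns the highest average throughput (bytes/s) over any
    `window`-second span, via a sliding window over sorted samples."""
    samples = sorted(samples)
    best = 0.0
    total = 0
    j = 0  # left edge of the sliding window
    for i, (t_i, b_i) in enumerate(samples):
        total += b_i
        while samples[j][0] < t_i - window:  # drop chunks older than the window
            total -= samples[j][1]
            j += 1
        best = max(best, total / window)
    return best

# Seven 1 MB chunks, one per second: 7 MB over 7 s = 1 MB/s peak.
print(peak_throughput([(t, 1_000_000) for t in range(7)]))  # 1000000.0
```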
For the collection of crowd data, we have integrated a background diagnosis process into thousands of diverse Android apps. If one of these applications is installed on an end user’s phone, data collection takes place 24/7, 365 days a year (with the user’s approval). Reports are generated regularly and sent daily to our cloud servers. We focus on the user experience, ensuring that our software does not disrupt the user’s day-to-day phone usage. Our data collection is compliant with the GDPR, since we do not include any personal user data. Interested parties can deliberately take part in the data gathering with the dedicated “U get” app. This unique crowdsourcing technology allows us to collect data about the real-world experience wherever and whenever customers use their smartphones.
For the assessment of network coverage, we lay a grid of 2 by 2 kilometres over the whole test area. The “evaluation areas” generated this way are then subdivided into 16 smaller tiles. To ensure statistical relevance, we require a certain number of users and measurement values per operator for each tile and each evaluation area. In addition, we now distinguish urban and non-urban areas in our crowd evaluations, respecting that coverage with mobile services is usually higher in urban areas than in rural surroundings.
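Subdividing each 2 × 2 km evaluation area into 16 tiles yields a 4 × 4 grid of 500 m tiles. A minimal sketch of assigning a measurement to its area and tile, assuming projected coordinates in metres (the actual pipeline's coordinate handling is not specified here):

```python
AREA_SIZE_M = 2000     # 2 x 2 km evaluation area
TILES_PER_SIDE = 4     # 16 tiles per area => 4 x 4 grid
TILE_SIZE_M = AREA_SIZE_M // TILES_PER_SIDE  # 500 m tiles

def locate(x_m, y_m):
    """Map projected coordinates (metres) to (evaluation area, tile) indices."""
    area = (x_m // AREA_SIZE_M, y_m // AREA_SIZE_M)
    tile = (x_m % AREA_SIZE_M // TILE_SIZE_M,
            y_m % AREA_SIZE_M // TILE_SIZE_M)
    return area, tile

print(locate(4700, 1300))  # ((2, 0), (1, 2))
```

Requiring a minimum number of users and samples per tile and per area is then a simple count over these keys before a tile contributes to the coverage score.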
We investigate the data rates actually experienced by each user. For this purpose, we determine the maximum download and upload data rates per user within 15-minute slices. These values are then aggregated per evaluation area in 4-week time slices.
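The per-user maximum within 15-minute slices can be sketched as a bucketing step; the record format below is illustrative, not the actual schema:

```python
from collections import defaultdict

SLICE_S = 15 * 60  # 15-minute slices

def max_rate_per_slice(records):
    """records: iterable of (user_id, unix_ts, kbps) samples.
    Returns {(user_id, slice_index): max_kbps} — the best rate each
    user actually reached within each 15-minute slice."""
    best = defaultdict(int)
    for user, ts, kbps in records:
        key = (user, ts // SLICE_S)
        best[key] = max(best[key], kbps)
    return dict(best)

records = [("u1", 100, 5000), ("u1", 200, 12000), ("u1", 1000, 8000)]
print(max_rate_per_slice(records))  # {('u1', 0): 12000, ('u1', 1): 8000}
```

The 4-week aggregation per evaluation area is then a second grouping pass over these per-slice maxima, keyed by area and 4-week period instead of by user and slice.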
Data Service Availability
Also called “operational excellence”, this parameter indicates the number of “service degradations”: events where data connectivity is impacted by a number of identified anomalies with sufficient severity. To judge this, the algorithm compares similar timeframes on similar days in a window around the day and time of interest. It looks at large-scale anomalies on a network-wide level and ensures that individual users’ degradations, such as a simple loss of coverage due to an indoor stay or similar reasons, cannot affect the result.
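The comparison against similar timeframes can be illustrated as a baseline check. This is a simplified sketch under assumed details: the real algorithm's baseline construction, metric, and severity threshold are not disclosed here, so the median baseline and the 40 % drop threshold below are purely illustrative.

```python
from statistics import median

def is_degradation(current, history, drop_threshold=0.4):
    """Flag a candidate service degradation when the current success metric
    falls more than `drop_threshold` (fractionally) below the median of
    comparable windows — same weekday/time in the surrounding weeks.
    Both the median baseline and the threshold are illustrative choices."""
    baseline = median(history)
    return baseline > 0 and (baseline - current) / baseline > drop_threshold

# Current window vs. the same hour on comparable days in prior weeks:
print(is_degradation(0.50, [0.97, 0.96, 0.98, 0.95]))  # True
print(is_degradation(0.93, [0.97, 0.96, 0.98, 0.95]))  # False
```

Because the check is computed on network-wide aggregates rather than per user, a single user losing coverage indoors never moves the baseline comparison.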