Exploring Crowdsourcing for Subjective Quality Assessment of 3D Graphics
Published in IEEE International Workshop on Multimedia Signal Processing (MMSP), Tampere, Finland, 2021
Multimedia subjective quality assessment experiments are the most prominent and reliable way to evaluate visual quality as perceived by human observers. Alongside laboratory (lab) subjective experiments, crowdsourcing (CS) experiments have become very popular in recent years; during the COVID-19 pandemic, for instance, they provided an alternative to lab tests. However, conducting subjective quality assessment tests in CS raises many challenges: variable internet connection quality, lack of control over participants’ environments, and participants’ consistency and reliability, among others. In this work, we evaluate the performance of CS studies for 3D graphics quality assessment. To this end, we conducted a CS experiment based on the double stimulus impairment scale (DSIS) method, using a dataset of 80 meshes with diffuse color information corrupted by various distortions. We compared its results with those previously obtained in a lab study conducted on the same dataset in a virtual reality environment. Results show that under controlled conditions and with appropriate participant screening strategies, a CS experiment can be as accurate as a lab experiment.
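To make the screening idea concrete, the sketch below illustrates one common approach used in crowdsourced rating studies: aggregating DSIS scores into mean opinion scores (MOS) and discarding participants whose ratings correlate poorly with the rest of the panel. This is an illustrative assumption, not the exact pipeline used in the paper; the function name, rating matrix layout, and correlation threshold are hypothetical.

```python
# Illustrative sketch (assumed approach, not the authors' exact screening method):
# aggregate DSIS ratings into MOS and reject participants whose scores correlate
# poorly with the panel average computed without them.
import numpy as np
from scipy.stats import pearsonr


def screen_and_aggregate(ratings: np.ndarray, corr_threshold: float = 0.75):
    """ratings: (n_participants, n_stimuli) matrix of DSIS scores (e.g., 1-5),
    with np.nan for stimuli a participant did not rate.
    Returns the MOS computed from retained participants and their indices."""
    kept = []
    for i, row in enumerate(ratings):
        mask = ~np.isnan(row)
        # Reference MOS over the stimuli this participant rated,
        # computed from all other participants to avoid self-correlation.
        others = np.delete(ratings, i, axis=0)
        reference = np.nanmean(others[:, mask], axis=0)
        r, _ = pearsonr(row[mask], reference)
        if r >= corr_threshold:
            kept.append(i)

    screened_mos = np.nanmean(ratings[kept], axis=0)
    return screened_mos, kept
```

In a setup like the one described above, the stimuli axis would correspond to the distorted versions of the 80 source meshes, and the screened MOS values would be the quantities compared against the lab study's scores (e.g., via correlation analysis).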