Abstract
Music emotion recognition typically attempts to map audio features extracted from music to a mood representation using machine learning techniques. In addition to a good dataset, the key to a successful system is the choice of inputs and outputs. Often, the inputs are drawn from a set of audio features provided by a single software library, which may not be the most suitable combination. This paper describes how 47 different types of audio features were evaluated using a five-dimensional support vector regressor, trained and tested on production music, in order to find the combination that produces the best performance. The results show the minimum number of features that yields optimum performance, and which combinations are strongest for mood prediction.
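The evaluation described above can be sketched as an exhaustive search over feature-group combinations, scoring each candidate set with a regressor trained per mood dimension. The snippet below is a minimal, hypothetical illustration only: the group names and data are placeholders standing in for the 47 feature types, and a dependency-free ridge regressor stands in for the paper's support vector regressor.

```python
# Hypothetical sketch (not the paper's code): search feature-group combinations
# and score each with a simple per-dimension regressor. Ridge regression stands
# in for the paper's five-dimensional support vector regressor.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_tracks, n_dims = 120, 5
# Placeholder feature groups standing in for the 47 audio feature types.
groups = {name: rng.normal(size=(n_tracks, 4)) for name in ("mfcc", "chroma", "tempo")}
targets = rng.normal(size=(n_tracks, n_dims))  # five mood dimensions per track

def fit_predict(X_tr, y_tr, X_te, lam=1.0):
    # Closed-form ridge: w = (X^T X + lam*I)^-1 X^T y, one column per mood dimension.
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ y_tr)
    return X_te @ w

def score(names, split=80):
    # Concatenate the chosen feature groups and measure held-out error.
    X = np.hstack([groups[n] for n in names])
    pred = fit_predict(X[:split], targets[:split], X[split:])
    return -float(np.mean((targets[split:] - pred) ** 2))  # higher is better

combos = [c for r in range(1, len(groups) + 1)
          for c in itertools.combinations(groups, r)]
best = max(combos, key=score)  # combination with the lowest held-out error
```

With 47 feature types an exhaustive search is infeasible, so in practice a greedy or ranked selection over combination sizes would replace the full enumeration shown here.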
This document was originally presented at the 53rd International Conference: Semantic Audio. The full published version can be found at https://www.aes.org/e-lib/online/browse.cfm?elib=17110
White Paper copyright
© BBC. All rights reserved. Except as provided below, no part of a White Paper may be reproduced in any material form (including photocopying or storing it in any medium by electronic means) without the prior written permission of BBC Research except in accordance with the provisions of the (UK) Copyright, Designs and Patents Act 1988.
The BBC grants permission to individuals and organisations to make copies of any White Paper as a complete document (including the copyright notice) for their own internal use. No copies may be published, distributed or made available to third parties whether by paper, electronic or other means without the BBC's prior written permission.