COMPRESSED LEARNING FOR TEXT CATEGORIZATION

Artur Ferreira, Mario Figueiredo

Abstract


In text classification based on the bag-of-words (BoW) or similar representations, we usually have a large number of features, many of which are irrelevant (or even detrimental) to the classification task. Recent results show that compressed learning (CL), i.e., learning in a domain of reduced dimensionality obtained by random projections (RP), is possible, with theoretical bounds on the resulting test set error rate. In this work, we assess the performance of CL based on RP of BoW representations for text classification. Our experimental results show that CL significantly reduces the number of features and the training time, while simultaneously improving the classification accuracy. Rather than the mild decrease in accuracy allowed by the theoretical bounds, we actually observe an increase in accuracy. Our approach is further compared against two techniques: the unsupervised random subspaces method and the supervised Fisher index. Since it does not use class labels, the CL approach is also suited, without any modification, to unsupervised or semi-supervised learning.
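The random projection step at the heart of CL can be sketched as follows. This is a minimal illustration (not the authors' implementation), assuming a dense Gaussian projection matrix and toy binary BoW data; by the Johnson-Lindenstrauss property, pairwise distances between documents are approximately preserved in the lower-dimensional space, which is what makes learning a classifier there feasible.

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project the d-dimensional rows of X down to k dimensions
    using a Gaussian random projection matrix."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Entries scaled by 1/sqrt(k) so pairwise distances are
    # preserved in expectation (Johnson-Lindenstrauss).
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ R

# Toy BoW-like data: 100 documents over 5000 terms (hypothetical sizes),
# projected down to 200 dimensions.
rng = np.random.default_rng(1)
X = (rng.random((100, 5000)) < 0.01).astype(float)  # sparse 0/1 term indicators
Z = random_projection(X, 200)

# Distance between the first two documents, before and after projection:
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(Z[0] - Z[1])
print(Z.shape, d_proj / d_orig)  # ratio should be close to 1
```

A standard classifier (e.g., a linear SVM, as used in the paper) would then be trained on the projected matrix `Z` instead of the original high-dimensional `X`. Note that the projection is computed without any class labels, which is why the same scheme carries over to unsupervised and semi-supervised settings.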

Keywords


random projections, random subspaces, compressed learning, text classification, support vector machines


DOI: http://dx.doi.org/10.34629/ipl.isel.i-ETC.3


Copyright (c) 2013 i-ETC : ISEL Academic Journal of Electronics Telecommunications and Computers

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.