Nearest Neighbor Voting in High Dimensional Data: Learning from Past Occurrences
Nenad Tomasev, Dunja Mladenic
Artificial Intelligence Laboratory, Jozef Stefan Institute and Jozef Stefan International Postgraduate School
1000 Ljubljana, Slovenia
nenad.tomasev@ijs.si, dunja.mladenic@ijs.si
Abstract
Hubness is a recently described aspect of the curse of dimensionality inherent to nearest-neighbor methods. This paper describes a new approach for exploiting the hubness phenomenon in k-nearest neighbor classification. We argue that some neighbor occurrences carry more information than others, by virtue of being less frequent events. This observation is related to the hubness phenomenon, and we explore how it affects high-dimensional k-nearest neighbor classification. We propose a new algorithm, Hubness Information k-Nearest Neighbor (HIKNN), which introduces the k-occurrence informativeness into the hubness-aware k-nearest neighbor voting framework. The algorithm successfully overcomes some of the issues with previous hubness-aware approaches, which we show through an extensive evaluation on several types of high-dimensional data.
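The core idea of weighting votes by occurrence informativeness can be illustrated with a small sketch. The snippet below is a hypothetical simplification, not the paper's exact HIKNN formulation: it counts how often each training point occurs among the k nearest neighbors of other points, converts those occurrence frequencies into self-information scores (so rare neighbors, as opposed to hubs, contribute more), and uses the scores as vote weights. All function names and the smoothing choice are illustrative assumptions.

```python
import numpy as np

def hubness_informed_vote(train_X, train_y, query, k=3):
    """Sketch of self-information-weighted kNN voting (simplified HIKNN-style idea).

    Rare neighbors (low k-occurrence count) get high self-information and
    therefore larger vote weights; frequent neighbors (hubs) get smaller ones.
    """
    n = len(train_X)
    # k-occurrence counts: how often each training point appears among the
    # k nearest neighbors of the other training points
    dists = np.linalg.norm(train_X[:, None, :] - train_X[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)       # a point is not its own neighbor
    nn = np.argsort(dists, axis=1)[:, :k]
    counts = np.bincount(nn.ravel(), minlength=n)
    # self-information of the occurrence event; +1 smoothing avoids log(0)
    # (the smoothing is an assumption of this sketch)
    occ_prob = (counts + 1) / (n + 1)
    info = -np.log(occ_prob)
    # weighted vote among the query's k nearest neighbors
    query_dists = np.linalg.norm(train_X - query, axis=1)
    knn = np.argsort(query_dists)[:k]
    votes = {}
    for i in knn:
        votes[train_y[i]] = votes.get(train_y[i], 0.0) + info[i]
    return max(votes, key=votes.get)
```

In very high-dimensional data, hub points would dominate unweighted voting because they appear in many neighbor lists; the self-information weighting dampens exactly those points, which is the intuition the paper builds on.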
Key words
k-nearest neighbor, curse of dimensionality, hubness, neighbor occurrence models, self-information, fuzzy, voting
Digital Object Identifier (DOI)
https://doi.org/10.2298/CSIS111211014T
Publication information
Volume 9, Issue 2 (June 2012)
Year of Publication: 2012
ISSN: 2406-1018 (Online)
Publisher: ComSIS Consortium
Full text
Available in PDF
How to cite
Tomasev, N., Mladenic, D.: Nearest Neighbor Voting in High Dimensional Data: Learning from Past Occurrences. Computer Science and Information Systems, Vol. 9, No. 2, 691-712. (2012), https://doi.org/10.2298/CSIS111211014T