Reminder of classification techniques involving combination of information 

Instance-based approach: k-NN

The k-nearest neighbor algorithm is the simplest example of an instance-based classification algorithm.
It is a simple, intuitive and elegant algorithm that belongs to the category of non-parametric classifiers. Contrary to generative and discriminative classifiers, instance-based classifiers do not try to make any generalization from the training data. Instead, the training examples are considered prototypes of the classes to which they belong. An unlabeled example is assigned to a class by measuring its similarity to the prototypes. Typically, the similarity is defined by means of a distance computed in the feature space. For this reason, the most similar prototype is called the nearest neighbor.
In its simplest version, 1-NN, a query example is labeled with the class of its nearest neighbor. For a number of neighbors k > 1, the k nearest neighbors are retrieved from the training samples and the query sample is assigned to the class that is most represented among the k labels. In the weighted version of k-NN, the distances between the query example and its k nearest neighbors are used to weight the contribution of each neighbor to the vote, so that closer prototypes count more. In all cases, k-NN returns a unique class label for the tested example. As previously mentioned, in many circumstances it is necessary to have an estimation of the posterior probability of each class of the problem. These measurements can be obtained with the probabilistic version of k-NN.
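As an illustration, the following minimal Python sketch implements both the plain majority vote and the distance-weighted vote described above. The array names (`X_train`, `y_train`, `x_query`) and the inverse-distance weighting scheme are assumptions made for this example, not necessarily those used in this document.

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=5, weighted=False):
    """Label x_query by a (possibly distance-weighted) vote of its k nearest neighbors."""
    dists = np.linalg.norm(X_train - x_query, axis=1)  # Euclidean distance to every prototype
    nearest = np.argsort(dists)[:k]                    # indices of the k nearest neighbors
    votes = {}
    for i in nearest:
        # Plain k-NN: each neighbor counts once.
        # Weighted k-NN: closer neighbors contribute more via an inverse-distance weight.
        w = 1.0 / (dists[i] + 1e-12) if weighted else 1.0
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    return max(votes, key=votes.get)                   # most represented class
```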
In the probabilistic version of k-NN, we suppose that the distance between examples is the Euclidean distance. We denote by $R$ the hypersphere of volume $V_R$, centered on the query sample, that contains exactly $k$ training examples. The unconditional probability density function (pdf) at a query sample represented by $\mathbf{x}$ can then be approximated by $p(\mathbf{x}) \approx \frac{k}{N V_R}$, where $N$ is the total number of training examples.
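To make explicit the step from this density estimate to the class posteriors used as membership measurements, the standard derivation runs as follows; the notation $k_i$ and $N_i$ is introduced here for the example.

```latex
% Among the k neighbors falling inside the hypersphere R, let k_i belong to
% class \omega_i, and let N_i be the number of training examples of \omega_i.
% The class-conditional density and the prior are estimated the same way:
\[
  p(\mathbf{x} \mid \omega_i) \approx \frac{k_i}{N_i V_R},
  \qquad
  P(\omega_i) \approx \frac{N_i}{N}.
\]
% Bayes' rule then yields the posterior used as a membership measurement:
\[
  P(\omega_i \mid \mathbf{x})
    = \frac{p(\mathbf{x} \mid \omega_i)\, P(\omega_i)}{p(\mathbf{x})}
    \approx \frac{\dfrac{k_i}{N_i V_R} \cdot \dfrac{N_i}{N}}{\dfrac{k}{N V_R}}
    = \frac{k_i}{k}.
\]
```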

Schemes for combination of classifiers decisions

Combination methods based on the outputs of classifiers can be grouped into two categories. The first category, parallel combination, combines the outputs of classifiers dealing with the same problem (i.e., the same original data set to be classified into the same known categories). The second category combines the outputs of classifiers run one after another, where the set of classes handled by each classifier is determined by the previous one. This type of combination is referred to as sequential combination in the following.

Parallel combination

The parallel combination method deals with K classification systems trained to solve the same problem. For an instance to be classified, we then need to combine the K outputs, which can be a single label, an ordered list of classes, or a vector of membership measurements for each class.
A: Single class For classifiers whose output is the predicted class, the combination can be performed using voting methods or the Behavior-Knowledge Space approach [HS93]. When the classifiers to be combined have different levels of performance, their decisions can be weighted to obtain a better combination, as sketched below. Combining classifiers with single-class outputs remains limited, especially when K = 2. It is often necessary to work with classifiers that can provide additional information about potential alternatives.
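A minimal sketch of such a weighted vote, assuming that per-classifier weights (e.g., validation accuracies) are available; all names and values here are hypothetical:

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine K single-class decisions by a weighted vote.

    predictions: list of K predicted labels, one per classifier.
    weights: list of K classifier weights (e.g., validation accuracies).
    """
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w            # each classifier votes with its weight
    return max(scores, key=scores.get)

# Hypothetical example: three classifiers with assumed accuracies as weights.
print(weighted_vote(["voice", "piano", "voice"], [0.9, 0.7, 0.6]))  # -> voice
```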
B: Ranked list Instead of returning a single decision, some classifiers are designed to return a ranked list of the possible classes. There are several methods to combine K ranked lists. The goal of these methods is to re-order the K ranked lists so that the true class is ranked higher than any other class. Among these methods, we retain the highest rank, the Borda count and the logistic regression, detailed in [HHS02b].
The highest rank method analyzes the K lists to derive possible subsets of classes; the union of these subsets is then re-ordered according to the best rank each class obtained in the original lists. The Borda count is a generalization of the majority vote: each candidate class receives a score according to its position in each list, and the class that obtains the highest total score is designated as the predicted class. This method treats all classifier outputs equally; a sketch is given below. Some variations of this approach are described in [VES00]. The logistic regression is a modification of the Borda count in which weights are applied to the scores produced by each classifier. Additional information can be taken into account by using membership measurements instead of lists of classes.
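A minimal sketch of the Borda count described above (the class names and lists are hypothetical); the logistic-regression variant would simply learn a weight per classifier before summing the scores.

```python
def borda_count(ranked_lists):
    """Combine K ranked lists by Borda count.

    Each list orders candidate classes from most to least likely.
    A class at position r in a list of length n scores n - r points,
    so every classifier's output is treated equally (no weights).
    """
    scores = {}
    for ranking in ranked_lists:
        n = len(ranking)
        for r, label in enumerate(ranking):
            scores[label] = scores.get(label, 0) + (n - r)
    return max(scores, key=scores.get)

# Hypothetical example with K = 3 classifiers ranking three singers.
lists = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(borda_count(lists))  # -> A (scores: A = 8, B = 6, C = 4)
```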
C: Vector of membership measurements Some classifiers, such as the GMM, return a membership measure for each possible class. As discussed in Sec. 1.2, numerous methods have been developed to derive membership values for each class of the problem from classifiers originally designed to return a single class (such as the methods discussed for the SVM and k-NN). If we consider that the K classifiers to be combined each return a vector of membership measurements, then these vectors can be gathered in a single matrix called the decision profile [Kun04]. Each column of the matrix contains the measurement vector of one classifier. As shown in Sec. 1.2, these membership measurements can be (pseudo-)posterior probabilities (GMM) or distances (SVM, k-NN).
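As an illustration, the following sketch builds a decision profile with one column per classifier, as described above, and combines it with the simple mean rule; the mean rule is chosen here as one standard fixed combiner, not necessarily the rule used in this document.

```python
import numpy as np

def combine_mean(decision_profile):
    """Mean-rule combination of a decision profile.

    decision_profile: (C, K) array; column j is the membership vector
    of classifier j over the C classes, as described above.
    """
    support = decision_profile.mean(axis=1)   # average support per class
    return int(np.argmax(support)), support

# Hypothetical posteriors from K = 3 classifiers over C = 2 classes.
dp = np.array([[0.8, 0.4, 0.6],
               [0.2, 0.6, 0.4]])
label, support = combine_mean(dp)
print(label, support)  # -> 0 [0.6 0.4]
```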
The variety in the form of the outputs can be problematic for the combination. Some requirements have to be carefully considered before applying a combination rule to the classifier outputs. According to [KHDM02], there are two scenarios of combination.

Table of contents:

1 Introduction 
1 Structure of the document
2 Outlines
2 Introduction - French Version
1 Introduction
2 Extended summary by chapter
3 Reminder of classification techniques involving combination of information 
1 Supervised pattern recognition
1.1 Features transformation and selection
1.2 Classifiers architecture
1.2.1 Generative approach: GMM
1.2.2 Discriminative approach: SVM
1.2.3 Instance-based approach: k-NN
1.3 Performance of classification
1.3.1 Measure of performance
1.4 Comparison of classification performances
2 Combination of information for classification problems
2.1 Levels of combination
2.2 Schemes for combination of classifiers decisions
2.2.1 Parallel combination
2.2.2 Sequential combination
2.3 Trainable vs. non-trainable combiners
3 Conclusions
4 Singing voice: production, models and features 
1 Singing voice production
1.1 Vocal production
1.2 Some specificities of the singing production
1.2.1 Formants tuning
1.2.2 Singing formant
1.2.3 Vocal vibrato
1.2.4 Portamento and Legato
2 Models for voiced sounds
2.1 Source-filter model
2.1.1 Model description
2.1.2 Estimation of the vocal transfer function
2.2 Harmonic sinusoidal model
2.2.1 Model description
2.2.2 Sinusoidal model parameters estimation
2.3 Intonative model
2.3.1 Model description
2.3.2 Model parameters estimation
2.3.3 Model evaluation
2.4 Relation between intonative and source-filter model
2.4.1 Estimation of the formant position from a cycle of frequency modulation
3 Features for singing voice
3.1 Timbral features
3.2 Intonative features
4 Summary and Conclusions
5 Singing voice localization and tracking in polyphonic context 
1 Problems statement
1.1 Singing voice detection
1.2 Singing voice tracking
2 Related works
2.1 Singing voice localization
2.2 Instrument identification
2.2.1 Solo instrument identification
2.2.2 Multiple instrument recognition in polyphonic recordings
2.3 Singing voice extraction using source separation approach
2.3.1 Blind Source Separation (BSS) approaches
2.3.2 Statistical Modeling approaches
2.3.3 Computational Auditory Scene Analysis (CASA) approaches
2.4 Singing melody transcription
3 Proposed approach for singing voice detection
3.1 Description of the proposed approach to localize vocal segments
3.1.1 Step 1 – Partial tracking and segmentation
3.1.2 Step 2 – Extraction of features
3.1.3 Step 3 – Selection of vocal partials
3.1.4 Step 4 – Formation of vocal segments
3.2 Evaluation
3.2.1 Data set
3.2.2 Results
4 Proposed approach to track harmonic contents of the singing voice
4.1 Description of the method to group harmonically related partials
4.1.1 Theoretical foundations
4.1.2 Step 1 – Measure of similarity between partials
4.1.3 Step 2 – Distance matrix
4.1.4 Step 3 – Grouping harmonically related partials by clustering
4.2 Evaluation of clusters of harmonic partials
4.2.1 Data set
4.2.2 Measure for cluster evaluation
4.2.3 Results
4.3 Application to singing voice detection
4.3.1 Data set
4.3.2 Results
4.4 Application to multi-pitch and sung melody transcription
4.4.1 Method to estimate the f0 of a cluster
4.4.2 Data set
4.4.3 Measure
4.4.4 Results
5 Summary and conclusions
6 Singer Identification 
1 Problem statement
2 Related works
2.1 Artist identification
2.2 Singer identification
2.3 Perceptual recognition of singers
3 Proposed approach for singer identification
3.1 Description of the proposed approach based on the combination of timbral and intonative features
3.1.1 Sound descriptions complementarity
3.1.2 Combining decisions obtained with each sound description
3.2 Evaluation of the combination method
3.2.1 Data sets
3.2.2 Results
4 Proposed approach to evaluate the features' robustness for singer recognition
4.1 Description of the proposed approach
4.2 Evaluation of the features' robustness against intra-song and inter-song variations
4.2.1 Data set
4.2.2 Results for intra-song variations on a cappella recordings
4.2.3 Results for inter-song variations
5 Summary and conclusions
7 Conclusions 
1 Summary
2 Future work
