
Kernel Methods for Machine Learning with Life Science Applications

Abstract

Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks such as classification, regression, and feature extraction. By exploiting the so-called kernel trick, straightforward nonlinear extensions of classical linear algorithms are obtained, as long as the data enter the model formulation only through inner products. This dissertation presents research on improving the performance of standard kernel methods such as kernel Principal Component Analysis and the Support Vector Machine. The goal of the thesis is two-fold.

The first part focuses on the use of kernel Principal Component Analysis for nonlinear denoising. In this context, stable solution of the inverse and inherently ill-posed pre-image problem constitutes the main challenge. It is proposed to stabilize the estimation by augmenting the cost function with either an ℓ1- or an ℓ2-norm penalty, and solution schemes are derived for both approaches. The methods are experimentally validated on handwritten digits and several biomedical data sets. Furthermore, frameworks for exploiting label information to improve denoising in the semi-supervised case are proposed.

The second part of the thesis examines the effect of variance inflation in kernel methods. Variance inflation can occur in high-dimensional problems when the training data are insufficient to describe the entire signal manifold, leading to a potential mismatch between the subspaces spanned by the training and test data, respectively. It is shown how this effect carries over from linear models to kernel learning, and means for restoring generalizability in both kernel Principal Component Analysis and the Support Vector Machine are proposed. The viability of the proposed algorithms is demonstrated on a wide range of benchmark machine learning data sets.
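As a concrete illustration of the kernel trick mentioned above (not code from the thesis), the short Python sketch below evaluates an RBF kernel matrix: each entry is an inner product in an implicit feature space that is never constructed explicitly. The choice of scikit-learn, the RBF kernel, and the gamma value are assumptions made purely for the example.

```python
# A toy illustration of the kernel trick: the RBF kernel computes inner
# products in an implicit feature space that is never built explicitly.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))        # five points in three dimensions
K = rbf_kernel(X, X, gamma=0.5)    # K[i, j] = <phi(x_i), phi(x_j)>
print(K.shape)                     # the (5, 5) Gram matrix is all a
                                   # kernel method ever needs from the data
```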
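The thesis's ℓ1- and ℓ2-regularized pre-image estimators are not available in standard libraries, so the following is only a minimal sketch of the generic kernel PCA denoising pipeline it builds on, using scikit-learn's ridge-regularized learned inverse transform as an ℓ2-style stand-in. The data set, kernel, number of components, alpha, and noise level are all illustrative assumptions.

```python
# A minimal sketch of kernel PCA denoising on handwritten digits, with
# scikit-learn's ridge-based inverse transform standing in for the
# thesis's regularized pre-image estimators.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA

X, _ = load_digits(return_X_y=True)
X = X / 16.0                       # scale pixel values to [0, 1]
rng = np.random.default_rng(0)
X_noisy = X + rng.normal(scale=0.25, size=X.shape)  # additive Gaussian noise

# fit_inverse_transform=True learns an approximate pre-image map via
# kernel ridge regression; alpha is the l2 penalty stabilising this
# inherently ill-posed inverse problem.
kpca = KernelPCA(n_components=32, kernel="rbf", gamma=1.0 / X.shape[1],
                 fit_inverse_transform=True, alpha=0.1)
kpca.fit(X_noisy)

# Project onto the leading kernel principal components and map back to
# input space; the reconstruction serves as the denoised estimate.
X_denoised = kpca.inverse_transform(kpca.transform(X_noisy))
print(X_denoised.shape)            # (1797, 64), same shape as the input
```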
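To make the variance-inflation effect concrete, the sketch below (a generic demonstration, not the thesis's correction scheme) fits principal directions to few training samples in a high-dimensional space and compares the variance of training versus test projections along the leading components; the dimensions and the random data are assumptions for illustration.

```python
# A generic illustration of variance inflation: with n_train << p, the
# training variance along the leading principal directions is strongly
# inflated relative to test data from the very same distribution.
import numpy as np

rng = np.random.default_rng(0)
p, n_train, n_test = 500, 20, 1000           # high dimension, few samples
X_train = rng.normal(size=(n_train, p))
X_test = rng.normal(size=(n_test, p))        # same distribution as training

# Principal directions estimated from the centred training data.
mu = X_train.mean(axis=0)
Xc = X_train - mu
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 5                                        # leading components
var_train = ((Xc @ Vt[:k].T) ** 2).mean(axis=0)
var_test = (((X_test - mu) @ Vt[:k].T) ** 2).mean(axis=0)

# Ratios well above 1 quantify the train/test subspace mismatch that the
# thesis corrects for in kernel PCA and Support Vector Machines.
print("inflation ratio per component:", (var_train / var_test).round(1))
```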


This paper was published in Online Research Database In Technology.
