Support Vector Machines
    • £92.99

Publisher Description

This book explains the principles that make support vector machines (SVMs) a successful modelling and prediction tool for a variety of applications. The authors present the basic ideas of SVMs together with the latest developments and current research questions in a unified style. They identify three reasons for the success of SVMs: their ability to learn well with only a very small number of free parameters, their robustness against several types of model violations and outliers, and their computational efficiency compared to several other methods.

Since their appearance in the early nineties, support vector machines and related kernel-based methods have been successfully applied in diverse fields of application such as bioinformatics, fraud detection, construction of insurance tariffs, direct marketing, and data and text mining. As a consequence, SVMs now play an important role in statistical machine learning and are used not only by statisticians, mathematicians, and computer scientists, but also by engineers and data analysts.

The book provides a unique in-depth treatment of both fundamental and recent material on SVMs that has so far been scattered across the literature. It can thus serve both as a basis for graduate courses and as an introduction for statisticians, mathematicians, and computer scientists, and it further provides a valuable reference for researchers working in the field.

The book covers all important topics concerning support vector machines, such as: loss functions and their role in the learning process; reproducing kernel Hilbert spaces and their properties; a thorough statistical analysis that uses both traditional uniform bounds and more advanced localized techniques based on Rademacher averages and Talagrand's inequality; a detailed treatment of classification and regression; a detailed robustness analysis; and a description of some of the most recent implementation techniques. To make the book self-contained, an extensive appendix provides the reader with the necessary background from statistics, probability theory, functional analysis, convex analysis, and topology.
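The loss-function view of learning mentioned above can be illustrated with a minimal sketch: a linear soft-margin SVM trained by subgradient descent on the regularized hinge loss. This is an illustrative toy in plain Python, not one of the implementation techniques the book describes; the function names and parameters are hypothetical.

```python
# Minimal sketch (hypothetical names): linear soft-margin SVM via
# subgradient descent on the regularized hinge loss
#   lam * ||w||^2 + max(0, 1 - y * (w . x + b))
# This illustrates the role of the loss function only; it is not the
# book's algorithm.

def train_linear_svm(points, labels, lam=0.01, lr=0.1, epochs=200):
    """points: list of 2D tuples; labels: +1 / -1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:
                # Hinge loss is active: step along its subgradient
                # plus the gradient of the regularizer.
                w[0] += lr * (y * x1 - 2 * lam * w[0])
                w[1] += lr * (y * x2 - 2 * lam * w[1])
                b += lr * y
            else:
                # Only the regularizer contributes.
                w[0] -= lr * 2 * lam * w[0]
                w[1] -= lr * 2 * lam * w[1]
    return w, b

def predict(w, b, x):
    """Sign of the decision function w . x + b."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1
```

Replacing the hinge loss here with, say, the epsilon-insensitive loss would turn the same template toward regression, which is one way to read the book's emphasis on loss functions as the organizing principle.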

Ingo Steinwart is a researcher in the machine learning group at the Los Alamos National Laboratory. He works on support vector machines and related methods.

Andreas Christmann is Professor of Stochastics in the Department of Mathematics at the University of Bayreuth. He works in particular on support vector machines and robust statistics.

GENRE
Computing & Internet
RELEASED
15 September 2008
LANGUAGE
English
LENGTH
617 Pages
PUBLISHER
Springer New York
SIZE
17 MB

Other Books in This Series

Novelty, Information and Surprise
2023
Information and Complexity in Statistical Modeling
2007
Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis
2012
Probabilistic Conditional Independence Structures
2006
Statistical and Inductive Inference by Minimum Message Length
2005
Bayesian Networks and Decision Graphs
2009