
Books in the Foundations and Trends (R) in Machine Learning series

  • - An Overview
    by Ulrike von Luxburg
    548,95 kr.

    Provides a high-level overview of the existing literature on clustering stability. In addition to presenting the results in a slightly informal but accessible way, the authors of this book relate them to each other and discuss their different implications.

    by Ljubisa Stankovic
    1.683,95 kr.

    Provides a comprehensive introduction to advanced data analytics on graphs, which allows us to move beyond standard regular sampling in time and space and facilitates modelling in many important areas.

    by Silvia Chiappa
    728,95 kr.

    Provides a simple and clear description of explicit-duration modelling by categorizing the different approaches into three main groups, which differ in the information about regime switching/reset boundaries that they encode in the explicit-duration variables.

    by Ali H. Sayed
    933,95 kr.

    Examines the topic of information processing over graphs. The presentation is largely self-contained and covers results that relate to the analysis and design of multi-agent networks for the distributed solution of optimization, adaptation, and learning problems from streaming data through localized interactions among agents.

    by Steve Hanneke
    948,95 kr.

    Describes recent advances in our understanding of the theoretical benefits of active learning, and implications for the design of effective active learning algorithms. Much of the book focuses on a particular technique - disagreement-based active learning. It also briefly surveys several alternative approaches from the literature.

  • - The Optimistic Principle Applied to Optimization and Planning
    by Remi Munos
    933,95 kr.

    Covers several aspects of the "optimism in the face of uncertainty" principle for large-scale optimization problems under a finite numerical budget. The book lays out the theoretical foundations of the field by characterizing the complexity of the optimization problems and designing efficient algorithms with performance guarantees.

  • - A Convex Optimization Perspective
    by Francis Bach
    978,95 kr.

    Presents the theory of submodular functions in a self-contained way from a convex analysis perspective, presenting tight links between certain polyhedra, combinatorial optimization and convex optimization problems. In particular, it describes how submodular function minimization is equivalent to solving a variety of convex optimization problems.
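
    As a small illustration of the convexity link just described (a generic sketch, not code from the book), the Lovász extension of a set function can be evaluated by a simple sorting procedure, and it is convex exactly when the set function is submodular:

```python
import numpy as np

def lovasz_extension(F, w):
    """Evaluate the Lovasz extension of a set function F at the point w.

    F maps a Python set of indices to a number (with F(empty set) = 0);
    w is a 1-D numpy array. The extension is computed by sorting w in
    decreasing order and summing marginal gains of F on the growing level sets.
    """
    order = np.argsort(-w)                 # indices sorted by decreasing w
    value, prev, selected = 0.0, F(set()), set()
    for k in order:
        selected.add(int(k))
        curr = F(selected)
        value += w[k] * (curr - prev)      # marginal gain weighted by w[k]
        prev = curr
    return value

# Hypothetical example: F(S) = sqrt(|S|) is submodular, so its extension is convex.
F = lambda S: np.sqrt(len(S))
print(lovasz_extension(F, np.array([0.3, -0.1, 0.7])))
```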

    by Fredrik Lindsten
    1.083,95 kr.

    Reviews a branch of Monte Carlo methods that are based on the forward-backward idea, and that are referred to as backward simulators. In recent years, the theory and practice of backward simulation algorithms have undergone a significant development, and the algorithms keep finding new applications.

  • - A Survey
    by Brian Kulis
    723,95 kr.

    Presents an overview of existing research on metric learning, including recent progress on scaling to high-dimensional feature spaces and to data sets with an extremely large number of data points. The book presents as unified a framework as possible under which existing research on metric learning can be cast.

    by Sebastien Bubeck
    983,95 kr.

    Mathematically, a multi-armed bandit is defined by the payoff process associated with each option. In this book, the focus is on two extreme cases in which the analysis of regret is particularly simple and elegant: independent and identically distributed payoffs and adversarial payoffs.
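
    As a toy illustration of regret in the i.i.d. case (a sketch with made-up arm means, not one of the book's algorithms), the snippet below runs a UCB-style strategy on Bernoulli arms and tracks the pseudo-regret:

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.7])         # hypothetical Bernoulli arm means
T, n_arms = 5000, len(means)
counts, sums, regret = np.zeros(n_arms), np.zeros(n_arms), 0.0

for t in range(1, T + 1):
    if t <= n_arms:                        # play each arm once to initialize
        arm = t - 1
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))          # optimistic index
    reward = rng.binomial(1, means[arm])
    counts[arm] += 1
    sums[arm] += reward
    regret += means.max() - means[arm]     # expected (pseudo-)regret increment

print(f"pseudo-regret after {T} rounds: {regret:.1f}")
```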

    by Charles Sutton
    948,95 kr.

    Provides a comprehensive tutorial aimed at application-oriented practitioners seeking to apply CRFs. The monograph does not assume previous knowledge of graphical modeling, and so is intended to be useful to practitioners in a wide variety of fields.

  • - A Review
    by Mauricio A. Alvarez
    713,95 kr.

    Explores different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and regularization methods. The book is aimed at researchers interested in the theory and application of kernels for vector-valued functions.

    by Michael W. Mahoney
    878,95 kr.

    Randomized algorithms for very large matrix problems have received much attention in recent years. Much of this work was motivated by problems in large-scale data analysis, largely since matrices are popular structures with which to model data drawn from a wide range of application domains. This book provides a detailed overview of this work.
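
    A representative building block in this line of work is the randomized range finder; the sketch below (a generic illustration, not code from the book) approximates a truncated SVD of a tall matrix with a Gaussian sketch:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate the top-k SVD of A using a Gaussian random sketch."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                             # orthonormal basis for the sampled range
    B = Q.T @ A                                                # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

A = np.random.default_rng(1).standard_normal((2000, 300))      # synthetic data matrix
U, s, Vt = randomized_svd(A, k=10)
print(s[:3])                                                   # leading approximate singular values
```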

    by Alex Kulesza
    1.083,95 kr.

    Provides a comprehensible introduction to determinantal point processes (DPPs), focusing on the intuitions, algorithms, and extensions that are most relevant to the machine learning community, and shows how DPPs can be applied to real-world applications.
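
    For intuition (an illustration, not material from the book), a DPP built from an L-ensemble has marginal kernel K = L(L + I)^{-1}, and inclusion probabilities are principal minors of K, which makes the repulsion between items explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))            # hypothetical item feature vectors
L = X @ X.T                                # positive semidefinite L-ensemble kernel
K = L @ np.linalg.inv(L + np.eye(6))       # marginal kernel K = L(L + I)^{-1}

# P(i in Y) = K[i, i]; P({i, j} subset of Y) = det of the 2x2 principal minor.
i, j = 0, 1
p_ij = np.linalg.det(K[np.ix_([i, j], [i, j])])
print(p_ij, "<=", K[i, i] * K[j, j])       # repulsion: joint <= product of marginals
```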

  • - A Guided Tour
    by Christopher J.C. Burges
    838,95 kr.

    Provides a tutorial overview of several foundational methods for dimension reduction. The authors divide the methods into projective methods and methods that model the manifold on which the data lies.

    by Shai Shalev-Shwartz
    723,95 kr.

    Provides an overview of online learning. The aim is to provide the reader with a sense of some of the interesting ideas and in particular to underscore the centrality of convexity in deriving efficient online learning algorithms.
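
    As a concrete example of the role of convexity (a generic sketch, not taken from the book), online gradient descent on a stream of convex squared losses with a 1/sqrt(t) step size:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 1000
w = np.zeros(d)                            # online predictor, updated one round at a time
w_star = rng.standard_normal(d)            # hypothetical "true" weights generating the stream
cum_loss = 0.0

for t in range(1, T + 1):
    x = rng.standard_normal(d)             # environment reveals x_t
    y = x @ w_star + 0.1 * rng.standard_normal()
    loss = 0.5 * (x @ w - y) ** 2          # convex loss suffered at round t
    grad = (x @ w - y) * x                 # gradient of that loss at the current w
    w -= grad / np.sqrt(t)                 # online gradient step
    cum_loss += loss

print(f"average loss over {T} rounds: {cum_loss / T:.4f}")
```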

    by Anna Goldenberg
    948,95 kr.

    Provides an overview of the historical development of statistical network modelling and then introduces a number of examples that have been studied in the network literature. Subsequent discussions focus on a number of prominent static and dynamic network models and their interconnections.

    by Yoshua Bengio
    998,95 kr.

    Discusses the motivations for and principles of learning algorithms for deep architectures. By analysing and comparing recent results obtained with different learning algorithms for deep architectures, the book proposes explanations for their success, highlights challenges, and suggests avenues for future exploration in this area.

    by Martin J. Wainwright
    1.368,95 kr.

    Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, this book develops general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations.
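
    The duality can be checked numerically in the simplest case. The sketch below (an illustration, not from the book) verifies the variational representation A(theta) = sup_mu { theta*mu - A*(mu) } for a Bernoulli variable, where A is the cumulant function and A* is the negative entropy:

```python
import numpy as np

theta = 0.8                                                # natural parameter
mu = np.linspace(1e-6, 1 - 1e-6, 100001)                   # grid over the mean space (0, 1)
neg_entropy = mu * np.log(mu) + (1 - mu) * np.log(1 - mu)  # conjugate dual A*(mu)
variational = np.max(theta * mu - neg_entropy)             # sup_mu { theta*mu - A*(mu) }
exact = np.log1p(np.exp(theta))                            # cumulant function A(theta) computed directly
print(variational, exact)                                  # the two values agree up to grid error
```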

  • - Part 1 Low-Rank Tensor Decompositions
    by Andrzej Cichocki
    1.098,95 kr.

    Provides a systematic and example-rich guide to the basic properties and applications of tensor network methodologies, and demonstrates their promise as a tool for the analysis of extreme-scale multidimensional data. The book demonstrates the ability of tensor networks to provide linearly or even super-linearly scalable solutions.

    by Madeleine Udell
    998,95 kr.

    Principal components analysis (PCA) is a well-known technique for approximating a tabular data set by a low rank matrix. In this volume, the authors extend the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types.
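
    For reference (a generic sketch, not the authors' code), classical PCA-style low-rank approximation of a purely numerical table via the truncated SVD looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20)) @ rng.standard_normal((20, 50))  # synthetic low-rank table
A += 0.01 * rng.standard_normal(A.shape)                            # plus a little noise

k = 5
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] * s[:k] @ Vt[:k]             # best rank-k approximation (Eckart-Young)
err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"relative Frobenius error of the rank-{k} approximation: {err:.3f}")
```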

  • - A Survey
    by Mohammed Ghavamzadeh
    1.013,95 kr.

    Discusses models and methods for Bayesian inference in the simple single-step bandit model. The book then reviews the extensive recent literature on Bayesian methods for model-based RL, where prior information can be expressed on the parameters of the Markov model.

    by Joel A. Tropp
    1.028,95 kr.

    Offers an invitation to the field of matrix concentration inequalities. The book begins with some history of random matrix theory; describes a flexible model for random matrices that is suitable for many problems; and discusses the most important matrix concentration results.

    by Alborz Geramifard
    693,95 kr.

    A Markov Decision Process (MDP) is a natural framework for formulating sequential decision-making problems under uncertainty. In recent years, researchers have greatly advanced algorithms for learning and acting in MDPs. This book reviews such algorithms.
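
    As a minimal illustration (a toy example, not taken from the book), value iteration on a hypothetical 3-state, 2-action MDP:

```python
import numpy as np

# Hypothetical MDP: P[a, s, s'] are transition probabilities, R[s, a] expected rewards.
P = np.array([[[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
              [[0.2, 0.8, 0.0], [0.0, 0.2, 0.8], [0.1, 0.0, 0.9]]])
R = np.array([[0.0, 1.0], [0.5, 0.0], [1.0, 2.0]])
gamma = 0.95

V = np.zeros(3)
for _ in range(1000):
    Q = R + gamma * np.einsum('ast,t->sa', P, V)   # Q[s, a] = R[s, a] + gamma * E[V(next state)]
    V_new = Q.max(axis=1)                          # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("optimal state values:", np.round(V, 3))
print("greedy policy:", Q.argmax(axis=1))
```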

    by Pierre Del Moral
    1.083,95 kr.

    Presents some new concentration inequalities for Feynman-Kac particle processes. The book analyses different types of stochastic particle models, including particle profile occupation measures, genealogical tree-based evolution models, particle free energies, as well as backward Markov chain particle models.

    by Francis Bach
    893,95 kr.

    Presents optimization tools and techniques dedicated to sparsity-inducing penalties from a general perspective. The book covers proximal methods, block-coordinate descent, working-set and homotopy methods, and non-convex formulations and extensions, and provides a set of experiments to compare algorithms from a computational point of view.

    by Stephen Boyd
    933,95 kr.

    Argues that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
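
    A standard small-scale example of the method (a sketch in the scaled-dual form with synthetic data, not code from the book) is ADMM applied to the lasso:

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM with the usual x/z splitting."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)            # formed once, reused every iteration
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))                # quadratic x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)   # soft-threshold z-update
        u = u + x - z                                                     # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))                   # synthetic design matrix
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]                       # sparse ground truth
b = A @ x_true + 0.05 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))       # recovers an approximately sparse solution
```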

  • - New Frontiers
    by Sridhar Mahadevan
    1.223,95 kr.

    Describes methods for automatically compressing Markov decision processes (MDPs) by learning a low-dimensional linear approximation defined by an orthogonal set of basis functions. A unique feature of the text is the use of Laplacian operators, whose matrix representations have non-positive off-diagonal elements and zero row sums.

    by Majid Janzamin
    1.098,95 kr.

    Surveys recent progress in using spectral methods, including matrix and tensor decomposition techniques, to learn many popular latent variable models. The focus is on a special type of tensor decomposition called CP decomposition. The authors cover a wide range of algorithms to find the components of such tensor decompositions.

    by Christian A. Naesseth
    998,95 kr.

    Sequential Monte Carlo is a technique for solving statistical inference problems recursively. This book shows how this powerful technique can be applied to machine learning problems such as probabilistic programming, variational inference and inference evaluation.
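
    As a minimal illustration (not code from the book), a bootstrap particle filter on a hypothetical linear-Gaussian state-space model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: x_t = 0.9*x_{t-1} + process noise, y_t = x_t + observation noise.
T, N = 100, 500                                     # time steps and number of particles
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=0.5)
    y[t] = x[t] + rng.normal(scale=0.3)

particles = rng.normal(size=N)                      # initial particle cloud
estimates = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(scale=0.5, size=N)   # propagate through the dynamics
    logw = -0.5 * ((y[t] - particles) / 0.3) ** 2                 # Gaussian observation log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates[t] = w @ particles                                  # filtering mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]             # multinomial resampling

print("RMSE of the filtered mean:", np.sqrt(np.mean((estimates[1:] - x[1:]) ** 2)))
```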