Gradient Descent, Stochastic Optimization, and Other Tales
- Binding: Paperback
- Page count: 96
- Published: 22 July 2022
- Size: 152 x 229 x 5 mm
- Weight: 141 g
Description of Gradient Descent, Stochastic Optimization, and Other Tales
The goal of this book is to debunk and dispel the magic behind black-box optimizers and stochastic optimizers. It aims to build a solid foundation for how and why these techniques work, and it crystallizes this knowledge by deriving, from simple intuitions, the mathematics behind the strategies. The book does not shy away from addressing both the formal and the informal aspects of gradient descent and stochastic optimization methods. In doing so, it hopes to give readers a deeper understanding of these techniques as well as the when, how, and why of applying them.
Gradient descent is one of the most popular algorithms for performing optimization and by far the most common way to optimize machine learning models. Its stochastic version has received increasing attention in recent years, particularly for optimizing deep neural networks, where the gradient computed from a single sample or a mini-batch of samples is used to save computational resources and to help escape saddle points. In 1951, Robbins and Monro published A Stochastic Approximation Method, one of the first modern treatments of stochastic optimization, in which local gradients are estimated from a fresh batch of samples. Stochastic optimization has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this book is to give a self-contained introduction to the concepts and mathematical tools of gradient descent and stochastic optimization.
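As a rough illustration of the mini-batch stochastic gradient step described above, here is a minimal NumPy sketch on a toy least-squares problem; the data, learning rate, and batch size are illustrative choices and are not taken from the book.

```python
import numpy as np

# Toy least-squares problem: minimize ||X w - y||^2 over w.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1000)

def minibatch_grad(w, idx):
    """Gradient of the mean squared error on the sampled mini-batch (illustrative helper)."""
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)

w = np.zeros(5)
lr = 0.05          # step size (learning rate), chosen for this toy example
batch_size = 32    # mini-batch size, chosen for this toy example

for step in range(500):
    idx = rng.choice(len(y), size=batch_size, replace=False)  # draw a random mini-batch
    w -= lr * minibatch_grad(w, idx)                          # stochastic gradient step

print("estimation error:", np.linalg.norm(w - w_true))
```

Each iteration touches only 32 of the 1000 samples, which is what makes the per-step cost cheap compared with full-batch gradient descent while the noisy updates still drive the estimate toward the true weights.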