There have been some interesting developments in machine learning over the past four years, since the first edition of this book came out. One is the rise of Deep Belief Networks as an area of real research interest (and business interest, as large internet-based companies look to snap up every small company working in the area), while another is the continuing work on statistical interpretations of machine learning algorithms. This second development is very good for the field as an area of research, but it does mean that computer science students, whose statistical background can be rather lacking, find it hard to get started in an area that they are sure should be of interest to them. The hope is that this book, focussing as it does on the algorithms of machine learning, will help such students get a handle on the ideas, and that it will start them on a journey towards mastery of the relevant mathematics and statistics as well as the necessary programming and experimentation.
In addition, the libraries available for the Python language have continued to develop, so that there are now many more facilities available for the programmer. This has enabled me to provide a simple implementation of the Support Vector Machine that can be used for experiments, and to simplify the code in a few other places. All of the code that was used to create the examples in the book is available at http://stephenmonika.net/ (in the ‘Book’ tab), and use and experimentation with any of this code, as part of any study on machine learning, is strongly encouraged.
Some of the changes to the book include:
• the addition of two new chapters on two of those new areas: Deep Belief Networks
(Chapter 17) and Gaussian Processes (Chapter 18).
• a reordering of the chapters, and some of the material within the chapters, to make a
more natural flow.
• the reworking of the Support Vector Machine material so that there is running code
and suggestions for experiments to perform.
• the addition of Random Forests (as Section 13.3), the Perceptron convergence theorem
(Section 3.4.1), a proper consideration of accuracy methods (Section 2.2.4), conjugate
gradient optimisation for the MLP (Section 9.3.2), and more on the Kalman filter and
particle filter in Chapter 16.
• improved code including better use of naming conventions in Python.
• various improvements in the clarity of explanation and detail throughout the book.
I would like to thank the people who have written to me about various parts of the book, and made suggestions about things that could be included or explained better. I would also like to thank the students at Massey University who have studied the material with me, either as part of their coursework, or as first steps in research, whether in the theory or the application of machine learning. Those who have contributed particularly to the content of the second edition include Nirosha Priyadarshani, James Curtis, Andy Gilman, Örjan Ekeberg, and the Osnabrück Knowledge-Based Systems Research group, especially Joachim Hertzberg, Sven Albrecht, and Thomas Wieman.
Ashhurst, New Zealand