
Building Machine Learning Systems with Python | Meta Rabbit


URL: http://metarabbit.wordpress.com/2013/05/31/building-machine-learning-systems-with-python/


I wrote a book. Well, only in part. Willi Richert and I wrote a book.

It is called Building Machine Learning Systems With Python and is now available for pre-order from Amazon (or Amazon.co.uk), although it has already been partially available directly from the publisher for a while (in a form where you get chapters as editing is finished).

§

The book is an introduction to using machine learning in Python.

We mostly rely on scikit-learn, which is the most complete package for machine learning in Python. For my own projects I do prefer my own package, milk, but it is not as complete. It has some things that scikit-learn does not (and some that they have, correctly, appropriated).

We try to cover all the major modes in machine learning and, in particular, have:

- classification
- regression
- clustering
- dimensionality reduction
- topic modeling

and also, towards the end, three more applied chapters:

- classification of music
- pattern recognition in images
- using jug for parallel processing, including in the cloud (see the sketch below)
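To give a flavor of that last item: jug is my task-based framework for parallelization and result caching. The toy sketch below is mine, not taken from the book (it follows the same lines as jug's own tutorial): decorated functions become tasks, and running `jug execute` on the file in several shells, or as several cluster jobs, executes the tasks in parallel.

```python
# Toy jug sketch (not from the book). Decorated functions become tasks;
# run `jug execute thisfile.py` in several shells or cluster jobs and
# the tasks are distributed among them, with results cached on disk.
from jug import TaskGenerator

@TaskGenerator
def is_prime(n):
    # Trial division; each call becomes one independent task.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

@TaskGenerator
def count_true(values):
    # Aggregation task: runs once all its inputs are available.
    return sum(values)

primes = [is_prime(n) for n in range(2, 10000)]
total = count_true(primes)
```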

§

The approach is tutorial-like, without much math but with lots of code examples.

This should get people started and will be more than enough if the problem is easy (and there are still many easy problems out there). With good features (which are problem-specific, anyway), knowing how to run an SVM will very often be enough.
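For illustration, "running an SVM" in scikit-learn really is only a few lines. This snippet is mine, not lifted from the book; the iris dataset and the parameter values are just placeholders:

```python
# Minimal SVM workflow in scikit-learn; dataset and parameters are
# placeholders, not recommendations.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0)

clf = SVC(kernel='rbf', C=1.0)  # RBF kernel; C controls regularization
clf.fit(X_train, y_train)
print('held-out accuracy: {:.2f}'.format(clf.score(X_test, y_test)))
```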

Lest you fear we are giving people just enough knowledge to be dangerous, we stress correct evaluation of the results throughout the book. We warn repeatedly against mixing up your training and testing data. This simple principle is, unfortunately, still often disregarded in scientific publications. [1]
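In code, the principle is cheap to follow. For example, scikit-learn's cross-validation helpers score each fold only on data the model never saw during fitting (again a sketch of mine, not the book's code):

```python
# Cross-validation keeps training and testing data separate:
# each fold is scored on samples held out of fitting.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

data = load_iris()
scores = cross_val_score(SVC(), data.data, data.target, cv=5)
print('5-fold accuracy: {:.2f} +/- {:.2f}'.format(scores.mean(), scores.std()))
```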

§

There is an aspect that I really enjoyed about this whole process:

Before starting the book, I had already submitted two papers, neither of which is out yet (even though, after some revisions, both have been accepted). In the meantime, the book has been written, edited (only a few minor issues are still pending), and people have been able to buy parts of it for a few months now.

I now have renewed confidence in my choice to stay in science (also because I moved from a place where things are completely absurd to a place where they work very well). But the publication delays that are common in the life sciences are an emotional drag. In some cases, the bulk of the work is finished years before the paper finally comes out.

[1] It is rare to see somebody just report training accuracy and claim their algorithm does well; in fact, I have never seen it in a recent paper. However, performing feature selection or parameter tuning on the whole dataset prior to cross-validating on the selected features with the tuned parameters is still pretty common today (there are other sins of evaluation too: “we used multiple parameters and report the best”). This leads to inflated results all around. One of the problems is that, if you do things correctly in this environment, you risk reviewers saying “looks great, but so-and-so got better results”, because so-and-so tuned on the testing set and seems to have “beaten” you. (Yes, this has happened to me, multiple times; but that is a rant for another day.)
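To make the footnote concrete, here is a sketch (mine, and a standard demonstration rather than anything from the book) of why selecting features on the whole dataset before cross-validating inflates results. On pure noise, honest accuracy should hover around chance (0.5); the leaky version typically reports much more:

```python
# Feature selection outside vs. inside cross-validation, on pure noise.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.randn(50, 1000)   # pure noise: there is nothing to learn
y = rng.randint(0, 2, 50)

# Wrong: feature selection has already seen the future test folds.
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
print('leaky CV accuracy: %.2f'
      % cross_val_score(LinearSVC(), X_sel, y, cv=5).mean())

# Right: selection is re-fit inside each fold, on training data only.
pipe = Pipeline([('select', SelectKBest(f_classif, k=10)),
                 ('clf', LinearSVC())])
print('honest CV accuracy: %.2f' % cross_val_score(pipe, X, y, cv=5).mean())
```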
