Scribbles as an Algorithm Service.
This document should feel like a game: one that will help you understand how an ensemble algorithm works, and perhaps even demonstrate why these algorithms can perform so well.
You should see a few points on a grayed canvas that you can draw on with the mouse (or a finger on a tablet). These points are a random, noisy subset of a larger set of data points. The subset has noise, and we are going to try to get a good prediction out of it while using a “suboptimal” machine learning model (namely, you). Feel free to draw curves as complex as you like, but try to draw lines that summarise the dots.
Use the mouse to draw a line that matches the pattern you think fits that small bit of data. When you are done drawing, another sample will appear. Keep drawing lines until a pattern emerges.
When you’ve done this a few times, hit the show-prediction button at the top to see the mean of all the small predictions you’ve made. Even though every small prediction carries some error, by making many of these predictions the combined error cancels out quite nicely. Below you can see the difference between what you’ve predicted and what the true function was.
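The cancelling-out effect is easy to check numerically. A minimal sketch (the numbers here are my own, not from the demo): make many noisy predictions of a single true value and compare the error of one prediction against the error of their mean.

```python
import random

random.seed(42)
true_value = 1.0

# 200 noisy "predictions", each off by up to +/- 0.5.
predictions = [true_value + random.uniform(-0.5, 0.5) for _ in range(200)]

# Error of a single prediction versus error of the mean of all of them.
single_error = abs(predictions[0] - true_value)
mean_error = abs(sum(predictions) / len(predictions) - true_value)

print(single_error, mean_error)  # the mean lands much closer to the truth
```

The individual errors are just as large as before; it is only their average that shrinks, because positive and negative mistakes offset one another.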
Naturally, the more samples you try to fit, the better the prediction will be, because you cancel out more of the noise (both from the sample and from your own drawing). At some point, you may imagine that it gets rather bothersome to have a human make these small approximations, so usually we’d have a computer do it instead. Computers lack the intuition to draw pretty curves, so per sample they will produce a worse model, but they make up for it in speed. If you press the play button, the computer will attempt to beat your prediction by drawing straight lines only.
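A sketch of what the play button might be doing; the details below are my own assumptions, not the actual code behind the demo. Each round picks a small window of neighbouring points, fits a straight line to it with least squares, and the final prediction at any point is the mean of every line whose window covers it.

```python
import random

random.seed(0)

xs = [i / 200 for i in range(200)]
# Noisy samples from a hidden curved function (here: a parabola).
ys = [4 * x * (1 - x) + random.gauss(0, 0.05) for x in xs]

def fit_line(px, py):
    """Ordinary least squares for a single straight line y = a + b*x."""
    n = len(px)
    mx, my = sum(px) / n, sum(py) / n
    b = sum((x - mx) * (y - my) for x, y in zip(px, py)) / \
        sum((x - mx) ** 2 for x in px)
    return my - b * mx, b

# Fit 300 "weak" straight lines, each on 20 neighbouring points.
lines = []
for _ in range(300):
    c = random.randrange(10, 190)
    window = set(range(c - 10, c + 10))
    a, b = fit_line([xs[i] for i in window], [ys[i] for i in window])
    lines.append((window, a, b))

def ensemble_predict(i):
    """Mean of all straight lines whose window covers point i."""
    vals = [a + b * xs[i] for window, a, b in lines if i in window]
    return sum(vals) / len(vals)

# Should land close to the true value 4 * 0.5 * 0.5 = 1.0.
print(ensemble_predict(100))
```

No single straight line can follow the parabola, yet the average of many of them traces it surprisingly well.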
This is the intuition behind ensemble models: by combining a lot of simple models, we may compete with a very complicated one. In this sense, ensemble algorithms in machine learning are like genetic algorithms in operations research. The mathematical foundation is a bit uninspiring, but they tend to work very well for a lot of problems.
It should be clear that the method of selecting subsets is paramount to the performance of the algorithm. If I select points at random over the entire landscape, each model will produce relatively “flat” lines that still need to be ensembled. By sampling points that lie close together, every part of the function gets mapped properly.
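A hedged illustration of that point, with a setup of my own choosing: fit one straight line to points drawn from the whole domain and another to a local window near x = 0.9, then compare how well each tracks the curved function at that spot.

```python
import random

random.seed(1)

def f(x):
    return 4 * x * (1 - x)  # the hidden curved function

def noisy(x):
    return f(x) + random.gauss(0, 0.05)

def fit_line(pts):
    """Ordinary least squares for y = a + b*x on (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    b = sum((x - mx) * (y - my) for x, y in pts) / \
        sum((x - mx) ** 2 for x, _ in pts)
    return my - b * mx, b

def sample(lo, hi, n):
    """Draw n noisy points with x uniform in [lo, hi]."""
    return [(x, noisy(x)) for x in (random.uniform(lo, hi) for _ in range(n))]

global_pts = sample(0.0, 1.0, 100)   # spread over the entire landscape
local_pts = sample(0.85, 0.95, 20)   # a tight window near x = 0.9

a_g, b_g = fit_line(global_pts)
a_l, b_l = fit_line(local_pts)

global_error = abs((a_g + b_g * 0.9) - f(0.9))
local_error = abs((a_l + b_l * 0.9) - f(0.9))
print(global_error, local_error)
```

The globally fitted line flattens the curve out and misses badly at x = 0.9, while the locally fitted line behaves like a tangent and stays close; this is why local windows give the ensemble something useful to average.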
For attribution, please cite this work as
Warmerdam (2015, Sept. 17). koaning.io: Ensemble Yourself. Retrieved from https://koaning.io/posts/ensemble-yourself/
BibTeX citation
@misc{warmerdam2015ensemble,
  author = {Warmerdam, Vincent},
  title = {koaning.io: Ensemble Yourself},
  url = {https://koaning.io/posts/ensemble-yourself/},
  year = {2015}
}