Machine Learning in a nutshell
You can go to Wikipedia for a larger shell - if you have been with me for some time, you know I don't do serious writing.
Alan Turing, the father of computing, became more widely known because of the movie The Imitation Game. The actor Benedict Cumberbatch is of course one of the heroes here, but more importantly, the reason for Turing to invent a computer - at least as the movie tells it - was to mimic the interaction between him and his lover, who died young of disease. Whether this is true or not, one would have to go ask Turing in heaven, but it does coincide with his ideas and theory about computers and artificial intelligence (AI).
There is an escape room game called 999, originally published on the NDS and later ported to many different consoles. The game discusses lots of theories, hypotheses and urban legends, one of them being the idea of AI - imagine you are talking to something behind a door, and from the conversation you cannot tell whether it is a human or a machine. That is the idea Turing was proposing - and it is super romantic that if he really could have invented this AI in his time, he could have "reborn" his lover, which implies that what Turing loved and craved was nothing more than his lover's mind. (The part I loved the most is not this - I like the part where they talked about deciphering the German Enigma. Whether Turing meant to do that out of justice or not, I personally do not think it matters, especially for people at this level of intelligence.)
AI is not something very new - it has at least 80 years of documented history, as an idea proposed by Turing around World War 2 - but not until recently has it become something concrete and applicable. The topic is so old that I am not going to write about things you can Google; the most intriguing part to me is the distinction between general AI and narrow AI. In theory, the AI Turing was proposing is close to a human: it can catch up no matter how you switch topics and levels during the conversation. Narrow AI is what we have in use right now - different models have different use cases, each quite specific to one function, but doing great at it. General AI, if it ever becomes possible in my time, would, I imagine, be a quantum-computer-powered algorithm needing at least two layers of learning - a first layer for pattern recognition of the topic or area, which then hands off to a second layer of existing specialised models for processing.
To conclude, machine learning and narrow AI enable a good division of labour between programmers, mathematicians and industry experts (including different professionals) - where most of the general public is there to pick a suitable model, choose the features that matter to them, train the model and make use of it. This is where the value-adding activities come into play.
Machine Learning is, at its core, a pattern recognition program, assessed by a confusion matrix of true and false positives (and negatives), and by correlation scores such as the Pearson coefficient, checked for consistency between the training set and the validation set.
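To make those two evaluation ideas concrete, here is a minimal sketch in plain Python: counting the confusion-matrix cells for binary labels, and computing a Pearson correlation between two sequences. The labels and predictions below are made-up illustrative data, not from any real model.

```python
def confusion_counts(actual, predicted):
    """Count true/false positives and negatives for binary labels (1 = positive)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    return tp, fp, fn, tn

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented example: 8 binary ground-truth labels vs model predictions.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

tp, fp, fn, tn = confusion_counts(actual, predicted)
print(tp, fp, fn, tn)                 # -> 3 1 1 3
print(round(pearson(actual, predicted), 3))  # -> 0.5
```

In practice a library such as scikit-learn provides these metrics ready-made, but the hand-rolled version shows what the numbers actually count.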
Machine Learning was not totally foreign to me by the time I started working on it. My capability in RNA-Seq, another stream of big data, gave me a flatter learning curve. The changeover from R to Python was also a great point for me: R is more difficult to code in than Python, but the two are so similar and related that they have a bunch of counterpart functions, so I only needed to acquire the syntax because I already had the concepts. They are similar in workflow as well - data preprocessing (exploration) > picking the best model based on the data > improving the scoring.
My journey was not as tough as a complete novice's, I would reckon, but the difficult part is to emphasize the value added by a biologist, like what I have done in RNA-Seq.