Setting up ML terminal

Now I know one really needs the fundamentals certificate to progress further in AWS.

Again, much like working with RNA-Seq, we need a Linux terminal that runs Python. However, training an ML model needs at least 10x the computing power or memory capacity of RNA-Seq. That said, nothing is stopping you from setting one up at home, but a cloud computing service is usually more feasible - if you are not working on a personal project and you can justify your usage, then your company or clients will pay for it, on top of your skills. If you are working on a personal project, chances are you will use the rig heavily enough to justify the setup cost; and if you will not use the machine that heavily, you always have the option to go cloud. So to conclude: if you are setting up your own rig, the setup will not deviate much from what follows, and if you already have the resources to set up your own ML terminal, I imagine I am only rambling for no good reason.
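For reference, here is a minimal sketch (standard library only, Linux-specific) to check how many vCPUs and how much memory the current box actually has before deciding whether it can carry the load:

```python
# Quick look at what the current machine offers, to help decide whether to
# train locally or go cloud. Standard library only; reading /proc/meminfo
# assumes a Linux box, as in the rest of this setup.
import os

cpu_count = os.cpu_count()
with open("/proc/meminfo") as f:
    mem_total_kib = int(f.readline().split()[1])  # first line is MemTotal, in kB
print(f"vCPUs: {cpu_count}, RAM: {mem_total_kib / 1024**2:.1f} GiB")
```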

To train an acoustic recognition model on just 66 training samples and 66 validation tracks, a 2 vCPU + 8 GiB instance without a GPU takes 3 hours. A GPU instance does it in 4 minutes.
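To see that difference yourself, a sketch along these lines confirms whether the instance actually exposes a GPU and times a short run. It assumes PyTorch is installed; the tiny linear model and random tensors are placeholders, not the actual acoustic recognition model.

```python
# Minimal sketch: confirm whether the instance exposes a GPU, then time a
# short training loop. Assumes PyTorch; the model and data below are
# placeholders standing in for the real acoustic model and samples.
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Training on: {device}")

model = torch.nn.Sequential(          # toy stand-in for the acoustic model
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
).to(device)
features = torch.randn(66, 128, device=device)       # 66 training samples
labels = torch.randint(0, 2, (66,), device=device)

optimizer = torch.optim.Adam(model.parameters())
loss_fn = torch.nn.CrossEntropyLoss()

start = time.time()
for _ in range(100):                  # a few epochs, just to compare timings
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
print(f"Elapsed: {time.time() - start:.1f} s")
```

The same script runs on a CPU-only instance because `device` falls back to `cpu`; only the elapsed time changes.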

That said, this difference reminds me of my GTX 950 sleeping in my garage and my RNA-Seq terminal sitting under my table. AWS is fast, but it takes time to launch an instance, it sometimes hangs, and its real strength is collaboration - which a lone wolf like me does not need much of. Oh man, once again I will go learn how to set up my own machine learning rig.
