While going through the fast.ai course, I decided to build my own deep learning box. This post is a summary of my experience, but it won’t tell you every detail needed to build your own. There are quite a few guides out there for that.
I remember my brother using a website called PCPartPicker to put together a list of compatible parts and find where to buy them. That website is still commonly used today, and it’s what I used to pick my parts. This is the list I ended up with. (Note: some items may be discontinued by the time you’re reading this.) The important thing for me was to get an Nvidia card so I could properly use CUDA on my machine. The card is on the lower end of what is considered acceptable, but I was on a budget.
Here are most of the parts in their boxes before assembling.
The process was pretty straightforward. The motherboard went in first, then the processor, and so on. I did have an issue where the backplate came off when I unscrewed the mounting screws for the processor fan. I struggled to mount the fan because of that, until I realized the backplate had fallen behind the case. I also wish I had mounted the fan the other way, since a plastic part of it now covers one of the RAM slots.
Most of the cords can only plug into one spot, and everything on the motherboard and power supply is labeled. It’s just paint-by-numbers for the most part. I did make the mistake of not plugging the GPU into the power supply, but I got a helpful error message when I tried to boot up.
After I was able to boot, I installed Arch Linux. I have been involved with the Arch community for quite some time editing the Wiki and have installed it numerous times, so I was comfortable doing so. (Here’s my own installation guide; use with caution.) Additionally, all the supercomputer clusters I used in graduate school were Linux-based, and our lab set up our own cluster. One downside of Arch is that some libraries are packaged for Ubuntu and aren’t as straightforward to build on Arch.
Once Arch was up, I installed JupyterLab and JupyterHub, CUDA, and some of the deep learning frameworks, and was able to start running things pretty quickly.
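For reference, the software setup boils down to a handful of pacman commands. The package names below are my best guesses at the current official-repo names (in particular `python-pytorch-cuda` for a GPU build of PyTorch), so verify them against the repos before copying:

```shell
# Install CUDA and the Jupyter stack from the official repos.
# Verify current package names first with: pacman -Ss jupyter cuda
sudo pacman -S cuda jupyterlab jupyterhub

# A CUDA-enabled PyTorch build (package name is an assumption;
# you may prefer installing via pip or conda instead):
sudo pacman -S python-pytorch-cuda

# Quick sanity check that the framework can see the GPU:
python -c "import torch; print(torch.cuda.is_available())"
```

If the last command prints `True`, the card and driver stack are wired up correctly; if not, the usual suspects are the Nvidia driver version and a needed reboot.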
Thus far, it’s been a good experience using the machine for running a couple of Kaggle competitions.