Hyperdock


Hyperdock is a distributed hyperparameter optimization program that runs in Docker. Moreover, the optimization target – your program or algorithm – can be written in any language and use any framework. The only requirement is that it too runs in a Docker container and can read its parameters from, and write its loss to, a JSON file.
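
To make that contract concrete, here is a minimal sketch of what an optimization target could look like in Python. The file names (params.json, loss.json) and the parameter names are illustrative assumptions, not the exact Hyperdock interface – see the project's documentation for the real details.

```python
import json

# Read the hyperparameters handed to the container.
# NOTE: the file paths and parameter names are assumptions for illustration.
with open("params.json") as f:
    params = json.load(f)

learning_rate = params["learning_rate"]
batch_size = params["batch_size"]

# Train and evaluate your model with these hyperparameters
# (replaced here by a dummy loss computation).
loss = (learning_rate - 0.01) ** 2 + 0.001 * batch_size

# Report the result back by writing the loss to a JSON file.
with open("loss.json", "w") as f:
    json.dump({"loss": loss}, f)
```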

There exist many great machine learning frameworks such as Tensorflow and PyTorch. However, creating and training the model is just one side of the coin. The other side is trying out sets of hyperparameters, finding the best configuration, and evaluating the model. Hyperdock provides an easy and flexible way of doing exactly that.

Philosophy of Hyperdock

During development, I settled on the following three guiding principles for Hyperdock.

1. Minimal Dependencies

A great tool should be easy to install and get started with. It shouldn’t require a specific version of Ubuntu or the compilation of libraries that might interfere with your other programs.

2. Language Independent

It is common in research to use many different languages and machine learning frameworks. I want Hyperdock to work with anything from Keras to MatCaffe because I want to use one optimization tool for all of my projects!

3. Parallel and Distributed Optimization

The hyperparameter space can quickly become very large. I want Hyperdock to be able to parallelize the parameter testing, taking advantage of clusters, multiple cores, and multiple GPUs.

Getting started with Hyperdock

If this sounds interesting, I suggest you head over to Hyperdock's GitHub page. The first version was recently released, and while it's still a bit rough, it has already proven very useful in my research.

Erik Gärtner

Deep Learning Research Scientist at RADiCAL