Developing OpenStack Heat means spending a fair amount of time building and customizing bootable cloud images. Much of that time is spent waiting for RPMs, debs and tarballs to be downloaded by a vanilla guest OS running inside a VM, and since I work from home on an average broadband connection in a remote country in the South Pacific, the result is some frustrating wait times.
Since the same packages are downloaded over and over, some local caching would clearly help. This seemed like a good excuse to use a Raspberry Pi, so I went with a Raspberry Pi Model B running Raspbian. The aim was to set it up as a bridge and run a transparent squid proxy between eth0 (the built-in network interface) and eth1 (a USB ethernet dongle).
Once I'd completed the initial installation, I installed the following:
$ sudo apt-get install squid3 bridge-utils
eth0 and eth1 were set up to bridge on my 192.168.1.0/24 network, and an iptables rule was set to direct any port 80 traffic that passes through the bridge to squid's default port.
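For reference, the bridge can be defined in /etc/network/interfaces along these lines (interface names are from my setup; the bridge picks up its address via DHCP here, adjust to taste):

    auto br0
    iface br0 inet dhcp
        bridge_ports eth0 eth1

One wrinkle worth noting: bridged traffic only traverses iptables when the bridge-netfilter hook is enabled, so the redirect needs something roughly like this (squid's default port is 3128):

    # make traffic crossing the bridge visible to iptables
    sysctl -w net.bridge.bridge-nf-call-iptables=1
    # redirect port 80 traffic passing through the bridge to squid
    iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j REDIRECT --to-port 3128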
The following changes were made to the squid configuration file. Since I'm interested in caching larger files the maximum_object_size has been set to 512MB. My Raspberry Pi is running on a 16GB SD card; for now I have configured cache_dir to use 8GB of that.
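The relevant squid.conf directives end up looking something like the following (the cache_dir path is squid3's Raspbian default; the trailing 16 and 256 are squid's standard first- and second-level directory counts, and depending on your squid version the http_port mode keyword is either intercept or the older transparent):

    http_port 3128 intercept
    maximum_object_size 512 MB
    cache_dir ufs /var/spool/squid3 8192 16 256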
And did this actually help my image build time? Using diskimage-builder I ran an Ubuntu customization where the source image file was already cached locally. The first run populated the squid cache with apt repository packages and the second run had a hot squid cache. The build time went from 04:20 to 01:20 (mm:ss), which I'm pretty happy with.
Doing the same with heat-jeos (which is based on Oz) managed to get some cache hits on the second run, but that had little impact on the overall 22:30 (mm:ss) build time.