Last month I had the wonderful opportunity to visit Barcelona, Spain for the OpenStack Ocata Summit, where I spoke about the work we've been doing integrating Ironic into OpenStack-Ansible.
I really enjoyed the experience, and very much enjoyed co-presenting with Andy McCrae - a great guy and a very clever and hard-working developer. If I ever make it to the UK (and see more than the inside of LHR), I'm definitely going to visit!
And here are a few happy snaps from Barcelona - lovely place, wish I could've stayed more than 6 days :-)
In my job we make use of Vidyo for videoconferencing, but today I ran into an issue after re-imaging my Ubuntu 16.04 desktop.
The latest version of vidyodesktop requires libqt4-gui, which doesn't exist in Ubuntu anymore. This always seems to be a problem with non-free software targeting multiple versions of multiple operating systems.
You can work around the issue by doing something like:
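The exact commands aren't shown here, but the usual workaround is to force the package past the dependency check, along these lines (a sketch - the installer filename is as referred to later in the post):

```
sudo dpkg --force-depends -i VidyoDesktopInstaller-*.deb
```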
but then you get the dreaded unmet-dependencies roadblock, which blocks all future package manager updates and operations, e.g.
You might want to run 'apt-get -f install' to correct these:
 vidyodesktop : Depends: libqt4-gui (>= 4.8.1) but it is not installable
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
It's a known problem, and it's been well documented. The suggested solution was to modify the VidyoDesktopInstaller-*.deb package, but I didn't want to do that (because when the next version comes out, it will need to be handraulically fixed too - and that's an ongoing burden I'm not prepared to live with). So I went looking for another solution - and found Debian's equivs package (thanks to tonyb for pointing me in the right direction!)
So what we want to do is create a dummy Debian package that satisfies the libqt4-gui requirement. First off, let's uninstall vidyodesktop and install equivs:
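The commands themselves aren't included above, so here's a sketch of how that goes with equivs (the control-file contents are my assumption of the minimum needed, not the post's exact file):

```
sudo apt-get remove vidyodesktop
sudo apt-get install equivs
equivs-control libqt4-gui.control
# Edit libqt4-gui.control so it contains at least:
#   Package: libqt4-gui
#   Version: 4.8.1
#   Description: Dummy package to satisfy vidyodesktop's dependency
equivs-build libqt4-gui.control
sudo dpkg -i libqt4-gui_4.8.1_all.deb
sudo dpkg -i VidyoDesktopInstaller-*.deb
```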
It made me realise how crusty the old site had become, how many things I had planned to do but left undone, and how I hadn't applied simple concepts such as Infrastructure as Code that have become accepted best practice in the time since I originally set this up.
Of course things have changed in this time. People blog less now, so I've also taken the opportunity to remove what appear to be dead blogs from the aggregator. If you have a blog of interest to the Linux Australia community, you can ask to be added via emailing planet at linux dot org dot au. All you need is a valid Atom or RSS feed.
The other thing is that the blog aggregator software we use hasn't seen an update since 2011. It started out as PlanetPlanet, then moved on to Venus, and so I've taken a fork to hopefully improve things some more when I find my round tuit. Fortunately I no longer need to run it under Python 2.4, which is getting a little long in the tooth.
Had some fun solving an issue with partitions larger than 2TB, and came across a little gotcha with mocking in Python when a) you want to mock both an object and a function in that object, and b) you want to mock.patch.object at both the test class and test method level.
Say you have a function you want to test that looks like this:
and you want to add a new test function, adding new patches just for that test.
You want to verify that the class is instantiated with the right options, and you need to mock the add_partition method as well. How do you use the existing test class (with the mock of the execute function), add a new mock for the DiskPartitioner.add_partition function, and the __init__ of the DiskPartitioner class?
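The original code isn't reproduced here, so this is a minimal sketch of the pattern, with hypothetical stand-ins (DiskPartitioner, execute, make_partitions) reconstructed from the post's description. The gotchas it shows: the class-level mock arrives after the method-level mocks in the argument list, method-level decorator mocks arrive bottom-up, patching __init__ needs return_value=None, and autospec=True passes the instance through as the first recorded argument:

```python
import sys
import unittest
from unittest import mock


# Hypothetical stand-ins for the code under test (the real module isn't shown).
class DiskPartitioner(object):
    def __init__(self, dev, disk_label='msdos'):
        self.dev = dev
        self.disk_label = disk_label

    def add_partition(self, size):
        raise RuntimeError('would shell out to parted')


def execute(*cmd):
    raise RuntimeError('would shell out')


def make_partitions(dev):
    # The function under test: instantiates the class, then calls both
    # a method on it and a module-level function.
    partitioner = DiskPartitioner(dev, disk_label='gpt')
    partitioner.add_partition(100)
    execute('partprobe', dev)


THIS_MODULE = sys.modules[__name__]


# Class-level patch: applies to every test method, and its mock is appended
# AFTER any method-level mocks in each test's argument list.
@mock.patch.object(THIS_MODULE, 'execute', autospec=True)
class MakePartitionsTestCase(unittest.TestCase):

    # Method-level patches are applied bottom-up, so the bottom decorator's
    # mock is the first argument. Patching __init__ needs return_value=None,
    # otherwise instantiation raises TypeError.
    @mock.patch.object(DiskPartitioner, 'add_partition', autospec=True)
    @mock.patch.object(DiskPartitioner, '__init__',
                       return_value=None, autospec=True)
    def test_make_partitions(self, mock_init, mock_add, mock_execute):
        make_partitions('/dev/sda')
        # With autospec=True the instance is recorded as the first argument,
        # so match it with mock.ANY.
        mock_init.assert_called_once_with(mock.ANY, '/dev/sda',
                                          disk_label='gpt')
        mock_add.assert_called_once_with(mock.ANY, 100)
        mock_execute.assert_called_once_with('partprobe', '/dev/sda')
```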
This is a continuation of the previous post on Ironic on a NUC - setting things up. If you're following along at home, read that first.
Creating disk images for deployment
Now let's build some images for use with Ironic. First off, we'll need a deploy ramdisk image for the initial load, and we'll also need the image that we want to deploy to the hardware. We can build these using diskimage-builder, part of the TripleO effort.
So let's do that in a virtual environment:
mrda@irony:~/src$ mkvirtualenv dib
(dib)mrda@irony:~/src$ pip install diskimage-builder six
And because we want to use some of the triple-o elements, we'll refer to these as we do the build. Once the images are built we'll put them in a place where tftpd-hpa can serve them.
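The build commands themselves aren't shown above; with diskimage-builder installed and the tripleo-image-elements checked out, they would look something like this (element names and output paths are my assumptions, not the post's exact invocation):

```
(dib)mrda@irony:~/src$ export ELEMENTS_PATH=~/src/tripleo-image-elements/elements
(dib)mrda@irony:~/src$ ramdisk-image-create ubuntu deploy-ironic -o deploy-ramdisk
(dib)mrda@irony:~/src$ disk-image-create ubuntu baremetal dhcp-all-interfaces -o my-image
(dib)mrda@irony:~/src$ sudo cp deploy-ramdisk.kernel deploy-ramdisk.initramfs \
    my-image.vmlinuz my-image.initrd /tftpboot/
```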
You might have to try that a couple of times to wake up the AMT interface, but it's important that you do to ensure you don't get a failed deploy.
And then we move the node to active, which will DHCP-boot the deploy ramdisk, which will in turn write the user image to disk - if everything goes well. This will also take quite a long time, so go make that cup of tea :-)
(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-set-provision-state thenuc active
Your NUC should have just booted into your user image and should be ready for you to use!
Actually, that's not what really happened. It's what I would have liked to happen.
But there were some issues. Firstly, ironic-conductor complained about not being able to find 'ironic-rootwrap'. Once I symlinked that into place, it couldn't find the config for rootwrap, so I symlinked that into place too. Then it complained that iscsiadm didn't have the correct permissions in rootwrap to do its thing...
So I gave up, and did the thing I didn't want to. Back on the VM I ended up doing a "sudo python setup.py install" in the ironic directory so everything got installed into the correct system place and then I could restart ironic-conductor.
It should all work in develop mode, but clearly it doesn't, so in the interests of getting something up and going (and finishing this blog post :) I did the quick solution and installed system-wide. Perhaps I'll circle back and work out why someday :)
When doing this, the deployment can fail for a number of reasons. To recover, you need to delete the enrolled node and start again once you've worked out what the problem is, and worked out how to fix it. To do that you need to do:
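The cleanup commands aren't listed at this point; with the ironic client of that era they would be along these lines (using the node name from earlier in the post):

```
(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-set-provision-state thenuc deleted
(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-delete thenuc
```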
Just because a few people have asked, here is what I did to get a standalone Ironic installation going and running in an Intel NUC.
Why a NUC? Well, the Intel NUC is a cute little piece of hardware that is well suited as a test lab machine that can sit on my desk. I'm using a DC53427HYE, which is an i5 with vPro. vPro is a summary term for a bunch of Intel technologies, including AMT (Active Management Technology). This allows us to remotely manage this desktop for things like power management - think of this as an analogy to IPMI for servers.
Getting the VM ready
I like to do my development in VMs - after all, isn't that what the cloud is for? :-) So first off, using your virtualisation technology of choice, build a VM with Ubuntu 14.04.2 server on it. I've allocated 2GB RAM and a 30GB disk. The reason for the larger than average disk is so that I have room for building ramdisk and deployment disk images. I've called this box 'irony'.
On the VM you'll need a few extra things installed once you've got the base OS installed:
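The package list isn't reproduced above; based on what's used later in the post (TFTP, DHCP, PXE and image building), it would be something like this - my guess at the set, not the post's exact list:

```
mrda@irony:~$ sudo apt-get install git python-dev python-pip \
    tftpd-hpa isc-dhcp-server syslinux qemu-utils
```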
For this setup, I'm going to run separate networks for the control plane and data plane, so I've added a USB NIC to the NUC to separate the networks. My public connection to the internet is on the 192.168.1.x network, while the service net control plane is on 10.x.x.x. To support this I added a second network interface to the VM, switched both NICs to bridged networking, assigned eth0 and eth1 appropriately, and updated /etc/network/interfaces in the VM so the right adapter is on the right network. It ended up looking like this in /etc/network/interfaces:
# The loopback network interface
auto lo
iface lo inet loopback

# The primary (public) network interface
auto eth0
iface eth0 inet dhcp

# Control plane
auto eth1
iface eth1 inet static
    address 10.0.0.5
    netmask 255.255.255.0
Setting up DHCP
We need to make sure we're listening for DHCP requests on the right interface:
mrda@irony:~$ sudo sed -i 's/INTERFACES=""/INTERFACES="eth1"/' /etc/default/isc-dhcp-server
Now configure your DHCP server to hand out an address to the NUC, accounting for some of the uniqueness of the device :) The tail of my /etc/dhcp/dhcpd.conf looks a bit like this:
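That tail isn't reproduced here; it would look something like this sketch (the MAC address is a placeholder, and the addresses are my assumptions consistent with the 10.0.0.5 control-plane address used elsewhere in the post):

```
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.199;
  next-server 10.0.0.5;        # the TFTP server (this VM)
}

host thenuc {
  # The NUC's wired MAC address - replace with your own
  hardware ethernet 00:00:00:00:00:00;
  fixed-address 10.0.0.6;
  filename "pxelinux.0";
}
```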
We'll also need to create /tftpboot/map-file which will need to look like this:
re ^(/tftpboot/) /tftpboot/\2
re ^/tftpboot/ /tftpboot/
re ^(^/) /tftpboot/\1
re ^([^/]) /tftpboot/\1
This is because of a weird combination of the feature sets of tftpd-hpa, isc-dhcp-server, ironic and diskimage-builder. Basically the combination of relative and dynamic paths is incompatible, and we need to work around the limitations by setting up a map-file. This would be a nice little patch to send upstream one day to one or more of these projects. Of course, if you're deploying ironic in a production OpenStacky way where you use neutron and dnsmasq, you don't need the map file - it's only when you configure all these things handraulically that you face this problem.
And we'll want to make sure the PXE boot stuff is all in place ready to be served over TFTP.
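Those commands aren't shown above; on Ubuntu 14.04 it would look something like this (the syslinux path is my assumption from the trusty packaging - check where your distro installs pxelinux.0):

```
mrda@irony:~$ sudo mkdir -p /tftpboot/pxelinux.cfg
mrda@irony:~$ sudo cp /usr/lib/syslinux/pxelinux.0 /tftpboot/
```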
Now we'll need to configure ironic to work standalone. There's a few config options that'll need to be changed from the default including changing the authentication policy, setting the right driver for AMT, setting a hostname and turning off that pesky power state syncing task.
mrda@irony:~$ cd src/ironic/
mrda@irony:~/src/ironic (master)$ cp etc/ironic/ironic.conf.sample etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#auth_strategy=keystone/auth_strategy=noauth/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#enabled_drivers=pxe_ipmitool/enabled_drivers=pxe_amt/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#host=.*/host=test-host/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#sync_power_state_interval=60/sync_power_state_interval=-1/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s%#api_url=<None>%api_url=http://10.0.0.5:6385/%" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#dhcp_provider=neutron/dhcp_provider=none/" etc/ironic/ironic.conf.local
There's also the little matter of making sure the image directories are ready:
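Those commands aren't included; a sketch of what's needed, assuming ironic's default master image cache location under the TFTP root (adjust paths and ownership to your setup):

```
mrda@irony:~$ sudo mkdir -p /tftpboot/master_images
mrda@irony:~$ sudo chown -R mrda /tftpboot
```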
And that's everything that needs to be done to prepare the VM for running ironic. The next post will cover starting the ironic services, building images for deployment, and poking ironic from the command line.