Thursday 6 May 2021

Building multiarchitecture aware containers

Multi-architecture support is not something that most people think about - it's easy to assume that the only architecture that counts is x86_64, but that's not the case!

In my job at Red Hat I care about making sure that software running on ppc64le, arm64 and s390x behaves just the same as it does on x86_64, which brings me to containers.  Did you know that container images are architecture specific?

It makes sense, right?  Containers contain software, and software is often architecture specific.  Yet when you pull down a container image, you never specify an architecture to use.  So how does that work?  And more importantly for us developers, how do I ensure my containers are multiarch-aware and "just work" no matter which platform they run on?

For the purposes of this post, I'm going to be using podman and quay.io, but you could just as easily use docker and Docker Hub.  I much prefer the daemonless design of podman, but that's up to you.
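
To see this in action before we build anything ourselves, pull any well-known multi-arch image and ask it what architecture it's running - the registry hands your host the matching build automatically.  A quick demo (assuming you have podman installed and can reach docker.io):

$ podman run --rm docker.io/library/alpine uname -m
x86_64

That's on an amd64 host; on arm64 you'd see aarch64, and so on.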

[Aside: From here on in I'm going to use the term amd64 in lieu of x86_64 - you can read more about that here.  Likewise, you might be wondering why we use arm64 instead of aarch64; you can read about that too.  And if that's not enough, here's a nice link on golang architectures.]

A simple application we want to containerise

To begin with, let's build a simple Flask Python application.  Something that is super trivial, but can demonstrate software running in a container.  To that end, I present to you moo-chop.  It just prints a hello world message in a random colour, like this:


We want to deploy this application via a container on any of {arm64, amd64, ppc64le, s390x}, so how do we do it?

Setting up a container registry

Next we need to create an account on a container registry, and create a repository.  Pretty trivial for the likes of you I'm sure.  I've done this over on quay.io, which you can see here: https://quay.io/repository/mrdaredhat/moo-chop

Setting up our environment

Just for readability, I'll define some environment variables here that you'll need on each architecture's build host, and wherever you construct your manifest.

# One time setup
$ ARCH=ppc64le # or amd64, arm64 or s390x
$ QUSER=<your quay.io username>
$ QPASS=<your quay.io password>
$ PROJECT=moo-chop
$ GITREPO=https://github.com/mrda/$PROJECT.git

Building our containers on each architecture, and pushing them to quay.io

Now that we have source code we want to build, and access to a container registry, we can build a container image for each architecture and push each of them to the registry.

The process we follow is exactly the same for each architecture, so I'll only show the steps once.

# Repeat for each architecture you want to support
$ ssh <build-host-for-this-architecture>
# Paste in your environment variables here
$ mkdir -p src
$ cd src
$ git clone $GITREPO
$ cd $PROJECT
# Build your software.  In our case, there's no compiling needed
# as we're only distributing python code.  But if this were C or Go
# or something else, this would be your build step.
$ podman build -t quay.io/$QUSER/$PROJECT:$ARCH --arch $ARCH .
$ podman login quay.io --username $QUSER --password $QPASS
$ podman push quay.io/$QUSER/$PROJECT:$ARCH
^D

And that's one architecture down.  Lather, rinse, repeat for all the architectures that you want to build for.
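
If you have ssh access to a build host for each architecture, you can script the whole round trip rather than doing it by hand.  Here's a minimal sketch, assuming hypothetical build host names (amd64-builder and friends - substitute your own):

#!/bin/bash
# Hypothetical build hosts, one per architecture - substitute your own
declare -A BUILDERS=( [amd64]=amd64-builder [arm64]=arm64-builder
                      [ppc64le]=ppc64le-builder [s390x]=s390x-builder )
PROJECT=moo-chop
GITREPO=https://github.com/mrda/$PROJECT.git
QUSER=<your quay.io username>
QPASS=<your quay.io password>

for ARCH in "${!BUILDERS[@]}"; do
    ssh "${BUILDERS[$ARCH]}" "set -e
        mkdir -p src && cd src
        rm -rf $PROJECT && git clone $GITREPO && cd $PROJECT
        podman build -t quay.io/$QUSER/$PROJECT:$ARCH --arch $ARCH .
        podman login quay.io --username $QUSER --password $QPASS
        podman push quay.io/$QUSER/$PROJECT:$ARCH"
done

Note that the variables are expanded locally before each command is sent to the remote build host.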

Checking your architecture specific containers on quay.io

The podman push commands from the step above pushed your built container images into the registry.  You should verify they are all there as expected.  In my case, I can do this by visiting https://quay.io/repository/mrdaredhat/moo-chop?tab=tags and seeing the tag for each container build that is now available.
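
If you'd rather verify from the command line, a reasonably recent skopeo can list the repository's tags for you:

$ skopeo list-tags docker://quay.io/$QUSER/$PROJECT

You should see one tag for each architecture that you pushed.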


Building a multiarchitecture manifest

We have now built containers for each architecture, and uploaded them to our favourite container registry.  Next we need to build a manifest list that links each container image to its architecture, so that the right image is automatically selected when we request the bare image name from the registry.

$ podman login quay.io --username $QUSER --password $QPASS
$ podman manifest create quay.io/$QUSER/$PROJECT:latest
$ podman manifest add quay.io/$QUSER/$PROJECT:latest --arch s390x docker://quay.io/$QUSER/$PROJECT:s390x
$ podman manifest add quay.io/$QUSER/$PROJECT:latest --arch ppc64le docker://quay.io/$QUSER/$PROJECT:ppc64le
$ podman manifest add quay.io/$QUSER/$PROJECT:latest --arch amd64 docker://quay.io/$QUSER/$PROJECT:amd64

# Push the manifest up to quay.io
$ podman manifest push quay.io/$QUSER/$PROJECT:latest docker://quay.io/$QUSER/$PROJECT

Testing it out to see that it all works

Let's examine the manifest to make sure it's multiarchitecture-aware.

$ podman manifest inspect docker://quay.io/$QUSER/$PROJECT:latest | jq '.manifests[] | .digest, .platform'
"sha256:e9aea7d03e2d6f77aa79ffb395058d68778c72f52cf4264472a86978a6e9d470"
{
  "architecture": "s390x",
  "os": "linux"
}
"sha256:fd1e3f1a05e8c5df91725760241edf8d676c76da7a451457796f41f6e9ea7940"
{
  "architecture": "ppc64le",
  "os": "linux"
}
"sha256:60b2cbbc4fe9becb95c9d27b89b966b12d7fa8029d29928c900651a09abd6a3b"
{
  "architecture": "amd64",
  "os": "linux"
}

Let's try pulling down and starting the container.  Note that we aren't specifying an architecture in the podman run command - podman consults the manifest, determines the correct container image for the host architecture, and pulls and runs that one.

$ podman run --rm -it -p 5000:5000 quay.io/$QUSER/$PROJECT
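
As a quick smoke test from another terminal (assuming the Flask app listens on its default port of 5000, as the port mapping above suggests):

$ curl http://localhost:5000/

You should get the hello world message back, no matter which architecture you're on.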

And that works on any of amd64, ppc64le or s390x - and on arm64 too, once you build an arm64 image and add it to the manifest!


Sunday 1 July 2018

LCA2019 Call for Proposals (CFP) is now open!

On behalf of the LCA2019 team we are pleased to announce that the Call
for Proposals for linux.conf.au 2019 is now open! This Call for
Proposals will close on July 30.

linux.conf.au is one of the best-known community driven Free and Open
Source Software conferences in the world. In 2019 we welcome you to
join us in Christchurch, New Zealand on Monday 21 January through to
Friday 25 January.

For full details including those not covered by this announcement
visit https://linux.conf.au/call-for-papers/

== IMPORTANT DATES ==

* Call for Proposals Opens: 2 July 2018
* Call for Proposals Closes: 30 July 2018 (no extensions)
* Notifications from the programme committee: early-September 2018
* Conference Opens: 21 January 2019

== HOW TO SUBMIT ==

Create an account or login at https://linux.conf.au/dashboard/ and
click the link to submit your proposal.

== ABOUT LINUX.CONF.AU ==

linux.conf.au is a conference where people gather to learn about the
entire world of Free and Open Source Software, directly from the
people who contribute.  Many of these contributors give scheduled
presentations, but much interaction occurs in-between and after formal
sessions between all attendees. Our aim is to create a deeply
technical conference where we bring together industry leaders and
experts on a wide range of subjects.

linux.conf.au welcomes submissions from first-time and seasoned
speakers, from all free and open technology communities, and all walks
of life. We respect and encourage diversity at our conference.

== CONFERENCE THEME ==

Our theme for linux.conf.au 2019 is "The Linux of Things". Building on
the role that Linux plays in our everyday lives, we will address
IoT-related opportunities and concerns from the purely technical
through environmental, health, privacy, security and more. Please let
this inspire you, but not restrict you - we will still have many talks
about other interesting things in our community.

For some suggestions to get you started with your proposal ideas
please visit the CFP page on the linux.conf.au website.

== PROPOSAL TYPES ==

We're accepting submissions for three different types of proposal:

* Presentation (45 minutes): These are generally presented in lecture
  format and form the bulk of the available conference slots.
* Tutorial (100 minutes): These are generally presented in a classroom
  format. They should be interactive or hands-on in nature. Tutorials
  are expected to have a specific learning outcome for attendees.
* Miniconf (full-day): Single-track mini-conferences that run for the
  duration of a day on either Monday or Tuesday. We provide the room,
  and you provide the speakers. Together, you can explore a field in
  Free and Open Source software in depth.

== PROPOSER RECOGNITION ==

In recognition of the value that presenters and miniconf organisers
bring to our conference, once a proposal is accepted, one presenter or
organiser per proposal is entitled to:
* Free registration, which holds all of the benefits of a Professional
  Delegate Ticket
* A complimentary ticket to the Speakers' Dinner, with additional
  tickets for significant others and children available for purchase.
* 50% off the advertised price for sponsorship at the White-Flippered
  Blue Penguin tier.

If your proposal includes more than one presenter or organiser, these
additional people will be entitled to:
* Professional or hobbyist registration at the Early Bird rate,
  regardless of whether the Early Bird rate is generally available
* Speakers' dinner tickets available for purchase at cost

Important Note for miniconf organisers: These discounts apply to the
organisers only. All participants in your miniconf must arrange or
purchase tickets for themselves via the regular ticket sales process
or they may not be able to attend!

As a volunteer-run non-profit conference, linux.conf.au does not pay
speakers to present at the conference; but you may be eligible for
financial assistance.

== FINANCIAL ASSISTANCE ==

linux.conf.au is able to provide limited financial assistance for some speakers.

Financial assistance may be provided to cover expenses that might
otherwise prohibit a speaker from attending such as:
* Cost of flight
* Accommodation
* Other accessibility related costs

To be considered for assistance you can indicate this when making your
proposal. We will try to accommodate as many requests for assistance
as possible within our limited budget.

== ACCESSIBILITY ==

linux.conf.au aims to be accommodating to everyone who wants to attend
or present at the conference. We recognise that some people face
accessibility challenges. If you have special accessibility
requirements, you can provide that information when submitting your
proposal so that we can plan to properly accommodate you.

We recognise that childcare and meeting dietary requirements also fall
under the general principle of making it possible for everyone to
participate, and will be announcing our offering for these in the near
future; if you have concerns or needs in these areas, or in any area
that would impact your ability to participate, please let us know when
submitting your proposal.

== CODE OF CONDUCT ==

By agreeing to present at or attend the conference you are agreeing to
abide by the terms and conditions
(https://linux.conf.au/attend/terms-and-conditions/). We require all
speakers and delegates to have read, understood, and act according to
the standards set forth in our Code of Conduct
(https://linux.conf.au/attend/code-of-conduct/).

== RECORDING ==

To increase the number of people that can view your presentation,
linux.conf.au will record your talk and make it publicly available
after the event. We plan to release recordings of every talk at the
conference under a Creative Commons Share-Alike Licence. When
submitting your proposal you may note that you do not wish to have
your talk released, although we prefer and encourage all presentations
to be recorded.

== LICENSING ==

If the subject of your presentation is software, you must ensure the
software has an Open Source Initiative-approved licence at the time of
the close of our CFP.

Thursday 10 November 2016

Speaking at the OpenStack Summit in Barcelona

Last month I had the wonderful opportunity to visit Barcelona, Spain to attend the OpenStack Ocata Summit where I was able to speak about the work we've been doing integrating Ironic into OpenStack-Ansible.


I really enjoyed the experience, and very much enjoyed co-presenting with Andy McCrae - a great guy and a very clever and hard-working developer. If I ever make it to the UK (and see more than the inside of LHR), I'm sure going to go visit!

And here's a few happy snaps from Barcelona - lovely place, wish I could've stayed more than 6 days :-)


Thursday 6 October 2016

Fixing broken Debian packages

In my job we make use of Vidyo for videoconferencing, but today I ran into an issue after re-imaging my Ubuntu 16.04 desktop.

The latest version of vidyodesktop requires libqt4-gui, which doesn't exist in Ubuntu anymore. This always seems to be a problem with non-free software targeting multiple versions of multiple operating systems.

You can work around the issue, doing something like:

sudo dpkg -i --ignore-depends=libqt4-gui VidyoDesktopInstaller-*.deb

but then you hit the dreaded unmet dependencies roadblock, which blocks all future package manager updates and operations, i.e.

You might want to run 'apt-get -f install' to correct these:
 vidyodesktop : Depends: libqt4-gui (>= 4.8.1) but it is not installable
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

It's a known problem, and it's been well documented. The suggested solution was to modify the VidyoDesktopInstaller-*.deb package, but I didn't want to do that (because when the next version comes out, it will need to be handraulically fixed too - and that's an ongoing burden I'm not prepared to live with). So I went looking for another solution - and found Debian's equivs package (thanks to tonyb for pointing me in the right direction!)

What we want to do is create a dummy Debian package that will satisfy the libqt4-gui requirement.  So first off, let's uninstall vidyodesktop, and install equivs:

sudo apt-get -f install
sudo apt-get install equivs

Next, let's make a fake package:

mkdir -p ~/src/fake-libqt4-gui
cd ~/src/fake-libqt4-gui
cat << EOF > fake-libqt4-gui
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: libqt4-gui
Version: 1:100
Maintainer: Michael Davies <michael@the-davies.net>
Architecture: all
Description: fake libqt4-gui to keep vidyodesktop happy
EOF

And now, let's build and install the dummy package:

equivs-build fake-libqt4-gui
sudo dpkg -i libqt4-gui_100_all.deb

And now vidyodesktop installs cleanly!

sudo dpkg -i VidyoDesktopInstaller-*.deb
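
For peace of mind, you can also check that the package database is consistent again, and that both packages are in place:

sudo apt-get check
dpkg -l libqt4-gui vidyodesktop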

Tuesday 10 May 2016

Planet Linux Australia... rebooted

Recently Linux Australia needed to move its infrastructure to a different place, and so we took the opportunity to build a fresh new instance of the Planet Linux Australia blog aggregator.

It made me realise how crusty the old site had become, how many things I had planned to do that I had left undone, and how I hadn't applied simple concepts such as Infrastructure as Code, which have become accepted best practice in the time since I originally set this up.

Of course things have changed in this time.  People blog less now, so I've also taken the opportunity to remove what appear to be dead blogs from the aggregator.   If you have a blog of interest to the Linux Australia community, you can ask to be added via emailing planet at linux dot org dot au. All you need is a valid Atom or RSS feed.

The other thing is that the blog aggregator software we use hasn't seen an update since 2011. It started out as PlanetPlanet, then moved on to Venus, and so I've taken a fork to hopefully improve things some more when I find my round tuit. Fortunately I no longer need to run it under Python 2.4, which is getting a little long in the tooth.

Finally, the config for Planet Linux Australia is up on github.  Just like the venus code itself, pull requests welcome.  Share and Enjoy :-)

Sunday 20 September 2015

Mocking python objects and object functions using both class-level and function-level mocks

I had some fun solving an issue with partitions larger than 2TB, and came across a little gotcha when it comes to mocking in Python when a) you want to mock both an object and a function on that object, and b) you want to mock.patch.object at both the test class and test method level.

Say you have a function you want to test that looks like this:


def make_partitions(...):
    ...
    dp = disk_partitioner.DiskPartitioner(...)
    dp.add_partition(...)
    ...

where the DiskPartitioner class looks like this:


class DiskPartitioner(object):

    def __init__(self, ...):
        ...

    def add_partition(self, ...):
        ...


and you have existing test code like this:

@mock.patch.object(utils, 'execute')
class MakePartitionsTestCase(test_base.BaseTestCase):
    ...


and you want to add a new test function, adding a new patch just for your new test.

You want to verify that the class is instantiated with the right options, and you need to mock the add_partition method as well. How do you use the existing test class (with its mock of the execute function), add a new mock for the DiskPartitioner.add_partition function, and also mock the __init__ of the DiskPartitioner class?

After a little trial and error, this is how:

    @mock.patch.object(disk_partitioner, 'DiskPartitioner',
                       autospec=True)
    def test_make_partitions_with_gpt(self, mock_dp, mock_exc):

        # Need to mock the function as well
        mock_dp.add_partition = mock.Mock(return_value=None)
        ...
        disk_utils.make_partitions(...)   # Function under test
        mock_dp.assert_called_once_with(...)
        mock_dp.add_partition.assert_called_once_with(...)


Things to note:

1) The ordering of the mock parameters to test_make_partitions_with_gpt isn't immediately intuitive (at least it wasn't to me).  You specify the function-level mocks first, followed by the class-level mocks.

2) You need to manually mock the instance method of the mocked class.  (i.e. the add_partition function)


You can see the whole enchilada over here in the review.

Thursday 10 September 2015

Ironic on a NUC - part 2 - Running services, building and deploying images, and testing

This is a continuation of the previous post on Ironic on a NUC - setting things up.  If you're following along at home, read that first.

Creating disk images for deployment

Now let's build some images for use with Ironic.  First off, we'll need a deploy ramdisk image for the initial load, and we'll also need the image that we want to deploy to the hardware.  We can build these using diskimage-builder, part of the TripleO effort.

So let's do that in a virtual environment:

mrda@irony:~/src$ mkvirtualenv dib
(dib)mrda@irony:~/src$ pip install diskimage-builder six

And because we want to use some of the TripleO elements, we'll refer to these as we do the build. Once the images are built, we'll put them in a place where tftpd-hpa can serve them.

(dib)mrda@irony:~/src$ export ELEMENTS_PATH=~/src/tripleo-image-elements/elements
(dib)mrda@irony:~/src$ mkdir images
(dib)mrda@irony:~/src$ cd images
(dib)mrda@irony:~/src/images$ disk-image-create ubuntu baremetal localboot dhcp-all-interfaces local-config -o my-image
(dib)mrda@irony:~/src/images$ ramdisk-image-create ubuntu deploy-ironic -o deploy-ramdisk
(dib)mrda@irony:~/src/images$ cp -rp * /tftpboot
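
A quick check that everything landed where tftpd-hpa expects it:

(dib)mrda@irony:~/src/images$ ls /tftpboot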

Starting Ironic services

I like to do my development in virtualenvs, so we'll run our services there. Let's start with ironic-api.

(dib)mrda@irony:~/src/images$ deactivate
mrda@irony:~/src/images$ cd ~/src/ironic/
mrda@irony:~/src/ironic (master)$ tox -evenv --notest
mrda@irony:~/src/ironic (master)$ source .tox/venv/bin/activate
(venv)mrda@irony:~/src/ironic (master)$ ironic-api -v -d --config-file etc/ironic/ironic.conf.local

Now, in a new terminal window on our VM, let's run ironic-conductor:

mrda@irony:~/src/images$ cd ~/src/ironic/
mrda@irony:~/src/ironic (master)$ source .tox/venv/bin/activate
(venv)mrda@irony:~/src/ironic (master)$ python setup.py develop
(venv)mrda@irony:~/src/ironic (master)$ ironic-conductor -v -d --config-file etc/ironic/ironic.conf.local

(If you get an error about being unable to load the pywsman library, follow the workaround over here in a previous blog post)

Running Ironic Client

Let's open a new window on the VM for running an ironic command-line client to exercise what we've built:

mrda@irony:~$ cd src/python-ironicclient/
mrda@irony:~/src/python-ironicclient (master)$ tox -evenv --notest
mrda@irony:~/src/python-ironicclient (master)$ source .tox/venv/bin/activate

Now we need to fudge authentication, and point at our running ironic-api:

(venv)mrda@irony:~/src/python-ironicclient (master)$ export OS_AUTH_TOKEN=fake-token
(venv)mrda@irony:~/src/python-ironicclient (master)$ export IRONIC_URL=http://localhost:6385/

Let's try it out and see what happens, eh?


(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic driver-list

+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| pxe_amt             | test-host      |
+---------------------+----------------+

Looking good! Let's try registering the NUC as an Ironic node, specifying the deployment ramdisk:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-create -d pxe_amt -i amt_password='<the-nuc-admin-password>' -i amt_username='admin' -i amt_address='10.0.0.251' -i deploy_ramdisk='file:///tftpboot/deploy-ramdisk.initramfs' -i deploy_kernel='file:///tftpboot/deploy-ramdisk.kernel' -n thenuc
+--------------+--------------------------------------------------------------------------+
| Property     | Value                                                                    |
+--------------+--------------------------------------------------------------------------+
| uuid         | 924a5447-930e-4d27-837e-6dd5d5f10e16                                     |
| driver_info  | {u'amt_username': u'admin', u'deploy_kernel': u'file:///tftpboot/deploy- |
|              | ramdisk.kernel', u'amt_address': u'10.0.0.251', u'deploy_ramdisk':       |
|              | u'file:///tftpboot/deploy-ramdisk.initramfs', u'amt_password':           |
|              | u'******'}                                                               |
| extra        | {}                                                                       |
| driver       | pxe_amt                                                                  |
| chassis_uuid |                                                                          |
| properties   | {}                                                                       |
| name         | thenuc                                                                   |
+--------------+--------------------------------------------------------------------------+

Again, more success!  Since we're not using Nova to manage or kick off the deploy, we need to tell Ironic where the instance we want deployed is, along with some of the instance information:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-update thenuc add instance_info/image_source='file:///tftpboot/my-image.qcow2' instance_info/kernel='file:///tftpboot/my-image.vmlinuz' instance_info/ramdisk='file:///tftpboot/my-image.initrd' instance_info/root_gb=10
+------------------------+-------------------------------------------------------------------------+
| Property               | Value                                                                   |
+------------------------+-------------------------------------------------------------------------+
| target_power_state     | None                                                                    |
| extra                  | {}                                                                      |
| last_error             | None                                                                    |
| updated_at             | None                                                                    |
| maintenance_reason     | None                                                                    |
| provision_state        | available                                                               |
| clean_step             | {}                                                                      |
| uuid                   | 924a5447-930e-4d27-837e-6dd5d5f10e16                                    |
| console_enabled        | False                                                                   |
| target_provision_state | None                                                                    |
| provision_updated_at   | None                                                                    |
| maintenance            | False                                                                   |
| inspection_started_at  | None                                                                    |
| inspection_finished_at | None                                                                    |
| power_state            | None                                                                    |
| driver                 | pxe_amt                                                                 |
| reservation            | None                                                                    |
| properties             | {}                                                                      |
| instance_uuid          | None                                                                    |
| name                   | thenuc                                                                  |
| driver_info            | {u'amt_username': u'admin', u'amt_password': u'******', u'amt_address': |
|                        | u'10.0.0.251', u'deploy_ramdisk': u'file:///tftpboot/deploy-            |
|                        | ramdisk.initramfs', u'deploy_kernel': u'file:///tftpboot/deploy-        |
|                        | ramdisk.kernel'}                                                        |
| created_at             | 2015-09-10T00:55:27+00:00                                               |
| driver_internal_info   | {}                                                                      |
| chassis_uuid           |                                                                         |
| instance_info          | {u'ramdisk': u'file:///tftpboot/my-image.initrd', u'kernel':            |
|                        | u'file:///tftpboot/my-image.vmlinuz', u'root_gb': 10, u'image_source':  |
|                        | u'file:///tftpboot/my-image.qcow2'}                                     |
+------------------------+-------------------------------------------------------------------------+

Let's see what we've got:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-list
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| f8af4d4e-e3da-4a04-9596-8e4fef15e4eb | thenuc | None          | None        | available          | False       |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+


We now need to create a network port in Ironic, and associate it with the MAC address of the NUC.  But I'm lazy, so let's extract the node UUID first:

(venv)mrda@irony:~/src/python-ironicclient (master)$ NODEUUID=$(ironic node-list | tail -n +4 | head -n -1 | awk -F "| " '{print $2}')
(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic port-create -n $NODEUUID -a <nuc-mac-address>
+-----------+--------------------------------------+
| Property  | Value                                |
+-----------+--------------------------------------+
| node_uuid | 924a5447-930e-4d27-837e-6dd5d5f10e16 |
| extra     | {}                                   |
| uuid      | c6dddc3d-b9b4-4fbc-99e3-18b8017c7b01 |
| address   | <nuc-mac-address>                    |
+-----------+--------------------------------------+


So let's validate everything we've done, before we try this out in anger:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-validate thenuc
+------------+--------+---------------+
| Interface  | Result | Reason        |
+------------+--------+---------------+
| boot       | True   |               |
| console    | None   | not supported |
| deploy     | True   |               |
| inspect    | None   | not supported |
| management | True   |               |
| power      | True   |               |
| raid       | None   | not supported |
+------------+--------+---------------+

And one more thing to do before we really start things rolling - ensure the NUC is listening to us:

(venv)mrda@irony:~/src/python-ironicclient (master)$ telnet 10.0.0.251 16992
Trying 10.0.0.251...
Connected to 10.0.0.251.
Escape character is '^]'.
^]close

telnet> close
Connection closed.

You might have to try that a couple of times to wake up the AMT interface, but it's important that you do, to ensure you don't get a failed deploy.
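
If you'd rather script that poke than retry telnet by hand, a little loop does the trick.  A minimal sketch, assuming netcat is installed on the VM:

# Keep poking the AMT port until it answers (or we give up)
for i in $(seq 1 10); do
    nc -z -w 2 10.0.0.251 16992 && echo "AMT is awake" && break
    sleep 3
done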

And then we set the node to active, which will DHCP-boot the deploy ramdisk, which will in turn write the user image to the disk - if everything goes well.  This will also take quite a long time, so it's time to go make that cup of tea :-)

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-set-provision-state thenuc active

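While you wait, you can keep an eye on progress from the client side - the provision state should move from deploying through to active.  Something like this works:

(venv)mrda@irony:~/src/python-ironicclient (master)$ watch -n 30 "ironic node-show thenuc | grep -E 'provision_state|power_state|last_error'"
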
Your NUC should have just booted into your user image and should be ready for you to use!

Postscript #1:

Actually, that's not what really happened.  It's what I would have liked to happen.

But there were some issues. Firstly, ironic-conductor complained about not being able to find 'ironic-rootwrap'.  And then once I symlinked that into place, it couldn't find the config for rootwrap, so I symlinked that into place too.  Then it complained that iscsiadm didn't have the correct permissions in rootwrap to do its thing...

So I gave up, and did the thing I didn't want to.  Back on the VM I ended up doing a "sudo python setup.py install" in the ironic directory so everything got installed into the correct system place and then I could restart ironic-conductor.

It should all work in develop mode, but clearly it doesn't, so in the interests of getting something up and going (and finishing this blog post :) I did the quick solution and installed system-wide. Perhaps I'll circle back and work out why someday :)

Postscript #2:

When doing this, the deployment can fail for a number of reasons.  To recover, you need to delete the enrolled node and start again, once you've worked out what the problem was and how to fix it.  To do that:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-set-maintenance thenuc on
(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-delete thenuc

Isn't it great that we can use names for nodes instead of UUIDs for most operations :)