Sunday, 1 July 2018

LCA2019 Call for Proposals (CFP) is now open!

On behalf of the LCA2019 team we are pleased to announce that the Call
for Proposals for linux.conf.au 2019 is now open! This Call for
Proposals will close on 30 July.

linux.conf.au is one of the best-known community-driven Free and Open
Source Software conferences in the world. In 2019 we welcome you to
join us in Christchurch, New Zealand on Monday 21 January through to
Friday 25 January.

For full details, including those not covered by this announcement,
visit https://linux.conf.au/call-for-papers/

== IMPORTANT DATES ==

* Call for Proposals Opens: 2 July 2018
* Call for Proposals Closes: 30 July 2018 (no extensions)
* Notifications from the programme committee: early-September 2018
* Conference Opens: 21 January 2019

== HOW TO SUBMIT ==

Create an account or log in at https://linux.conf.au/dashboard/ and
click the link to submit your proposal.

== ABOUT LINUX.CONF.AU ==

linux.conf.au is a conference where people gather to learn about the
entire world of Free and Open Source Software, directly from the
people who contribute.  Many of these contributors give scheduled
presentations, but much of the interaction happens between attendees
in between and after the formal sessions. Our aim is to create a deeply
technical conference where we bring together industry leaders and
experts on a wide range of subjects.

linux.conf.au welcomes submissions from first-time and seasoned
speakers, from all free and open technology communities, and all walks
of life. We respect and encourage diversity at our conference.

== CONFERENCE THEME ==

Our theme for linux.conf.au 2019 is "The Linux of Things". Building on
the role that Linux plays in our everyday lives, we will address
IoT-related opportunities and concerns, from the purely technical
through to environmental, health, privacy, security and more. Please let
this inspire you, but not restrict you - we will still have many talks
about other interesting things in our community.

For some suggestions to get you started with your proposal ideas
please visit the CFP page on the linux.conf.au website.

== PROPOSAL TYPES ==

We're accepting submissions for three different types of proposal:

* Presentation (45 minutes): These are generally presented in lecture
  format and form the bulk of the available conference slots.
* Tutorial (100 minutes): These are generally presented in a classroom
  format. They should be interactive or hands-on in nature. Tutorials
  are expected to have a specific learning outcome for attendees.
* Miniconf (full-day): Single-track mini-conferences that run for the
  duration of a day on either Monday or Tuesday. We provide the room,
  and you provide the speakers. Together, you can explore a field in
  Free and Open Source software in depth.

== PROPOSER RECOGNITION ==

In recognition of the value that presenters and miniconf organisers
bring to our conference, once a proposal is accepted, one presenter or
organiser per proposal is entitled to:
* Free registration, which holds all of the benefits of a Professional
  Delegate Ticket
* A complimentary ticket to the Speakers' Dinner, with additional
  tickets for significant others and children available for purchase.
* 50% off the advertised price for sponsorship at the White-Flippered
  Blue Penguin tier.

If your proposal includes more than one presenter or organiser, these
additional people will be entitled to:
* Professional or hobbyist registration at the Early Bird rate,
  regardless of whether the Early Bird rate is generally available
* Speakers' dinner tickets available for purchase at cost

Important Note for miniconf organisers: These discounts apply to the
organisers only. All participants in your miniconf must arrange or
purchase tickets for themselves via the regular ticket sales process
or they may not be able to attend!

As a volunteer-run non-profit conference, linux.conf.au does not pay
speakers to present at the conference, but you may be eligible for
financial assistance.

== FINANCIAL ASSISTANCE ==

linux.conf.au is able to provide limited financial assistance for some speakers.

Financial assistance may be provided to cover expenses that might
otherwise prohibit a speaker from attending, such as:
* Cost of flight
* Accommodation
* Other accessibility related costs

You can indicate that you would like to be considered for assistance
when making your proposal. We will try to accommodate as many requests
as possible within our limited budget.

== ACCESSIBILITY ==

linux.conf.au aims to be accommodating to everyone who wants to attend
or present at the conference. We recognise that some people face
accessibility challenges. If you have special accessibility
requirements, you can provide that information when submitting your
proposal so that we can plan to properly accommodate you.

We recognise that childcare and meeting dietary requirements also fall
under the general principle of making it possible for everyone to
participate, and will be announcing our offering for these in the near
future; if you have concerns or needs in these areas, or in any area
that would impact your ability to participate, please let us know when
submitting your proposal.

== CODE OF CONDUCT ==

By agreeing to present at or attend the conference you are agreeing to
abide by the terms and conditions
(https://linux.conf.au/attend/terms-and-conditions/). We require all
speakers and delegates to have read, understood, and act according to
the standards set forth in our Code of Conduct
(https://linux.conf.au/attend/code-of-conduct/).

== RECORDING ==

To increase the number of people that can view your presentation,
linux.conf.au will record your talk and make it publicly available
after the event. We plan to release recordings of every talk at the
conference under a Creative Commons Share-Alike Licence. When
submitting your proposal you may note that you do not wish to have
your talk released, although we prefer and encourage all presentations
to be recorded.

== LICENSING ==

If the subject of your presentation is software, you must ensure the
software has an Open Source Initiative-approved licence at the time of
the close of our CFP.

Thursday, 10 November 2016

Speaking at the OpenStack Summit in Barcelona

Last month I had the wonderful opportunity to visit Barcelona, Spain to attend the OpenStack Ocata Summit where I was able to speak about the work we've been doing integrating Ironic into OpenStack-Ansible.


I really enjoyed the experience, and very much enjoyed co-presenting with Andy McCrae - a great guy and a very clever and hard-working developer. If I ever make it to the UK (and see more than the inside of LHR), I'm sure going to go visit!

And here are a few happy snaps from Barcelona - lovely place, wish I could've stayed more than 6 days :-)

Thursday, 6 October 2016

Fixing broken Debian packages

In my job we make use of Vidyo for videoconferencing, but today I ran into an issue after re-imaging my Ubuntu 16.04 desktop.

The latest version of vidyodesktop requires libqt4-gui, which doesn't exist in Ubuntu anymore. This always seems to be a problem with non-free software targeting multiple versions of multiple operating systems.

You can work around the issue by doing something like:

sudo dpkg -i --ignore-depends=libqt4-gui VidyoDesktopInstaller-*.deb

but then you get the dreaded unmet dependencies roadblock, which prevents future package manager updates and operations:

You might want to run 'apt-get -f install' to correct these:
 vidyodesktop : Depends: libqt4-gui (>= 4.8.1) but it is not installable
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

It's a known problem, and it's been well documented. The suggested solution was to modify the VidyoDesktopInstaller-*.deb package, but I didn't want to do that (because when the next version comes out, it will need to be handraulically fixed too - and that's an ongoing burden I'm not prepared to live with). So I went looking for another solution - and found Debian's equivs package (thanks to tonyb for pointing me in the right direction!).

So what we want to do is create a dummy Debian package that will satisfy the libqt4-gui requirement. First off, let's uninstall vidyodesktop and install equivs:

sudo apt-get -f install
sudo apt-get install equivs

Next, let's make a fake package:

mkdir -p ~/src/fake-libqt4-gui
cd  ~/src/fake-libqt4-gui
cat << EOF > fake-libqt4-gui
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: libqt4-gui
Version: 1:100
Maintainer: Michael Davies <michael@the-davies.net>
Architecture: all
Description: fake libqt4-gui to keep vidyodesktop happy
EOF

And now, let's build and install the dummy package:

equivs-build fake-libqt4-gui
sudo dpkg -i libqt4-gui_100_all.deb

And now vidyodesktop installs cleanly!

sudo dpkg -i VidyoDesktopInstaller-*.deb

Tuesday, 10 May 2016

Planet Linux Australia... rebooted

Recently Linux Australia needed to move its infrastructure to a different place, and so we took the opportunity to build a fresh new instance of the Planet Linux Australia blog aggregator.

It made me realise how crusty the old site had become, how many things I had planned to do but left undone, and how I hadn't applied simple concepts such as Infrastructure as Code, which have become accepted best practice in the time since I originally set this up.

Of course things have changed in this time.  People blog less now, so I've also taken the opportunity to remove what appear to be dead blogs from the aggregator.  If you have a blog of interest to the Linux Australia community, you can ask to be added by emailing planet at linux dot org dot au. All you need is a valid Atom or RSS feed.

The other thing is that the blog aggregator software we use hasn't seen an update since 2011. It started out as PlanetPlanet, then moved on to Venus, and so I've taken a fork to hopefully improve it some more when I find my round tuit. Fortunately I no longer need to run it under Python 2.4, which is getting a little long in the tooth.

Finally, the config for Planet Linux Australia is up on GitHub.  Just like the Venus code itself, pull requests are welcome.  Share and Enjoy :-)

Sunday, 20 September 2015

Mocking python objects and object functions using both class-level and function-level mocks

Had some fun solving an issue with partitions larger than 2TB, and came across a little gotcha when it comes to mocking in Python when a) you want to mock both an object and a function in that object, and b) you want to mock.patch.object at both the test class and test method level.

Say you have a function you want to test that looks like this:


def make_partitions(...):
        ...
        dp = disk_partitioner.DiskPartitioner(...)
        dp.add_partition(...)
        ...

where the DiskPartitioner class looks like this:


class DiskPartitioner(object):

    def __init__(self, ...):
        ...

    def add_partition(self, ...):
        ...


and you have existing test code like this:

@mock.patch.object(utils, 'execute')
class MakePartitionsTestCase(test_base.BaseTestCase):
    ...


and you want to add a new test function, adding a new patch just for your new test.

You want to verify that the class is instantiated with the right options, and you need to mock the add_partition method as well. How do you use the existing test class (with its mock of the execute function), add a new mock for the DiskPartitioner.add_partition function, and also mock the __init__ of the DiskPartitioner class?

After a little trial and error, this is how:

    @mock.patch.object(disk_partitioner, 'DiskPartitioner',
                       autospec=True)
    def test_make_partitions_with_gpt(self, mock_dp, mock_exc):

        # Need to mock the function as well
        mock_dp.add_partition = mock.Mock(return_value=None)
        ...
        disk_utils.make_partitions(...)   # Function under test
        mock_dp.assert_called_once_with(...)
        mock_dp.add_partition.assert_called_once_with(...)


Things to note:

1) The ordering of the mock parameters to test_make_partitions_with_gpt isn't immediately intuitive (at least to me).  You specify the function-level mocks first, followed by the class-level mocks (see the sketch below).

2) You need to manually mock the instance method of the mocked class (i.e. the add_partition function).

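Putting the two fragments above together, here's a minimal sketch of how the decorators and the mock parameters line up (this just restates the snippets above with comments; the imports and module paths are assumed to be the same as in the existing test file):

@mock.patch.object(utils, 'execute')                       # class-level patch
class MakePartitionsTestCase(test_base.BaseTestCase):

    @mock.patch.object(disk_partitioner, 'DiskPartitioner',
                       autospec=True)                       # method-level patch
    def test_make_partitions_with_gpt(self, mock_dp, mock_exc):
        # Parameters are filled in inside-out: mock_dp comes from the
        # method-level patch, and mock_exc from the class-level patch.
        ...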

You can see the whole enchilada over here in the review.

Thursday, 10 September 2015

Ironic on a NUC - part 2 - Running services, building and deploying images, and testing

This is a continuation of the previous post on Ironic on a NUC - setting things up.  If you're following along at home, read that first.

Creating disk images for deployment

Now let's build some images for use with Ironic.  First off, we'll need a deploy ramdisk image for the initial load, and we'll also need the image that we want to deploy to the hardware.  We can build these using diskimage-builder, part of the TripleO effort.

So let's do that in a virtual environment:

mrda@irony:~/src$ mkvirtualenv dib
(dib)mrda@irony:~/src$ pip install diskimage-builder six

And because we want to use some of the TripleO image elements, we'll refer to these as we do the build. Once the images are built we'll put them in a place where tftpd-hpa can serve them.

(dib)mrda@irony:~/src$ export ELEMENTS_PATH=~/src/tripleo-image-elements/elements
(dib)mrda@irony:~/src$ mkdir images
(dib)mrda@irony:~/src$ cd images
(dib)mrda@irony:~/src/images$ disk-image-create ubuntu baremetal localboot dhcp-all-interfaces local-config -o my-image
(dib)mrda@irony:~/src/images$ ramdisk-image-create ubuntu deploy-ironic -o deploy-ramdisk
(dib)mrda@irony:~/src/images$ cp -rp * /tftpboot

Starting Ironic services

I like to do my development in virtualenvs, so we'll run our services there. Let's start with ironic-api.

(dib)mrda@irony:~/src/images$ deactivate
mrda@irony:~/src/images$ cd ~/src/ironic/
mrda@irony:~/src/ironic (master)$ tox -evenv --notest
mrda@irony:~/src/ironic (master)$ source .tox/venv/bin/activate
(venv)mrda@irony:~/src/ironic (master)$ ironic-api -v -d --config-file etc/ironic/ironic.conf.local

Now in a new terminal window for our VM, let's run ironic-conductor:

mrda@irony:~/src/images$ cd ~/src/ironic/
mrda@irony:~/src/ironic (master)$ source .tox/venv/bin/activate
(venv)mrda@krypton:~/src/ironic (master)$ python setup.py develop
(venv)mrda@krypton:~/src/ironic (master)$ ironic-conductor -v -d --config-file etc/ironic/ironic.conf.local

(If you get an error about being unable to load the pywsman library, follow the workaround over here in a previous blog post.)

Running Ironic Client

Let's open a new window on the VM for running an ironic command-line client to exercise what we've built:

mrda@irony:~$ cd src/python-ironicclient/
mrda@irony:~/src/python-ironicclient (master)$ tox -evenv --notest
mrda@irony:~/src/python-ironicclient (master)$ source .tox/venv/bin/activate

Now we need to fudge authentication, and point at our running ironic-api:

(venv)mrda@irony:~/src/python-ironicclient (master)$ export OS_AUTH_TOKEN=fake-token
(venv)mrda@irony:~/src/python-ironicclient (master)$ export IRONIC_URL=http://localhost:6385/
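
As an aside, you can drive the same API from Python rather than from the CLI. This is only a rough sketch - it assumes python-ironicclient's client.get_client() standalone usage (fake token plus ironic_url) works the way it's documented, so treat it as illustrative:

from ironicclient import client

kwargs = {'os_auth_token': 'fake-token',
          'ironic_url': 'http://localhost:6385/'}
ironic = client.get_client(1, **kwargs)   # API version 1

# Roughly equivalent to 'ironic driver-list'
print([d.name for d in ironic.driver.list()])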

Let's try it out and see what happens, eh?


(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic driver-list

+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| pxe_amt             | test-host      |
+---------------------+----------------+

Looking good! Let's try registering the NUC as an Ironic node, specifying the deployment ramdisk:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-create -d pxe_amt -i amt_password='<the-nuc-admin-password>' -i amt_username='admin' -i amt_address='10.0.0.251' -i deploy_ramdisk='file:///tftpboot/deploy-ramdisk.initramfs' -i deploy_kernel='file:///tftpboot/deploy-ramdisk.kernel' -n thenuc
+--------------+--------------------------------------------------------------------------+
| Property     | Value                                                                    |
+--------------+--------------------------------------------------------------------------+
| uuid         | 924a5447-930e-4d27-837e-6dd5d5f10e16                                     |
| driver_info  | {u'amt_username': u'admin', u'deploy_kernel': u'file:///tftpboot/deploy- |
|              | ramdisk.kernel', u'amt_address': u'10.0.0.251', u'deploy_ramdisk':       |
|              | u'file:///tftpboot/deploy-ramdisk.initramfs', u'amt_password':           |
|              | u'******'}                                                               |
| extra        | {}                                                                       |
| driver       | pxe_amt                                                                  |
| chassis_uuid |                                                                          |
| properties   | {}                                                                       |
| name         | thenuc                                                                   |
+--------------+--------------------------------------------------------------------------+

Again more success!  Since we're not using Nova to manage or kick off the deploy, we need to tell ironic where the instance we want deployed is, along with some of the instance information:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-update thenuc add instance_info/image_source='file:///tftpboot/my-image.qcow2' instance_info/kernel='file:///tftpboot/my-image.vmlinuz' instance_info/ramdisk='file:///tftpboot/my-image.initrd' instance_info/root_gb=10
+------------------------+-------------------------------------------------------------------------+
| Property               | Value                                                                   |
+------------------------+-------------------------------------------------------------------------+
| target_power_state     | None                                                                    |
| extra                  | {}                                                                      |
| last_error             | None                                                                    |
| updated_at             | None                                                                    |
| maintenance_reason     | None                                                                    |
| provision_state        | available                                                               |
| clean_step             | {}                                                                      |
| uuid                   | 924a5447-930e-4d27-837e-6dd5d5f10e16                                    |
| console_enabled        | False                                                                   |
| target_provision_state | None                                                                    |
| provision_updated_at   | None                                                                    |
| maintenance            | False                                                                   |
| inspection_started_at  | None                                                                    |
| inspection_finished_at | None                                                                    |
| power_state            | None                                                                    |
| driver                 | pxe_amt                                                                 |
| reservation            | None                                                                    |
| properties             | {}                                                                      |
| instance_uuid          | None                                                                    |
| name                   | thenuc                                                                  |
| driver_info            | {u'amt_username': u'admin', u'amt_password': u'******', u'amt_address': |
|                        | u'10.0.0.251', u'deploy_ramdisk': u'file:///tftpboot/deploy-            |
|                        | ramdisk.initramfs', u'deploy_kernel': u'file:///tftpboot/deploy-        |
|                        | ramdisk.kernel'}                                                        |
| created_at             | 2015-09-10T00:55:27+00:00                                               |
| driver_internal_info   | {}                                                                      |
| chassis_uuid           |                                                                         |
| instance_info          | {u'ramdisk': u'file:///tftpboot/my-image.initrd', u'kernel':            |
|                        | u'file:///tftpboot/my-image.vmlinuz', u'root_gb': 10, u'image_source':  |
|                        | u'file:///tftpboot/my-image.qcow2'}                                     |
+------------------------+-------------------------------------------------------------------------+

Let's see what we've got:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-list
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| f8af4d4e-e3da-4a04-9596-8e4fef15e4eb | thenuc | None          | None        | available          | False       |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+


We now need to create a network port in ironic, and associate it with the MAC address of the NUC.  But I'm lazy, so let's extract the node UUID first:

(venv)mrda@irony:~/src/python-ironicclient (master)$ NODEUUID=$(ironic node-list | tail -n +4 | head -n -1 | awk -F "| " '{print $2}')
(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic port-create -n $NODEUUID -a <nuc-mac-address>
+-----------+--------------------------------------+
| Property  | Value                                |
+-----------+--------------------------------------+
| node_uuid | 924a5447-930e-4d27-837e-6dd5d5f10e16 |
| extra     | {}                                   |
| uuid      | c6dddc3d-b9b4-4fbc-99e3-18b8017c7b01 |
| address   | <nuc-mac-address>                    |
+-----------+--------------------------------------+


So let's validate everything we've done, before we try this out in anger:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-validate thenuc
+------------+--------+---------------+
| Interface  | Result | Reason        |
+------------+--------+---------------+
| boot       | True   |               |
| console    | None   | not supported |
| deploy     | True   |               |
| inspect    | None   | not supported |
| management | True   |               |
| power      | True   |               |
| raid       | None   | not supported |
+------------+--------+---------------+

And one more thing to do before we really start things rolling - ensure the NUC is listening to us:

(venv)mrda@irony:~/src/python-ironicclient (master)$ telnet 10.0.0.251 16992
Trying 10.0.0.251...
Connected to 10.0.0.251.
Escape character is '^]'.
^]close

telnet> close
Connection closed.

You might have to try that a couple of times to wake up the AMT interface, but it's important that you do to ensure you don't get a failed deploy.

And then we set the node to active, which will DHCP and PXE boot the deploy ramdisk, which will in turn write the user image to the disk - if everything goes well.  This will also take quite a long time, so it's time to go make that cup of tea :-)

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-set-provision-state thenuc active

Your NUC should have just booted into your user image and should be ready for you to use!

Postscript #1:

Actually, that's not what really happened.  It's what I would have liked to happen.

But there were some issues. Firstly, ironic-conductor complained about not being able to find 'ironic-rootwrap'.  And then once I symlinked that into place, it couldn't find the config for rootwrap, so I symlinked that into place too. Then it complained that iscsiadm didn't have the correct permissions in rootwrap to do its thing...

So I gave up, and did the thing I didn't want to do.  Back on the VM I ended up doing a "sudo python setup.py install" in the ironic directory so everything got installed into the correct system locations, and then I could restart ironic-conductor.

It should all work in develop mode, but clearly it doesn't, so in the interests of getting something up and going (and finishing this blog post :) I went for the quick solution and installed system-wide. Perhaps I'll circle back and work out why someday :)

Postscript #2:

When doing this, the deployment can fail for a number of reasons.  To recover, you need to delete the enrolled node and start again once you've worked out what the problem is and how to fix it.  To do that:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-set-maintenance thenuc on
(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-delete thenuc

Isn't it great that we can use names for nodes instead of UUIDs for most operations :)

Ironic on a NUC - part 1 - Setting things up

Just because a few people have asked, here is what I did to get a standalone Ironic installation going, deploying to an Intel NUC.

Why a NUC?  Well, the Intel NUC is a cute little piece of hardware that is well suited as a test lab machine that can sit on my desk.  I'm using a DC53427HYE, which is an i5 with vPro.  vPro is an umbrella term for a bunch of Intel technologies, including AMT (Active Management Technology). This allows us to remotely manage this desktop for things like power management - think of it as the desktop analogue of IPMI for servers.

Getting the VM ready

I like to do my development in VMs - after all, isn't that what the cloud is for? :-) So first off, using your virtualisation technology of choice, build a VM with Ubuntu 14.04.2 server on it.  I've allocated 2GB RAM and a 30GB disk.  The reason for the larger-than-average disk is so that I have room for building ramdisk and deployment disk images. I've called this box 'irony'.

On the VM you'll need a few extra things installed once you've got the base OS installed:

mrda@irony:~$ sudo apt-get install python-openwsman ack-grep python-dev python-pip libmysqlclient-dev libxml2-dev git rabbitmq-server mysql-server isc-dhcp-server tftpd-hpa syslinux syslinux-common libxslt1-dev qemu-utils libpq-dev python-yaml open-iscsi

mrda@irony:~$ sudo pip install virtualenvwrapper six tox mysql-python


Thinking about the network

For this setup, I'm going to run separate networks for the control plane and data plane.  I've added a USB NIC to the NUC so I can separate the networks. My public connection to the internet will be on the 192.168.1.x network, whereas the service net control plane will be on 10.x.x.x.  To do this I've added a second network interface to the VM, changed the networking to bridging for both NICs, assigned eth0 and eth1 appropriately, and updated /etc/network/interfaces in the VM so that the right adapter is on the right network.  It ended up looking like this:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary (public) network interface
auto eth0
iface eth0 inet dhcp
        gateway 192.168.1.1

# Control plane
auto eth1
iface eth1 inet static
        address 10.0.0.5
        netmask 255.255.255.0


Setting up DHCP 

We need to make sure we're listening for DHCP requests on the right interface:

mrda@irony:~$ sudo sed -i 's/INTERFACES=""/INTERFACES="eth1"/' /etc/default/isc-dhcp-server

Now configure your DHCP server to hand out an address to the NUC, accounting for some of the uniqueness of the device :) The tail of my /etc/dhcp/dhcpd.conf looks a bit like this:

allow duplicates;
ignore-client-uids true;
authoritative;

subnet 10.0.0.0 netmask 255.255.255.0 {

    group {
        host nuc {
            hardware ethernet <your-nucs-mac-address>;
            fixed-address 10.0.0.251; # NUC's IP address
            allow booting;
            allow bootp;
            next-server <this-servers-ip-address>;
            filename "pxelinux.0";
        }
    }
}

There's some more background on this in a previous blog post.

Setting up TFTP

mrda@irony:~$ sudo mkdir /tftpboot
mrda@irony:~$ sudo chmod a+rwx /tftpboot/

We'll need to configure tftpd-hpa rather specifically, so /etc/default/tftpd-hpa looks like this:

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/tftpboot"
TFTP_ADDRESS="[::]:69"
TFTP_OPTIONS="-vvvv --map-file /tftpboot/map-file"

We'll also need to create /tftpboot/map-file, which needs to look like this:

re ^(/tftpboot/) /tftpboot/\2
re ^/tftpboot/ /tftpboot/
re ^(^/) /tftpboot/\1
re ^([^/]) /tftpboot/\1

This is because of a weird combination of the feature sets of tftpd-hpa, isc-dhcp-server, ironic and diskimage-builder. Basically the combination of relative and dynamic paths is incompatible, and we work around the limitation with a map-file that rewrites requested paths so that both relative and absolute requests end up resolving under /tftpboot. This would be a nice little patch to send upstream one day to one or more of these projects. Of course, if you're deploying ironic in a production OpenStacky way where you use neutron and dnsmasq, you don't need the map file - it's only when you configure all these things handraulically that you face this problem.

And we'll want to make sure the PXE boot files are all in place, ready to be served over TFTP.

mrda@irony:~$ sudo cp /usr/lib/syslinux/pxelinux.0 /tftpboot/
mrda@irony:~$ sudo cp /usr/lib/syslinux/chain.c32 /tftpboot/

And now let's start these services

mrda@irony:~$ sudo service tftpd-hpa restart
mrda@irony:~$ sudo service isc-dhcp-server restart


Installing Ironic

Just install straight from github (HEAD is always installable, right?)

mrda@irony:~$ mkdir ~/src; cd ~/src
mrda@irony:~/src$ git clone https://github.com/openstack/ironic.git 
mrda@irony:~/src$ git clone https://github.com/openstack/python-ironicclient.git 
mrda@irony:~/src$ git clone https://github.com/openstack/tripleo-image-elements.git


Configuring Ironic

Now we'll need to configure ironic to work standalone. There are a few config options that'll need to be changed from the defaults, including changing the authentication policy, setting the right driver for AMT, setting a hostname and turning off that pesky power state syncing task.

mrda@irony:~$ cd src/ironic/
mrda@irony:~/src/ironic (master)$ cp etc/ironic/ironic.conf.sample etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#auth_strategy=keystone/auth_strategy=noauth/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#enabled_drivers=pxe_ipmitool/enabled_drivers=pxe_amt/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#host=.*/host=test-host/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#sync_power_state_interval=60/sync_power_state_interval=-1/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s%#api_url=<None>%api_url=http://10.0.0.5:6385/%" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#dhcp_provider=neutron/dhcp_provider=none/" etc/ironic/ironic.conf.local
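
To make it clearer what those seds actually change, the modified options in etc/ironic/ironic.conf.local end up looking roughly like this (the section placement below follows my reading of the sample config, so double-check it against your generated file):

[DEFAULT]
auth_strategy=noauth
enabled_drivers=pxe_amt
host=test-host

[conductor]
sync_power_state_interval=-1
api_url=http://10.0.0.5:6385/

[dhcp]
dhcp_provider=none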

There's also the little matter of making sure the image directories are ready:

mrda@irony:~/src/ironic (master)$ sudo mkdir -p /var/lib/ironic/images
mrda@irony:~/src/ironic (master)$ sudo mkdir -p /var/lib/ironic/master_images
mrda@irony:~/src/ironic (master)$ sudo chmod a+rwx /var/lib/ironic/images
mrda@irony:~/src/ironic (master)$ sudo chmod a+rwx /var/lib/ironic/master_images


Initialising the database

Since we've decided to use MySQL instead of SQLite, we'll need to set up the schema and update the database connection string.

mrda@irony:~/src/ironic (master)$ mysql -u root -p -e "create schema ironic"
mrda@irony:~/src/ironic (master)$ sed -i "s/#connection=.*/connection=mysql:\/\/root:<database-password>@localhost\/ironic/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ ironic-dbsync --config-file etc/ironic/ironic.conf.local create_schema

And that's everything that needs to be done to prepare the VM for running ironic. The next post will cover starting the ironic services, building images for deployment, and poking ironic from the command line.