Sunday, 20 September 2015

Mocking python objects and object functions using both class-level and function-level mocks

Had some fun solving an issue with partitions larger than 2TB, and came across a little gotcha with mocking in Python when a) you want to mock both an object and a function on that object, and b) you want to mock.patch.object at both the test-class and test-method level.

Say you have a function you want to test that looks like this:


def make_partitions(...):
        ...
        dp = disk_partitioner.DiskPartitioner(...)
        dp.add_partition(...)
        ...

where the DiskPartitioner class looks like this:


class DiskPartitioner(object):

    def __init__(self, ...):
        ...

    def add_partition(self, ...):
        ...


and you have existing test code like this:

@mock.patch.object(utils, 'execute')
class MakePartitionsTestCase(test_base.BaseTestCase):
    ...


and you want to add a new test function, with a new patch that applies just to that test.

You want to verify that the class is instantiated with the right options, and you need to mock the add_partition method as well. How do you use the existing test class (with its mock of the execute function) while also mocking both the __init__ of the DiskPartitioner class and its add_partition method?

After a little trial and error, this is how:

    @mock.patch.object(disk_partitioner, 'DiskPartitioner',
                       autospec=True)
    def test_make_partitions_with_gpt(self, mock_dp, mock_exc):

        # Need to mock the function as well
        mock_dp.add_partition = mock.Mock(return_value=None)
        ...
        disk_utils.make_partitions(...)   # Function under test
        mock_dp.assert_called_once_with(...)
        mock_dp.add_partition.assert_called_once_with(...)


Things to note:

1) The ordering of the mock parameters to test_make_partitions_with_gpt isn't immediately intuitive (at least it wasn't to me).  You specify the function-level mocks first, followed by the class-level mocks, because the decorator closest to the function is applied first. (See the sketch after this list.)

2) You need to manually mock the instance method of the mocked class (i.e. the add_partition method).
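
Here's a minimal, self-contained sketch of the ordering rule from point 1 (the Widget class and the test here are illustrative, not from the review):

import unittest
from unittest import mock   # plain `import mock` on older setups


class Widget(object):
    def ping(self):
        return "real ping"

    def pong(self):
        return "real pong"


@mock.patch.object(Widget, 'pong')           # class-level patch
class OrderingTestCase(unittest.TestCase):

    @mock.patch.object(Widget, 'ping')       # function-level patch
    def test_ordering(self, mock_ping, mock_pong):
        # The function-level mock arrives first and the class-level
        # mock last, because the innermost decorator is applied first.
        self.assertIs(Widget.ping, mock_ping)
        self.assertIs(Widget.pong, mock_pong)


if __name__ == '__main__':
    unittest.main()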


You can see the whole enchilada in the review.

Thursday, 10 September 2015

Ironic on a NUC - part 2 - Running services, building and deploying images, and testing

This is a continuation of the previous post on Ironic on a NUC - setting things up.  If you're following along at home, read that first.

Creating disk images for deployment

Now let's build some images for use with Ironic.  First off, we'll need a deploy ramdisk image for the initial load, and we'll also need the image that we want to deploy to the hardware.  We can build these using diskimage-builder, part of the TripleO effort.

So let's do that in a virtual environment:

mrda@irony:~/src$ mkvirtualenv dib
(dib)mrda@irony:~/src$ pip install diskimage-builder six

And because we want to use some of the TripleO elements, we'll refer to them as we do the build. Once the images are built, we'll put them where tftpd-hpa can serve them.

(dib)mrda@irony:~/src$ export ELEMENTS_PATH=~/src/tripleo-image-elements/elements
(dib)mrda@irony:~/src$ mkdir images
(dib)mrda@irony:~/src$ cd images
(dib)mrda@irony:~/src/images$ disk-image-create ubuntu baremetal localboot dhcp-all-interfaces local-config -o my-image
(dib)mrda@irony:~/src/images$ ramdisk-image-create ubuntu deploy-ironic -o deploy-ramdisk
(dib)mrda@irony:~/src/images$ cp -rp * /tftpboot

Starting Ironic services

I like to do my development in virtualenvs, so we'll run our services there. Let's start with ironic-api.

(dib)mrda@irony:~/src/images$ deactivate
mrda@irony:~/src/images$ cd ~/src/ironic/
mrda@irony:~/src/ironic (master)$ tox -evenv --notest
mrda@irony:~/src/ironic (master)$ source .tox/venv/bin/activate
(venv)mrda@irony:~/src/ironic (master)$ ironic-api -v -d --config-file etc/ironic/ironic.conf.local

Now in a new terminal window for our VM, let's run ironic-conductor:

mrda@irony:~/src/images$ cd ~/src/ironic/
mrda@irony:~/src/ironic (master)$ source .tox/venv/bin/activate
(venv)mrda@krypton:~/src/ironic (master)$ python setup.py develop
(venv)mrda@krypton:~/src/ironic (master)$ ironic-conductor -v -d --config-file etc/ironic/ironic.conf.local

(If you get an error about being unable to load the pywsman library, follow the workaround in a previous blog post, Virtualenv and library fun, included below.)

Running Ironic Client

Let's open a new window on the VM for running an ironic command-line client to exercise what we've built:

mrda@irony:~$ cd src/python-ironicclient/
mrda@irony:~/src/python-ironicclient (master)$ tox -evenv --notest
mrda@irony:~/src/python-ironicclient (master)$ source .tox/venv/bin/activate

Now we need to fudge authentication, and point at our running ironic-api:

(venv)mrda@irony:~/src/python-ironicclient (master)$ export OS_AUTH_TOKEN=fake-token
(venv)mrda@irony:~/src/python-ironicclient (master)$ export IRONIC_URL=http://localhost:6385/

Let's try it out and see what happens, eh?


(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic driver-list

+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| pxe_amt             | test-host      |
+---------------------+----------------+

Looking good! Let's try registering the NUC as an Ironic node, specifying the deployment ramdisk:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-create -d pxe_amt -i amt_password='<the-nuc-admin-password>' -i amt_username='admin' -i amt_address='10.0.0.251' -i deploy_ramdisk='file:///tftpboot/deploy-ramdisk.initramfs' -i deploy_kernel='file:///tftpboot/deploy-ramdisk.kernel' -n thenuc
+--------------+--------------------------------------------------------------------------+
| Property     | Value                                                                    |
+--------------+--------------------------------------------------------------------------+
| uuid         | 924a5447-930e-4d27-837e-6dd5d5f10e16                                     |
| driver_info  | {u'amt_username': u'admin', u'deploy_kernel': u'file:///tftpboot/deploy- |
|              | ramdisk.kernel', u'amt_address': u'10.0.0.251', u'deploy_ramdisk':       |
|              | u'file:///tftpboot/deploy-ramdisk.initramfs', u'amt_password':           |
|              | u'******'}                                                               |
| extra        | {}                                                                       |
| driver       | pxe_amt                                                                  |
| chassis_uuid |                                                                          |
| properties   | {}                                                                       |
| name         | thenuc                                                                   |
+--------------+--------------------------------------------------------------------------+

Again more success!  Since we're not using Nova to manage or kick off the deploy, we need to tell Ironic where the instance we want deployed lives, along with some of the instance information:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-update thenuc add instance_info/image_source='file:///tftpboot/my-image.qcow2' instance_info/kernel='file:///tftpboot/my-image.vmlinuz' instance_info/ramdisk='file:///tftpboot/my-image.initrd' instance_info/root_gb=10
+------------------------+-------------------------------------------------------------------------+
| Property               | Value                                                                   |
+------------------------+-------------------------------------------------------------------------+
| target_power_state     | None                                                                    |
| extra                  | {}                                                                      |
| last_error             | None                                                                    |
| updated_at             | None                                                                    |
| maintenance_reason     | None                                                                    |
| provision_state        | available                                                               |
| clean_step             | {}                                                                      |
| uuid                   | 924a5447-930e-4d27-837e-6dd5d5f10e16                                    |
| console_enabled        | False                                                                   |
| target_provision_state | None                                                                    |
| provision_updated_at   | None                                                                    |
| maintenance            | False                                                                   |
| inspection_started_at  | None                                                                    |
| inspection_finished_at | None                                                                    |
| power_state            | None                                                                    |
| driver                 | pxe_amt                                                                 |
| reservation            | None                                                                    |
| properties             | {}                                                                      |
| instance_uuid          | None                                                                    |
| name                   | thenuc                                                                  |
| driver_info            | {u'amt_username': u'admin', u'amt_password': u'******', u'amt_address': |
|                        | u'10.0.0.251', u'deploy_ramdisk': u'file:///tftpboot/deploy-            |
|                        | ramdisk.initramfs', u'deploy_kernel': u'file:///tftpboot/deploy-        |
|                        | ramdisk.kernel'}                                                        |
| created_at             | 2015-09-10T00:55:27+00:00                                               |
| driver_internal_info   | {}                                                                      |
| chassis_uuid           |                                                                         |
| instance_info          | {u'ramdisk': u'file:///tftpboot/my-image.initrd', u'kernel':            |
|                        | u'file:///tftpboot/my-image.vmlinuz', u'root_gb': 10, u'image_source':  |
|                        | u'file:///tftpboot/my-image.qcow2'}                                     |
+------------------------+-------------------------------------------------------------------------+

Let's see what we've got:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-list
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| f8af4d4e-e3da-4a04-9596-8e4fef15e4eb | thenuc | None          | None        | available          | False       |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+


We now need to create a network port in ironic, and associate it with the mac address of the NUC.  But I'm lazy, so let's extract the node UUID first:

(venv)mrda@irony:~/src/python-ironicclient (master)$ NODEUUID=$(ironic node-list | tail -n +4 | head -n -1 | awk -F "| " '{print $2}')
(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic port-create -n $NODEUUID -a <nuc-mac-address>
+-----------+--------------------------------------+
| Property  | Value                                |
+-----------+--------------------------------------+
| node_uuid | 924a5447-930e-4d27-837e-6dd5d5f10e16 |
| extra     | {}                                   |
| uuid      | c6dddc3d-b9b4-4fbc-99e3-18b8017c7b01 |
| address   | <nuc-mac-address>                    |
+-----------+--------------------------------------+
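
As an aside, if you'd rather script these steps than scrape CLI output with awk, the Python API in python-ironicclient can do the same job. A rough sketch, assuming the 2015-era client and the noauth environment variables we exported above:

from ironicclient import client

# Same fudged authentication as the CLI exports above.
ironic = client.get_client('1',
                           os_auth_token='fake-token',
                           ironic_url='http://localhost:6385/')

# Find our node by name instead of parsing `ironic node-list`.
node = next(n for n in ironic.node.list() if n.name == 'thenuc')

# Associate the NUC's MAC address with the node, as port-create does.
port = ironic.port.create(node_uuid=node.uuid,
                          address='<nuc-mac-address>')
print(port.uuid)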


So let's validate everything we've done, before we try this out in anger:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-validate thenuc
+------------+--------+---------------+
| Interface  | Result | Reason        |
+------------+--------+---------------+
| boot       | True   |               |
| console    | None   | not supported |
| deploy     | True   |               |
| inspect    | None   | not supported |
| management | True   |               |
| power      | True   |               |
| raid       | None   | not supported |
+------------+--------+---------------+

And one more thing to do before we really start things rolling - ensure the NUC is listening to us:

(venv)mrda@irony:~/src/python-ironicclient (master)$ telnet 10.0.0.251 16992
Trying 10.0.0.251...
Connected to 10.0.0.251.
Escape character is '^]'.
^]close

telnet> close
Connection closed.

You might have to try that a couple of times to wake up the AMT interface, but it's important that you do so, to ensure you don't get a failed deploy.
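
If you get tired of poking it with telnet by hand, a few lines of Python will do the retrying for you. A little sketch (the host and port match the AMT setup above; wait_for_amt is just an illustrative helper):

import socket
import time

def wait_for_amt(host='10.0.0.251', port=16992, attempts=10):
    """Retry until the AMT interface accepts a TCP connection."""
    for _ in range(attempts):
        try:
            s = socket.create_connection((host, port), timeout=5)
            s.close()
            return True
        except (socket.error, socket.timeout):
            time.sleep(2)   # give the AMT interface a moment to wake up
    return False

if __name__ == '__main__':
    print(wait_for_amt())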

And then we set the node to active, which will network boot the deploy ramdisk via DHCP and TFTP, which will in turn write the user image to the disk, if everything goes well.  This will also take quite a long time, so it's time to go make that cup of tea :-)

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-set-provision-state thenuc active

Your NUC should have just booted into your user image and should be ready for you to use!

Postscript #1:

Actually, that's not what really happened.  It's what I would have liked to happen.

But there were some issues. Firstly, ironic-conductor complained about not being able to find 'ironic-rootwrap'.  And then once I symlinked that into place, it couldn't find the config for rootwrap, so I symlinked that into place too.  Then it complained that iscsiadm didn't have the correct permissions in rootwrap to do its thing...

So I gave up, and did the thing I didn't want to.  Back on the VM I ended up doing a "sudo python setup.py install" in the ironic directory so everything got installed into the correct system place and then I could restart ironic-conductor.

It should all work in develop mode, but clearly it doesn't, so in the interests of getting something up and going (and finishing this blog post :) I did the quick solution and installed system-wide. Perhaps I'll circle back and work out why someday :)

Postscript #2:

When doing this, the deployment can fail for a number of reasons.  To recover, you need to delete the enrolled node and start again, once you've worked out what the problem is and how to fix it.  To do that:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-set-maintenance thenuc on
(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-delete thenuc

Isn't it great that we can use names for nodes instead of UUIDs for most operations :)

Ironic on a NUC - part 1 - Setting things up

Just because a few people have asked, here is what I did to get a standalone Ironic installation going, deploying to an Intel NUC.

Why a NUC?  Well, the Intel NUC is a cute little piece of hardware that is well suited to being a test lab machine that can sit on my desk.  I'm using a DC53427HYE, which is an i5 with vPro.  vPro is an umbrella term for a bunch of Intel technologies, including AMT (Active Management Technology), which allows us to remotely manage this desktop for things like power management; think of it as an analogue of IPMI for servers.

Getting the VM ready

I like to do my development in VMs; after all, isn't that what the cloud is for? :-) So first off, using your virtualisation technology of choice, build a VM with Ubuntu 14.04.2 server on it.  I've allocated 2GB of RAM and a 30GB disk.  The reason for the larger-than-average disk is to leave room for building ramdisk and deployment disk images. I've called this box 'irony'.

On the VM, you'll need a few extra things once you've got the base OS installed:

mrda@irony:~$ sudo apt-get install python-openwsman ack-grep python-dev python-pip libmysqlclient-dev libxml2-dev git rabbitmq-server mysql-server isc-dhcp-server tftpd-hpa syslinux syslinux-common libxslt1-dev qemu-utils libpq-dev python-yaml open-iscsi

mrda@irony:~$ sudo pip install virtualenvwrapper six tox mysql-python


Thinking about the network

For this setup, I'm going to run separate networks for the control plane and data plane, and I've added a USB NIC to the NUC so I can keep the two apart. The public connection to the internet will be on the 192.168.1.x network, whereas the service net control plane will be on 10.x.x.x.  To do this I've added a second network interface to the VM, changed the networking to bridging for both NICs, assigned eth0 and eth1 appropriately, and updated /etc/network/interfaces in the VM so the right adapter is on the right network.  It ended up looking like this:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary (public) network interface
auto eth0
iface eth0 inet dhcp
        gateway 192.168.1.1

# Control plane
auto eth1
iface eth1 inet static
        address 10.0.0.5
        netmask 255.255.255.0


Setting up DHCP 

We need to make sure we're listening for DHCP requests on the right interface:

mrda@irony:~$ sudo sed -i 's/INTERFACES=""/INTERFACES="eth1"/' /etc/default/isc-dhcp-server

Now configure your DHCP server to hand out an address to the NUC, accounting for some of the uniqueness of the device :) The tail of my /etc/dhcp/dhcpd.conf looks a bit like this:

allow duplicates;
ignore-client-uids true;
authoritative;

subnet 10.0.0.0 netmask 255.255.255.0 {

    group {
        host nuc {
            hardware ethernet <your-nucs-mac-address>;
            fixed-address 10.0.0.251; # NUC's IP address
            allow booting;
            allow bootp;
            next-server <this-servers-ip-address>;
            filename "pxelinux.0";
        }
    }
}

There's some more background on this in a previous blog post, DHCP and NUCs, included below.

Setting up TFTP

mrda@irony:~$ sudo mkdir /tftpboot
mrda@irony:~$ sudo chmod a+rwx /tftpboot/

We'll need to configure tftpd-hpa rather specifically, so /etc/default/tftpd-hpa looks like this:

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/tftpboot"
TFTP_ADDRESS="[::]:69"
TFTP_OPTIONS="-vvvv --map-file /tftpboot/map-file"

We'll also need to create /tftpboot/map-file which will need to look like this:

re ^(/tftpboot/) /tftpboot/\2
re ^/tftpboot/ /tftpboot/
re ^(^/) /tftpboot/\1
re ^([^/]) /tftpboot/\1

This is because of a weird combination of the feature sets of tftpd-hpa, isc-dhcp-server, ironic and diskimage-builder. Basically, the combination of relative and dynamic paths is incompatible, and we need to work around the limitation by setting up a map-file. This would be a nice little patch to send upstream one day to one or more of these projects. Of course, if you're deploying Ironic in a production OpenStack-y way, where you use neutron and dnsmasq, you don't need the map-file; it's only when you configure all these things by hand that you face this problem.
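
To make the rewriting concrete, here's a toy Python sketch of what we're asking tftpd-hpa to do (illustrative only; the real matching is done by tftpd-hpa from the map-file above):

import re

def remap(path):
    # Requests already rooted at /tftpboot pass through untouched;
    # everything else, absolute or relative, gets rooted there.
    if re.match(r'^/tftpboot/', path):
        return path
    return re.sub(r'^/?', '/tftpboot/', path, count=1)

for p in ('pxelinux.0', '/deploy-ramdisk.kernel',
          '/tftpboot/pxelinux.0'):
    print('%s -> %s' % (p, remap(p)))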

And we'll want to make sure the PXE boot stuff is all in place ready to be served over TFTP.

mrda@irony:~$ sudo cp /usr/lib/syslinux/pxelinux.0 /tftpboot/
mrda@irony:~$ sudo cp /usr/lib/syslinux/chain.c32 /tftpboot/

And now let's start these services:

mrda@irony:~$ sudo service tftpd-hpa restart
mrda@irony:~$ sudo service isc-dhcp-server restart


Installing Ironic

Just install straight from github (HEAD is always installable, right?)

mrda@irony:~$ mkdir ~/src; cd ~/src
mrda@irony:~/src$ git clone https://github.com/openstack/ironic.git 
mrda@irony:~/src$ git clone https://github.com/openstack/python-ironicclient.git 
mrda@irony:~/src$ git clone https://github.com/openstack/tripleo-image-elements.git


Configuring Ironic

Now we'll need to configure Ironic to work standalone. There are a few config options that need to be changed from the defaults, including changing the authentication policy, setting the right driver for AMT, setting a hostname, and turning off that pesky power-state syncing task.

mrda@irony:~$ cd src/ironic/
mrda@irony:~/src/ironic (master)$ cp etc/ironic/ironic.conf.sample etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#auth_strategy=keystone/auth_strategy=noauth/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#enabled_drivers=pxe_ipmitool/enabled_drivers=pxe_amt/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#host=.*/host=test-host/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#sync_power_state_interval=60/sync_power_state_interval=-1/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s%#api_url=<None>%api_url=http://10.0.0.5:6385/%" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ sed -i "s/#dhcp_provider=neutron/dhcp_provider=none/" etc/ironic/ironic.conf.local

There's also the little matter of making sure the image directories are ready:

mrda@irony:~/src/ironic (master)$ sudo mkdir -p /var/lib/ironic/images
mrda@irony:~/src/ironic (master)$ sudo mkdir -p /var/lib/ironic/master_images
mrda@irony:~/src/ironic (master)$ sudo chmod a+rwx /var/lib/ironic/images
mrda@irony:~/src/ironic (master)$ sudo chmod a+rwx /var/lib/ironic/master_images


Initialising the database

Since we've decided to use MySQL instead of SQLite, we'll need to set up the schema and update the database connection string.

mrda@irony:~/src/ironic (master)$ mysql -u root -p -e "create schema ironic"
mrda@irony:~/src/ironic (master)$ sed -i "s/#connection=.*/connection=mysql:\/\/root:<database-password>@localhost\/ironic/" etc/ironic/ironic.conf.local
mrda@irony:~/src/ironic (master)$ ironic-dbsync --config-file etc/ironic/ironic.conf.local create_schema

And that's everything that needs to be done to prepare the VM for running ironic. The next post will cover starting the ironic services, building images for deployment, and poking ironic from the command line.

Thursday, 23 July 2015

Virtualenv and library fun

Doing python development means using virtualenv, which is wonderful.  Still, sometimes you find a gotcha that trips you up.

Today, for whatever reason, inside a venv inside a brand-new Ubuntu 14.04 install, I could not see a system-wide install of pywsman (installed via sudo apt-get install python-openwsman).

For example:
mrda@host:~$ python -c 'import pywsman'
# Works

mrda@host:~$ tox -evenv --notest
(venv)mrda@host:~$ python -c 'import pywsman'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named pywsman
# WAT?

Let's try something else that's installed system-wide
(venv)mrda@host:~$ python -c 'import six'
# Works

Why does six work, and pywsman not?
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/six*
-rw-r--r-- 1 root root  1418 Mar 26 22:57 /usr/lib/python2.7/dist-packages/six-1.5.2.egg-info
-rw-r--r-- 1 root root 22857 Jan  6  2014 /usr/lib/python2.7/dist-packages/six.py
-rw-r--r-- 1 root root 22317 Jul 23 07:23 /usr/lib/python2.7/dist-packages/six.pyc
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/*pywsman*
-rw-r--r-- 1 root root  80590 Jun 16  2014 /usr/lib/python2.7/dist-packages/pywsman.py
-rw-r--r-- 1 root root 293680 Jun 16  2014 /usr/lib/python2.7/dist-packages/_pywsman.so

The only thing that comes to mind is that pywsman wraps a .so. (More likely, though, the venv simply doesn't see system site-packages at all by default, and six only works because tox installs it into the venv as a project dependency, which it can't do for pywsman.)

A work-around is to tell venv that it should use the system-wide install of pywsman, like this:

# Kill the old venv first
(venv)mrda@host:~$ deactivate
mrda@host:~$ rm -rf .tox/venv

Now start over
mrda@host:~$ tox -evenv --notest --sitepackages pywsman
(venv)mrda@host:~$ python -c "import pywsman"
# Fun and Profit!


Wednesday, 22 July 2015

DHCP and NUCs

I've put together a little test network at home for doing some Ironic testing on hardware using NUCs.  So far it's going quite well, although one problem that had me stumped for a while was getting the NUC to behave itself when obtaining an IP address with DHCP.

Each time I network booted the NUC, a different IP address from the pool was being allocated (i.e. the next one in the DHCP address pool).

There's already a documented problem with isc-dhcp-server for devices where the BMC and host share a NIC (including the same MAC address), but this was even worse: on closer examination, a different client UID was being presented as part of the DHCPDISCOVER for the node each time. (Fortunately the NUC's BMC doesn't do this as well.)

I couldn't really find a solution online, but the answer was there all the time in the man page: there's a cute little option, "ignore-client-uids true;", which ensures that only the MAC address is used for DHCP lease matching, not the client UID.  Turning this on means the NUC now receives the same IP address on every deploy, and not just for the node but also for the BMC, so it works around the aforementioned bug as well.  Woohoo!

There's still one remaining problem: I can't seem to get a fixed IP address returned in the DHCPOFFER, so I have to configure a dynamic pool instead (which is fine, because this is a test network with only a few nodes in it).  One to resolve another day...

Tuesday, 28 April 2015

OpenStack Hint of the Day: Wed Apr 29

When you're running tox and you get something like this:

mrda@garner:~/src/python-ironicclient (review/michael_davies/file-caching)$ tox -e py34
py34 runtests: PYTHONHASHSEED='3098345924'
py34 runtests: commands[0] | python setup.py testr --slowest --testr-args=
running testr
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./ironicclient/tests/unit}  --list 
db type could not be determined
error: testr failed (3)
ERROR: InvocationError: '/home/mrda/src/python-ironicclient/.tox/py34/bin/python setup.py testr --slowest --testr-args='
________________________________________________________________________________________________ summary _________________________________________________________________________________________________
ERROR:   py34: commands failed

The solution is to "rm -rf .testrepository/" and try again.

(Thanks to this little reference hidden away at https://wiki.openstack.org/wiki/Python3#tox.2Ftestr_error:_db_type_could_not_be_determined)