Thursday, 23 July 2015

Virtualenv and library fun

Doing Python development means using virtualenv, which is wonderful.  Still, sometimes you find a gotcha that trips you up.

Today, for whatever reason, inside a venv in a brand new Ubuntu 14.04 install, I could not see the system-wide install of pywsman (installed via sudo apt-get install python-openwsman).

For example:
mrda@host:~$ python -c 'import pywsman'
# Works

mrda@host:~$ tox -evenv --notest
(venv)mrda@host:~$ python -c 'import pywsman'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named pywsman
# WAT?

Let's try something else that's installed system-wide:
(venv)mrda@host:~$ python -c 'import six'
# Works

Why does six work, and pywsman not?
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/six*
-rw-r--r-- 1 root root  1418 Mar 26 22:57 /usr/lib/python2.7/dist-packages/six-1.5.2.egg-info
-rw-r--r-- 1 root root 22857 Jan  6  2014 /usr/lib/python2.7/dist-packages/six.py
-rw-r--r-- 1 root root 22317 Jul 23 07:23 /usr/lib/python2.7/dist-packages/six.pyc
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/*pywsman*
-rw-r--r-- 1 root root  80590 Jun 16  2014 /usr/lib/python2.7/dist-packages/pywsman.py
-rw-r--r-- 1 root root 293680 Jun 16  2014 /usr/lib/python2.7/dist-packages/_pywsman.so

The only difference that comes to mind is that pywsman wraps a native library (_pywsman.so), while six is pure Python.
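
A quick way to dig further is to ask Python where an import is actually coming from, and what's on the module search path (these are generic diagnostics, not anything specific to pywsman):

(venv)mrda@host:~$ python -c 'import six; print six.__file__'
(venv)mrda@host:~$ python -c 'import sys; print "\n".join(sys.path)'
# If /usr/lib/python2.7/dist-packages doesn't appear in sys.path,
# the venv can't see anything that only lives there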

A work-around is to tell tox to build the venv with access to the system-wide site-packages, so it can see pywsman, like this:

# Kill the old venv first
(venv)mrda@host:~$ deactivate
mrda@host:~$ rm -rf .tox/venv

Now start over
mrda@host:~$ tox -evenv --notest --sitepackages pywsman
(venv)mrda@host:~$ python -c "import pywsman"
# Fun and Profit!
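
If you want this behaviour on every run rather than remembering the flag, the same thing can be baked into tox.ini.  A sketch, assuming a venv testenv of the kind tox -evenv expects:

# tox.ini
[testenv:venv]
sitepackages = True
commands = {posargs}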


Wednesday, 22 July 2015

DHCP and NUCs

I've put together a little test network at home for doing some Ironic testing on hardware using NUCs.  So far it's going quite well, although one problem that had me stumped for a while was getting the NUC to behave itself when obtaining an IP address with DHCP.

Each time I booted the network, the NUC was allocated a different IP address (i.e. the next free one in the DHCP address pool).

There's already a documented problem with isc-dhcp-server for devices where the BMC and host share a NIC (and hence the same MAC address), but this was even worse: on closer examination, a different Client UID was being presented in the node's DHCPDISCOVER each time.  (Fortunately the NUC's BMC doesn't do this as well.)
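
You can watch this happening with tcpdump; something like the following (eth0 is a placeholder for whatever interface faces the test network) decodes the DHCP options, including the Client-ID, as the node boots:

mrda@host:~$ sudo tcpdump -i eth0 -vvv -n 'port 67 or port 68'
# With -vvv, tcpdump decodes the BOOTP/DHCP payload, so you can
# compare the Client-ID option across successive DHCPDISCOVERs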

I couldn't really find a solution online, but the answer was there all the time in the man page - there's a cute little option, "ignore-client-uids true;", that ensures only the MAC address is used for DHCP lease matching, and not the Client UID.  Turning this on means that on each deploy the NUC now receives the same IP address - and not just for the node, but also for the BMC - so it works around the aforementioned bug as well.  Woohoo!
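
For reference, this is roughly where the option sits in /etc/dhcp/dhcpd.conf (the addresses below are placeholders, not my actual network):

# /etc/dhcp/dhcpd.conf
ignore-client-uids true;

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
}

Then restart the server so it takes effect:

mrda@host:~$ sudo service isc-dhcp-server restart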

There's still one remaining problem: I can't seem to get a fixed IP address returned in the DHCPOFFER, so I have to configure a dynamic pool instead (which is fine, because this is a test network with only a few nodes in it).  One to resolve another day...
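
For the record, the kind of host declaration I'd expect to hand out a fixed address looks like this (the name, MAC and IP are placeholders) - it just isn't being honoured in my setup yet:

host nuc1 {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.1.20;
}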