Planet LA

Planet Linux Australia - http://planet.linux.org.au

Michael Still: On Selecting a Well Engaged Open Source Vendor

April 15, 2018 - 23:00

Aptira is in an interesting position in the Open Source market, because we don’t usually sell software. Instead, our customers come to us seeking assistance with deciding which OpenStack to use, or how to embed ONAP into their nationwide networks, or how to move their legacy networks to the software defined future. Therefore, our most common role is as a trusted advisor to help our customers decide which Open Source products to buy.

(My boss would insist that I point out here that we do customisation of Open Source for our customers, and have assisted many in the past with deploying pure upstream solutions. Basically, we do what is the right fit for the customer, and aren’t obsessed with fitting customers into pre-defined moulds that suit our partners.)

That makes it important that we recommend products from companies that are well engaged with their upstream Open Source communities. That might be OpenStack, or ONAP, or even something like Open Daylight. This raises the obvious question – what makes a company well engaged with an upstream project?

Read more over at my employer’s blog

Categories: Aligned Planets

Michael Still: Configuring docker to use rexray and Ceph for persistent storage

April 15, 2018 - 21:00

For various reasons I wanted to play with docker containers backed by persistent Ceph storage. rexray seemed like the way to do that, so here are my notes on getting that working…

First off, I needed to install rexray:

    root@labosa:~/rexray# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh
    Selecting previously unselected package rexray.
    (Reading database ... 177547 files and directories currently installed.)
    Preparing to unpack rexray_0.9.0-1_amd64.deb ...
    Unpacking rexray (0.9.0-1) ...
    Setting up rexray (0.9.0-1) ...
    rexray has been installed to /usr/bin/rexray

    REX-Ray
    -------
    Binary: /usr/bin/rexray
    Flavor: client+agent+controller
    SemVer: 0.9.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    Formed: Thu, 04 May 2017 07:38:11 AEST

    libStorage
    ----------
    SemVer: 0.6.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    Formed: Thu, 04 May 2017 07:36:11 AEST

Which is of course horrid. What that script seems to have done is install a deb’d version of rexray based on an alien’d package:

    root@labosa:~/rexray# dpkg -s rexray
    Package: rexray
    Status: install ok installed
    Priority: extra
    Section: alien
    Installed-Size: 36140
    Maintainer: Travis CI User <travis@testing-gce-7fbf00fc-f7cd-4e37-a584-810c64fdeeb1>
    Architecture: amd64
    Version: 0.9.0-1
    Depends: libc6 (>= 2.3.2)
    Description: Tool for managing remote & local storage.
     A guest based storage introspection tool that allows local
     visibility and management from cloud and storage platforms.
     .
     (Converted from a rpm package by alien version 8.86.)

If I were building anything more than a test environment, I think I'd want to do a better job of installing rexray than this, so you've been warned.

Next, we need to configure rexray to use Ceph. The configuration details are cunningly hidden in the libstorage docs, and aren't mentioned at all in the rexray docs, so you probably want to take a look at the libstorage docs on ceph. First off, we need to install the ceph tools, and copy the ceph authentication information from the Ceph cluster we installed using openstack-ansible earlier.

    root@labosa:/etc# apt-get install ceph-common
    root@labosa:/etc# scp -rp 172.29.239.114:/etc/ceph .
    The authenticity of host '172.29.239.114 (172.29.239.114)' can't be established.
    ECDSA key fingerprint is SHA256:SA6U2fuXyVbsVJIoCEHL+qlQ3xEIda/MDOnHOZbgtnE.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '172.29.239.114' (ECDSA) to the list of known hosts.
    rbdmap                              100%   92    0.1KB/s   00:00
    ceph.conf                           100%  681    0.7KB/s   00:00
    ceph.client.admin.keyring           100%   63    0.1KB/s   00:00
    ceph.client.glance.keyring          100%   64    0.1KB/s   00:00
    ceph.client.cinder.keyring          100%   64    0.1KB/s   00:00
    ceph.client.cinder-backup.keyring          71    0.1KB/s   00:00
    root@labosa:/etc# modprobe rbd

You also need to configure rexray. My first attempt looked like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: ceph

And the rexray output sure made it look like it worked…

    root@labosa:/etc# rexray service start
    ● rexray.service - rexray
       Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2017-05-29 10:14:07 AEST; 33ms ago
     Main PID: 477423 (rexray)
        Tasks: 5
       Memory: 1.5M
          CPU: 9ms
       CGroup: /system.slice/rexray.service
               └─477423 /usr/bin/rexray start -f

    May 29 10:14:07 labosa systemd[1]: Started rexray.

Which looked good, but /var/log/syslog said:

    May 29 10:14:08 labosa rexray[477423]: REX-Ray
    May 29 10:14:08 labosa rexray[477423]: -------
    May 29 10:14:08 labosa rexray[477423]: Binary: /usr/bin/rexray
    May 29 10:14:08 labosa rexray[477423]: Flavor: client+agent+controller
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.9.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:38:11 AEST
    May 29 10:14:08 labosa rexray[477423]: libStorage
    May 29 10:14:08 labosa rexray[477423]: ----------
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.6.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:36:11 AEST
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="error starting libStorage server" error.driver=ceph time=1496016848215
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="default module(s) failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="daemon failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="error starting rex-ray" error.driver=ceph time=1496016848216

That’s because the service is called rbd it seems. So, the config file ended up looking like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: rbd
    rbd:
      defaultPool: rbd

Now to install docker:

    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install linux-image-extra-$(uname -r) \
        linux-image-extra-virtual
    root@labosa:/var/log# sudo apt-get install apt-transport-https \
        ca-certificates curl software-properties-common
    root@labosa:/var/log# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
        sudo apt-key add -
    root@labosa:/var/log# sudo add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) \
        stable"
    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install docker-ce

Now let’s make a rexray volume.

    root@labosa:/var/log# rexray volume ls
    ID  Name  Status  Size
    root@labosa:/var/log# docker volume create --driver=rexray --name=mysql \
        --opt=size=1
    mysql
    root@labosa:/var/log# rexray volume ls
    ID         Name   Status     Size
    rbd.mysql  mysql  available  1

(A size of 1 here means 1gb.)

Let’s start the container.

    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    Unable to find image 'mysql:latest' locally
    latest: Pulling from library/mysql
    10a267c67f42: Pull complete
    c2dcc7bb2a88: Pull complete
    17e7a0445698: Pull complete
    9a61839a176f: Pull complete
    a1033d2f1825: Pull complete
    0d6792140dcc: Pull complete
    cd3adf03d6e6: Pull complete
    d79d216fd92b: Pull complete
    b3c25bdeb4f4: Pull complete
    02556e8f331f: Pull complete
    4bed508a9e77: Pull complete
    Digest: sha256:2f4b1900c0ee53f344564db8d85733bd8d70b0a78cd00e6d92dc107224fc84a5
    Status: Downloaded newer image for mysql:latest
    ccc251e6322dac504e978f4b95b3787517500de61eb251017cc0b7fd878c190b

And now to prove that persistence works and that there’s nothing up my sleeve…

    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
        -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)

    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | sys                |
    +--------------------+
    4 rows in set (0.00 sec)

    mysql> create database demo;
    Query OK, 1 row affected (0.03 sec)

    mysql> use demo;
    Database changed
    mysql> create table foo(val char(5));
    Query OK, 0 rows affected (0.14 sec)

    mysql> insert into foo(val) values ('a'), ('b'), ('c');
    Query OK, 3 rows affected (0.08 sec)
    Records: 3  Duplicates: 0  Warnings: 0

    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)

Now let’s re-create the container and prove the data remains.

    root@labosa:/var/log# docker stop some-mysql
    some-mysql
    root@labosa:/var/log# docker rm some-mysql
    some-mysql
    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    99a7ccae1ad1865eb1bcc8c757251903dd2f1ac7d3ce4e365b5cdf94f539fe05
    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
        -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)

    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> use demo;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    Database changed
    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)

So there you go.

Categories: Aligned Planets

Michael Still: I think I found a bug in python’s unittest.mock library

April 15, 2018 - 21:00

Mocking is a pretty common thing to do in unit tests covering OpenStack Nova code. Over the years we’ve used various mock libraries to do that, with the flavour du jour being unittest.mock. I must say that I strongly prefer unittest.mock to the old mox code we used to write, but I think I just accidentally found a fairly big bug.

The problem is that Python mocks are magical. A mock is an object where you can call any method name, and the mock will happily pretend it has that method and return None. You can then later ask what “methods” were called on the mock.

However, you use the same mock object later to make assertions about what was called. Herein is the problem — the mock object doesn’t know if you’re the code under test, or the code that’s making assertions. So, if you fat finger the assertion in your test code, the assertion will just quietly map to a non-existent method which returns None, and your code will pass.

Here’s an example:

#!/usr/bin/python3

from unittest import mock


class foo(object):
    def dummy(a, b):
        return a + b


@mock.patch.object(foo, 'dummy')
def call_dummy(mock_dummy):
    f = foo()
    f.dummy(1, 2)

    print('Asserting a call should work if the call was made')
    mock_dummy.assert_has_calls([mock.call(1, 2)])
    print('Assertion for expected call passed')
    print()

    print('Asserting a call should raise an exception if the call wasn\'t made')
    mock_worked = False
    try:
        mock_dummy.assert_has_calls([mock.call(3, 4)])
    except AssertionError as e:
        mock_worked = True
        print('Expected failure, %s' % e)

    if not mock_worked:
        print('*** Assertion should have failed ***')
    print()

    print('Asserting a call where the assertion has a typo should fail, but '
          'doesn\'t')
    mock_worked = False
    try:
        mock_dummy.typo_assert_has_calls([mock.call(3, 4)])
    except AssertionError as e:
        mock_worked = True
        print('Expected failure, %s' % e)
        print()

    if not mock_worked:
        print('*** Assertion should have failed ***')
        print(mock_dummy.mock_calls)
        print()


if __name__ == '__main__':
    call_dummy()

If I run that code, I get this:

$ python3 mock_assert_errors.py
Asserting a call should work if the call was made
Assertion for expected call passed

Asserting a call should raise an exception if the call wasn't made
Expected failure, Calls not found.
Expected: [call(3, 4)]
Actual: [call(1, 2)]

Asserting a call where the assertion has a typo should fail, but doesn't
*** Assertion should have failed ***
[call(1, 2), call.typo_assert_has_calls([call(3, 4)])]

So, we should have been told that typo_assert_has_calls isn’t a thing, but we didn’t notice because it silently failed. I discovered this when I noticed an assertion with a (smaller than this) typo in its call in a code review yesterday.

I don’t really have a solution to this right now (I’m home sick and not thinking straight), but it would be interesting to see what other people think.

Categories: Aligned Planets

Michael Still: Python3 venvs for people who are old and grumpy

April 15, 2018 - 21:00

I’ve been using virtualenvwrapper to make venvs for python2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do python3, and virtualenvwrapper just isn’t a thing over there as best as I can tell.

So how do I make a venv? It’s really not too bad…

First, install the dependencies:

    git clone git://github.com/yyuu/pyenv.git .pyenv
    echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
    echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(pyenv init -)"' >> ~/.bashrc
    git clone https://github.com/yyuu/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv
    source ~/.bashrc

Now to make a venv, do something like this (in this case, infrasot is the name of the venv):

    mkdir -p ~/.virtualenvs/pyenv-infrasot
    cd ~/.virtualenvs/pyenv-infrasot
    pyenv virtualenv system infrasot

You can see your installed venvs like this:

    $ pyenv versions
    * system (set by /home/user/.pyenv/version)
      infrasot

Here system is the system-installed Python, and not a venv. To activate and deactivate the venv, do this:

    $ pyenv activate infrasot
    $ ... stuff you're doing ...
    $ pyenv deactivate

I’ll probably write wrappers at some point so that this looks like virtualenvwrapper, but its good enough for now.

Categories: Aligned Planets

Michael Still: Giving serial devices meaningful names

April 15, 2018 - 21:00

This is a hack I’ve been using for ages, but I thought it deserved a write up.

I have USB serial devices. Lots of them. I use them for home automation things, as well as for talking to devices such as the console ports on switches and so forth. For the permanently installed serial devices, one of the challenges is having them show up in predictable places, so that the scripts which know how to drive each device are talking to the right port.

For the trivial case, this is pretty easy with udev:

$ cat /etc/udev/rules.d/60-local.rules
KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="A8003Ye7", \
    SYMLINK+="radish"

This says for any USB serial device that is discovered (either inserted post boot, or at boot), if the USB vendor ID, product ID, and serial number match the relevant values, to symlink the device to “/dev/radish”.

You find out the vendor and product ID from lsusb like this:

$ lsusb
Bus 003 Device 003: ID 0624:0201 Avocent Corp.
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 007 Device 002: ID 0665:5161 Cypress Semiconductor USB to Serial
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

You can play with inserting and removing the device to determine which of these entries is the device you care about.

So that’s great, until you have more than one device with the same USB serial vendor and product id. Then things are a bit more… difficult.

It turns out that you can have udev execute a command on device insert to help you determine what symlink to create. So for example, I have this entry in the rules on one of my machines:

KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", \
    PROGRAM="/usr/bin/usbtest /dev/%k", \
    SYMLINK+="%c"

This results in /usr/bin/usbtest being run with the path of the device file on its command line for every device detection (of a matching device). The stdout of that program is then used as the name of a symlink in /dev.

So, that script attempts to talk to the device and determine what it is — in my case either a currentcost or a solar panel inverter.
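
The usbtest script itself isn’t included in the post, so here is a purely hypothetical sketch of the shape such a probe could take, assuming pyserial. udev hands it the device path as its only argument, and whatever it prints on stdout becomes the symlink name via %c. The banner strings matched below are invented; a real probe would match whatever your devices actually say.

#!/usr/bin/python3

# Hypothetical probe in the style of /usr/bin/usbtest: read a little from
# the device and print a symlink name on stdout for udev's SYMLINK+="%c".
import sys

import serial  # pyserial


def identify(path):
    with serial.Serial(path, 9600, timeout=2) as port:
        port.write(b'\n')
        banner = port.read(64)

    if b'<msg>' in banner:
        # CurrentCost meters send XML readings unprompted.
        return 'currentcost'
    if banner.startswith(b'INV'):
        return 'inverter'
    return 'unknownserial'


if __name__ == '__main__':
    print(identify(sys.argv[1]))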

Categories: Aligned Planets

Michael Still: Hugo nominees for 2018

April 15, 2018 - 21:00

Lifehacker kindly pointed out that the Hugo nominees are out for 2018. They are:

  • The Collapsing Empire, by John Scalzi. I’ve read this one and liked it.
  • New York 2140, by Kim Stanley Robinson. I’ve had a difficult time with Kim’s work in the past, but perhaps I’ll one day read this.
  • Provenance, by Ann Leckie. I liked Ancillary Justice, but failed to fully read the sequel, so I guess we’ll wait and see on this one.
  • Raven Stratagem, by Yoon Ha Lee. I know nothing!
  • Six Wakes, by Mur Lafferty. Again, I know nothing about this book or this author.

So a few there to consider in the future.

Categories: Aligned Planets

Michael Still: The Collapsing Empire

April 15, 2018 - 21:00

This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don’t know that and are busy having petty trade wars instead. It isn’t a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire…

Title: The Collapsing Empire
Author: John Scalzi
Genre: Fiction
Publisher: Tor Books
Release Date: March 21, 2017
Pages: 336

Our universe is ruled by physics and faster than light travel is not possible—until the discovery of The Flow, an extra-dimensional field we can access at certain points in space-time that transport us to other worlds, around other stars.

Humanity flows away from Earth, into space, and in time forgets our home world and creates a new empire, the Interdependency, whose ethos requires that no one human outpost can survive without the others. It’s a hedge against interstellar war—and a system of control for the rulers of the empire.

The Flow is eternal—but it is not static. Just as a river changes course, The Flow changes as well, cutting off worlds from the rest of humanity. When it’s discovered that The Flow is moving, possibly cutting off all human worlds from faster than light travel forever, three individuals—a scientist, a starship captain and the Empress of the Interdependency—are in a race against time to discover what, if anything, can be salvaged from an interstellar empire on the brink of collapse.

“John Scalzi is the most entertaining, accessible writer working in SF today.” —Joe Hill

“If anyone stands at the core of the American science fiction tradition at the moment, it is Scalzi.” —The Encyclopedia of Science Fiction, Third Edition

Categories: Aligned Planets

Michael Still: Things I read today: the best description I’ve seen of metadata routing in neutron

April 15, 2018 - 21:00

I happened upon a thread about OVN’s proposal for how to handle nova metadata traffic, which linked to this very good Suse blog post about how metadata traffic is routed in neutron. I’m just adding the link here because I think it will be useful to others. The OVN proposal is also an interesting read.

Categories: Aligned Planets

Michael Still: Escaping from blosxom

April 15, 2018 - 21:00

I’ve been running my personal blog on a very hacked version of blosxom for a hilariously long time, and its time to escape. I’ve therefore started converting all of the content to wordpress here, and will eventually redirect the old domain to here as well.

Why blogging, when it’s so 2000? I’m increasingly disinterested in social media like Facebook and Twitter. I figure if I’m going to note something down that looks like it might be useful to others I’ll put it on ye olde blog instead.

I’m sure the conversion isn’t perfect, and I’ve decided not to migrate very old content that simply not interesting any more (linux kernel patches from 2004 for example). If you find a post which has converted badly, just comment on it and I’ll do something about it. I am very sure that pretty much no one will do that thing however.

Categories: Aligned Planets

Michael Still: Nova vendordata deployment, an excessively detailed guide

April 15, 2018 - 21:00

Nova presents configuration information to instances it starts via a mechanism called metadata. This metadata is made available via either a configdrive, or the metadata service. These mechanisms are widely used via helpers such as cloud-init to specify things like the root password the instance should use. There are three separate groups of people who need to be able to specify metadata for an instance.

User provided data

The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the Nova APIs can be used to upload a key and then specify that key during the Nova boot API request. For less structured data, a small opaque blob of data may be passed via the user-data feature of the Nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server to fetch post-boot configuration information from.

Nova provided data

Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the network configuration for the instance, as well as the requested hostname for the instance. This happens by default and requires no configuration by the user or deployer.

Deployer provided data

There is however a third type of data. It is possible that the deployer of OpenStack needs to pass data to an instance. It is also possible that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot — the user starting the instance should not have access to Active Directory to create this token, but the Nova deployment might have permissions to generate the token on the user’s behalf.

Nova supports a mechanism to add “vendordata” to the metadata handed to instances. This is done by loading named modules, which must appear in the nova source code. We provide two such modules:

  • StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don’t change between instances, such as the location of the corporate puppet server.
  • DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance.

Tell me more about DynamicJSON

Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it’s the most interesting bit here.

To use DynamicJSON, you configure it like this:

  • Add “DynamicJSON” to the vendordata_providers configuration option. This can also include “StaticJSON” if you’d like.
  • Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.

The format for an entry in vendordata_dynamic_targets is like this:

<name>@<url>

Where name is a short string not including the ‘@’ character, and where the URL can include a port number if so required. An example would be:

testing@http://127.0.0.1:125

Metadata fetched from this target will appear in the metadata service as a new file called vendor_data2.json, with a path (either in the metadata service URL or in the configdrive) like this:

openstack/2016-10-06/vendor_data2.json

For each dynamic target, there will be an entry in the JSON file named after that target. For example:

{ "testing": { "value1": 1, "value2": 2, "value3": "three" } }

Do not specify the same name more than once. If you do, we will ignore subsequent uses of a previously used name.

The following data is passed to your REST service as a JSON encoded POST (a sketch of a minimal receiving service follows the list):

  • project-id: the UUID of the project that owns the instance
  • instance-id: the UUID of the instance
  • image-id: the UUID of the image used to boot this instance
  • user-data: as specified by the user at boot time
  • hostname: the hostname of the instance
  • metadata: as specified by the user at boot time
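
To make the exchange concrete, here is a minimal sketch of a DynamicJSON target, assuming Flask. This is not the author’s sample service (that lives in the vendordata repository linked below); the request fields come from the list above, the response keys are invented, and the keystone auth middleware is left out for brevity.

#!/usr/bin/python3

# Minimal sketch: nova POSTs the JSON fields listed above, and whatever
# JSON we return appears under our target's name in vendor_data2.json.
import flask

app = flask.Flask(__name__)


@app.route('/', methods=['POST'])
def vendordata():
    details = flask.request.get_json()

    # Invented response keys; a real deployment might mint an Active
    # Directory token for details['instance-id'] here instead.
    return flask.jsonify({
        'hostname_was': details.get('hostname'),
        'project': details.get('project-id'),
    })


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8888)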

Deployment considerations

Nova provides authentication to external metadata services in order to provide some level of certainty that the request came from nova. This is done by providing a service token with the request — you can then just deploy your metadata service with the keystone authentication WSGI middleware. This is configured using the keystone authentication parameters in the vendordata_dynamic_auth configuration group.

This behaviour is optional, however: if you do not configure a service user, nova will not authenticate with the external metadata service.

Deploying the sample vendordata service

There is a sample vendordata service that is meant to model what a deployer would use for their custom metadata at http://github.com/mikalstill/vendordata. Deploying that service is relatively simple:

$ git clone http://github.com/mikalstill/vendordata
$ cd vendordata
$ apt-get install virtualenvwrapper
$ . /etc/bash_completion.d/virtualenvwrapper (only needed if virtualenvwrapper wasn't already installed)
$ mkvirtualenv vendordata
$ pip install -r requirements.txt

We need to configure the keystone WSGI middleware to authenticate against the right keystone service. There is a sample configuration file in git, but it’s configured to work with an openstack-ansible all-in-one install that I set up for my private testing, which probably isn’t what you’re using:

[keystone_authtoken]
insecure = False
auth_plugin = password
auth_url = http://172.29.236.100:35357
auth_uri = http://172.29.236.100:5000
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 5dff06ac0c43685de108cc799300ba36dfaf29e4
region_name = RegionOne

Per the README file in the vendordata sample repository, you can test the vendordata server in a stand alone manner by generating a token manually from keystone:

$ curl -d @credentials.json -H "Content-Type: application/json" \
    http://172.29.236.100:5000/v2.0/tokens > token.json
$ token=`cat token.json | python -c "import sys, json; print json.loads(sys.stdin.read())['access']['token']['id'];"`

We then include that token in a test request to the vendordata service:

curl -H "X-Auth-Token: $token" http://127.0.0.1:8888/

Configuring nova to use the external metadata service

Now we’re ready to wire up the sample metadata service with nova. You do that by adding something like this to the nova.conf configuration file:

[api]
vendordata_providers=DynamicJSON
vendordata_dynamic_targets=testing@http://metadatathingie.example.com:8888

Where metadatathingie.example.com is the IP address or hostname of the server running the external metadata service. Now if we boot an instance like this:

nova boot --image 2f6e96ca-9f58-4832-9136-21ed6c1e3b1f --flavor tempest1 --nic net-name=public --config-drive true foo

We end up with a config drive which contains the information our external metadata service returned (in the example case, handy Carrie Fisher quotes):

# cat openstack/latest/vendor_data2.json | python -m json.tool
{
    "testing": {
        "carrie_says": "I really love the internet. They say chat-rooms are the trailer park of the internet but I find it amazing."
    }
}

Categories: Aligned Planets

Michael Still: So you want to setup a Ceph dev environment using OSA

April 15, 2018 - 21:00

Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I have a need for a Ceph development environment it seems logical that I would build it by building an openstack-ansible Ocata AIO. There were a few gotchas there, so I want to explain the process I used.

First off, Ceph is enabled in an openstack-ansible AIO using a thing I’ve never seen before called a “Scenario”. Basically this means that you need to export an environment variable called “SCENARIO” before running the AIO install. Something like this will do the trick:

    export SCENARIO=ceph

Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

    --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml  2017-05-26 08:55:07.803635173 +1000
    +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml  2017-05-26 08:58:30.417019878 +1000
    @@ -338,7 +338,9 @@
     # foo: 1234
     # bar: 5678
     #
    -ceph_conf_overrides: {}
    +ceph_conf_overrides:
    +  global:
    +    osd_pool_default_pg_num: 8


     #############
    @@ -373,4 +375,4 @@
     # Set this to true to enable File access via NFS. Requires an MDS role.
     nfs_file_gw: true
     # Set this to true to enable Object access via NFS. Requires an RGW role.
    -nfs_obj_gw: false
    \ No newline at end of file
    +nfs_obj_gw: false

That of course needs to be done after the Ceph role has been fetched, but before it is executed, so in other words after the AIO bootstrap, but before the install.

And that was about it (although of course that took a fair while to work out). I have this automated in my little install helper thing, so I’ll never need to think about it again which is nice.

Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere.

    root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
        cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
         health HEALTH_OK
         monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=172.29.239.114:6789/0}
                election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
         osdmap e20: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                102156 kB used, 3070 GB / 3070 GB avail
                      40 active+clean
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
    ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 2.99817 root default
    -2 2.99817     host labosa
     0 0.99939         osd.0        up  1.00000          1.00000
     1 0.99939         osd.1        up  1.00000          1.00000
     2 0.99939         osd.2        up  1.00000          1.00000

Categories: Aligned Planets

OpenSTEM: Australia and the Commonwealth Games

April 15, 2018 - 15:05
Australia has been doing exceptionally well at the 2018 Commonwealth Games, held at the Gold Coast, Queensland. We can be very proud of our athletes, not only for their sporting prowess, but also because of their friendly demeanour and wonderful examples of the spirit of sportsmanship. I’m sure we all felt proud when the Australian […]
Categories: Aligned Planets

Ben Martin: My little robotic pals

April 13, 2018 - 15:30
Years ago I decided to build an indoor robot with multiple kinects for navigation and a robotic arm for manipulation. It was an interesting time working out how to do this and what is needed to get a mobile base to map and navigate a static and dynamic indoor space. Any young players reading this might think that ROS can just magically make this all happen. There are some interesting issues to discover building your own base and some, um, "issues" shall we say that you will need to address that are not in the books or docs. I won't spoil it here for the new players other than to say be prepared to be persistent. 


There are two active wheels at the front and a single drag wheel at the back about 12 inches behind the front wheels. I wrote the code to control the arm myself as custom ROS nodes. A great trick here is you can inject sinusoidal movement by injecting a shim ROS node to take one target and smoothly move towards it.

Now I have a new friend for outdoor activity, the "hound bot". The little furry friend is still sans hair but has gps, imu, rc control override, and a ps4 eye camera mounted for depth perception and mapping. Taking a leaf out of one of the big car makers' books and only using cameras for navigation. But for me it is about cost, since a good lidar is still much too expensive for the hound.


The hound is a sort of monocoque where the copper looking square part at the front is part of a 1/4 inch aircraft grade alloy solid welded chassis that extends the length of the robot. The hound can do about 20km/h and is around 20kg in heft. The electronics bay in the middle is protected by a reinforced carbon fibre layup that I did. Mixing materials for fun and slight weight loss.

One great part about doing this "because I want to" is that I am unbounded. Academic institutions might say that building robust alloy shells is not a worthwhile task and only the abstract algorithms matter. I get to pick and choose what matters based purely on what is interesting, what is hard to do (yay!), and what will help me get the robot to perform a task that I want.

The hound will get gripper(s) so it can autonomously "fetch" things for me such as the mail or go find and pick up objects on the lawn.
Categories: Aligned Planets

Donna Benjamin: Leadership, and teamwork.

April 13, 2018 - 07:02

I'm angry and defensive. I don't know why. So I'm trying hard to figure that out right now.

Here's some words.

I'm writing these words for myself to try and figure this out.
I'm hoping these words might help make it clear.
I'm fearful these words will make it worse.

But I don't want to be silent about this.

Content Warning: This post refers to genocide.

This is about a discussion at the teamwork and leadership workshop at DrupalCon. For perhaps 5 mins within a 90 minute session we talked about Hitler. It was an intensely thought provoking, and uncomfortable 5 minute conversation. It was nuanced. It wasn't really tweetable.

On Holocaust memorial day, it seems timely to explore whether or not we should talk about Hitler when exploring the nature of leadership. Not all leaders are good. Call them dictators, call them tyrants, call them fascists, call them evil. Leadership is defined differently by different cultures, at different times, and in different contexts.

Some people in the room were upset and disgusted that we had that conversation. I'm really very deeply sorry about that.

Some of them then talked about it with others afterwards, which is great. It was a confronting conversation, and one, frankly, we should all be having as genocide and fascism exist in very real ways in the very real world.

But some of those they spoke with, who weren't there, seem to have extrapolated from that conversation that it was something different to what I experienced in the room. I feel they formed opinions that I can only call, well, what words can I call those opinions? Uninformed? Misinformed? Out of context? Wrong? That's probably unfair, it's just my perspective. But from those opinions, they also made assumptions, and turned those assumptions into accusations.

One person said they were glad they weren't there, but clearly happy to criticise us from afar on twitter. I responded that I thought it was a shame they didn't come to the workshop, but did choose to publicly criticise our work. Others responded to that saying this was disgusting, offensive, unacceptable and inappropriate that we would even consider having this conversation. One accused me of trying to shut down the conversation.

So, I think perhaps the reason I'm feeling angry and defensive, is I'm being accused of something I don't think I did.

And I want to defend myself.

I've studied World War Two and the Genocide that took place under Hitler's direction.

My grandmother was arrested in the early 1930's and held in a concentration camp. She was, thankfully, released and fled Germany to Australia as a refugee before the war was declared. Her mother was murdered by Hitler. My grandfather's parents and sister were also murdered by Hitler.

So, I guess I feel like I've got a pretty strong understanding of who Hitler was, and what he did.

So when I have people telling me, that it's completely disgusting to even consider discussing Hitler in the context of examining what leadership is, and what it means? Fuck that. I will not desist. Hitler was a monster, and we must never forget what he was, or what he did.

During silent reflection on a number of images, I wrote this note.

"Hitler was a powerful leader. No question. So powerful, he destroyed the world."

When asked if they thought Hitler was a leader or not, most people in the room, including me, put up their hand. We were wrong.

The four people who put their hand up to say he was NOT a leader were right.

We had not collectively defined leadership at that point. We were in the middle of a process doing exactly that.

The definition we were eventually offered is that leaders must care for their followers, and must care for people generally.

At no point, did anyone in that room, consider the possibility that Hitler was a "Good Leader" which is the misinformed accusation I most categorically reject.

Our facilitator, Adam Goodman, told us we were all wrong, except the four who rejected Hitler as an example of a Leader, by saying, that no, he was not a leader, but yes, he was a dictator, yes he was a tyrant. But he was not a leader.

Whilst I agree, and was relieved by that reframing, I would also counter argue that it is English semantics.

Someone else also reminded us, that Hitler was elected. I too, was elected to the board of the Drupal Association, I was then appointed to one of the class Director seats. My final term ends later this year, and frankly, right now, I'm kind of wondering if I should leave right now.

Other people shown in the slide deck were Oprah Winfrey, Angela Merkel, Rosa Parks, Serena Williams, Marin Alsop, Sonia Sotomayor, a woman in military uniform, and a large group of women protesting in Tahrir Square in Egypt.

It also included Gandhi, and Mandela.

I observed that I felt sad I could think of no woman that I would list in the same breath as those two men.

So... for those of you who judged us, and this workshop, from what you saw on twitter, before having all the facts?
Let me tell you what I think this was about.

This wasn't about Hitler.

This was about leadership, and learning how we can be better leaders. I felt we were also exploring how we might better support the leaders we have, and nurture the ones to come. And I now also wonder how we might respectfully acknowledge the work and effort of those who've come and gone, and learn to better pass on what's important to those doing the work now.

We need teamwork. We need leadership. It takes collective effort, and most of all, it takes collective empathy and compassion.

Dries Buytaert was the final image in the deck.

Dries shared these 5 values and their underlying principles with us to further explore, discuss and develop together.

Prioritize impact
Impact gives us purpose. We build software that is easy, accessible and safe for everyone to use.

Better together
We foster a learning environment, prefer collaborative decision-making, encourage others to get involved and to help lead our community.

Strive for excellence
We constantly re-evaluate and assume that change is constant.

Treat each other with dignity and respect
We do not tolerate intolerance toward others. We seek first to understand, then to be understood. We give each other constructive criticism, and are relentlessly optimistic.

Enjoy what you do
Be sure to have fun.

I'm sorry to say this, but I'm really not having fun right now. But I am much clearer about why I'm feeling angry.

Photo Credit "Protesters against Egyptian President Mohamed Morsi celebrate in Tahrir Square in Cairo on July 3, 2013. Egypt's armed forces overthrew elected Islamist President Morsi on Wednesday and announced a political transition with the support of a wide range of political and religious leaders." Mohamed Abd El Ghany Reuters.

Categories: Aligned Planets

James Morris: Linux Security Summit North America 2018 CFP Announced

April 12, 2018 - 11:01

The CFP for the 2018 Linux Security Summit North America (LSS-NA) is announced.

LSS will be held this year as two separate events, one in North America
(LSS-NA), and one in Europe (LSS-EU), to facilitate broader participation in
Linux Security development. Note that this CFP is for LSS-NA; a separate CFP
will be announced for LSS-EU in May. We encourage everyone to attend both
events.

LSS-NA 2018 will be held in Vancouver, Canada, co-located with the Open Source Summit.

The CFP closes on June 3rd and the event runs from 27th-28th August.

To make a CFP submission, click here.

Categories: Aligned Planets

BlueHackers: Post-work: the radical idea of a world without jobs | The Guardian

April 10, 2018 - 16:26
The long read: Work has ruled our lives for centuries, and it does so today more than ever. But a new generation of thinkers insists there is an alternative
Categories: Aligned Planets

Lev Lafayette: Net Promoter Score: The Most Useless Metric of All

April 3, 2018 - 17:05

A number of organisations use a customer service metric known as "Net Promoter", first suggested in the Harvard Business Review. Indeed, it is so common that apparently two-thirds of Fortune 500 companies are using the metric. It simply asks a single question: "How likely is it that you would recommend [company X] to a friend or colleague?". The typical scoring for the answer is a one to ten scale, with a value of 9 or 10 considered a "promoter" score, a 7 or 8 a "neutral" score, and a 0 to 6 a "detractor" score. The Net Promoter Score is calculated by subtracting the percentage of responders who are Detractors from the percentage of responders who are Promoters. It is a simple and blunt instrument and it's entirely the wrong tool to use.
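
For concreteness, the arithmetic just described reduces to a few lines. This sketch uses invented response data, and shows how much information the bucketing throws away:

# A 0-10 response is bucketed as promoter (9-10), neutral (7-8) or
# detractor (0-6); NPS is the promoter percentage minus the detractor
# percentage.

def net_promoter_score(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)


# A 6 counts exactly as much against you as a 0 does:
print(net_promoter_score([6, 6, 6, 6]))   # -100.0
print(net_promoter_score([0, 0, 0, 0]))   # -100.0
print(net_promoter_score([9, 7, 6, 10]))  # 25.0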

To begin with, it fails the most elementary mathematics. There is nothing to be gained from providing a score that provides an 11 point range from 0-10, yet only calculates a score from values of promoter, neutral, and detractor. In the Net Promoter system, a score of 6 is just as much a detractor as a responder who provides a score of 0, despite what should be a glaringly obvious difference in reaction. It is stunning that a journal with the alleged quality of the Harvard Business Review didn't notice this - let alone the authors of the article.

Secondly, it conflates subjective responses with a quantitative value. What does a score of "6" mean anyway? According to the designers of the NPS, it's a detractor, a fail. Yet there is no guarantee that a responder interprets the value that way. In most assessment systems a "6" is a pass - and more to the point a "7" or "8" is considered a distinction grade; the latter would result in a cum laude or even magna cum laude in most universities. But in the NPS, it is merely a "neutral" result. The problem being of course, unless the individual is provided qualitative guidance with the values (which most organisations or applications don't do), there is no way of determining what their subjective score of 0-10 really reflects. Numerical values cannot be translated to qualitative values unless all parties are provided a means for correlation.

Thirdly, a single-value NPS provides no information to act upon. What does it mean that a respondent would or would not recommend a company, product, or service? Even assuming that a graduation is in place that matches values with the scale, and qualitative assessments to numerical values, the answers still provide nothing to act upon. Is it the company or service as a whole that has resulted in the evaluation? Is it a part of the company or service? Could it be, for a detractor, that the product or service was something that they thought they needed, but actually didn't? Unless the score is supplemented with an opportunity for a responder to explain their evaluation, there is no way that it creates an opportunity for action.

Given these errors, it is perhaps unsurprising that an unmodified "Net Promoter" method of measuring customer satisfaction ranked last in an extensive study by Schneider et al. in terms of predictive capability. Granted, some information is better than no information, and people do prefer shorter surveys to longer surveys. But as designed in its pure form, using a Net Promoter score is almost as bad as not collecting respondent data at all. A short survey which breaks up the item being reviewed into equal composite components, which guides subjective values to numerical values, which provides an opportunity for free-text qualitative information, and which measures metrics along the scale (with mean and distribution) will always be far more effective measurement of both a respondent's satisfaction, and an organisation's opportunity for action. As it is writ, the NPS should be avoided in all circumstances.

Categories: Aligned Planets

Francois Marier: Looking back on starting Libravatar

April 3, 2018 - 10:55

As noted on the official Libravatar blog, I will be shutting the service down on 2018-09-01.

It has been an incredible journey but Libravatar has been more-or-less in maintenance mode for 5 years, so it's somewhat outdated in its technological stack and I no longer have much interest in doing the work that's required every two years when migrating to a new version of Debian/Django. The free software community prides itself on transparency and so while it is a difficult decision to make, it's time to be upfront with the users who depend on the project and admit that the project is not sustainable in its current form.

Many things worked well

The most motivating aspect of running Libravatar has been the steady organic growth within the FOSS community, both in terms of traffic (in March 2018, we served a total of 5 GB of images and 12 GB of 302 redirects to Gravatar) and integration with other sites and projects (Fedora, Debian, Mozilla, Linux kernel, Gitlab, Liberapay and many others), but also in terms of users.

In addition, I wanted to validate that it is possible to run a FOSS service without having to pay for anything out-of-pocket, so that it would be financially sustainable. Hosting and domain registrations have been entirely funded by the community, thanks to the generosity of sponsors and donors. Most of the donations came through Gittip/Gratipay and Liberapay. While Gratipay has now shut down, I encourage you to support Liberapay.

Finally, I made an effort to host Libravatar on FOSS infrastructure. That meant shying away from popular proprietary services in order to make a point that these convenient and well-known services aren't actually needed to run a successful project.

A few things didn't pan out

On the other hand, there were also a few disappointments.

A lot of the libraries and plugins never implemented DNS federation. That was the key part of the protocol that made Libravatar a decentralized service, but unfortunately the rest of the protocol was much easier to implement, and therefore many clients stopped there.

In addition, it turns out that while the DNS system is essentially a federated caching system for IP addresses, many DNS resolvers aren't doing a good job caching records and that created unnecessary latency for clients that chose to support DNS federation.

The main disappointment was that very few people stepped up to run mirrors. I designed the service so that it could scale easily in the same way that Linux distributions have coped with increasing user bases: "ftp" mirrors. By making the actual serving of images only require Apache and mod_rewrite, I had hoped that anybody running Apache would be able to add an extra vhost to their setup and start serving our static files. A few people did sign up for this over the years, but it mostly didn't work. Right now, there are no third-party mirrors online.

The other aspect that was a little disappointing was the lack of code contributions. There were a handful from friends in the first couple of months, but it's otherwise been a one-man project. I suppose that when a service works well for what people use it for, there are fewer opportunities for contributions (or less desire for them). The fact that the dev environment setup was not the easiest could definitely be a contributing factor, but I've only ever had a single person ask about it, so it's not clear that this was the limiting factor. Also, while our source code repository was hosted on Github and open for pull requests, we never even received a single drive-by contribution, hinting at the fact that Github is not the magic bullet for community contributions that many people think it is.

Finally, it turns out that it is harder to delegate sysadmin work (you need root, for one thing) which consumes the majority of the time in a mature project. The general administration and maintenance of Libravatar has never moved on beyond its core team of one. I don't have a lot of ideas here, but I do want to join others who have flagged this as an area for "future work" in terms of project sustainability.

Personal goals

While I was originally inspired by Evan Prodromou's vision of a suite of FOSS services to replace the proprietary stack that everybody relies on, starting a free software project is an inherently personal endeavour: the shape of the project will be influenced by the personal goals of the founder.

When I started the project in 2011, I had a few goals:

This project personally taught me a lot of different technologies and allowed me to try out various web development techniques I wanted to explore at the time. That was intentional: I chose my technologies so that even if the project was a complete failure, I would still have gotten something out of it.

A few things I've learned

I learned many things along the way, but here are a few that might be useful to other people starting a new free software project:

  • Speak about your new project at every user group you can. It's important to validate that you can get other people excited about your project. User groups are a great (and cheap) way to kickstart your word of mouth marketing.

  • When speaking about your project, ask simple things of the attendees (e.g. create an account today, join the IRC channel). Often people want to support you but they can't commit to big tasks. Make sure to take advantage of all of the support you can get, especially early on.

  • Having your friends join (or lurk on!) an IRC channel means it's vibrant, instead of empty, and there are people around to field simple questions or tell people to wait until you're around. Nobody wants to be alone in a channel with a stranger.

Thank you

I do want to sincerely thank all of the people who contributed to the project over the years:

  • Jonathan Harker and Brett Wilkins for productive hack sessions in the Catalyst office.
  • Lars Wirzenius, Andy Chilton and Jesse Noller for graciously hosting the service.
  • Christian Weiske, Melissa Draper, Thomas Goirand and Kai Hendry for running mirrors on their servers.
  • Chris Forbes, fr33domlover, Kang-min Liu and strk for writing and maintaining client libraries.
  • The Wellington Perl Mongers for their invaluable feedback on an early prototype.
  • The #equifoss group for their ongoing support and numerous ideas.
  • Nigel Babu and Melissa Draper for producing the first (and only) project stickers, as well as Chris Cormack for spreading them so effectively.
  • Adolfo Jayme, Alfredo Hernández, Anthony Harrington, Asier Iturralde Sarasola, Besnik, Beto1917, Daniel Neis, Eduardo Battaglia, Fernando P Silveira, Gabriele Castagneti, Heimen Stoffels, Iñaki Arenaza, Jakob Kramer, Jorge Luis Gomez, Kristina Hoeppner, Laura Arjona Reina, Léo POUGHON, Marc Coll Carrillo, Mehmet Keçeci, Milan Horák, Mitsuhiro Yoshida, Oleg Koptev, Rodrigo Díaz, Simone G, Stanislas Michalak, Volkan Gezer, VPablo, Xuacu Saturio, Yuri Chornoivan, yurchor and zapman for making Libravatar speak so many languages.

I'm sure I have forgotten people who have helped over the years. If your name belongs in here and it's not, please email me or leave a comment.

Categories: Aligned Planets

Simon Lyall: Audiobooks – March 2018

April 2, 2018 - 11:03

The Actor’s Life: A survival guide by Jenna Fischer

Combination of advice for making it as an actor and a memoir of her experiences. Interesting and enjoyable 8/10

One Man’s Wilderness: An Alaskan Odyssey by Sam Keith

Based on the journals of Richard Proenneke, who built a cabin in the Alaskan wilderness and lived there for 16 months (& returned in later years). Interesting & I’m a little inspired 7/10

The Interstellar Age: The Story of the NASA Men and Women Who Flew the Forty-Year Voyager Mission by Jim Bell

Pretty much what the title says. Very positive throughout and switching between the science and profiles of the people smoothly. 8/10

Richard Nixon: The Life by John A Farrell

Comprehensive but balanced biography. Doesn’t shy away from Nixon’s many many problems but also covers his accomplishments and positive side (especially early in his career). 8/10

The Adventures of Sherlock Holmes, Book I – Arthur Conan Doyle – Read by David Timson

4 Stories unabridged. Reading is good but drop a point since the music is distracting at fast playback. 7/10

Death by Black Hole: And Other Cosmic Quandaries by Neil deGrasse Tyson

42 Essays on mainly space-related topics. Some overlap but pretty good, 10 years old so missing a few newer developments but good introduction. 8/10

The Sports Gene: Inside the Science of Extraordinary Athletic Performance by David Epstein

Good wide-ranging book on nature vs nurture in sports performance, how genes for athletic performance are not that simple & how little we know. 9/10

The Residence: Inside the Private World of the White House by Kate Andersen Brower

Gossipy account from interviewing various ex-staff ( maids, cooks, butlers). A different angle than from what I get from other accounts. 7/10

Tanker Pilot: Lessons from the Cockpit by Mark Hasara

Account of the author flying & planning aerial refueling operations during the Gulf wars & elsewhere. A bit of business advice but that is unobtrusive. No actual politics 7/10

The Big Short: Inside the Doomsday Machine by Michael Lewis

Account of various people who made billions shorting the mortgage market in the run up to 2008. Fun and easy for layman to follow. 8/10

Driverless: Intelligent Cars and the Road Ahead by Hod Lipson

Listening to it the week a driverless car first killed a pedestrian. Fairly good intro/history/overview although fast changing topic so will go out of date quickly. 7/10

Journeys in English by Bill Bryson

A series of radio shows. I found the music & random locations annoying. Had to slow it down due to varied voices, accents and words. Interesting despite that, 7/10

Categories: Aligned Planets

Ben Martin: The Gantry is attached!

April 2, 2018 - 10:42
Now the fourth axis finally looks at home on the CNC plate. The new gantry sides are almost 100mm taller than the old ones and share a similar shape. While the gantry was off the machine it was a good time to attach the new Z-Axis, which gains a similar amount of Z travel. Final adjustments of where the spindle sits in its holder are still needed, but it makes sense for the cutting edge to be fairly high up when the Z-Axis is fully retracted, as shown.




After a day of great success early on, a day of great problem solving arrived before the attachment was possible. The day of great success involved testing the two new sides to see if, or how well, they attached to the mount points at the base of the machine. The holes in the gantry were hand marked, drilled, and tapped, so there was a good chance that they were off target enough to not work well. But those all went fine.

The second success was mounting the Z-Axis to the existing points on the gantry. I had in the back of my mind the thought that one side (the three holes on the bottom of the mount) would line up and attach fine, but that the top holes would be out of alignment. Both of these plates, seen horizontal in the image above, were made by CNC so the holes should be where I intended. Though these plates were both mounted to the Z-Axis, and the bottom plate goes right through to the lower steel bracket, so the alignment might not have been 100%. I registered both plates to the smooth side of the spindle backing plate so the alignment in that axis should have been ok. To great surprise and joy the top holes also aligned perfectly and the second phase fell into place.

It was only when putting the new sides onto the gantry that interesting things started to happen. I will have a new blog post on that part soon and likely a video of the problems and solutions for that part. One thing I will say now is that it helps to have washers, bolts, and spare skate bearings on hand for this process depending on how you have designed your far side gantry upright.



Categories: Aligned Planets