That we did over 1000m of climbing in the first 10km of running, including 4km of flat at the start and a descent to the bottom of a 500m valley in the middle, says something. This event lived up to the SkyRace tag really well. The Victorian alpine region is also amazingly pretty, and Bright is a great town to hang around in.
Photos and a few words from my day out are here on my Buffalo Stampede 2015 page. Thanks to Paul for his entry and to Dave, Julie and Alex for the company. It was fun to catch up with Hanny and Graham down there too.
One of the cool features of POWER8 processors is the ability to run in either big- or little-endian mode. Several distros are already available in little-endian, but up until recently Petitboot has remained big-endian. While it has no effect on the OS being booted, building Petitboot little-endian has its advantages, such as making support for vendor tools easier. So it should just be a matter of compiling Petitboot LE, right? Well…

Switching Endianness
Endianness, and several other things besides, are controlled by the Machine State Register (MSR). Each processor in a machine has an MSR, and each bit of the MSR controls some aspect of the processor, such as 64-bit mode or whether interrupts are enabled. To switch endianness we set the LE bit (bit 63) to 1.
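As a rough illustration (my own sketch, not code from Petitboot or the kernel): in privileged code you can read the MSR with the mfmsr instruction and test bit 63, which in IBM bit numbering is the least-significant bit of the 64-bit register:

```c
/* Sketch only: mfmsr is a privileged instruction, so this must run in
 * kernel or firmware context, compiled as 64-bit powerpc. */
#include <stdint.h>

#define MSR_LE 0x1ULL	/* bit 63 in IBM numbering == least-significant bit */

static inline uint64_t mfmsr(void)
{
	uint64_t msr;

	asm volatile("mfmsr %0" : "=r"(msr));
	return msr;
}

static int running_little_endian(void)
{
	return (mfmsr() & MSR_LE) != 0;
}
```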
When a processor first starts up it defaults to big-endian (bit 63 = 0). However the processor doesn't actually know the endianness of the kernel code it is about to execute: either the kernel is big-endian and everything is fine, or it isn't and the processor will very quickly try to execute an illegal instruction.
The solution to this is an amazing little snippet of code in arch/powerpc/boot/ppc_asm.h (follow the link to see some helpful commenting):

```
#define FIXUP_ENDIAN          \
	tdi   0, 0, 0x48;     \
	b     $+36;           \
	.long 0x05009f42;     \
	.long 0xa602487d;     \
	.long 0x1c004a39;     \
	.long 0xa600607d;     \
	.long 0x01006b69;     \
	.long 0xa6035a7d;     \
	.long 0xa6037b7d;     \
	.long 0x2400004c
```
By some amazing coincidence, if you take the opcode for tdi 0, 0, 0x48 and reverse the order of its bytes, it forms the opcode for b . + 8. So if the kernel is big-endian, the processor will jump to the next instruction after this snippet. However if the kernel is little-endian we execute the next 8 instructions. These are written byte-reversed, so that a processor in the wrong endianness interprets them as the instructions shown in the linked comments above, resulting in MSR[LE] being set to 1.
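To see the byte-flip concretely (a little check of my own, with the encodings worked out from the Power ISA): the big-endian encoding of tdi 0, 0, 0x48 is 0x08000048, and reversing its four bytes gives 0x48000008, which is exactly b . + 8:

```c
/* Demonstration that the byte-reversed tdi opcode is a branch. */
#include <assert.h>
#include <stdint.h>

static uint32_t bswap32(uint32_t x)
{
	return (x >> 24) | ((x >> 8) & 0x0000ff00) |
	       ((x << 8) & 0x00ff0000) | (x << 24);
}

int main(void)
{
	uint32_t tdi = 0x08000048;	/* tdi 0, 0, 0x48 (big-endian view) */

	assert(bswap32(tdi) == 0x48000008);	/* b . + 8 */
	return 0;
}
```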
When booting a little-endian kernel all of the above works fine - but there is a problem for Petitboot that will become apparent a little further down…

Petitboot's Secret Sauce
The main feature of Petitboot is that it is a full (but small!) Linux kernel and userspace which scans all available devices and presents possible boot options. To boot an available operating system, Petitboot needs to start executing the OS's kernel, which it accomplishes via kexec. Simply speaking, kexec loads the target kernel into memory, shuts the current system down most of the way, and at the last moment sets the instruction pointer to the start of the target kernel. From there it's like booting any other kernel, including the FIXUP_ENDIAN section above.

We've Booted! Wait…
So our LE Petitboot kernel boots fine thanks to FIXUP_ENDIAN, we kexec into some other kernel… and everything falls to pieces.
The problem is that we've unwittingly changed one of the assumptions of booting a kernel: namely that MSR[LE] defaults to zero. When kexec-ing from an LE kernel we start executing the next kernel in LE mode. This in itself is OK; the FIXUP_ENDIAN macro will handle the switch if needed. The problem is that the FIXUP_ENDIAN macro is relatively recent, having first entered the kernel in early 2014. So if we're booting, say, an old Fedora 19 install with a v3.9 kernel - things go very bad, very quickly.
The solution seems pretty straightforward: find where we jump into the next kernel, and just before that make sure we reset the LE bit in the MSR. That’s exactly what this patch to kexec-lite does.
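The rough shape of the idea, as an illustrative sketch of my own (not the actual kexec-lite patch): enter the next kernel via rfid, which reloads the MSR and the next instruction address in one atomic step, having first cleared the LE bit in the MSR value to be loaded:

```c
/* Illustrative sketch only. SRR0/SRR1 hold the address and MSR value
 * that rfid will switch to; clearing MSR_LE in SRR1 means the target
 * kernel starts big-endian, as it expects. */
#include <stdint.h>

#define MSR_LE 0x1ULL

static void __attribute__((noreturn))
enter_kernel_big_endian(uint64_t entry, uint64_t msr)
{
	msr &= ~MSR_LE;		/* make sure the LE bit is 0 */

	asm volatile(
		"mtsrr0 %0\n\t"	/* address to resume at */
		"mtsrr1 %1\n\t"	/* MSR value to resume with */
		"rfid"		/* jump, switching endianness atomically */
		: : "r"(entry), "r"(msr));
	__builtin_unreachable();
}
```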
That worked, up until I tested on a machine with more than one CPU. Remembering that the MSR is processor-specific, we also have to reset the endianness of each secondary CPU.
Now things are looking good! All the CPUs are reset to big-endian, the target kernel boots fine, and then… ‘recursive interrupts?!’
Skipping the debugging process that led to this (hint: mambo is actually a pretty cool tool), this was the sequence of steps leading up to the problem:

- We kexec into the big-endian target kernel, which begins booting happily.
- The kernel takes its first interrupt, and the interrupt handler is entered in little-endian mode.
- The handler is big-endian code, so the processor reads garbage and immediately takes another interrupt, whose handler is also entered in little-endian mode…
And then we very busily execute nothing until the machine is killed. I spent some time staring incredulously at my screen, then appealed to a higher authority, who replied with "What is the HILE set to?"
Cracking open the PowerISA reveals this tidbit:
The Hypervisor Interrupt Little-Endian (HILE) bit is a bit in an implementation-dependent register or similar mechanism. The contents of the HILE bit are copied into MSR[LE] by interrupts that set MSR[HV] to 1 (see Section 6.5), to establish the Endian mode for the interrupt handler. The HILE bit is set, by an implementation-dependent method, during system initialization, and cannot be modified after system initialization.
To be fair, there are use cases for taking exceptions in a different endianness. The problem is that while HILE gets switched on when setting MSR[LE] to 1, it doesn't get turned off when MSR[LE] is set back to zero. In particular, the line "…cannot be modified after system initialization" led to a fair amount of hand-wringing from myself and whoever would listen; if we can't reset the HILE bit, we simply can't use little-endian kernels for Petitboot.
Luckily, while on some other systems the machinations of the firmware might be a complete black box, Petitboot runs on OPAL systems - which means the firmware source is right here. In particular we can see here the OPAL call opal_reinit_cpus, which among other things resets the HILE bit.
This is actually what turns on the HILE bit in the first place, and it is meant to be called early in boot since it also clobbers a large amount of state. Luckily for us we don't need to hold onto any state, since we're about to jump into a new kernel. We just need to choose an appropriate place where we can be sure we won't take an exception before we get into the next kernel: thus the final patch to support PowerNV machines.
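The shape of that call, as a hedged sketch (the flag name is from skiboot's opal-api.h; the wrapper signature here is simplified): just before kexec-ing on a PowerNV machine, ask OPAL to put HILE back to big-endian for all CPUs.

```c
/* Sketch: ask OPAL to reinitialise all CPUs with HILE pointing at
 * big-endian exception handlers. Simplified; the real call goes through
 * the firmware's OPAL calling convention. */
#include <stdint.h>

#define OPAL_REINIT_CPUS_HILE_BE	(1 << 0)	/* from skiboot's opal-api.h */

int64_t opal_reinit_cpus(uint64_t flags);	/* provided by the OPAL interface */

static void reset_exception_endianness(void)
{
	opal_reinit_cpus(OPAL_REINIT_CPUS_HILE_BE);
}
```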
Geoquest is always a good event, and this year I really enjoyed just joining in for the fun and letting the others worry about nav and a bunch of other stuff. I have to admit the lack of paddling in the last 2 years made that bit hard; however the event was a lot of fun as always, and surprisingly I felt pretty good all the way through. Maybe my running fitness helped me get through comfortably.
Photos and some words from the race are online on my Geoquest 2015 album. Thanks to Dane, Lee and Cam for the company, thanks to the awesome support crew and it was good to be back.
As I was planning to do another 100 not long after, I was not overly keen on a solo entry. However at Gangles's birthday, KV and I managed to convince him to compete in the event with us in a team of 3. This would be his first long run (over 20km), doing the last leg, and KV was stepping up for the first leg (I had the middle two to get done). I got some celebratory t-shirts made up: as Gangles' (Adam's) middle name is William, KV and I decided to call the team Wild Bill Bo Jangles and crew. (I promise it made sense to us.)
So we got to join in the fun and run with many of our friends and other people on the day. I took some photos and they are online in my Sri Chinmoy Trail Ultra 2015 album.
On our last day in Tasmania (after the OSDC conference, about which I'll do other posts shortly), Claire and I visited the wonderful Lauderdale Primary School in Hobart, where I did a version of our free Robotics Incursion with two year 5/6 classes, having a chat about robots, robotics, and more – and having our autonomous caterpillar and hexapod robots stroll around the sports hall…
The students were really engaged, they had thoughtful questions and great ideas – and the feedback from the kids as well as the teachers was that the session was fun as well as educational. Good!
We often do this incursion as a neat way for schools, teachers and students to get to know us before undertaking a bigger program such as the Robotics & Programming one. But, when we’re travelling somewhere with the robots anyway, it’s great to visit a local school. All our facilitators hold a current “working with children” card, so getting something like this organised is really quite straightforward.
Today is George Boole's 200th birthday. He lived from 2 November 1815 to 8 December 1864, so he was only 49 when he died!
In 2015, University College Cork (Ireland) celebrates the bicentenary of George Boole’s birth. Born in Lincoln, Boole was a mathematical genius who was largely self-taught. His appointment as the first Professor of Mathematics at the college in 1849 provided the opportunity to develop his most important work, An Investigation of the Laws of Thought.
Boole is a pivotal figure who can be described as the ‘father of the information age’. His invention of Boolean algebra and symbolic logic pioneered a new mathematics. His legacy surrounds us everywhere, in the computers, information storage and retrieval, electronic circuits and controls that support life, learning and communications in the 21st century.
Check out the georgeboole.com site for video and lots more information about George Boole and his wonderful achievements!
The main advantage of writing these reports is that I can easily find the links to refer to without needing to see a directory listing on the website. In this case I headed down to Adelaide to hang out with friends there and also run in the Heysen 105. Feeling the need to do another 100km ultra this year, the short holiday in Adelaide helped attract me to this one. The report and photos from my Heysen 105 2015 run are online.
It's a nice part of the world, and I had fun both in the event and hanging out with friends in Adelaide. The Coopers brewery tour is also rather excellent.
Docker’s default storage driver on most Ubuntu installs is AUFS.
Don’t use it. Use Overlay instead. Here’s why.
First, some background. I’m testing the performance of the basic LAMP stack on POWER. (LAMP is Linux + Apache + MySQL/MariaDB + PHP, by the way.) To do more reliable and repeatable tests, I do my builds and tests in Docker containers. (See my previous post for more info.)
Each test downloads the source of Apache, MariaDB and PHP, and builds them. This should be quick: the POWER8 system I’m building on has 160 hardware threads and 128 GB of memory. But I was finding that it was only just keeping pace with a 2 core Intel VM on BlueMix.
Why? Well, my first port of call was to observe a compilation under top. The header is below.
Over 70% of CPU time is spent in the kernel?! That’s weird. Let’s dig deeper.
My next port of call for analysis of CPU-bound workloads is perf. perf top reports astounding quantities of time in spin-locks:
perf top -g gives us some more information: the time is in system calls. open() and stat() are the key culprits, and we can see a number of file system functions are in play in the call-chains of the spinlocks.
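If you want to reproduce this kind of comparison yourself, a trivial microbenchmark (my own sketch, not part of the original analysis) can show the per-call cost of stat() on a file inside an AUFS-backed container versus an Overlay-backed one:

```c
/* Time repeated stat() calls on a path; run inside containers using
 * different storage drivers to compare. Build: cc -O2 statbench.c */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(int argc, char **argv)
{
	struct stat st;
	struct timespec t0, t1;
	const long iters = 1000000;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <path>\n", argv[0]);
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (long i = 0; i < iters; i++)
		stat(argv[1], &st);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	printf("%.0f ns per stat()\n", ns / iters);
	return 0;
}
```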
Why are open() and stat() slow? Well, I know that the files are on an AUFS mount. (docker info will tell you what you're using if you're not sure.) So, being something of a kernel hacker, I set out to find out why. This did not go well. AUFS isn't upstream; it's a separate patch set. Distros have been trying to deprecate it for years. Indeed, RHEL doesn't ship it. (To its credit, Docker seems to be trying to move away from it.)
Wanting to avoid the minor nightmare that is an out-of-tree patchset, I looked at other storage drivers for Docker. This presentation is particularly good. My choices are pretty simple: AUFS, btrfs, device-mapper or Overlay. Overlay was an obvious choice: it doesn’t need me to set up device mapper on a cloud VM, or reformat things as btrfs.
It’s also easy to set up on Ubuntu:
1. Export/save any Docker containers you care about.
2. Add the --storage-driver=overlay option to DOCKER_OPTS in /etc/default/docker, and restart Docker (service docker restart).
3. Import/load the containers you exported.
4. Verify that things work, then clear away your old storage directory (/var/lib/docker/aufs).
Having moved my base container across, I set off another build.
The first thing I noticed is that images are much slower to create with Overlay. But once that finishes, and a compile starts, things run much better:
The compiles went from taking painfully long to astonishingly fast. Winning.
So in conclusion:
If you use Docker for something that involves open()ing or stat()ing files
If you want your machine to do real work, rather than spin in spinlocks
If you want to use code that’s upstream and thus much better supported
If you want something less disruptive than the btrfs or dm storage drivers
…then drop AUFS and switch to Overlay today.
There is a terrible ailment sweeping the land. Sufferers find themselves compelled to watch, listen, read or generally consume media output that causes high blood pressure, anger and a desperate, overwhelming urge to tweet about how much they truly loathe the media thing they are consuming.
There seem to be spikes of Rage Watching specifically around Monday nights at 9:30pm, with smaller occurrences on Sunday mornings (replays on Sunday afternoons). More recently there has been an uptick of Rage Watching on Wednesday nights by people who feel it absolutely necessary to tell the world exactly how bad the ABC show "Kitchen Cabinet" is, for either a) having an evil person on as a guest or b) not spending 22 minutes using kitchen implements to torture said evil person into confessing they are indeed an evil person and will do better from now on.
Why? Why do you watch these programmes if you know they're going to be terrible? You already know that you're not going to like either the show, or the person being interviewed, or in the case of the Bolt Report everything about it.
Instead be calm, turn off the TV, or switch on Netflix and binge watch your way through a series. Save your rage for when it is actually useful.
I wrote a simple program, ramp-io, based on the redshift code, to read and write the xrandr gamma ramps for Linux/X11. This lets me define my own gamma ramps and switch ramps quickly from the command line. My preferred ramp is red-inv: dim inverse video with a low colour temperature (more red, less blue), and I set the LCD hardware brightness to maximum to reduce LED PWM flicker. I find this relatively easy on the eyes for work, compared to the usual glaring white backgrounds.
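As a rough sketch of the same idea (illustrative code of mine, not ramp-io itself), the xrandr gamma ramps can be written with libXrandr like so, here warming the display by scaling green and blue down on every CRTC:

```c
/* Minimal sketch of writing an xrandr gamma ramp (not the actual ramp-io
 * code): lower the colour temperature by dimming green and blue.
 * Build: cc warm.c -lX11 -lXrandr */
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
	Display *dpy = XOpenDisplay(NULL);
	if (!dpy)
		return 1;

	XRRScreenResources *res =
		XRRGetScreenResources(dpy, DefaultRootWindow(dpy));

	for (int c = 0; c < res->ncrtc; c++) {
		int size = XRRGetCrtcGammaSize(dpy, res->crtcs[c]);
		if (size < 2)
			continue;

		XRRCrtcGamma *g = XRRAllocGamma(size);
		for (int i = 0; i < size; i++) {
			unsigned short v =
				(unsigned short)(65535L * i / (size - 1));
			g->red[i]   = v;		/* full red */
			g->green[i] = v * 8 / 10;	/* dimmer green */
			g->blue[i]  = v * 6 / 10;	/* dimmest blue */
		}
		XRRSetCrtcGamma(dpy, res->crtcs[c], g);
		XRRFreeGamma(g);
	}

	XRRFreeScreenResources(res);
	XCloseDisplay(dpy);
	return 0;
}
```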
6th Floor, 200 Victoria St, Carlton VIC 3053
Link: http://luv.asn.au/meetings/map
Please note that due to the Melbourne Cup, this month's meeting is on Wednesday.
• Russell Coker, Computer Science and SELinux
• Lev Lafayette, Parallel Programming
200 Victoria St. Carlton VIC 3053 (formerly the EPA building)
Late arrivals, please call (0490) 049 589 for access to the venue.
Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.
Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.
November 4, 2015 - 18:30
Recently at Percona Live Amsterdam I gave a talk titled Databases in the Hosted Cloud (I'm told I got a 4/5 rating for it). It was before AWS re:Invent, so obviously some of the details in the talk have changed. For one, there is now also Amazon RDS for MariaDB. But there have also been other changes, e.g. HP's Public Cloud (HP Helion Public Cloud) will sunset on January 31, 2016.
That’s a slide from my deck. I basically have to caution users as to what’s going on in the cloud world when it comes to their databases. And this one slide shows news reports about HP possibly wanting to exit the cloud world back in April 2015. See: HP Comes to Terms With the Cloud, HP: We’re not leaving the public cloud, and of course the HP blog post from Bill Hilf: HP Helion Strategy to Deliver Hybrid IT Continues Strong.
The tune of course changed in October 2015: A new model to deliver public cloud. I find this quite sad, considering they were all very gung-ho about pushing OpenStack forward several OSCONs ago. I know many of the people who made this happen (many ex-MySQL'ers went on to HP to work on OpenStack), and I can only feel for them. I guess their important work continues in OpenStack as a whole, and all this ends up being part of the HP Helion private cloud.
I think it's also worth noting the improvements that Percona Server 5.5 received thanks to HPCloud to make it easier to manage in the cloud:
This pretty much leaves only Rackspace Cloud Databases as being a large OpenStack based offering of databases in the public cloud space, doesn’t it?
HPCloud offered 3 Availability Zones (AZs) per region, and had 2 regions — US-East (Virginia) and US-West. It’s worth remembering that US-West was the only place you could use the Relational DB MySQL service. You also got Percona Server 5.5. You enjoyed 50% off pricing while it was in public beta.
All this is basically over. Here’s wishing the team well, a big thanks to them for helping make MySQL better and in case you’re looking for more articles to read: H-P Winds Down Cloud-Computing Project.
My new startup just released our MVP – this is the story of what got me here.
I love creating new applications that let people do their work better or in a manner that wasn’t possible before.
My first such passion project was as a student intern, when I built a system for a building and loan association's monthly customer magazine. The group I worked with was managing their advertiser contacts through a set of paper cards, and I wrote a dBase-based system (yes, that long ago) to manage their customer relationships. They loved it - until it was replaced by an SAP system that cost 100 times what I cost them, had really poor UX, and gave them only half the functionality. It was a corporate system with ongoing support, which made all the difference to them.
The story repeated itself with a CRM for my uncle's construction company, and with a resume and quotation management system for Accenture right after Uni, both of which I left behind when I decided to go into research.
Even as a PhD student, I never lost sight of challenges that people were facing and wanted to develop technology to overcome problems. The aim of my PhD thesis was to prepare for the oncoming onslaught of audio and video on the Internet (yes, this was 1994!) by developing algorithms to automatically extract and locate information in such files, which would enable users to structure, index and search such content.
Many of the use cases that we explored are now part of products or continue to be challenges: finding music that matches your preferences, identifying music or video pieces e.g. to count ads on the radio or to mark copyright infringement, or the automated creation of video summaries such as trailers.
This continued when I joined the CSIRO in Australia – I was working on segmenting speech into words or talk spurts since that would simplify captioning & subtitling, and on MPEG-7 which was a (slightly over-engineered) standard to structure metadata about audio and video.
In 2001 I had the idea of replicating the Web for videos: i.e. creating hyperlinked and searchable video-only experiences. We called it “Annodex” for annotated and indexed video and it needed full-screen hyperlinked video in browsers – man were we ahead of our time! It was my first step into standards, got several IETF RFCs to my name, and started my involvement with open codecs through Xiph.
Around the time that YouTube was founded in 2006, I founded Vquence – originally a video search company for the Web, but pivoted to a video metadata mining company. Vquence still exists and continues to sell its data to channel partners, but it lacks the user impact that has always driven my work.
As the video element started being developed for HTML5, I had to get involved. I contributed many use cases to the W3C, became a co-editor of the HTML5 spec and focused on video captioning with WebVTT while contracting to Mozilla and later to Google. We made huge progress and today the technology exists to publish video on the Web with captions, making the Web more inclusive for everybody. I contributed code to YouTube and Google Chrome, but was keen to make a bigger impact again.
The opportunity came when a couple of former CSIRO colleagues who now worked for NICTA approached me to get me interested in addressing new use cases for video conferencing in the context of WebRTC. We worked on a kiosk-style solution to service delivery for large service organisations, particularly targeting government. The emerging WebRTC standard posed many technical challenges that we addressed by building rtc.io, by contributing to the standards, and by registering bugs on the browsers.
Fast-forward through the development of a few further custom solutions for customers in health and education and we are starting to see patterns of need emerge. The core learning that we’ve come away with is that to get things done, you have to go beyond “talking heads” in a video call. It’s not just about seeing the other person, but much more about having a shared view of the things that need to be worked on and a shared way of interacting with them. Also, we learnt that the things that are being worked on are quite varied and may include multiple input cameras, digital documents, Web pages, applications, device data, controls, forms.
So we set out to build a solution that would enable productive remote collaboration to take place. It would need to provide an excellent user experience, be simple to work with, and provide for the standard use cases out of the box, yet be architected to be extensible for the specialised data sharing needs that we knew some of our customers had. It would need to be usable directly on Coviu.com, but also able to integrate with the specialised applications that some of our customers were already using and spend most of their time in (CRMs, practice management systems, learning management systems, team chat systems). It would need our customers to sign up, yet let their clients join a call without signing up.
Collaboration is a big problem. People are continuing to get more comfortable with technology and are less and less inclined to travel distances just to get a service done. In a country as large as Australia, where 12% of the population lives in rural and remote areas, people may not even be able to travel distances, particularly to receive or provide recurring or specialised services, or to achieve work/life balance. To make the world a global village, we need to be able to work together better remotely.
The need for collaboration is being recognised by specialised Web applications already, such as the LiveShare feature of Invision for Designers, Codassium for pair programming, or the recently announced Dropbox Paper. Few go all the way to video – WebRTC is still regarded as a complicated feature to support.
With Coviu, we’d like to offer a collaboration feature to every Web app. We now have a Web app that provides a modern and beautifully designed collaboration interface. To enable other Web apps to integrate it, we are now developing an API. Integration may entail customisation of the data sharing part of Coviu – something Coviu has been designed for. How to replicate the data and keep it consistent when people collaborate remotely – that is where Coviu makes a difference.
We have started our journey and have just launched free signup to the Coviu base product, which allows individuals to own their own “room” (i.e. a fixed URL) in which to collaborate with others. A huge shout out goes to everyone in the Coviu team – a pretty amazing group of people – who have turned the app from an idea to reality. You are all awesome!
With Coviu you can share and annotate:
All of these are regarded as “shared documents” in Coviu and thus have zooming and annotations features and are listed in a document tray for ease of navigation.
This is just the beginning of how we want to make working together online more productive. Give it a go and let us know what you think.
Today I received about five emails with the subject: 3 Big Announcements from MariaDB. Maybe you did as well (if not, read it online). October has brought some very interesting announcements, and I think my priorities for the big announcements vary a little: