Aligned Planets

Matthew Oliver: Keystone Federated Swift – False Federation

Planet LA - March 27, 2018 - 20:05

This is the second post in my series on Swift in a Keystone federated environment, and the first where I walk through an actual setup: the environment I’m calling ‘False Federation’. For details on this series of posts, including the rationale behind it, see my introductory post.

 

False Federation

This first environment doesn’t actually use Keystone federation. Instead it uses an existing ability of Swift to have more than one authentication middleware in the proxy pipeline, which is why I’m calling it ‘False Federation’.

Swift Resellers and the reseller_prefix

Swift, in an OpenStack environment, talks to Keystone for identity management through Keystone’s authtoken middleware and Swift’s keystoneauth middleware. However, Keystone isn’t required. Swift was designed to be a complete standalone storage solution; in fact many Swift deployments use different (like swauth) and sometimes custom authentication middlewares, so people can easily integrate Swift into their own environments.

If you’ve spent any time setting up authentication middlewares (like keystoneauth) in Swift, you’ve undoubtedly come across Swift’s reseller_prefix option, and maybe wondered why it exists.

 

As I mentioned earlier, Swift was designed from the start to be an end-to-end standalone storage system. One of the features it has always supported is having more than one authentication middleware in the pipeline, and if you have more than one, you need a way to distinguish which authentication middleware handles which account. This is what the reseller_prefix does: Swift matches the reseller_prefix prefixed to the account name against the authentication middleware that is to handle it.

This is actually a really powerful feature. It means you could resell your storage solution to other parties to manage accounts, or connect up different parts of your organisation, if for some reason you have more than one source you want to use as an authentication service.

Some authentication middlewares, like keystoneauth, can even cover more than one reseller_prefix. This is how service tokens tend to be deployed, so a service can have its own namespace of users for isolation, and the data is safe from accidental deletion.

And yes, it’s also possible to set an empty reseller_prefix.

 

Multiple Keystone middlewares

Having got the idea of reseller_prefixes out of the way, here is the first potential solution, and the idea behind ‘False Federation’: if you have a large Swift cluster, you can place in its proxy pipeline the required authentication middlewares for each separate OpenStack environment you want to connect it to.

 

NOTE: There are 2 middlewares needed to connect to a single Keystone instance: Keystone’s authtoken and then Swift’s keystoneauth. Other authentication middlewares, like swauth and many custom ones, are only 1 middleware, so a little less confusing.

 

Before I get into the configuration, I should mention something before you run off and give it a go: the current upstream keystoneauth in Swift doesn’t support being placed multiple times in a pipeline. Why? Because of the way it places itself in the WSGI environment. But never fear, I have written a patch that corrects this behaviour specifically for these sets of experiments, and when I get a chance to clean it up and write some tests I’ll push it upstream. In the meantime you can grab hold of the patch here.

 

I’m not going into huge amounts of detail on how to connect to Keystone; the Swift documentation and installation guides cover that well enough, and really you’re just duplicating exactly that for each Keystone endpoint you want to connect to. If you need detailed instructions, let me know. They say an image is worth more than a thousand words, so here is how it’s done in 1 pretty diagram:

The rundown is:

  • Edit your proxy-server.conf on each node, and create ‘[filter:authtoken]’ and ‘[filter:keystoneauth]’ sections for each Keystone endpoint, noting that the names of the filters have to be different.
  • Each ‘[filter:authtoken]’ will point to an endpoint, and its corresponding ‘[filter:keystoneauth]’ will have a different reseller_prefix, which will need to be matched in the Object Storage endpoint in that Keystone server’s service catalog (see the project documentation).
  • You then place these filters in the proxy pipeline. When placing a pair, the authtoken must come before its keystoneauth partner, and the pair’s keystoneauth must also appear before the next authtoken (like in the picture, and in the sketch below).
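
To make this concrete, here is a minimal sketch of the relevant proxy-server.conf sections. The endpoint URLs and prefixes are made-up example values, I’ve elided the service credentials each authtoken section needs, and remember this only works once keystoneauth supports multiple placements (i.e. with my patch applied):

[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken1 keystoneauth1 authtoken2 keystoneauth2 proxy-server

[filter:authtoken1]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = https://keystone1.example.com:5000
auth_url = https://keystone1.example.com:35357
# let a token this Keystone can't validate fall through to the next pair
# instead of being rejected outright
delay_auth_decision = True

[filter:keystoneauth1]
use = egg:swift#keystoneauth
reseller_prefix = KEY_

[filter:authtoken2]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = https://keystone2.example.com:5000
auth_url = https://keystone2.example.com:35357
delay_auth_decision = True

[filter:keystoneauth2]
use = egg:swift#keystoneauth
reseller_prefix = AUTH_

The object-store endpoint registered in each Keystone’s service catalog then carries the matching prefix, e.g. a URL ending in /v1/KEY_%(project_id)s for the first Keystone.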

 

NOTE: I’ve left off a bunch of middleware options in the picture to keep it small and readable.

 

Now if I send the following GET requests:
GET /v1/KEY_matt/pictures/cat.png
GET /v1/AUTH_matt/pictures/cat.png

 

The first would be authenticated on the blue keystone (or via ‘authtoken1 keystoneauth1’) and the second with the green keystone (or via ‘authtoken2 keystoneauth2’).

 

Cons

This approach was to demonstrate what Swift can already do, but there are some limitations, which as always depend on your situation. Keystone’s authtoken middleware will always go and try to authenticate, so each extra instance adds latency to every request going through the proxy. If the Keystones are close, maybe that’s OK. But if this were a geographical cluster with Keystones all around the world then… ouch. A custom middleware, by contrast, would just skip reseller_prefixes that don’t relate to it (like keystoneauth does).

 

Maybe you could have a different Swift proxy in each “region” that only points to the local Keystone, so you are only ever authenticating locally… OK. But then a user can’t come and access their data if they happen to be in a different region, even though you’re talking to the same cluster.

So really what we want to do is take advantage of Keystone federation, where we only ever have to talk to one Keystone instance: the local one for the region the Swift proxy lives in. That way we get the speed and the ability to access our data from anywhere.

 

Next time…

So in the next post we’ll add real Keystone federation, but assume each federated environment is its own cloud, each with its own Swift cluster. In that case we can take advantage of another Swift feature, container sync.

Then the final post will be what we really want: 1 large Swift cluster with multiple federated Keystone OpenStack clouds. But that will involve fiddling with the federation sync metadata and needs a more detailed explanation of how Swift authentication works. So first I want to cover what Swift can do simply with the tools it comes with!


OpenSTEM: New Mirobot v3 arrival in Australia

Planet LA - March 27, 2018 - 00:05
Here’s our batch of brand new Mirobot v3 kits on their arrival in Australia, dozens stacked. Since the v3 have a neat acrylic frame, I think I’ll do a proper “unboxing” and first build video of one soon, so you can see for yourself what this is about. Many classes of year 5 and 6 […]

Linux Users of Victoria (LUV) Announce: LUV Main April 2018 Meeting: Write docs like a software developer using the Linux toolchain

Planet LA - March 24, 2018 - 22:03
Start: Apr 3 2018 18:30
End: Apr 3 2018 20:30
Location: Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053
Link: http://www.melbourne.vic.gov.au/community/hubs-bookable-spaces/kathleen-syme-lib...

PLEASE NOTE NEW LOCATION

Tuesday, April 3, 2018

6:30 PM to 8:30 PM
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

Linux Users of Victoria is a subcommittee of Linux Australia.


Matthew Oliver: Keystone Federated Swift – A series of posts

Planet LA - March 23, 2018 - 14:05

Matt Treinish and I proposed a presentation at the OpenStack Summit in Vancouver in May; it was accepted, but on standby. That simply means we have a lightning talk slot (10 minutes), but may be bumped up to a full slot based on how other presenters go (visa issues, pull-outs, etc).

Anyway, 10 minutes won’t do the topic justice, so I thought what better than to also post details here as I work through them. Some of what I say may end up in the presentation, or may not. All I know is I’ve been asked a few times how to set up Swift in a Keystone federated environment. Let’s face it, Swift scales to a global cluster no worries, however other OpenStack components may have trouble doing the same. So federating a bunch of different regions and treating them as their own clouds makes heaps of sense. Great, then what’s the best way of integrating Swift into this federated environment?

 

My current idea is to walk through 3 initial topologies. The first I’ll call ‘false federation’, where we simply use Swift’s ability to run multiple authentication middlewares as different resellers, so we can authenticate to multiple Keystone endpoints. For those playing along at home, the keystone middleware currently doesn’t let you do this, but I have a trivial patch that fixes it, and I plan to push it upstream as soon as I have a chance to clean it up and add tests.

 

The second is separate Swift clusters in each cloud, but using Swift’s container sync to move objects, so you still have access to your data on any cloud you visit… eventually.

 

And finally the third is what we’d all want: 1 large Swift cluster that all clouds talk to, so no matter where you are, there your data is. This also gives better durability, dispersion, and everything else we want out of a Swift cluster. The trick here will be making sure the same Swift account name is used no matter which Keystone you talk to, and I assume this will come down to how you configure what you share during the federated token exchange. I’ll leave this as the last post; we still need to play to iron it out… but obviously it’s the dream.

These diagrams are obviously overly simplistic, but I hope you get the idea.

The next post will be the ‘False Federation’ approach, seeing as I already have a Swift keystoneauth middleware patch that solves this.


Tim Serong: You won’t find us on Facebook

Planet LA - March 20, 2018 - 22:05

I made these back in August 2016 (complete with lovingly hand-drawn thumb and middle finger icons), but it seems appropriate to share them again now. The images are CC-BY-SA, so go nuts, or you can grab them in sticker form from Redbubble.


Ben Martin: Waiting for the other Gantry to drop

Planet LA - March 19, 2018 - 17:46
When cutting the first side of the gantry I used the "traditional" hold down method of clamps and the like on the edges. That works ok when you have a much larger piece of alloy than the part you are cutting. In this case there isn't much spare in the height axis (left/right as shown) and, as you can see, very little in the x axis (up/down in the below image). My clamping allowed for more vibration on the first cutting than I'd like, so I changed how I went about the second side of the gantry.

For the second gantry, after flipping things in the software so that I was coming in from the other side, I drilled out 4 M6 holes and countersank them.


This way the bolts (M6x40) were almost flush with the work piece. These bolts go straight through the plywood and connect with t-slot nuts in the alloy bed of the CNC, so there isn't much scope for using bolts that are too long for this application. Countersinking the bolts helps on a machine with limited Z travel, as using non-stubby drill bits really locks down the amount of free play and clearance you can get. The downside of this work holding is that you are left with 4 M6 holes that don't really need to be in the final product.

In this case it doesn't matter as I can use them and a new plate to mount one or two cameras on the back gantry facing forwards. I have found that the best vantage for CNC viewing is when not in the same room and looking at the video streams.

In future jobs I might move the countersunk bolts to the edge so they are not on the final work piece.

So now all I have to do is free this piece from the waste, tap a bunch of m5 holes, drill and tap 5 holes on 3 sides of the new gantry pieces and I'm getting close to loading it on.

Francois Marier: Dynamic DNS on your own domain

Planet LA - March 19, 2018 - 06:51

I recently moved my dynamic DNS hostnames from dyndns.org (now owned by Oracle) to No-IP. In the process, I moved all of my hostnames under a sub-domain that I control in case I ever want to self-host the authoritative DNS server for it.

Creating an account

In order to use my own existing domain, I registered for the Plus Managed DNS service and provided my top-level domain (fmarier.org).

Then I created a support ticket to ask for the sub-domain feature. Without that, No-IP expects you to delegate your entire domain to them, whereas I only wanted to delegate *.dyn.fmarier.org.

Once that got enabled, I was able to create hostnames like machine.dyn in the No-IP control panel. Without the sub-domain feature, you can't have dots in hostnames.

I used a bogus IP address (e.g. 1.2.3.4) for all of the hostnames I created in order to easily confirm that the client software is working.

DNS setup

On my registrar's side, here are the DNS records I had to add to delegate anything under dyn.fmarier.org to No-IP:

dyn NS ns1.no-ip.com.
dyn NS ns2.no-ip.com.
dyn NS ns3.no-ip.com.
dyn NS ns4.no-ip.com.
dyn NS ns5.no-ip.com.

Client setup

In order to update its IP address whenever it changes, I installed ddclient on each of my machines:

apt install ddclient

While the ddclient package won't help you configure your No-IP service during installation or enable the web IP lookup method, this can all be done by editing the configuration after the fact.

I put the following in /etc/ddclient.conf:

ssl=yes
protocol=noip
use=web, web=checkip.dyndns.com, web-skip='IP Address'
server=dynupdate.no-ip.com
login=myusername
password='Password1!'
machinename.dyn.fmarier.org

and the following in /etc/default/ddclient:

run_dhclient="false"
run_ipup="false"
run_daemon="true"
daemon_interval="3600"

Then restart the service:

systemctl restart ddclient.service

Note that you do need to change the default update interval or the checkip.dyndns.com server will ban your IP address.

Testing

To test that the client software is working, wait 6 minutes (there is an internal check which cancels any client invocations within 5 minutes of another), then run it manually:

ddclient --verbose --debug

The IP for that machine should now be visible on the No-IP control panel and in DNS lookups:

dig +short machinename.dyn.fmarier.org

Chris Smart: Auto apply latest package updates on OpenWrt (LEDE Project)

Planet LA - March 18, 2018 - 10:03

Running Linux on your router and wifi devices is fantastic, but it’s important to keep them up-to-date. This is how I auto-update my devices with the latest packages from OpenWrt (but not firmware, I still do that manually when there’s a new release).

This is a very simple shell script which uses OpenWrt’s package manager to fetch a list of updates and then install them, rebooting the machine if that was successful. The log file is served up over HTTP, in case you want to easily see what’s been happening (assuming you’re running the uhttpd service).

Make a directory to hold the script.
root@firewall:~# mkdir -p /usr/local/sbin

Make the script.
root@firewall:~# cat > /usr/local/sbin/update-system.sh << \EOF
#!/bin/ash
opkg update
PACKAGES="$(opkg list-upgradable |awk '{print $1}')"
if [ -n "${PACKAGES}" ]; then
  opkg upgrade ${PACKAGES}
  if [ "$?" -eq 0 ]; then
    echo "$(date -I"seconds") - update success, rebooting" \
>> /www/update.result
    exec reboot
  else
    echo "$(date -I"seconds") - update failed" >> /www/update.result
  fi
else
  echo "$(date -I"seconds") - nothing to update" >> /www/update.result
fi
EOF

Make the script executable and touch the log file.
root@firewall:~# chmod u+x /usr/local/sbin/update-system.sh
root@firewall:~# touch /www/update.result

Give it a run manually, if you want.
root@firewall:~# /usr/local/sbin/update-system.sh

Next schedule the script in cron.
root@firewall:~# crontab -e

My cron entry looks like this, to run at 2am every day.

0 2 * * * /usr/local/sbin/update-system.sh

Now just start and enable cron.
root@firewall:~# /etc/init.d/cron start
root@firewall:~# /etc/init.d/cron enable

Download a copy of the log from another machine.
chris@box:~$ curl http://router/update.result
2018-03-18T10:14:49+1100 - nothing to update

That’s it! Now if you have multiple devices you can do the same, but maybe just set the cron entry for a different time of the night.
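
For example, the second device’s crontab entry might simply shift the hour:

0 3 * * * /usr/local/sbin/update-system.sh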


Donna Benjamin: DrupalCon Nashville

Planet LA - March 17, 2018 - 22:02
Saturday, March 17, 2018 - 22:01

I'm going to Nashville!!

That is all. Carry on. Or... better yet - you should come too!

https://events.drupal.org/nashville2018


Russell Coker: Racism in the Office

Planet LA - March 17, 2018 - 00:02

Today I was at an office party and the conversation turned to race, specifically the incidence of unarmed Afro-American men and boys who are shot by police. Apparently the idea that white people (even in other countries) might treat non-white people badly offends some people, so we had a man try to explain that Afro-Americans commit more crime and therefore are more likely to get shot. This part of the discussion isn’t even noteworthy, it’s the sort of thing that happens all the time.

I and another man pointed out that crime is correlated with poverty and racism causes non-white people to be disproportionately poor. We also pointed out that US police seem capable of arresting proven violent white criminals without shooting them (he cited arrests of Mafia members; I cited mass murderers like the one who shot up the cinema). This part of the discussion isn’t particularly noteworthy either. Usually when someone tries explaining some racist ideas and gets firm disagreement they back down. But not this time.

The next step was the issue of whether black people are inherently violent. He cited all of Africa as evidence. There’s a meme that you shouldn’t accuse someone of being racist, it’s apparently very offensive. I find racism very offensive and speak the truth about it. So all the following discussion was peppered with him complaining about how offended he was and me not caring (stop saying racist things if you don’t want me to call you racist).

Next was an appeal to “statistics” and “facts”. He said that he was only citing statistics and facts, clearly not understanding that saying “Africans are violent” is not a statistic. I told him to get his phone and Google for some statistics as he hadn’t cited any. I thought that might make him just go away, it was clear that we were long past the possibility of agreeing on these issues. I don’t go to parties seeking out such arguments, in fact I’d rather avoid such people altogether if possible.

So he found an article about recent immigrants from Somalia in Melbourne (not about the US or Africa, the previous topics of discussion). We are having ongoing discussions in Australia about violent crime, mainly due to conservatives who want to break international agreements regarding the treatment of refugees. For the record I support stronger jail sentences for violent crime, but this is an idea that is not well accepted by conservatives presumably because the vast majority of violent criminals are white (due to the vast majority of the Australian population being white).

His next claim was that Africans are genetically violent due to DNA changes from violence in the past. He specifically said that if someone was a witness to violence it would change their DNA to make them and their children more violent. He also specifically said that this was due to thousands of years of violence in Africa (he mentioned two thousand and three thousand years on different occasions). I pointed out that European history has plenty of violence that is well documented and also that DNA just doesn’t work the way he thinks it does.

Of course he tried to shout me down about the issue of DNA, telling me that he studied Psychology at a university in London and knows how DNA works, demanding to know my qualifications, and asserting that any scientist would support him. I don’t have a medical degree, but I have spent quite a lot of time attending lectures on medical research including from researchers who deliberately change DNA to study how this changes the biological processes of the organism in question.

I offered him the opportunity to star in a Youtube video about this, I’d record everything he wants to say about DNA. But he regarded that offer as an attempt to “shame” him because of his “controversial” views. It was a strange and sudden change from “any scientist will support me” to “it’s controversial”. Unfortunately he didn’t give up on his attempts to convince me that he wasn’t racist and that black people are lesser.

The next odd thing was when he asked me “what do you call them” (black people), “do you call them Afro-Americans when they are here”. I explained that if an American of African ancestry visits Australia then you would call them Afro-American, otherwise not. It’s strange that someone goes from being so certain of so many things to not knowing the basics. In retrospect I should have asked whether he was aware that there are black people who aren’t African.

Then I sought opinions from other people at the party regarding DNA modifications. While I didn’t expect to immediately convince him of the error of his ways it should at least demonstrate that I’m not the one who’s in a minority regarding this issue. As expected there was no support for the ideas of DNA modifying. During that discussion I mentioned radiation as a cause of DNA changes. He then came up with the idea that radiation from someone’s mouth when they shout at you could change your DNA. This was the subject of some jokes, one man said something like “my parents shouted at me a lot but didn’t make me a mutant”.

The other people had some sensible things to say, pointing out that psychological trauma changes the way people raise children and can have multi-generational effects. But the idea of events 3000 years ago having such effects was ridiculed.

By this time people were starting to leave. A heated discussion of racism tends to kill the party atmosphere. There might be some people who think I should have just avoided the discussion to keep the party going (really I didn’t want it and tried to end it). But I’m not going to allow a racist to think that I agree with them, and if having a party requires any form of agreement to racism then it’s not a party I care about.

As I was getting ready to leave the man said that he thought he didn’t explain things well because he was tipsy. I disagree, I think he explained some things very well. When someone goes to such extraordinary lengths to criticise all black people after a discussion of white cops killing unarmed black people I think it shows their character. But I did offer some friendly advice, “don’t drink with people you work with or for or any other people you want to impress”, I suggested that maybe quitting alcohol altogether is the right thing to do if this is what it causes. But he still thought it was wrong of me to call him racist, and I still don’t care. Alcohol doesn’t make anyone suddenly think that black people are inherently dangerous (even when unarmed) and therefore deserving of being shot by police (disregarding the fact that police can take members of the Mafia alive). But it does make people less inhibited about sharing such views even when it’s clear that they don’t have an accepting audience.

Some Final Notes

I was not looking for an argument or trying to entrap him in any way. I refrained from asking him about other races who have experienced violence in the past, maybe he would have made similar claims about other non-white races and maybe he wouldn’t, I didn’t try to broaden the scope of the dispute.

I am not going to do anything that might be taken as agreement or support of racism unless faced with the threat of violence. He did not threaten me so I wasn’t going to back down from the debate.

I gave him multiple opportunities to leave the debate. When I insisted that he find statistics to support his cause I hoped and expected that he would depart. Instead he came back with a page about the latest racist dog-whistle in Australian politics which had no correlation with anything we had previously discussed.

I think the fact that this debate happened says something about Australian and British culture. This man apparently hadn’t had people push back on such ideas before.

Related posts:

  1. Anarchy in the Office Some of the best examples I’ve seen of anarchy working...
  2. Servers in the Office I just had a conversation with someone who thinks that...
  3. a good security design for an office One issue that is rarely considered is how to deal...

OpenSTEM: Vale Stephen Hawking

Planet LA - March 16, 2018 - 12:05
Stephen Hawking was born on the 300th anniversary of Galileo Galilei’s death (8 January 1942), and died on the anniversary of Albert Einstein’s birth (14 March). Having both reached the age of 76, Hawking actually lived a few months longer than Einstein, in spite of his health problems. By the way, what do you call it when […]

David Rowe: Measuring SDR Noise Figure in Real Time

Planet LA - March 12, 2018 - 12:04

I’m building a sensitive receiver for FreeDV 2400A signals. As a first step I tried a HackRF with an external Low Noise Amplifier (LNA), and attempted to measure the Noise Figure (NF) using the system Mark and I developed two years ago.

However I was getting results that didn’t make sense and were not repeatable. So over the course of a few early morning sessions I came up with a real time NF measurement system, and wrinkled several bugs out of it. I also purchased a few Airspy SDRs, and managed to measure NF on them as well as the HackRF.

It’s a GNU Octave script called nf_from_stdio.m that accepts a sample stream from stdio. It assumes the signal contains a sine wave test tone from a calibrated signal generator, and noise from the receiver under test. By sampling the test tone it can establish the gain of the receiver, and by sampling the noise spectrum an estimate of the noise power.
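
My reading of the method, in equation form: if the sig gen injects a tone at a known level Pin (dBm), and the script measures the received tone power Ptone (dBm) and the noise density Pnoise (dBm/Hz), then, taking the usual 290K reference for the -174dBm/Hz thermal floor:

G  = Ptone - Pin         (receiver gain, dB)
NF = Pnoise - G + 174    (noise figure, dB)

Note that a dB of error in the tone level or cable loss lands directly in the NF estimate, which is why the calibration and cable-loss checks below matter so much.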

The script can be driven from command line utilities like hackrf_transfer or airspy_rx, or via software receivers like gqrx that can send SSB-demodulated samples over UDP. Instructions are at the top of the script.

Equipment

I’m working from a home workbench, with rudimentary RF skills, a strong signal processing background and determination. I do have a good second hand signal generator (Marconi 2031), that cost AUD$1000 at a Hamfest, and a Rigol 815 Spec An (generously donated by Mel K0PFX, and Jim, N0OB) to support my FreeDV work. Both very useful and highly recommended. I cross-checked the sig-gen calibrated output using an oscilloscope and external attenuator (within 0.5dB). The Rigol is less accurate in amplitude (1.5dB on its specs), but useful for relative measurements, e.g. comparing cable attenuation.

For the NF test method I have used, a calibrated signal source is required. I performed my tests at 435MHz using a -100dBm carrier generated from the Marconi 2031 sig-gen.

Usage and Results

The script accepts real samples from a SSB demod, or complex samples from an IQ source. Tune your receiver so that the sinusoidal test tone is in the 2000 to 4000 Hz range as displayed on Fig 2 of the script. In general for minimum NF turn all SDR gains up to maximum. Check Fig 1 to ensure the signal is not clipping, reduce the baseband gain if necessary.

Noise is measured between 5000 and 10000 Hz, so ensure the receiver passband is flat in that region. When using gqrx, I drag the filter bandwidth out to 12000 Hz.

The noise estimates are less stable than the tone power estimate, leading to some sample/sample variation in the NF estimate. I take the median of the last five estimates.

I tried supplying samples to nf_from_stdio using two methods:

  1. Using gqrx in UDP mode to supply samples over UDP. This allows easy tuning and the ability to adjust the SDR gains in real time, but requires a few steps to set up
  2. Using a “single” command line approach that consists of a chain of processing steps concatenated together. Once your signal is tuned you can start the NF measurements with a single step.

Instructions on how to use both methods are at the top of nf_from_stdio.m

Here are some results using both gqrx and command line methods, with and without an external (20dB gain/1dB NF) LNA. They were consistent across two laptops.

SDR           Gqrx LNA   Cmd Line LNA   Cmd Line no LNA
AirSpy Mini   2.0        2.2            7.9
AirSpy R2     1.7        1.7            7.0
HackRF One    2.6        3.4            11.1

(noise figures in dB)

The results with LNA are what we would expect for system noise figures with a good LNA at the front end.

The “no LNA” Airspy NF results are curious – the Airspy specs state a NF of just 3.5dB. So we contacted Airspy via Twitter and email to see how they measured their stated NF. We haven’t received a response to date. I posted to the Airspy mailing list and one gentleman (Dave – WØLEV) kindly replied and has measured noise figures of 4dB using calibrated noise sources and attenuators.

Looking into the data sheets for the Airspy, it appears the R820T tuner at the front end of the Airspy has a NF of 3.5dB. However a system NF will always be worse than the first device, as other devices (e.g. the ADC) also inject noise.

Other possibilities for my figures are measurement error, ambient noise sources at my site, frequency dependent NF, or variations in individual R820T samples.

In our past work we have used Bit Error Rate (BER) results as an independent method of confirming system noise figure. We found a close match between theoretical and measured BER when testing with and without a LNA. I’ll be repeating similar low level BER tests with FreeDV 2400A soon.

Real Time Noise Figure

It’s really nice to read the system noise figure in real time. For example you can start it running, then experiment with grounding, tightening connectors, or moving the SDR away from the laptop, or connect/disconnect a LNA in real time and watch the results. Really helps catch little issues in these difficult to perform tests. After all – we are measuring thermal noise, a very weak signal.

Some of the NF problems I could find and remove with a real time measurement:

  • The Airspy mini is nearly 1dB worse on the front left USB port than the rear left USB port on my X220 Thinkpad!
  • The Airspy mini really likes USB extension cables with ferrite clamps – without the ferrite I found the LNA was ineffective in reducing the NF – being swamped by conducted laptop noise I guess.
  • Loose connectors can make the noise figure a few dB worse. Wiggle and tighten them all.
  • Position of SDR/LNA near the radio and other bench equipment.
  • My magic touch can decrease noise figure! Grounding effect I guess?

Development Bugs

I had to work through several problems before I started getting sensible numbers. This was quite discouraging for a while, as the numbers were jumping all over the place. However it’s fair to say measuring NF is a tough problem. From what I can Google, it’s an uncommon measurement for people in home workshops.

These bugs are worth mentioning as traps for anyone else attempting home NF measurements:

  1. Cable loss: I found a 1.5dB loss in some cable I was using between the sig gen and the SDR under test. I measured the loss by comparing a few cables connected between my sig gen and spec an. While the 815 is not accurate in terms of absolute calibration (rated at 1.5dB), it can still be used for comparative measurements. The cable loss can be added to the calculations, or just choose a low loss cable.
  2. Filter shape: I had initially placed the test tone under 1000Hz. However I noticed that the gqrx signal had a few dB of high pass filtering in this region (Fig 2 below). Not an issue for regular USB demodulation, but a few dB really matters for NF! So I moved the test tone to the 2-4kHz region where the gqrx output was nice and flat.
  3. A noisy USB port, especially without a clamp, on the Airspy Mini (photo below). Found by trying different SDRs and USB ports, and finally a clamp. Oh boy, never expected that one. I was connecting the LNA and the NF was stuck at 4dB – swamped by noise from the USB port I guess.
  4. Compression: worth checking the SDR output is not clipped or in compression. I adjusted the sig gen output up and down 3dB, and checked the power estimate from the script changed by 3dB. Also worth monitoring Fig 1 from the script to make sure it’s not hitting the limits. The HackRF needed its baseband gain reduced, but the Airspys were OK.
  5. I used the latest Airspy tools built from source (rather than the Ubuntu 17 package) to get stdout piping working properly and not have other status information from printfs injected into the sample stream!

Credits

Thanks Mark, for the use of your RF hardware, and I’d also like to mention the awesome CSDR tools and fantastic gqrx software – both very handy for SDR work.


Donna Benjamin: I said, let me tell you now

Planet LA - March 10, 2018 - 10:02
Saturday, March 10, 2018 - 09:56

Ever since I heard this month’s #AusGlamBlog theme was “Happiness” I’ve had that Happy song stuck in my head.

“Clap along if you know what happiness is to you”

I’m new to the library world as a professional, but not new to libraries. A sequence of fuzzy memories swirl in my mind when I think of libraries.

First, was my local public library children’s cave filled with books that glittered with colour like jewels.

Next, I recall the mesmerising tone and timbre of the librarian’s voice at primary school. Each week she transported us into a different story as we sat, cross legged in front of her, in some form of rapture.

Coming into closer focus I recall opening drawers in the huge wooden catalogue in the library at high school. Breathing in the deeply lovely, dusty air wafting up whilst flipping through those tiny cards was a tactile delight. Some cards were handwritten, some typewritten, some plastered with laser printed stickers.

And finally, I remember relishing the peace and quiet afforded by booking one of 49 carrel study booths at La Trobe University.

I love libraries. Libraries make me happy.

The loss of libraries makes me sad. I think of Alexandria, and more recently in Timbuktu, and closer to home, I mourn the libraries lost to the dreaming by the ravages of destructive colonial force on this little continent so many of us now call home.

Preservation and digitisation, and open collections, give me hope. There can only ever be one precious original of a thing, but facsimiles, copies and 3D blueprints increasingly mean physical things, too, can now be shared and studied without needing to handle, or risk damaging, the original.

Sending precious things from collection to collection is fraught with danger. The revelations of what Australian customs did to priceless plant specimens from France & New Zealand still give me goosebumps of horror.

Digital. Copies. Catalogues, Circulation, Fines, Holds, Reserves, and Serial patterns. I’m learning new things about the complexities under the surface as I start to work seriously with the Koha Community Integrated Library System. I first learned about the Koha ILS more than a decade ago, but I'm only now getting a chance to work with it. It brings my secret love of libraries and my publicly proclaimed love of open source together in a way I still can’t believe is possible.

So yeah.

OH HAI! I’m Donna, and I’m here to help.

“Clap along if you feel like that's what you wanna do”


OpenSTEM: Amelia Earhart in the news

Planet LA - March 9, 2018 - 16:05
Recently Amelia Earhart has been in the news once more, with publication of a paper by an American forensic anthropologist, Richard Jantz. Jantz has done an analysis of the measurements made of bones found in 1940 on the island of Nikumaroro Island in Kiribati. Unfortunately, the bones no longer survive, but they were analysed in […]

Craig Sanders: brawndo-installer

Planet LA - March 7, 2018 - 18:04

Tired of being oppressed by the slack-arse distro package maintainers who waste time testing that new versions don’t break anything and then waste even more time integrating software into the system?

Well, so am I. So I’ve fixed it, and it was easy to do. Here’s the ultimate installation tool for any program:

brawndo() {
  curl $1 | sudo /usr/bin/env bash
}

I’ve never written a shell script before in my entire life, I spend all my time writing javascript or ruby or python – but shell’s not a real language so it can’t be that hard to get right, can it? Of course not, and I just proved it with the amazing brawndo installer (It’s got what users crave – it’s got electrolytes!)

So next time some lame sysadmin recommends that you install the packaged version of something, just ask them if apt-get or yum or whatever loser packaging tool they’re suggesting has electrolytes. That’ll shut ’em up.

brawndo-installer is a post from: Errata


Linux Users of Victoria (LUV) Announce: LUV March 2018 Workshop: Comparing window managers

Planet LA - March 7, 2018 - 18:03
Start: Mar 17 2018 12:30
End: Mar 17 2018 16:30
Location: Infoxchange, 33 Elizabeth St. Richmond
Link: http://luv.asn.au/meetings/map

Comparing window managers

We'll be looking at several of the many window managers available on Linux.

We're still looking for more people who can talk about the window manager they are using, what they like and dislike about it, and maybe demonstrate a little.

Please email me at <president@luv.asn.au> with the name of your window manager if you think you could help!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


Simon Lyall: Audiobooks – Background and February 2018 list

Planet LA - March 7, 2018 - 10:03

Audiobooks

I started listening to audiobooks around the start of January 2017 when I started walking to work (I previously caught the bus and read a book or on my phone).

I currently get them for free from the Auckland Public Library using the Overdrive app on Android. However, while I download them to my phone with the Overdrive app, I listen to them using Listen Audiobook Player. I switched to the alternative player mainly because it supports playback speeds greater than 2x normal.

I’ve been posting a list of the books I listened to at the end of each month to Twitter (see the lists from Jan 2018, Dec 2017 and Nov 2017), but I thought I’d start posting them here too.

I mostly listen to history with some science fiction and other topics.

Books listened to in February 2018

The Three-Body Problem by Cixin Liu – Pretty good sci-fi, towards the hard-core end that I like. Looking forward to the sequels. 7/10

Destiny and Power: The American Odyssey of George Herbert Walker Bush by Jon Meacham – A very nicely done biography, comprehensive and giving a good positive picture of Bush. 7/10

Starship Troopers by Robert A. Heinlein – A pretty good version of the classic. The story works well although the politics are “different”. Enjoyable though 8/10

Uncommon People: The Rise and Fall of the Rock Stars 1955-1994 by David Hepworth – Read by the author (who sounds like a classic Brit journalist). A story or two plus a playlist from every year. Fascinating and delightful. 9/10

The Long Haul: A Trucker’s Tales of Life on the Road by Finn Murphy – Very interesting and well written about the author’s life as a long distance mover. 8/10

Mornings on Horseback – David McCullough – The Early life of Teddy Roosevelt, my McCullough book for the month. Interesting but not as engaging as I’d have hoped. 7/10

The Battle of the Atlantic: How the Allies Won the War by Jonathan Dimbleby – Overview of the Atlantic campaign of World War 2. The author works to stress it was one of the most important fronts, and does pretty well. 7/10

Russell Coker: WordPress Multisite on Debian

Planet LA - March 5, 2018 - 20:02

WordPress (a common CMS for blogs) is designed to be copied to a directory that Apache can serve, and run by a user with no particular privileges, while managing installation of its own updates and plugins. Debian is designed around the idea of the package management system controlling everything on behalf of the sysadmin.

When I first started using WordPress there was a version called “WordPress MU” (Multi User) which supported multiple blogs. It was a separate archive to the main WordPress and didn’t support all the plugins and themes. As a main selling point of WordPress is the ability to select from the significant library of plugins and themes this was a serious problem.

Debian WordPress

The people who maintain the Debian package of WordPress have always supported multiple blogs on one system and made it very easy to run in that manner. There’s a /etc/wordpress directory for configuration files for each blog with names such as config-etbe.coker.com.au.php. This allows having multiple separate blogs running from the same tree of PHP source which means only one thing to update when there’s a new version of WordPress (often fixing security issues).

One thing that appears to be lacking with the Debian system is separate directories for “media”. WordPress supports uploading images (which are scaled to several different sizes) as well as sound and apparently video. By default under Debian they are stored in /var/lib/wordpress/wp-content/uploads/YYYY/MM/filename. If you have several blogs on one system they all get to share the same directory tree, that may be OK for one person running multiple blogs but is obviously bad when several bloggers have independent blogs on the same server.

Multisite

If you enable the “multisite” support in WordPress then you have WordPress support for multiple blogs. The administrator of the multisite configuration has the ability to specify media paths etc for all the child blogs.

The first problem with this is that one person has to be the multisite administrator. As I’m the sysadmin of the WordPress servers in question that’s an obvious task for me. But the problem is that the multisite administrator doesn’t just do sysadmin tasks such as specifying storage directories. They also do fairly routine tasks like enabling plugins. Preventing bloggers from installing new plugins is reasonable and is the default Debian configuration. Preventing them from selecting which of the installed plugins are activated is unreasonable in most situations.

The next issue is that some core parts of WordPress functionality on the sub-blogs refer to the administrator blog, recovering a forgotten password is one example. I don’t want users of other blogs on the system to be referred to my blog when they forget their password.

A final problem with multisite is that it makes things more difficult if you want to move a blog to another system. Instead of just sending a dump of the MySQL database and a copy of the Apache configuration for the site, you have to configure which blog will be its master. If going between multisite and non-multisite you have to change some of the data about accounts; this will be annoying both when adding new sites to a server and when moving sites from the server to a non-multisite server somewhere else.

I now believe that WordPress multisite has little value for people who use Debian. The Debian way is the better way.

So I had to back out the multisite changes. Fortunately I had a cron job to make snapshots of the BTRFS subvolume that has the database so it was easy to revert to an older version of the MySQL configuration.

Upload Location

update etbe_options set option_value='/var/lib/wordpress/wp-content/uploads/etbe.coker.com.au' where option_name='upload_path';

It turns out that if you don’t have a multisite blog then there’s no way of changing the upload directory without using SQL. The above SQL code is an example of how to do this. Note that it seems that there is special case handling of a value of ‘wp-content/uploads‘ and any other path needs to be fully qualified.
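
If you want to apply that from a shell, something like this works; the database name is a guess here, use whatever DB_NAME is set to in your /etc/wordpress config file, and note the etbe_ table prefix will differ per blog:

echo "update etbe_options set option_value='/var/lib/wordpress/wp-content/uploads/etbe.coker.com.au' where option_name='upload_path';" | mysql wordpress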

For my own blog however I choose to avoid the WordPress media management and use the following shell script to create suitable HTML code for an image that links to a high resolution version. I use GIMP to create the smaller version of the image which gives me a lot of control over how to crop and compress the image to ensure that enough detail is visible while still being small enough for fast download.

#!/bin/bash
set -e

# base URL for the published images
if [ "$BASE" = "" ]; then
  BASE="http://www.coker.com.au/blogpics/2018"
fi

while [ "$1" != "" ]; do
  BIG=$1
  # the small version has the same name without "-big"
  SMALL=$(echo $1 | sed -s s/-big//)
  # get WIDTHxHEIGHT of the small image and display it at half size
  RES=$(identify $SMALL|cut -f3 -d\ )
  WIDTH=$(($(echo $RES|cut -f1 -dx)/2))px
  HEIGHT=$(($(echo $RES|cut -f2 -dx)/2))px
  echo "<a href=\"$BASE/$BIG\"><img src=\"$BASE/$SMALL\" width=\"$WIDTH\" height=\"$HEIGHT\" alt=\"\" /></a>"
  shift
done

Related posts:

  1. Creating WordPress Packages deb http://www.coker.com.au wheezy wordpress I maintain Debian packages of a...
  2. permalinks in wordpress, Apache redirection, and other blog stuff When I first put my new blog online I didn’t...
  3. WordPress Plugins I’ve just added the WordPress Minify [1] plugin to my...

Russell Coker: Compromised Guest Account

Planet LA - March 5, 2018 - 14:02

Some of the workstations I run are sometimes used by multiple people. Having multiple people share an account is bad for security so having a guest account for guest access is convenient.

If a system doesn’t allow logins over the Internet then a strong password is not needed for the guest account.

If such a system later allows logins over the Internet then hostile parties can try to guess the password. This happens even if you don’t use the default port for ssh.

This recently happened to a system I run. The attacker logged in as guest, changed the password, and installed a cron job to run every minute and restart their blockchain mining program if it had been stopped.

In 2007 a bug was filed against the Debian package openssh-server requesting that an AllowUsers directive be added to the default /etc/ssh/sshd_config file [1]. If that bug hadn’t been marked as “wishlist” and left alone for 11 years then I would probably have set it to only allow ssh connections to the one account that I desired, which always had a strong password.
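
For anyone wanting to close this hole today, it’s a one-line change (the account name below is a placeholder, list whichever accounts should be reachable):

# /etc/ssh/sshd_config
# refuse ssh logins for every account not listed here
AllowUsers myaccount

Then restart sshd (systemctl restart ssh on Debian) for it to take effect.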

I’ve been a sysadmin for about 25 years (since before ssh was invented). I have been a Debian Developer for almost 20 years, including working on security related code. The fact that I stuffed up in regard to this issue suggests that there are probably many other people making similar mistakes, and probably most of them aren’t monitoring things like system load average and temperature which can lead to the discovery of such attacks.

Related posts:

  1. Guest/Link Post Spam I’ve been getting a lot of spam recently from people...
  2. SE Linux Play Machine and Passwords My SE Linux Play Machine has been online again since...
  3. Can you run SE Linux on a Xen Guest? I was asked “Can you run SELinux on a XEN...

Francois Marier: Redirecting an entire site except for the certbot webroot

Planet LA - March 2, 2018 - 15:41

In order to be able to use the webroot plugin for certbot and automatically renew the Let's Encrypt certificate for libravatar.org, I had to put together an Apache config that would do the following on port 80:

  • Let /.well-known/acme-challenge/* through on the bare domain (http://libravatar.org/).
  • Redirect anything else to https://www.libravatar.org/.

The reason for this is that the main Libravatar service listens on www.libravatar.org and not libravatar.org, but certbot needs to ascertain control of the bare domain.

This is the configuration I ended up with:

<VirtualHost *:80>
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>

    RewriteEngine on
    RewriteCond "/var/www/acme%{REQUEST_URI}" !-f
    RewriteRule ^(.*)$ https://www.libravatar.org/ [last,redirect=301]
</VirtualHost>

The trick I used here is to make the redirection RewriteRule conditional on the requested file (%{REQUEST_URI}) not existing in the /var/www/acme directory, the one where I tell certbot to drop its temporary files.
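
A quick way to convince yourself the condition works (the test file name is made up):

mkdir -p /var/www/acme/.well-known/acme-challenge
touch /var/www/acme/.well-known/acme-challenge/test
curl -I http://libravatar.org/.well-known/acme-challenge/test
# expect a 200, served from the webroot
curl -I http://libravatar.org/anything-else
# expect a 301 to https://www.libravatar.org/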

Here are the relevant portions of /etc/letsencrypt/renewal/www.libravatar.org.conf:

[renewalparams]
authenticator = webroot
account = …
[[webroot_map]]
libravatar.org = /var/www/acme
www.libravatar.org = /var/www/acme