Joining is easy.
Subscribe to our announce list. We'll send you a confirmation email in reply. Come back, enter the confirmation password, and you're done!
For those who may not know, I recently moved from Melbourne, Victoria to Canberra, Australian Capital Territory (ACT) and am now living in a house in the inner north-west. Of course, being a geek, I wanted to get the internet connected as soon as possible! After such a smooth move I’d expected problems to crop up somewhere, and this is where they all did.
In Melbourne I had an Internode ADSL connection and before I moved I called them up to relocate this service. This, of course, relied on getting an active Telstra line at the new house. I knew it would take a bit of time to relocate the service, so in the interim I bought a Telstra wi-fi internet device. This is actually a ZTE MF30 and supports up to 5 connections via wi-fi, so I can get both my iPhone and laptop on at the same time. Quite simply, this device is brilliant at what it does and I couldn’t be happier with it.
So, at the moment I’m online via the Telstra device, which is just as well really, as I soon encounter communications issue number 1: Optus.
It appears that Optus have a woeful network in Canberra. I have an iPhone 3GS, which I know can only use 850MHz and 2100MHz 3G networks. Optus uses 900MHz and 2100MHz for their 3G, so the iPhone will only work in Optus 2100MHz coverage. In Melbourne I never had a problem getting on the internet at good speeds.
When I looked at the Optus coverage maps for the ACT and clicked on “3G Single band” (the 2100MHz network coverage), it showed the inner north-west being well covered. It really isn’t. Both from home and at work in Belconnen, I can barely get two bars of GSM phone signal. The connectivity is so bad that I can barely make phone calls and send SMSs. Occasionally, I get the “Searching…” message, which tells me that it has completely lost GSM connectivity. This never happened in Melbourne, where I had 4-5 bars of signal pretty much all the time.
The 3G connection drops in and out so often that I have to be standing in exactly the right location to be able to access the internet on my iPhone. Even this afternoon in Kingston in the inner south, I wasn’t able to get onto the internet and post to Twitter. I had to use the Telstra device, which hasn’t missed a beat in any location for network connectivity, to establish a connection. This really isn’t good enough for the middle of Canberra. I am seriously considering calling Optus, lodging a complaint and trying to get out of my 2 year contract (which has another 10 months to run), so I can switch over to Telstra. I never thought I’d say this, but I actually want to use a Telstra service!!!
Communications issue number 2: TransACT. From what I can find out, TransACT have a cable TV network which also has telephone and internet capabilities. When this network was established about a decade ago, it was revolutionary and competitive. Today the network has been expanded to support ADSL connections, but there is no ability to get a naked service, as all connections require an active phone service. Additionally, as a quick look at some of the internet connectivity plans shows, after factoring in the required phone service, it is a costly service for below-average download allowances.
When I moved into the house, the process of relocating the Internode ADSL service from Melbourne to Canberra triggered a visit from a Telstra technician. However, he wasn’t able to find a physical Telstra line into the house. Being an older suburb of Canberra, this house would have had a Telstra cable. Or rather, it did have one, as apparently it is not unknown for TransACT installers to cut the Telstra cables out: “You won’t need THAT anymore!”
So now I have to pay for a new cable to be installed from the house to the “Telstra network boundary” (presumably the street or nearest light pole where it can be connected to Telstra’s infrastructure). Then we have to pay again for a new Telstra connection at a cost of $299. Considering that if the Telstra cable had been left in place, the connection cost would be $55, this is turning into quite an expensive proposition just to get a naked DSL service.
All in all I am not impressed with the state of communications in Australia’s capital city, Canberra. All I can say is please, please, please bring on the National Broadband Network (NBN)!
The Korean MySQL Power User Group gets a special guest speaker next weekend (Oct 31 2015 – 4pm – 4:33’s offices in Gangnam — nearest train stop is Samseong station, Line 2 — post requires Cafe Naver login) — Mark Callaghan (Small Datum, @markcallaghan, and formerly High Availability MySQL). I’ve been to many of their meetups, and I think this is a great opportunity for many DBAs to learn more about how Mark helps make MySQL and MongoDB better for users at Facebook. I’m sure he’ll also talk about RocksDB.
After that, as usual, there will be a DBA Dinner. This time the tab gets picked up by OSS Korea. See you next Saturday — Halloween in Seoul will have added spice!
Last week we had the MySQL Meetup with MariaDB Developers in Amsterdam, which went on easily for about 3.5 hours. Thanks to all for listening (these were lightning talks, though without a strict 5-minute clock, and with Q&A thrown in), and to Daniël van Eeden for organising this at the eBay offices (who kindly provided pizza, beer and soft drinks as well). We had many talks, and I’ve managed to put up most of the slides into a Google Drive folder, so feel free to browse the folder.
Georg Richter had prepared a presentation but decided not to give it, since we already had quite a lot of talks and discussion throughout the sessions. If you’re interested in MariaDB Connectors, the presentation is worth a read.
P.S. I live-tweeted some pictures:
— Colin Charles (@bytebot) October 12, 2015
We all know and understand how important passwords are. We all know that we should be using strong passwords.
What’s a strong password? Something that uses:
- a long length (the longer the better)
- upper and lower case letters
- numbers
- punctuation and other symbols
So, to put it mildly, it really annoys me when I come across services that don’t allow me to use strong passwords. If I possibly could, I’d boycott these services, but sometimes that’s just not possible.
For example, my internet banking is limited to a password of between 6 and 8 characters. WTF?! This is hardly a secure password policy!
Another financial service I use is limited to 15 characters and doesn’t allow most of the punctuation set. Why? Is it too difficult to extend your database validation rules to cover all of the character set?
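To put some rough numbers on why those limits matter, here's a quick back-of-the-envelope sketch (the alphabet sizes are my assumptions: 62 for letters plus digits, 95 for the full printable ASCII set):

```python
import math

def entropy_bits(alphabet_size, length):
    """Bits of entropy in a randomly chosen password: length * log2(alphabet)."""
    return length * math.log2(alphabet_size)

# An 8-character letters+digits password vs 16 characters of printable ASCII
print(round(entropy_bits(62, 8)))   # ~48 bits
print(round(entropy_bits(95, 16)))  # ~105 bits
```

Every extra character and every extra symbol class adds entropy, so capping length at 8 and banning punctuation throws away both multipliers at once.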
Ironically, I didn’t have a problem with Posterous, Facebook or Twitter (and others) in using properly secure passwords. So, these free services give me a decent level of security, but Australian financial services companies can’t. It’s stupidity in the extreme.
A while back I posted up a few of the issues I was having with Ubuntu 10.04 “Lucid Lynx”.
I’m now using the latest version (for the next few weeks), Ubuntu 11.10 “Oneiric Ocelot”. And while it works well on my new laptop, it suffers from three pretty annoying issues.
These things are quite frustrating, and while I am pretty confident that the power issues will be resolved, I really hope that the other problems are addressed for the next version which is due 26 April 2012. From those bug reports and blog posts, it looks like they will be, which is heartening.
We need to have a standard for management of user accounts.
Given the number of high profile companies that have been cracked into lately, I have been going through the process of closing accounts for services I no longer use.
Many of these accounts were established when I was more trusting and included real data. However now, unless I am legally required to, I no longer use my real name or real data.
But I have been bitterly disappointed by the inability of some companies to shut down old accounts. For example, one service told me that “At this time, we do not directly delete user accounts…”. I also couldn’t change my username. Another service emailed my credentials in plain text.
To protect the privacy and security of all users, an enforceable standard needs to be established covering management of user accounts. It needs to be applied across the board to all systems connected to the internet. I know how ridiculous this sounds, and that many sites wouldn’t use it, but high profile services should be able to support something like this.
Included in the standard should be:
- the ability for a user to fully delete their account and associated data
- the ability to change a username
- a requirement that credentials are never sent or stored in plain text
This is a short list from my frustrations today. Please comment to help me flesh this out with other things that should be done on a properly supported user account management system.
And please let me know of your experiences with companies that were unable to properly protect your privacy and security.
I’ve given up on Blogger and returned to WordPress. I’ll update the look and feel from the defaults and try to update it a bit more often!
I have bitten the bullet and upgraded to a laptop with 1366×768 display resolution anyway.
But on a 13.3 inch screen. So it actually works pretty well.
It is a system worth about $2500 that I got for around $700. And no, it didn’t fall off the back of a truck! It fell off the back of the Dell Outlet Store.
It’s also mil-spec hardened (or something) which means that it’s almost child-proof!
It does 1080p video and with 4 cores (2 physical and 2 virtual ‘hyper-threading’) video editing works well. Really well.
I want to post up a full review at some stage, but it may not be soon.
Recently I moved house.
I hate moving. Not just for having to pack everything into boxes at one end and then unpack everything at the destination (which for this move I didn’t have to do!), but mostly because I have to go through the pain that is changing my address.
It turns out that I interact with a lot of organisations, from finance institutions (banks, credit card companies, car insurance, house insurance, health insurance, etc), to official organisations (driver licencing, Medicare, electoral, organ donor register, etc), to community (Red Cross blood donor, 3RRRFM, etc) and mundane organisations (Costco, etc). And that’s just a fraction of them.
I was thinking that, rather than having to fill in what feels like a million forms and waste time that could be spent being a productive public servant or dad for my kid, why isn’t there a central contact details database that I update once? I’m sure that smarter minds than mine have considered this, but I think an opportunity exists for some organisation (government or private) to do this. In the day and age of ‘over-sharing’, are people still averse to putting their address, phone number and email details into a central database? Login security could be addressed using two-factor authentication, such as used by Google Authenticator, or sending a one-time code via SMS or email.
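As a rough illustration of how such a one-time code could work, here's a minimal HOTP sketch per RFC 4226, the counter-based scheme underlying Google Authenticator's time-based codes (this is a generic sketch, not any particular provider's implementation; the six-digit output is the published RFC test vector):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector
print(hotp(b"12345678901234567890", 0))  # → 755224
```

A TOTP variant simply derives the counter from the current time (e.g. 30-second steps), which is why the codes in an authenticator app roll over regularly.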
Many services, such as Twitter and Facebook, are set up to authorise other apps to access them. An example of this is when I used my Facebook account to sign up for Freecycle which operates as a Yahoo Group. I ‘authorised’ Facebook to talk to Yahoo. I’ve also authorised Twicca on my Android smartphone to talk to my Twitter account.
In the same way, in this theoretical single contact details database, I could let the various companies and organisations that I interact with, access my updated contact details. Maybe they could poll this database once a week to look for updated details. I understand they’d have many different backend CRM systems so there may be some manipulation required, but nothing that’s too hard to fix with a bit of scripting.
I could also remove their access when I cease using their services. If I’m no longer banking with Bank A, then I revoke their access so they can’t find out how to contact me.
Does this sound sensible or silly? If sensible why hasn’t Google or someone done this already?
Yesterday I went to the second half of BarCamp Canberra 2012 (I was busy in the morning and couldn’t make it).
As per usual for a BarCamp there were many great ideas being discussed. Someone (Craig?) suggested that we all go home and write blog posts about our own great ideas. So here goes …
My idea is this: to build a website to facilitate the transfer of mobile phone credit from people who have a surplus to people who need it.
My wife and I are currently using Telstra pre-paid and every so often when it gets near the expiry date, if there’s any unused credit we transfer some (or all) of that to the other account. Telstra call this ‘CreditMe2U’ and my understanding is that it can be used on any post- or pre-paid account. There are a few limitations, such as a maximum of $10 per day and some limit per month.
I see the site facilitating someone posting up that they need, say, $5 of credit. Anyone should be able to do this for any reason. The request could be as little as just a phone number and an amount.
Someone else, who has surplus credit, would transfer them some credit from their account, and then mark that the transaction has happened. This ensures that the requester doesn’t get flooded with credit transfers and multiple people who have surplus credit don’t end up helping just one person. The requester would also not be able to make another request for 24 hours (based on phone number).
I would be reluctant to require people to register for accounts, as I think that would kill it entirely. It should be able to be truly anonymous. I would also be really keen to see that the site is not indexed in any way (robots.txt, archive.org exclusions, etc), so that numbers can’t be linked with requests.
I’m not sure if carriers other than Telstra have this option, but it’s worth investigating.
While there would be obvious ways to ‘game’ this system, and it’s not a fully thought through idea, it could become so with some feedback. So, what do you all think?
var redsarray = [];
var yellowsarray = [];
var greensarray = [];
var graysarray = [];
<button type="button" onclick="changecolors()">Button</button>
I am now posting from our Internode naked DSL connection. To be honest, this has been working for many months; I have just been slack in posting this follow-up!
The Telstra guy did come back and install the line. But only after we ordered a full phone line, dial tone and all, at around $30/month. Not to mention the $299 installation fee.
After that was installed, Internode activated the ADSL. Even that took multiple calls to get the technicians back to the exchange as things went wrong.
After that was all sorted out, it was then converted to a ‘naked ADSL’ service, effectively cancelling the dial tone service.
The rampant stupidity of the Australian communications system is truly breathtaking. And expensive. What should have been a very simple thing to get going – a naked ADSL line – proved to be extremely difficult and expensive.
But now we have Internode naked ADSL and NodePhone. Finally.
(As an interesting side note, we retained our Melbourne-based NodePhone (VoIP) number. When the Mitchell chemical fire occurred the other day and half of Canberra was on alert, we received a call on the VoIP number, as it is registered at this address. Both my wife’s mobile phone and mine are through Optus, also registered at this address, and didn’t get an SMS or call. Either the emergency alerting system or Optus messed up there. I’d be guessing the latter.)
Unfortunately, we are so far away from the exchange that we only get around 500 KB a second (half a MB a second). Back in Melbourne, close to the exchange, I was getting 2.2 MB a second, so around four times faster.
But at least we have it.
The Mirobot v2 logo turtle robotics kits will be here shortly. These are the updated version of the kits we have been using at primary schools (year 4-6) this year in our Robotics and Programming workshops. The new model doesn’t require little pegs any more; the structure now holds itself together with a beautifully designed slot mechanism. Kudos to Ben Pirt for an awesome design!
The robot frames are made of lasercut MDF, and the circuit board is Arduino controlled. All aspects of the design are open and available. The robot can be used to draw, but now also comes with bump sensors and line following capabilities. Communication is over Wi-Fi, through a raw or web socket. There are a number of programming and control options, from Scratch-style visual systems to a brand new Python library!
By default the v2 comes with a pre-soldered circuit board, but especially for OpenSTEM Ben is offering a non-soldered PCB so we can continue doing the soldering part with classes. We have found this to be a great enabler for students, as well as teaching them that people can build things almost from scratch. But you choose… we stock both the soldered and un-soldered kits. Either way, this is a great project to do with your kids at home; quite a few parents of students that do our workshops continue in this way.
If you order now, we’ll still be able to include you in the first shipment!
Now for Electronics Soldering! If you or your children want to also do some soldering but don’t have the necessary tools yet, we now have sets available. We assemble our own classroom soldering kits ourselves from a number of sources, as sets found in shops have flimsy or awkward stands. We use a solid steel stand, that also features a wire cleaning ball – this works much better than a wet sponge and it is much easier to maintain. We also include a number of other useful items.
You can order the soldering kit together with a Mirobot kit, or on its own.
Shipping of orders including Mirobots will be in November. This is likely to be our final Mirobot order this side of Christmas, so we do recommend you order now if you want to have the kit available over the holidays.
Several folks noticed that all of the known LSM mailing list archives stopped archiving earlier this year. We don’t know why, and generally have not had any luck contacting the owners of several archives, including MARC and Gmane. This is a concern, because the list is generally where Linux kernel security development takes place, and it’s important to have a public record of it.
The good news is that Paul Moore was finally able to re-register the list with mail-archive.com, and there is once again an active archive here: http://firstname.lastname@example.org/
Please update any links you may have!
I’ve been thinking about modems for a VHF FreeDV mode. The right waveform and a good demodulator are the key to high performance. However it would be nice to make some re-use of existing FM VHF radios. So is it possible to come up with a waveform that can pass through legacy FM radios, but also be optimally demodulated with a SDR?
My first guess was that the problem with legacy radios is the 300Hz High Pass (HP) filtering. So I came up with a waveform that has no DC. Brady pointed out this was Manchester Encoding (ME), used in all sorts of applications for just this problem. Each data bit is Manchester encoded to two bits, so a 2400 bit/s bit-stream becomes a 4800 bit/s bit-stream that is then 2FSK modulated. Turns out the ME-2FSK signal doesn’t have much low frequency energy so passes happily through the audio pass band filtering of regular FM radios.
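A minimal sketch of the encoding step (the particular bit-to-symbol-pair mapping here is an assumption; either polarity works for the DC argument):

```python
def manchester_encode(bits):
    """Manchester-encode a bit stream: each data bit becomes two channel
    symbols, so 2400 bit/s in -> 4800 symbols/s out. Mapping assumed here:
    0 -> (1, 0), 1 -> (0, 1)."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

# Every bit period contains one high half and one low half, so the encoded
# stream is balanced (no DC component) regardless of the input data
encoded = manchester_encode([1, 1, 0, 1, 1, 1])
print(encoded.count(0) == encoded.count(1))  # → True
```

That guaranteed balance is exactly why the spectrum has little energy near DC, and hence survives the 300Hz high pass filter.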
Here is a block diagram of the idea. We have the option to demodulate the signal using a legacy analog radio or, with higher performance, an optimal FSK demod:
This is what the spectrum of the ME-2FSK looks like at the output of the analog FM demodulator before high pass filtering. Notice how there is not much energy beneath 300Hz? So we are not going to lose much due to the 300Hz HP filter.
Here are the time domain modem signals before and after the 300Hz High Pass filter. Pretty similar.
The ME-2FSK scheme works OK in my simulation, so I think it’s possible to squirt 2400 bit/s through a $40 HT with acceptable modem performance using 2FSK. This means we can do VHF FreeDV using your laptop/SM1000 and a $40 radio, and it will work just as well as existing VHF DV modes, and even pass through analog repeaters.
Real gold would be a way to send 4FSK through a HT, that (if you have a SDR) can be optimally decoded at a much lower Eb/No. Unfortunately I couldn’t work out how to do that. For optimal 4FSK you need the tones spaced at the symbol rate Rs. This means -1.5Rs, -0.5Rs, 0.5Rs, 1.5Rs, which won’t fit into 5kHz deviation with Rs=4800. So how about Rs=2400? Well, when I tried Rs=2400 through the FM demod the modem appears to be 3dB worse than Rs=4800. I’m not sure why. Possibly deviation, as I get the same results with the 300Hz HP filter removed. Or maybe I messed up the simulation. Oh well. Working backwards, this suggests one reason the ME-2FSK waveform works so well at Rs=4800 is greater deviation.
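The tone-spacing arithmetic can be checked quickly (the 5kHz figure is the nominal peak deviation of a typical FM HT, which is an assumption about the radio):

```python
def fsk4_tones(Rs):
    """Tone frequency offsets (Hz) for orthogonal non-coherent 4FSK,
    with tones spaced at the symbol rate Rs: -1.5Rs, -0.5Rs, 0.5Rs, 1.5Rs."""
    return [(-1.5 + k) * Rs for k in range(4)]

for Rs in (4800, 2400):
    tones = fsk4_tones(Rs)
    peak = max(abs(t) for t in tones)
    fits = peak <= 5000  # nominal 5 kHz peak deviation of an FM HT (assumed)
    print(Rs, tones, "fits" if fits else "doesn't fit")
```

With Rs=4800 the outer tones land at ±7200 Hz, well outside 5kHz deviation; with Rs=2400 they land at ±3600 Hz and fit, but with less deviation.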
Moving to the optimal 4FSK demod approach, here are the outputs of each filter from an optimal 4FSK demod. The pretty colours represent the different filter outputs. The lower plot is the decimated filter outputs, after sampling at the ideal timing instant.
I’m inclined to use both 4FSK and ME-2FSK. We could run ME-2FSK on links with legacy radios and 4FSK on SDRs that support optimal demodulation. That 6dB Eb/No for optimal 4FSK, combined with Codec 2 running at a lower rate, is a huge gain over current analog and DV systems.
Summary of Candidate VHF Waveforms
I’ve now played with quite a few modem waveforms, and have compared them in the table below. Eb/No is for a BER of 2%, which is roughly where Digital Voice codecs fall over. There are two Eb/No figures, one for an ideal demodulator, the other when using a demod that works through a legacy FM analog radio.

Waveform   Eb/No (ideal)  Eb/No (FM)  Comment
PSK        3.0            na          requires linear PA, complex coherent demod
GMSK       5.0            9.0         requires “data” port, complex coherent demod
4FSK       6.0            na          simple demod, good fading
ME-2FSK    8.5            12.0        simple demod, good fading, $40 HT!
DMR 4FSK   na             11.0        standardised
AFSK-FM    na             16.0        as used in APRS
The complexity of the demods required for coherent PSK and GMSK is not a show stopper, as we only have to write GPL modem code once. However coherent demodulation brings other sources of “implementation loss”, such as phase recovery, that make the ideal performance hard to achieve. Non-coherent mFSK is rather simple in comparison; we just need a fine timing estimator. Less to go wrong. No phase estimation means fading will have less impact than with coherent PSK/GMSK. Fine frequency offsets won’t bother us. mFSK is, however, less bandwidth efficient.
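To illustrate just how simple the non-coherent decision is, here's a toy per-symbol 2FSK detector (ideal timing and known tone frequencies assumed; a real demod adds the fine timing estimator mentioned above):

```python
import math

def demod_2fsk_bit(samples, f0, f1, fs):
    """Non-coherent 2FSK decision for one symbol: correlate against each
    candidate tone with a quadrature (I/Q) pair so phase doesn't matter,
    then pick the tone with the larger energy."""
    def energy(f):
        i = sum(s * math.cos(2 * math.pi * f * n / fs) for n, s in enumerate(samples))
        q = sum(s * math.sin(2 * math.pi * f * n / fs) for n, s in enumerate(samples))
        return i * i + q * q
    return 1 if energy(f1) > energy(f0) else 0

# One symbol (200 samples at 48kHz) of a pure 2400Hz tone decodes as a '1'
fs, f0, f1 = 48000, 1200, 2400
symbol = [math.cos(2 * math.pi * f1 * n / fs) for n in range(200)]
print(demod_2fsk_bit(symbol, f0, f1, fs))  # → 1
```

For 4FSK the same idea just needs four correlator arms instead of two, with the decision mapping to two bits per symbol.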
GMSK coherently demodulated or through a legacy FM radio looks pretty good, but does require a “data port” with unfiltered access to the FM modem. So no $40 HTs.
Note the distinction between ideal non-coherent 4FSK, and the 4FSK modems used by DMR and similar Digital Voice modes like C4FM. The latter are not optimal waveforms, and in our simulations under-perform by around 6dB. We can’t find any explanation of why these waveforms were chosen for DMR or C4FM. I am guessing they have been developed with the specific use of legacy FM radio architectures or reduced RF bandwidth in mind.
Running the simulation
I set up a bunch of simulations of various combinations so they all have about 2% BER:
Rs=4800 2FSK ideal demod
EbNodB: 8.5 BER 0.023
Rs=4800 2FSK analog FM demod, not too shabby and pushes 2400bit/s thru a $40 HT!
EbNodB: 12.0 BER 0.021
Rs=2400 2FSK analog FM demod, needs more power for same BER! Che?
EbNodB: 15.0 BER 0.027
Hmm, doesn’t improve with no 300Hz HPF, maybe due to less deviation?
EbNodB: 15.0 BER 0.027
Rs=2400 4FSK ideal demod, nice low Eb/No!
EbNodB: 6.0 BER 0.025
It would be great to test the work above in the real world, for example get the ME-2FSK modem software into a form that we can do calibrated noise (or MDS) tests on a real FM radio.
On Sunday night I started the process of upgrading the LUV server to Debian/Jessie from Debian/Wheezy. My initial plan was to just upgrade Apache first but dependencies required upgrading systemd too.
One problem I’ve encountered in the past is that the Wheezy version of systemd will often hang on an upgrade to a newer version. Generally the solution to this is to run “systemctl daemon-reexec” from another terminal. The problem in this case was that not all the libraries needed for systemd had been installed, so systemd could re-exec itself but immediately aborted. The kernel really doesn’t like it when process 1 aborts repeatedly, and apparently an immediate hang is the result. At the time I didn’t know this; all I knew was that my session died and the server stopped responding to pings immediately after I requested a re-exec.
The LUV server is hosted at VPAC for free. As their staff have actual work to do they couldn’t spend a lot of time working on the LUV server. They told me that the screen was flickering and suspected a VGA cable. I got to the VPAC server room with the spare LUV server (LUV had been given 3 almost identical Sun servers from Barwon Water) at 16:30. By 17:30 I had fixed the core problem (boot with “init=/bin/bash“, mount the root filesystem rw, finish the upgrade of systemd and its dependencies, and then reboot normally). That got it into a stage where the Xen server for Wikimedia Au was working but most LUV functionality wasn’t working.
By 23:00 on Monday I had the full list server functionality working for users; this is the main feature that users want when it’s not near a meeting time. I can’t remember whether it was Monday night or Tuesday morning when I got the Drupal site going (the main LUV web site). Last night at midnight I got the last of the Mailman administrative interface going. I admit I could have got it going a bit earlier by putting SE Linux in permissive mode, but I don’t think that the members would have benefited from that (I’ll upload an SE Linux policy package that gets Mailman working on Jessie soon).
Now it’s Wednesday and I’m still fixing some cron jobs. Along the way I noticed some problems with excessive disk space use that I’m fixing now and I’ve also removed some Wikimedia related configuration files that were obsolete and would have prevented anyone from using a wikimedia.org.au address to subscribe to the LUV mailing lists.
Now I believe that everything is working correctly and generally working better than before.

Lessons Learned
While Sunday night wasn’t a bad time to start the upgrade it wasn’t the best. If I had started the upgrade on Monday morning there would have been less down-time. Another possibility might have been to do the upgrade near the VPAC office during business hours: I could have started it from a nearby cafe and then visited the server room immediately if something went wrong.
Doing an upgrade on a day when there’s no meeting within a week was a good choice. It wasn’t really a conscious choice, as near the meeting day I’m usually busy with meeting-related LUV work, which precludes work that doesn’t need to be done soon. But in future it would be best to consciously plan upgrades for a date when users aren’t going to need the service much.
While the Wheezy systemd bug is unlikely to ever be fixed there are work-arounds that shouldn’t result in a broken server. At the moment it seems that the best option would be to kill -9 the systemctl processes that hang until the packages that systemd depends on are installed. The problem is that the upgrade hangs while the new systemctl tries to tell the old systemd to restart daemons. If we can get past that to the stage where the shared objects are installed then it should be ok.
The Apache upgrade from 2.2.x to 2.4.x changed the operation of some access control directives and it took me some time to work out how to fix that. A Google search on the differences between those versions would have led me to the Apache document about upgrading from 2.2 to 2.4. That wouldn’t have prevented some down-time of the web sites, but it would have allowed me to prepare for it and to fix the problems more quickly when they became apparent. Also the rather confusing configuration of the LUV server (supporting many web sites that are no longer used) didn’t help things. I think that removing cruft from an installation before an upgrade is better than waiting until after things break.
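In case it helps anyone doing the same upgrade, the directive change that bites most configurations is the move from Order/Allow/Deny to Require (per the Apache 2.2-to-2.4 upgrade guide):

```apache
# Apache 2.2: allow access to everyone
Order allow,deny
Allow from all
# Apache 2.4 equivalent
Require all granted

# Apache 2.2: deny access to everyone
Order deny,allow
Deny from all
# Apache 2.4 equivalent
Require all denied
```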
Next time I do an upgrade of such a server I’ll write notes as I go. That will give a better blog post if it becomes newsworthy enough to be blogged about, and also more opportunities to learn better ways of doing it.
Sorry for the inconvenience.
Brady O’Brien has been doing some fine work simulating the 4FSK DMR modem, based on the waveform description in the ETSI spec. It’s not a classic non-coherent 4FSK modem design. Rather it appears designed to easily integrate with legacy analog FM modulators and demodulators.
Here is the block diagram of a regular non-coherent 2FSK demod. For 4FSK there would be 4 arms, but you get the idea:
The DMR modem uses Root Raised Cosine (RRC) filters and a FM modulator and demodulator:
Here are the performance curves produced by fsk4.m:
The best we could do with our simulation is 5-6dB poorer than the theoretical performance of non-coherent 4FSK. This made me suspect we had a bug. However this performance loss compared to theory is consistent with other FSK modems I have simulated that run through legacy analog modulators, rather than using ideal demodulators.
Have we done something wrong? Does anyone have figures for DMR modem Eb/No versus BER? Perhaps we have an error in our simulation. Perhaps the high BER is tolerable for the higher layers of DMR, given the amount of FEC they’ve wrapped it in. Once you’re over a certain threshold, FEC will take care of it.
Our simulation is consistent with the Minimum Detectable Signal (MDS) figures given for commercial DMR radios, for example 2% BER at an MDS of -120dBm. Our curve above suggests Eb/No=11dB for BER=0.02. Plugging that into an MDS calculation, and assuming a receiver Noise Figure (NF) of 2dB, and the DMR bit rate of 9600 bit/s:
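That calculation, sketched in code using the standard link-budget form (thermal noise floor of -174 dBm/Hz, plus NF, plus the bandwidth term 10log10(Rb), plus the required Eb/No):

```python
import math

def mds_dbm(ebno_db, bit_rate, nf_db=2.0):
    """Minimum Detectable Signal (dBm) = -174 + NF + 10log10(Rb) + Eb/No."""
    return -174.0 + nf_db + 10 * math.log10(bit_rate) + ebno_db

print(round(mds_dbm(11.0, 9600)))  # DMR-like modem: ~ -121 dBm, near the quoted -120 dBm
print(round(mds_dbm(6.0, 1200)))   # ideal 4FSK, Codec 2 at 1200 bit/s: ~ -135 dBm
print(round(mds_dbm(6.0, 2400)))   # 2400 bit/s for two-slot TDMA: ~ -132 dBm
```

The same formula gives the ideal-modem figures discussed next: the gain comes partly from the lower Eb/No and partly from the lower bit rate.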
If we had an ideal modem, and Codec 2 at 1200 bit/s, we could get a MDS of -135dBm, or -132dBm with 2400 bit/s over the channel to support two-slot TDMA just like DMR. That’s a huge margin. The modem matters. A lot.
It’s been really nice to have someone else working with me on modem code – thanks Brady! He has done a great job of getting his head around modem implementation. Brady also worked out how to run our Octave simulation code on parallel cores, which is a fine innovation. Until now I had been stuck on one core.