Fast Reset, Trusted Boot and the security of /sbin/reboot

In OpenPOWER land, we’ve been working on Secure and Trusted Boot while at the same time working on what we call fast-reset (or fast-reboot, depending on exactly what mood someone was in at any particular time… we should start being a bit more consistent).

The basic idea of fast-reset is that when the OS calls OPAL reboot, we gather all the threads in the system using a combination of patching the reset vector and soft-resetting them, then clean up a few bits of hardware (we do re-probe PCIe, for example), and reload & restart the bootloader (petitboot).

What this means is that typing “reboot” on the command line goes from a ~90-120+ second affair (through firmware to petitboot; Linux distros still take ages to shut themselves down) down to about a 20 second affair (back to petitboot).

If you’re running a (very) recent skiboot, you can enable it with a special hidden NVRAM configuration option (although we’ll likely enable it by default pretty soon, it’s proving remarkably solid). If you want to know what that NVRAM option is… Use the source, Luke! (or git history, but I’ve yet to see a neat Star Wars reference referring to git commit logs).

So, there’s nothing like a demo. Here’s a demo with Ubuntu running off an NVMe drive on an IBM S822LC for HPC (otherwise known as Minsky or Garrison) which was running the HTX hardware exerciser, through fast-reboot back into Petitboot and then booting into Ubuntu and auto-starting the exerciser (HTX) again.

Apart from being stupidly quick when compared to a full IPL (Initial Program Load, i.e. boot), there is one catch: since we’re not rebooting out of band, we have no way to reset the TPM, so if you’re doing measured boot, each subsequent fast-reset will result in a different set of measurements.

This may be slightly confusing, but it’s not really a problem. You see, if a machine is compromised, there’s nothing stopping me replacing /sbin/reboot with something that just prints things to the console that look like your machine rebooted but in fact leaves my rootkit running. Indeed, fast-reset and a full IPL should measure different values in the TPM.

It also means that if you ever want to re-establish trust in your OS, never reboot from the host: always reboot out of band (e.g. from a BMC). This, of course, means you’re trusting your BMC not to be compromised, which I wouldn’t necessarily do if you suspect your host has been.

Workaround for opal-prd using 100% CPU

opal-prd is the Processor RunTime Diagnostics daemon, the userspace process that on OpenPOWER systems is responsible for some of the runtime diagnostics. Although a userspace process, it memory-maps (as in mmap) some code loaded by early firmware (Hostboot), called the HostBoot RunTime (HBRT), and runs it, using calls to the kernel to accomplish any needed operations (e.g. reading/writing registers inside the chip). Running this in userspace gives us benefits such as being able to attach gdb, recover from segfaults, etc.

The reason this code is shipped as part of firmware rather than as an OS package is that it is very system specific, and it would be a giant pain to update a package in every Linux distribution every time a new chip or machine was introduced.

Anyway, there’s a bug in the HBRT code that means if there’s an ECC error in the HBEL (HostBoot Error Log) partition in the system flash (“bios” or “pnor”… the flash where your system firmware lives), the opal-prd process may get stuck chewing up 100% CPU and not doing anything useful. This is tracked as https://github.com/open-power/hostboot/issues/67.

You will notice a problem if the opal-prd process is using 100% CPU and the last log messages are something like:

HBRT: ERRL:>>ErrlManager::ErrlManager constructor.
HBRT: ERRL:iv_hiddenErrorLogsEnable = 0x0
HBRT: ERRL:>>setupPnorInfo
HBRT: PNOR:>>RtPnor::getSectionInfo
HBRT: PNOR:>>RtPnor::readFromDevice: i_offset=0x0, i_procId=0 sec=11 size=0x20000 ecc=1
HBRT: PNOR:RtPnor::readFromDevice: removing ECC...
HBRT: PNOR:RtPnor::readFromDevice> Uncorrectable ECC error : chip=0,offset=0x0

(the parameters to readFromDevice may differ)

Luckily, there’s a simple workaround to fix it all up! You will need the pflash utility. Be warned: pflash is primarily meant for developers and those who know what they’re doing; you can turn your computer into a brick using it.

pflash is packaged in Ubuntu 16.10 and RHEL 7.3, but you can otherwise build it from source easily enough:

git clone https://github.com/open-power/skiboot.git
cd skiboot/external/pflash
make

Now that you have pflash, you just need to erase the HBEL partition and write (ECC) zeros:

dd if=/dev/zero of=/tmp/hbel bs=1 count=147456
pflash -P HBEL -e
pflash -P HBEL -p /tmp/hbel

Note: you cannot simply erase the partition or use the pflash option to do an ECC erase; get it wrong and you may render your system unbootable.
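As an aside, bs=1 makes dd copy one byte at a time, which is slow. The same 147456-byte (144 KiB) file of zeros can be produced with a larger block size; the count below assumes the HBEL partition on your system really is 147456 bytes, so check before you flash:

```shell
# 36 * 4096 = 147456 bytes: byte-for-byte identical output to the
# bs=1 count=147456 invocation above, just written in 4 KiB blocks.
dd if=/dev/zero of=/tmp/hbel bs=4096 count=36
```

The resulting file is the same either way; this is purely a speed convenience before handing the file to pflash.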

After that, restart opal-prd however your distro handles restarting daemons (e.g. systemctl restart opal-prd.service) and all should be well.

1 Million SQL Queries per second: GA MariaDB 10.1 on POWER8

A couple of days ago, MariaDB announced that MariaDB 10.1 is stable GA, around 19 months since the GA of MariaDB 10.0. With MariaDB 10.1 come some important scalability improvements, especially for POWER8 systems. POWER is a bit unique in that we’re at the higher end of CPUs, with many cores and up to 8 threads per core (selectable at runtime: 1, 2, 4 or 8 per core), so a dual-socket system can easily be a 160-thread machine.
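As a worked example (the 10 cores per socket here is an assumption for illustration; actual core counts vary by POWER8 model), the thread count of such a dual-socket box at SMT8 falls out of simple multiplication:

```shell
# sockets * cores-per-socket * threads-per-core
echo $((2 * 10 * 8))
# prints 160
```

On a running system, the threads-per-core figure is what the powerpc-utils ppc64_cpu utility lets you select at runtime (e.g. ppc64_cpu --smt=8).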

Recently, we (being IBM) announced availability of a couple of new POWER8 machines – machines designed for Linux and cloud environments. They are very much OpenPower machines, and more info is available here: http://www.ibm.com/marketplace/cloud/commercial-computing/us/en-us

Combine these two together, with Axel Schwenke running some benchmarks and you get 1 Million SQL Queries per second with MariaDB 10.1 on POWER8.

Having worked a lot on both MySQL for POWER and the firmware that ships in the S822LC, I’m rather happy that 1 Million queries per second has come a long way since June 2014, when it was a neat hack on MySQL 5.7 that showed the potential of MySQL on POWER8 but wasn’t yet a product. Now, you can run a GA release of MariaDB on GA POWER8 hardware designed for scale-out cloud environments and get 1 Million SQL queries/second (with fewer cores than my initial benchmark last year!)

What’s even more impressive is that this million queries per second is in a KVM guest!

MariaDB & Trademarks, and advice for your project

I want to emphasize this for those who have not spent time near trademarks: trademarks are trouble and another one of those things where no matter what, the lawyers always win. If you are starting a company or an open source project, you are going to have to spend a whole bunch of time with lawyers on trademarks or you are going to get properly, properly screwed.

MySQL AB always held the trademark for MySQL. There’s this strange thing with trademarks and free software, where while you can easily say “use and modify this code however you want” and retain copyright on it (for, say, selling your own version of it), this does not translate too well to trademarks as there’s a whole “if you don’t defend it, you lose it” thing.

The law is, in effect, telling you that at some point you have to be an arsehole to not lose your trademark. (You can be various degrees of arsehole about it when you have to, and whenever you do, you should assume that people are acting in good faith and just have not spent the last 40,000 years of their life talking to trademark lawyers like you have.) Basically, you get to spend time telling people that they have to rename their product from “MySQL Headbut” to “Headbut for MySQL” and that this is, in fact, a really important difference.

You also, at some point, get to spend a lot of time talking about when the modifications made by a Linux distribution to package your software constitute sufficient changes that it shouldn’t be using your trademark (basically so that you’re never stuck if some arse comes along, forks it, makes it awful and keeps using your name, to the detriment of your project and business).

If you’re wondering why Firefox isn’t called Firefox in Debian, you can read the Mozilla trademark policy and probably some giant thread on debian-legal I won’t point to.

Of course, there’s the MySQL trademark policy, and when I was at Percona, I spent some non-trivial amount of time attempting to ensure we had a trademark policy that would work from a legal angle, a corporate angle, and a get-our-software-into-linux-distros-happily angle.

So, back in 2010, Monty started talking about a draft MariaDB trademark policy (see also, Ubuntu trademark policy, WordPress trademark policy). If you are aiming to create a development community around an open source project, this is something you need to get right. There is a big difference between contributing to a corporate open source product and an open source project – both for individuals and corporations. If you are going to spend some of your spare time contributing to something, the motivation goes down when somebody else is going to directly profit off it (corporate project) versus a community of contributors and companies who will all profit off it (open source project). The most successful hybrid of these two is likely Ubuntu, and I am struggling to think of another (maybe Fedora?).

Linux is an open source project, Red Hat Enterprise Linux is an open source product, and in case it wasn’t obvious when OpenSolaris was no longer Open, OpenSolaris was an open source product (though some open source projects have sprung up around the code base, which is great to see!). When a corporation controls the destiny of the name, the entire source code and the project infrastructure, it’s a product of that corporation; it’s not a community around a project.

From the start, it seemed that one of the purposes of MariaDB was to create a developer community around a database server that was compatible with MySQL, and eventually, to replace it. MySQL AB was not very good at having an external developer community; it was very much an open source product and not an open source project (one of the downsides of hiring just about anyone who ever submitted a patch). Things struggled further at Sun and (I think) have actually gotten better for MySQL at Oracle: not perfect, I could pick holes in it all day if I wanted, but certainly better.

When we were doing Drizzle, we were really careful about making sure there was a development community. Ultimately, with Drizzle we made a different fatal error, and one that we knew had happened to another open source project and nearly killed it: all the key developers went to work for a single company. Looking back, this is easily my biggest professional regret and one day I’ll talk about it more.

Brian Aker observed (way back in 2010) that MariaDB was, essentially, just Monty Program. In 2013, I did my own analysis of the source trees of MariaDB 5.5.31 and MariaDB 10.0.3-ish to see if indeed there was a development community (tl;dr: there wasn’t, and I had the numbers to prove it). If you look back at the idea of the Open Database Alliance and the MariaDB Foundation… actually, I’m just going to quote Henrik here from his blog post about leaving MariaDB/Monty Program:

When I joined the company over a year ago I was immediately involved in drafting a project plan for the Open Database Alliance and its relation to MariaDB. We wanted to imitate the model of the Linux Foundation and Linux project, where the MariaDB project would be hosted by a non-profit organization where multiple vendors would collaborate and contribute. We wanted MariaDB to be a true community project, like most successful open source projects are – such as all other parts of the LAMP stack.

….

The reality today, confirmed to me during last week, is that:

Those in charge at Monty Program have decided to keep ownership of the MariaDB trademark, logo and mariadb.org domain, since this will make the company more valuable to investors and eventually to potential buyers.

Now, with Monty Program being sold to/merged into (I’m really not sure) SkySQL, it was SkySQL who had those things. So instead of having Monty Program being (at least in theory) one of the companies working on MariaDB and following the Hacker Business Model, you now have a single corporation with all the developers and all of the trademarks: essentially a startup with VC backing, looking to be valuable to potential buyers (whatever their motives).

Again, I’m going to just quote Henrik on the us-vs-them on community here:

Some may already have observed that the 5.2 release was not announced at all on mariadb.org, rather on the Monty Program blog. It is even intact with the “us vs them” attitude also MySQL AB had of its community, where the company is one entity and “outside community contributors” is another. This is repeated in other communication, such as the recent Recently in MariaDB newsletter.

This was, again, back in 2010.

More recently, Jeremy Cole, someone who has pumped a fair bit of personal and professional effort into MySQL and MariaDB over the past (many) years, asked what seemed to be a really simple question on the maria-discuss mailing list. Basically, “What’s going on with the MariaDB trademark? Isn’t this something that should be under the MariaDB foundation?”

The subsequent email thread was as confusing as ever and should be held up as a perfect example of what not to do. Some of us had, by now, smelt something fishy going on for years around the talk of a community project versus the reality. At the time (October 2013), Rasmus Johansson (VP of Engineering at SkySQL and board member of the MariaDB Foundation) said this:

The MariaDB Foundation and SkySQL are currently working on the trademark issue to come up with a solution on what rights to the trademark each entity should have. Expect to hear more about this in a fairly near future.

MariaDB has from its beginning been a very community friendly project and much of the success of MariaDB relies in that fact. SkySQL of course respects that.

(and at the same time, there were pages that were “Copyright MariaDB” which, as was pointed out, was not an actual entity… so somebody just wasn’t paying attention). Also, just to make it even less clear how SkySQL the corporation, Monty Program the corporation and the MariaDB Foundation all fit together, Mark Callaghan noticed this text up on mariadb.com:

The MariaDB Foundation also holds the trademark of the MariaDB server and owns mariadb.org. This ensures that the official MariaDB development tree (https://code.launchpad.net/maria) will always be open for the MariaDB developer community.

So… there’s no actual clarity here. I can imagine attempting to get involved with MariaDB inside a corporation and spending literally weeks talking to a legal department, which thrills significantly less than standing in security lines at an airport does.

So, if you started off thinking “yay! MariaDB is going to be a developer community around an open source project that’s all about participation” (you may have even gotten code into MariaDB at various times) and then started to notice a bit of a shift… there may have been some intent to make that happen, to correct what some saw as some of the failings of MySQL, but the reality has shown something different.

Most recently, SkySQL has renamed itself to MariaDB. Good luck to anyone who isn’t directly involved with the legal processes around all this in differentiating between MariaDB the project, the MariaDB Foundation and MariaDB the company, and in working out who owns what. Urgh. This is, in no way, like the Linux Foundation and Linux.

Personally, I prefer to spend my personal time contributing to open source projects rather than products. I have spent the vast majority of my professional life closer to the corporate side of open source, some of which you could better describe as closer to the open source product end of the spectrum. I think it is completely and totally valid to produce an open source product. Making successful companies, products and a butt-ton of money from open source software is an absolutely awesome thing to do and I, personally, have benefited greatly from it.

MariaDB is a corporate open source product. It is no different to Oracle MySQL in that way. Oracle has been up front and honest about it the entire time MySQL has been part of Oracle, everybody knew where they stood (even if you sometimes didn’t like it). The whole MariaDB/Monty Program/SkySQL/MariaDB Foundation/Open Database Alliance/MariaDB Corporation thing has left me with a really bitter taste in my mouth – where the opportunity to create a foundation around a true community project with successful business based on it has been completely squandered and mismanaged.

I’d much rather deal with those who are honest and true about their intentions than those who aren’t.

My guess is that this factored heavily into Henrik’s decision to leave in 2010 and (more recently) Simon Phipps’s decision to leave in August of this year. These are two people whom I highly respect, never have enough time to hang out with, and would completely trust to do the right thing and be honest when running anything related to free and open source software.

Maybe WebScaleSQL will succeed here – it’s a community with a purpose and several corporate contributors. A branch rather than a fork may be the best way to do this (Percona is rather successful with their branch too).

popcon-historical: a tool for monitoring package popularity in debian/ubuntu

I’ve just uploaded (where ‘just’ is defined as “a little while ago”) popcon-historical to GitHub. It’s a rather rudimentary way to look at the popcon data from Debian and Ubuntu over time. It loads all the data into a Drizzle database and then uses a small Perl web app to generate graphs (and CSV).

Github: https://github.com/stewartsmith/popcon-historical

I’ve also put up a project page on it: https://flamingspork.com/popcon-historical/

An example graph is this one of Percona Toolkit vs Maatkit installs in Ubuntu over time:

You can actually get it to graph any package; unlike the graphs on debian.org, the package does not have to be in the Debian archive, so you can graph a package from third-party repos over time.

An argument for popcon

There is a package called popularity-contest that’s available in both Debian and Ubuntu (and likely other Debian derivatives). It grabs the list of packages installed on the machine and submits it to the Debian or Ubuntu popularity contests.

There you can see which are the most popular packages in Debian and Ubuntu. Unsurprisingly, dpkg, the package manager, is rather popular.

Why should you enable it? Popcon results give you solid numbers on how many users you may have. Although the absolute numbers may not be too accurate, it’s a sample set, and if you examine the results over time you can start to get an idea of whether your software is growing in popularity or not.
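The published results are easy to consume programmatically, too: the by_inst files popcon publishes are plain text, roughly one package per line, with a rank, the package name, and then install counts. A minimal sketch with invented numbers (the exact column layout is an assumption here; check the real file’s header before relying on it):

```shell
# Fake excerpt of a popcon by_inst results file (rank, package,
# installed, vote, old, recent, no-files); all figures are made up.
cat > /tmp/by_inst <<'EOF'
1 dpkg 150000 120000 20000 9000 1000
2 libc6 149000 119000 21000 8000 1000
3 percona-toolkit 4200 3000 900 250 50
EOF

# Extract the install count for a single package.
awk '$2 == "percona-toolkit" { print $3 }' /tmp/by_inst
```

Run the same extraction against snapshots taken over time and you have the trend line described above.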

But there’s more to it than that: if you can prove that a lot of people are installing your software on Debian, then you can likely argue for more work time being spent on improving the Debian packaging.

Quite simply, enabling popcon is a way to help people like me argue for more time being spent on making Debian better.

DevStack woes

DevStack is meant to be a three-step “get me an OpenStack dev environment” thing. You’re meant to be able to grab a fresh installation of something like Ubuntu 12.04 or Fedora, run “git clone $foo && cd devstack && ./stack.sh”, wait a while, and then be able to launch instances.

This much does work.

What does not work is being able to ssh to those instances. The networking is completely and utterly broken. I have tried both Ubuntu and Fedora in a fresh VM (KVM, on an Ubuntu host) and have asked a variety of experts for help. No dice.

What I want to hear is a way to reliably get it going locally, in a VM.

At the moment I’m tempted to submit a pull request to the DevStack website adding a 4th step: “muck around for a few days before giving up on ever being able to ssh into a launched instance, as these instructions are wrong”.

Switching to Fedora from Ubuntu

I’ve run Ubuntu on my desktop (well… and laptop) since roughly the first release back in 2004. I’ve upgraded along the way, with reinstalls on the laptop limited to changing CPU architecture and switching to full disk encryption.

Yesterday I wiped Ubuntu and installed Fedora.

Before Ubuntu, I ran Debian. I actually ran Debian unstable on my desktop/laptop, and Debian stable on any machines that had to sit there and “just work” and were largely headless. Back then, Debian stable just didn’t have a remotely recent enough kernel, X or desktop apps to really be usable with any modern hardware. The downside to this was that keeping an IRC client open to #debian-devel and reading the topic to find out whether sid (codename for the unstable distribution) was in a fit state to upgrade to was pretty much a requirement if you ever thought about running “apt-get dist-upgrade”. This was exactly the kind of system that you wouldn’t give to non-expert family members as a desktop and expect them to maintain.

Then along came Ubuntu. The basic premise was “a Debian derived distribution for the desktop, done right.” Brilliant. Absolutely amazingly brilliant. This is exactly what I wanted. I had a hope that I’d be able to focus on things other than “will dist-upgrade lead to 4 hours of fixing random things only to discover that X is fundamentally broken” and a possibility that I could actually recommend something to people who weren’t experts in the field.

For many years, Ubuntu served well. Frequent updates and relatively flawless upgrades between releases. A modern desktop environment, support for current hardware – heck, even non-computer-literate family members started applying their own security updates and even upgrading between versions!

Then, something started to go awry…. Maybe it was when Ubuntu shipped a kernel that helpfully erased the RAID superblock of the array in the MythTV machine… Maybe it was when I realized that Unity was failing as a basic window manager and that I swore less at fvwm…. Or maybe it was when I had a bug open for about 14,000 years on that if you left a Unity session logged in for too long all the icons in the dock would end up on top of each other at the top left of the screen making it rather obvious that nobody working on Ubuntu actually managed to stay logged in for more than a week. Or could it be that on the MythTV box and on my desktop having the login manager start (so you can actually log in to a graphical environment) is a complete crapshoot, with the MythTV box never doing it (even though it is enabled in upstart… trust me).

I think the final straw was the 13.04 upgrade. Absolutely nothing improved for me. If I ran Unity I got random graphics artifacts (a pulldown menu would remain on the screen) and with GNOME3 the unlock from screensaver screen was half corrupted and often just didn’t show – just type in your password and hope you’re typing it into the unlock screen and it hasn’t just pasted it into an IM or twitter or something. Oh, and the number of times I was prompted for my WiFi network password when it was saved in the keyring for AT LEAST TWO YEARS was roughly equivalent to the number of coffee beans in my morning espresso. The giant regressions in graphics further removed any trust I had that Mir may actually work when it becomes default(!?) in the next Ubuntu release.

GNOME3 is not perfect… I have to flip a few things in the tweak tool to have it not actively irritate me but on the whole there’s a number of things I quite like about it. It wins over Unity in an important respect: it actually functions as a window manager. A simple use case: scanning photos and then using GIMP to edit the result. I have a grand total of two applications open, one being the scanning software (a single window) and the other being the GIMP. At least half the time, when I switch back to the scanning program (i.e. it is the window at the front, maximized) I get GIMP toolbars on top of it. Seriously. It’s 2013 and we still can’t get this right?

So… I went and tried installing Fedora 19 (after ensuring I had an up to date backup).

The install went pretty smoothly. I cheated and found an external DVD drive and used a burnt DVD (this laptop doesn’t have an optical drive and I just CBF finding a suitably sized USB key and working out how to write the image to it correctly).

The installer booted… I then managed to rather easily decrypt my disk and set it to preserve /home, format just / and /boot (as XFS and ext3 respectively) and use the existing swap. Brilliant: I was hoping I wouldn’t have to format and restore from backup (a downside to using Maildir is that you end up with an awful lot of files). The install was flawless, didn’t take any longer than expected, and I was able to reboot into a new Fedora environment. It actually worked.

I read somewhere that Fedora produces an initramfs that is rather specific to the hardware you’re currently running on, which just fills me with dread for my next hardware upgrade. I remember switching hard disks from one Windows 98 machine to another and it WAS NOT FUN. I hope we haven’t made 2013 the year of Windows 98 emulation, because I lived through that without ever running the damn thing at home and I don’t want to repeat it.

Some preferences had to be set again, there’s probably some incompatibility between how Ubuntu does things and how Fedora does things. I’m not too fussed about that though.

I did have to go and delete every sign of Google accounts in GNOME Online Accounts as it kept asking for a password (it turns out that two-factor-auth on Google accounts doesn’t play too nice). To be fair, this never worked in Ubuntu anyway.

In getting email going, I had to manually configure postfix (casually annoying to have to do it again), and procmail was a real pain. Why? SELinux. It turns out I needed to run “restorecon -r /home”. It failed silently, without any error anywhere; if I ran “setenforce 0” it would magically work, but I actually would like to run with SELinux: better security is better. It seems that the restorecon step is absolutely essential if you’re bringing across an existing partition.

Getting tor, polipo and spamassassin going was fairly easy. I recompiled notmuch, tweaked my .emacs, and I had email back too. Unfortunately, it appears that Chromium is not packaged for Fedora (well… somebody has an archive, but the packages don’t appear to be GPG signed, so I’m not going to do that). There’s a complaint that Chromium is hard to package blah blah blah, but if Debian and Ubuntu manage it, surely Fedora can. I use different browsers for different jobs, and although I can use multiple instances of Firefox, they don’t show up as different instances in the alt-tab menu, which is just annoying.

It appears that the version of OTR is old, so I filed a bug for that (and haven’t yet had the time to build and package libotr 4.0.0, but it’s sorely needed). The pytrainer app that I use to look at the results from my Garmin watch was missing some dependencies (bug filed) and I haven’t yet tried to get the watch to sync… but that shouldn’t be too hard…

The speakers on my laptop still don’t work: somebody is screwing up either the kernel driver or pulseaudio, making the speakers only sometimes work for a few seconds and then stop working (while the headphone port works fine).

On the whole, I’m currently pretty happy with it. We’ll see how the upgrade to Fedora 20 goes though…. It’s nice using a desktop environment that’s actually supported by my distribution and that actually remotely works.

No, I haven’t forgotten digital (darktable for the epic win)

This was my first real play with darktable. It’s a fairly new “virtual lighttable and darkroom for photographers” but if you are into photography and into freedom, you need to RUN (not walk) to the install page now.

My first real use of it was for a simple image that I took from my hotel room when I was in Hong Kong last week. I whacked the fisheye on the D200, walked up to the window (and then into it, because that’s what you do when looking through a fisheye) and snapped the street scene below as the sun was going away.

Hotel Window (Hong Kong)

I’d welcome feedback… but I kinda like the results, especially for a shot that wasn’t thought about much at all (it was intended as just a “recording my surroundings” shot).

The second shot I had a decent go at was one I snapped while out grabbing some beers with some of the Rackspace guys (Hi Tim and Eddie!) in Hong Kong. Darktable let me develop the RAW image from my D200 and get exactly the image I was looking for… well, at least to the best of my ability so far. Very, very impressed.

Hong Kong streetlife

Being a photographer and using Ubuntu/GNOME has never been so exciting. Any inclination I had of setting up a different OS for that “real” photo stuff is completely gone.

(Incidently, I will be talking about darktable at LUV in July)

desktop-couch has been nothing but suck

$ du -sh /home/stewart/.cache/desktop-couch/desktop-couchdb.*
746M	/home/stewart/.cache/desktop-couch/desktop-couchdb.log
4.0K	/home/stewart/.cache/desktop-couch/desktop-couchdb.pid
16K	/home/stewart/.cache/desktop-couch/desktop-couchdb.stderr
653M	/home/stewart/.cache/desktop-couch/desktop-couchdb.stdout

$ du -sh /home/stewart/.local/share/desktop-couch/.gwibber_messages_design/2f3267703246f5e02533e59714915b7d.view 
436M	/home/stewart/.local/share/desktop-couch/.gwibber_messages_design/2f3267703246f5e02533e59714915b7d.view

I feel better already. I think the log files irritate me the most.

evolution-data-server even worse (is that possible?)

Just caught it using 713MB of resident memory. What the fuck? I don’t even have Evolution running! There’s only the clock applet (which does pull things out of the calendar, I guess…).

Does Evolution win the prize for worst piece of free software yet?

new NetworkManager VPNC not better at all (in fact, much worse)

I upgraded to Ubuntu 8.10 the other day, and NetworkManager promptly forgot my wireless LAN key (grr… lucky I keep a copy in a text file) as well as my VPN configuration. It’s also changed the UI for choosing which networks to route over the VPN (’cause the last thing you want is putting all your traffic through the VPN when you have a perfectly good internet connection right here… or even worse, I do *not* need to go via Sydney or the US to access the machine 2ft from me, thank you very much).

Generally not happy with the new NetworkManager.