baz 404 .listing

Guess you're happy with that answer. To fix the mirroring, he needs to create a file with a non-zero size (contents don't matter) at ARCHIVE/=meta-info/http-blows.
Then he needs to run archive-fixup with the name of the mirror as the argument.
So, for example, if he can ssh to where the mirror is, he can echo "yeah" > HERE/IS/MY/MIRROR/=meta-info/http-blows, then back on the machine that has the original source, run archive-fixup ARCHIVE-MIRROR
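The file-creation half of that fix looks something like this in Python (a sketch only: the mirror path here is a temporary stand-in, and you'd still run archive-fixup on the original side afterwards):

```python
import os
import tempfile

# Stand-in for the mirror's root; in reality this is wherever the mirror lives.
mirror = os.path.join(tempfile.mkdtemp(), "MY-MIRROR")
meta = os.path.join(mirror, "=meta-info")
os.makedirs(meta)

# Only the non-zero size matters; the contents are irrelevant.
with open(os.path.join(meta, "http-blows"), "w") as f:
    f.write("yeah\n")

print(os.path.getsize(os.path.join(meta, "http-blows")) > 0)  # True
```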

Thank you interweb, you fixed it.

bzr the website dude

I’m currently in the process of trying out Tailor to convert the CVS repository that stores the revision history of my website to bzr.

I’ve been meaning to try bzr again for ages, and here’s the excuse. I’m going to see if this is going to work well for me.

Basically, what I do is I hack things locally, test, then rsync to web host. I blog using the live webhost site and rsync back (with a mysqldump before the rsync).

The only thing I’m really noticing at the moment is that the import speed seems to be fairly slow. If I were doing this on a MySQL tree, I’d probably just go to bed. Or wait until next week.

Thanks to James for his handy little HOWTO on doing this.

mythtv lircrc with xine goodness

xine --keymap=lirc

Is a useful command to run. I redirected it to a file and appended an edited copy to my existing lircrc for mythtv.

I used to run irxevent, but am now using mythtv’s builtin support. I had to symlink ~/.lircrc to ~/.mythtv/lircrc

I have a Winfast TV Deluxe 2000 (or some such thing). The lircrc file I’m using now is lircrc

NDB Disk IO patterns – REDO and UNDO Logs

(stolen out of Mikael’s post to the cluster list – http://lists.mysql.com/cluster/2396)

Both REDO and UNDO logs are stored in buffers but are sent after a while to disk.

For REDO the disk write happens when 256 kByte of buffer is filled up or when a GCP requests synching the data (the 256 kByte write is without synch, but every 4 MByte gets synched even without GCP occurring).

For UNDO, a similar algorithm applies: write when 256 kByte of buffer is filled, synch after 4 MByte or when the local checkpoint is completed.
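A toy model of that flushing policy, just to make the write/synch cadence concrete (this is my reading of the numbers above, not NDB's actual implementation):

```python
# Toy model of the REDO log flushing described above (my reading of the
# numbers from Mikael's post, not NDB's actual code).
WRITE_CHUNK = 256 * 1024        # unsynched write every 256 kByte buffered
SYNC_EVERY = 4 * 1024 * 1024    # synch every 4 MByte even without a GCP

def flush_events(total_bytes):
    """Return (offset, action) pairs for the writes/synchs a log stream triggers."""
    events = []
    for offset in range(WRITE_CHUNK, total_bytes + 1, WRITE_CHUNK):
        events.append((offset, "write"))
        if offset % SYNC_EVERY == 0:
            events.append((offset, "synch"))
    return events

events = flush_events(8 * 1024 * 1024)
writes = sum(1 for _, a in events if a == "write")
synchs = sum(1 for _, a in events if a == "synch")
print(writes, synchs)  # 32 2 -- 8 MByte of log: 32 writes, only 2 synchs
```

A GCP would add extra synchs on top of this; the point is how few synchronous operations the steady-state path needs.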

I wonder if we should be tuning more? I honestly don’t know the answer to this – but I don’t think it’s a limiting factor at the moment :)

NDB Disk Requirements

up to 3 copies of data (3*DataMemory)

+ 64MB * NoOfFragLogFiles (default=8)

+ UNDO log (dependent on update speed)

For example:

DataMemory=1024MB

estimated disk usage = 1024*3 + 64*8 = 3584MB + UNDO log
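That back-of-the-envelope calculation as a function (the function name is mine, the formula is the one above):

```python
def ndb_disk_estimate_mb(data_memory_mb, no_of_frag_log_files=8):
    """Rough NDB disk usage in MB: up to 3 copies of data plus 64MB per
    fragment log file, not counting the UNDO log (depends on update speed)."""
    return 3 * data_memory_mb + 64 * no_of_frag_log_files

print(ndb_disk_estimate_mb(1024))  # 3584 (plus UNDO log)
```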

It’s very tempting to have a “SHOW ESTIMATES” command in the management client/server that performs all these equations (and the answers). I bet Professional Services would love me :)

Munich

On an internal list, a thread switched over to briefly mentioning the film Munich which, incidentally, I saw a few weeks ago just after linux.conf.au and really enjoyed.

I thought it was really well done and a good film. I really recommend going to see it – it’s a good cinematic experience. Possibly don’t see it if you’re feeling really sad though – not exactly a happy film. Eric Bana and Geoffrey Rush are superb in this film (both Aussies too!).

I found it to be more about his journey than anything else and enjoyed it as it was a personal story.

Oh, and why haven’t Margaret and David reviewed it yet? I would love to know what they thought. It’s not often I see a film before I’ve seen them review it :)

Slashdot | Olympic Medalist was Spyware King

Slashdot | Olympic Medalist was Spyware King

They ask “shouldn’t they be disqualified for behaviour like that”. Well, personally I think it should carry jail time. Possibly public flogging. In fact, just have somebody poke them with a sharp stick every minute of every day for 10 years and occasionally have loud things scream in their ears. Also, make their phone ring with telemarketers and their mailbox fill up with offers for stuff they don’t need or want.

I hate spammers. Rot in $bad_place you parasites.

I really have no time for spammers.

yummy tofu curry

Okay, I cheated – I used one of the bottled curry pastes. I really shouldn’t have – it’s not that hard to do it yourself. But, I was feeling *really* lazy.

So, ingredients were:

  • 1 onion, finely chopped
  • 1 block of firm tofu, cut into small (1cm) cubes
  • 1 can diced tomatoes
  • 1 can 140g coconut cream (this has nothing to do with dairy cream, it’s just squeezed coconut concentrate)
  • a bit of chilli powder
  • a few cloves chopped garlic
  • green peas (i love peas in a curry)
  • a red curry paste (i used one that said “butter chicken” on the front) but you could just whack in some spices, herbs and tamarind.

Saute the onions, throw in the tofu and cook. Add the can of tomatoes and curry paste – in fact, add everything else.

Cook until good and warm all the way through. The tofu I used has a habit of crumbling – didn’t harm the taste though!

This was really yummy. I made it the night before last and ate the leftovers last night. Wish I had more for tonight. Delicious!

EnterpriseDB – WHERE IS THE SOURCE????

EnterpriseDB : Open source based relational database, PostgreSQL based database management software

Oh, that’s right – it’s a proprietary database! “Based on PostgreSQL” – well, good for them – they got a bunch of stuff for free[1]. But without a commitment to having the source out there for every release (or even the development tree) how committed are they really?

How can anybody tell if their improvements are good and well written or just utter hacks that would make you lose your lunch? (not dissing their programmers here, just pointing out that you cannot know).

Meanwhile, over here at MySQL, every single time that a developer types bk commit, an email is sent (with the content of the patch) to a public list[2]. Also, the development trees are out there, as is the source to every release in a tarball. So you can get the complete revision history of the MySQL server.[3]

That’s called commitment to freedom!

[1] I understand they do pump some stuff back into PostgreSQL, but it’s still a fork with non-public bits! This also isn’t a diss on PostgreSQL.
[2] Yes, people do actually read this. I have (personally) gotten replies from people out there in the big wide world about commits I’ve made to the source tree.

[3] We’re not perfect by any means – but IMHO we’re pretty good and there’s lots of people totally committed to making sure we get better.

your work is seen by a lot of people

Oracle has 50,000 employees. That’s 50,000 people waking up each day to work on Oracle products, and those 50,000 get paid by Oracle each day. We have 50,000 people download our software every day and work to make it better. But we don’t pay them. Which model would you rather have?

A quote from our CEO Marten Mickos in “Oracle’s New Enemy” over at forbes.com.

It is pretty neat to have your work seen by that many people each day.

Westpac Internet -Terms and conditions

Westpac Internet -Terms and conditions

It seems that Westpac are not only clueless when it comes to security, they also don’t yet know about how the web works.

Apparently this violates their “Terms and conditions”. Oh, and so does this.

That’s right. Go put your mouse over those links.

That’s right boys and girls – they are that retarded that they don’t want you linking to:

  1. their front page
  2. the page listing their “terms and conditions”

Consider this a public flogging with a friggin huge clue stick.

I hope your browsers break and you can’t click on any links to anywhere on the web – you deserve it.

I bet you any 5 year old has a better understanding of the web than Westpac. You are a mentally retarded spoon.

INFORMATION_SCHEMA.FILES (querying disk usage from SQL)

In MySQL 5.1.6 there’s a new INFORMATION_SCHEMA table.

Currently, it only has information on files for NDB but we’re hoping to change that in a future release (read: I think it would be neat).

This table is generated by the MySQL server and lists all the different files that are/could be used by a storage engine. There may (or may not) be table-to-file mappings, depending on the engine.

Basically, NDB does files like so:

A table is stored in a tablespace.

A tablespace has datafiles.

Datafiles are of a set size.

Space is allocated in datafiles to tables in a unit called an extent.

If you don’t have any free extents you cannot have new tables store data on disk.

If you don’t have any free extents you may still be able to add data to a table as there may be free space in an extent allocated to that table.

Logs (used for crash recovery) are stored in logfiles.

logfiles are part of logfile groups.

A tablespace uses a logfile group for logging.

Try the following bits of code, running SELECT * FROM INFORMATION_SCHEMA.FILES between each statement.

CREATE LOGFILE GROUP lg1
ADD UNDOFILE 'undofile.dat'
INITIAL_SIZE 16M
UNDO_BUFFER_SIZE = 1M
ENGINE=NDB;

ALTER LOGFILE GROUP lg1
ADD UNDOFILE 'undofile02.dat'
INITIAL_SIZE = 4M
ENGINE=NDB;

CREATE TABLESPACE ts1
ADD DATAFILE 'datafile.dat'
USE LOGFILE GROUP lg1
INITIAL_SIZE 12M
ENGINE NDB;

ALTER TABLESPACE ts1
ADD DATAFILE 'datafile02.dat'
INITIAL_SIZE = 4M
ENGINE=NDB;

CREATE TABLE t1
(pk1 INT NOT NULL PRIMARY KEY, b INT NOT NULL, c INT NOT NULL)
TABLESPACE ts1 STORAGE DISK
ENGINE=NDB;

SHOW CREATE TABLE t1;

INSERT INTO t1 VALUES (0, 0, 0);
SELECT * FROM t1;

DROP TABLE t1;
ALTER TABLESPACE ts1
DROP DATAFILE 'datafile.dat'
ENGINE = NDB;

ALTER TABLESPACE ts1
DROP DATAFILE 'datafile02.dat'
ENGINE = NDB;

DROP TABLESPACE ts1
ENGINE = NDB;

DROP LOGFILE GROUP lg1
ENGINE =NDB;

As a point of interest, these examples are taken from the ndb_dd_basic test (which can be found in mysql-test/t/ndb_dd_basic.test).
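If you want to turn the extent counts from those SELECTs into bytes, it’s just multiplication. A quick sketch (the row values below are invented for illustration; the column names are the ones the FILES table uses for datafiles):

```python
# Invented example row; in INFORMATION_SCHEMA.FILES these come back per datafile.
row = {
    "FILE_NAME": "datafile.dat",
    "TOTAL_EXTENTS": 12,
    "FREE_EXTENTS": 9,
    "EXTENT_SIZE": 1024 * 1024,  # bytes per extent
}

free_bytes = row["FREE_EXTENTS"] * row["EXTENT_SIZE"]
total_bytes = row["TOTAL_EXTENTS"] * row["EXTENT_SIZE"]
print(free_bytes, total_bytes)  # 9437184 12582912
```

Remember that zero free extents only stops *new* tables storing data on disk; existing tables may still have free space inside extents already allocated to them.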

Phorum’s RSS sucks

Noticed this about our web based forums today:

the “Re: What is this? “Can’t find record in ”’ on query.”” post on the cluster forum from 10/02/06 07:53:20 isn’t the last message in that thread. there are currently 6 messages of which I only see 2.

Not only that, but from looking at the RSS, I can’t even see this post.

argh! So I shot off an email to our internal guys. The reply was that they don’t have hacking Phorum on their radar (fair enough). Of course, this just means that Phorum sucks[1] (or at least did in the version we run) and adds to the list of reasons why web based forums are much like doing $adjective to $noun.

What is it with new internet lamers and the inability to use an email program? Or even an nntp client (okay, usenet is officially crap now unless you just want spam and pornography) but the tradition of companies having internal/external nntp servers is old, tried and true.

[1] Phorum may itself be a great web based forum. It’s just that all web based forums suck – on principle.

New Category: Inciting Hatred

This is where I will classify posts that point out how dumb something is. I will feel more free to use explicit language and exaggerated comparisons, and will encourage hatred of whatever cheeses me off whenever I choose to write about it.

I would have used “rant” as a category, but this way it’ll screw with those word-scraping spying programs used to “protect” us.

debian installer partitioning tool

It blows slightly less goat than the previous offering.

It totally blows compared to something like the RH/Fedora tool. Like, they actually work properly and won’t show you something as dumb as LVM VGs and LVs without an underlying LVM partition set up anywhere. Oh, and the supremely broken behaviour of giving weird partition error messages repeatedly when I’m trying to set up /boot.

Oh how you irritate me. Isn’t this RAID and LVM thing meant to be easy? So why isn’t it?

Because it’s obviously a good idea to re-invent the bloody wheel a few times, that’s why. I think I would have preferred doing it all on the command line. Fuck, that sucks donkey.