Setting up Ceph the hard way

by Bastian Blank — last modified Sep 09, 2013 10:55 AM

Almost all existing documentation tells me how to set up Ceph with one or two layers of abstraction. This entry shows how to set it up by hand, without needing root permissions.


Ceph consists of two main daemons. One is the monitoring daemon, which monitors the health of the cluster and provides location information. The second is the storage daemon, which maintains the actual storage. Both are needed in a minimal setup.


The monitor daemons are the heart of the cluster. They maintain quorum within the cluster to decide whether it can be used, and they provide referrals to clients so that they can find the data they seek. Without a majority of the monitors up, nothing in the cluster will work; with three monitors, for example, at least two must be reachable.


The storage daemons maintain the actual storage. One daemon maintains one backend storage device.


The default config is understandable, but several things simply will not work with it.

Monitor on localhost

By default the monitor daemon will not work on localhost. There is an (undocumented) override to force it to work on localhost:

 mon addr = [::1]:6789

The monitor will be renamed to mon.admin internally.


Ceph supports IP (IPv6) or legacy IP (IPv4), but never both at the same time. I don't really use legacy IP any longer, so I have to configure Ceph accordingly:

  ms bind ipv6 = true

One-OSD clusters

For testing purposes I wanted to create a cluster with exactly one OSD, but it never reached a clean state. So I asked around and found the answer in #ceph:

 osd crush chooseleaf type = 0

Disable authentication

While deprecated, the following seems to work:

 auth supported = none

Complete configuration

 auth supported = none
 log file = $name.log
 run dir = …
 osd pool default size = 1
 osd crush chooseleaf type = 0
 ms bind ipv6 = true

 mon data = …/$name

 mon addr = [::1]:6789

 osd data = …/$name
 osd journal = …/$name/journal
 osd journal size = 100

 host = devel


This is currently based on my updated packages, which are still pretty unclean from my point of view.


All the documentation only talks about ceph-deploy and ceph-disk. These tools are abstractions that need root to mount things and do all the work. Here I show how to do a minimal setup without needing root.

Keyring setup

For some reason the monitor setup wants a keyring even with authentication disabled. So just set one up:

$ ceph-authtool --create-keyring keyring --gen-key -n mon.
$ ceph-authtool keyring --gen-key -n client.admin

Monitor setup

Monitor setup by hand is easy:

$ mkdir $mon_data
$ ceph-mon -c ceph.conf --mkfs --fsid $(uuidgen) --keyring keyring

After that just start it:

$ ceph-mon -c ceph.conf

OSD setup

First properly add the new OSD to the internal state:

$ ceph -c ceph.conf osd create
$ ceph -c ceph.conf osd crush set osd.0 1.0 root=default

Then setup the OSD itself:

$ mkdir $osd_data
$ ceph-osd -c ceph.conf -i 0 --mkfs --mkkey --keyring keyring

And start it:

$ ceph-osd -c ceph.conf -i 0
starting osd.0 at :/0 osd_data $osd_data $osd_data/journal

Health check

The health check should return ok after some time:

$ ceph -c ceph.conf health

Using SECCOMP to filter sync operations

by Bastian Blank — last modified Mar 04, 2013 06:35 PM

Linux has included a syscall filter for a long time, but it was restricted to a pre-defined set of syscalls. In recent versions Linux gained a more generic filter.

Linux can use a BPF filter to define actions for syscalls. This allows fine-grained selection of which syscalls to act on, and different outcomes can be assigned within the filter. Such a filter can be used to filter out sync operations.

Debian already has a tool for this called eatmydata. It is pretty limited, as it uses a shared library to override the library functions: it needs to be present at all times, or it will not do anything.

I wrote a small tool that asks the kernel to filter out sync operations for all children. It installs a filter matching all currently supported sync-like operations and makes them return success. It can't filter the O_SYNC flag out of the open syscall, so it makes such opens return an error instead. After setting the filter, it executes the command given on the command line.

This is just a proof of concept, but let's see.

 /*
  * Copyright (C) 2013 Bastian Blank <>
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions are met:
  * 1. Redistributions of source code must retain the above copyright notice, this
  *    list of conditions and the following disclaimer.
  * 2. Redistributions in binary form must reproduce the above copyright notice,
  *    this list of conditions and the following disclaimer in the documentation
  *    and/or other materials provided with the distribution.
  */

 #define _GNU_SOURCE 1
 #include <errno.h>
 #include <fcntl.h>
 #include <seccomp.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <unistd.h>

 #define filter_rule_add(action, syscall, count, ...) \
   if (seccomp_rule_add(filter, action, syscall, count, ##__VA_ARGS__)) abort();

 static int filter_init(void)
 {
   scmp_filter_ctx filter;

   if (!(filter = seccomp_init(SCMP_ACT_ALLOW))) abort();
   if (seccomp_attr_set(filter, SCMP_FLTATR_CTL_NNP, 1)) abort();
   /* Make all sync-like syscalls return success without doing anything. */
   filter_rule_add(SCMP_ACT_ERRNO(0), SCMP_SYS(fsync), 0);
   filter_rule_add(SCMP_ACT_ERRNO(0), SCMP_SYS(fdatasync), 0);
   filter_rule_add(SCMP_ACT_ERRNO(0), SCMP_SYS(msync), 0);
   filter_rule_add(SCMP_ACT_ERRNO(0), SCMP_SYS(sync), 0);
   filter_rule_add(SCMP_ACT_ERRNO(0), SCMP_SYS(syncfs), 0);
   filter_rule_add(SCMP_ACT_ERRNO(0), SCMP_SYS(sync_file_range), 0);
   /* O_SYNC can't be filtered out of open, so make such opens fail. */
   filter_rule_add(SCMP_ACT_ERRNO(EINVAL), SCMP_SYS(open), 1,
       SCMP_A1(SCMP_CMP_MASKED_EQ, O_SYNC, O_SYNC));
   return seccomp_load(filter);
 }

 int main(int argc, char *argv[])
 {
   if (argc <= 1)
   {
     fprintf(stderr, "usage: %s COMMAND [ARG]...\n", argv[0]);
     return 2;
   }

   if (filter_init())
   {
     fprintf(stderr, "%s: can't initialize seccomp filter\n", argv[0]);
     return 1;
   }

   execvp(argv[1], &argv[1]);

   if (errno == ENOENT)
   {
     fprintf(stderr, "%s: command not found: %s\n", argv[0], argv[1]);
     return 127;
   }

   fprintf(stderr, "%s: failed to execute: %s: %s\n", argv[0], argv[1], strerror(errno));
   return 1;
 }

LDAP, Insignificant Space and Postfix

by Bastian Blank — last modified Mar 02, 2013 10:10 AM

For some, LDAP, just like X.500, is black magic. I won't argue against that; sometimes it really does show surprising behavior. It always makes sense, though, if you think about what LDAP was built for. One surprising behavior is the handling of the "insignificant space".

LDAP supports syntaxes and comparator methods. The syntax specifies what an entry should look like; usually this is some form of text, but numbers and things like telephone numbers are supported as well. The comparators specify how values are compared, and most of the text comparators are defined to apply insignificant-space handling.

Insignificant-space handling normalizes the use of spaces. First, all leading and trailing spaces are removed. Then all internal runs of spaces are normalized to at most two spaces. Finally, each string is made to start and end with one space, to allow proper substring matches. The resulting strings are used for comparisons.
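As an illustration of these three steps (a sketch of the behavior described here, not the LDAP server's actual code), the normalization can be reproduced in shell with sed:

```shell
# Sketch of insignificant-space handling as described above:
# 1. strip leading and trailing spaces
# 2. collapse internal runs of three or more spaces down to two
# 3. wrap the result in single spaces for substring matching
normalize() {
  printf '%s\n' "$1" \
    | sed -e 's/^ *//' -e 's/ *$//' -e 's/   */  /g' \
          -e 's/^/ /' -e 's/$/ /'
}

normalize '  test'    # -> " test "
normalize 'a     b'   # -> " a  b "
```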

This behavior makes sense most of the time. A user searching the directory usually cares about content, not spaces. But I found one case where it causes some grief.

Postfix has supported LDAP for some time. And let's say it does not care about spaces in its queries. That is no problem, as e-mail addresses do not contain spaces. Or do they?

Yes, e-mail addresses can contain spaces. This is not widely known, but it is still allowed. Such addresses are quoted, and the command looks like RCPT TO:<"␣test">. The local part is quoted and contains a space at the beginning, and this is where the problem starts.

Postfix sanitizes the address. It uses a simplified internal representation of the address for all lookups, so the address becomes ␣… This form is used in all table lookups.

The LDAP table uses the internal form of the address, copied verbatim into the query. The query may look like this: (mail=␣… It is sent to the server in this form.

The LDAP server applies the insignificant-space modifications. The query is interpreted, the comparator-specified modifications are applied, and the query effectively changes to (… And this is where the fun starts.

Postfix then accepts undeliverable mail. Depending on the setup, such LDAP queries may be used to check for valid addresses. Because of the space handling, the sender can add spaces to the beginning of an address and it will still be considered valid, even though in later steps these addresses are no longer valid.

Addresses starting with spaces are considered invalid in some places in Postfix. What surprised me a bit is that virtual alias handling did not map them: the unmodified addresses showed up on the LMTP backend server. That's how they appeared on my radar.

I would say Postfix is wrong in this case. The LDAP server applies the spec correctly, which defines spaces in e-mail addresses as insignificant, while Postfix considers them significant. The easiest fix would be to not allow any spaces in the LDAP table lookup at all.

New software: LMTP to UUCP gateway

by Bastian Blank — last modified Dec 25, 2012 02:10 PM

I use UUCP to get my mail. It works fine but lacks support for modern SMTP features like DSN. While it might be possible to bolt support onto the rmail part, both the sendmail interface used to submit mail and the Postfix pipe daemon used to extract mail are unable to do so. So I started a small project to get around this problem.

This software uses LMTP to receive and SMTP to send all mail. LMTP (an SMTP derivative with support for all its extensions) is used to inject mail via a small daemon. The mail is transported to the remote system in a format similar to batched SMTP and is then injected via SMTP into the local MTA.


LMTP is used to supply mail. As an SMTP derivative, LMTP inherits support for all the available SMTP extensions. The only difference between LMTP and SMTP is support for one result per recipient after end-of-data. This allows proper handling of mail with multiple recipients without a queue.
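A hypothetical LMTP session (all names made up) shows this difference: after the final dot the server replies once per accepted recipient instead of once overall.

```
C: LHLO client.example
S: 250 backend.example
C: MAIL FROM:<sender@example.org>
S: 250 ok
C: RCPT TO:<alice@example.net>
S: 250 ok
C: RCPT TO:<bob@example.net>
S: 250 ok
C: DATA
S: 354 go ahead
C: (message text, dot-escaped, ending with a lone ".")
S: 250 ok            (result for alice@example.net)
S: 452 over quota    (result for bob@example.net)
```

A plain SMTP server would send a single reply after the dot, forcing the client to queue the mail for the failed recipient.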

Mail is supplied to a special LMTP server. This server can currently run from inetd or by itself in the foreground; a real daemon mode is not yet implemented.

Each mail is submitted to the UUCP queue in its own format. A lot of metadata needs to be stored along with the actual mail, so the data is stored in a custom format.


All data is transferred using a custom protocol. It is an SMTP derivative, but it is only used for uni-directional communication, so no responses exist. It uses its own hello command and supports the SMTP commands MAIL, RCPT and DATA.

This format allows exactly one mail per file; an EOF ends the mail transaction. All data must be in dot-escaped form, as in SMTP.
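Dot-escaping is the same transformation SMTP applies during the DATA phase; a minimal sketch (the function name is mine):

```shell
# Dot-escape a message as in SMTP DATA: lines starting with a dot
# get a second dot prepended, so a lone "." can end the mail.
dot_escape() {
  sed -e 's/^\./../'
}

printf '.hidden\nnormal\n' | dot_escape   # -> "..hidden" then "normal"
```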

Hello (UHLO)

A sender must start the transaction with this command. It specifies the name of the sender and all requested SMTP extensions.


uhlo = "UHLO" SP ( Domain / address-literal ) *( SP ehlo-keyword ) CRLF

The receiver must check that all requested SMTP extensions are available.
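Putting the pieces together, one batch file might look like this; the host names, addresses and extension keywords are made-up examples:

```
UHLO uucp-client.example 8BITMIME DSN
MAIL FROM:<sender@example.org> RET=HDRS
RCPT TO:<rcpt@example.net> NOTIFY=FAILURE
DATA
(dot-escaped message text)
.
```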


Each mail is submitted by the UUCP system. It calls the supplied receiver tool, called rumtp, which reads the protocol stream and submits the mail to a local SMTP server.

There is no error handling in this tool right now. On any error the UUCP system itself will send a mail to the local UUCP admin.

License and distribution

This package is licensed GPL 3. For now it is distributed via Alioth.

Relaying e-mail over unreliable connections

by Bastian Blank — last modified Dec 24, 2012 03:50 PM


I still prefer to handle all my mail at home. I have my own backups and everything in direct access. This network is connected to the internet via a consumer DSL line without a static IP address and without any useful SLA. Relaying mail over such an unreliable connection is still an unsolved problem.

There are a lot of solutions to this problem. I'm currently building a new e-mail setup, so I tried to collect the solutions in use, and some possible ones, for the relay problem. I will show some of them.

Don't do it

The easiest solution is to just not collect mail at home. This is somewhat against the rules, but I know people who prefer it. Access to the mail is usually done with IMAP, and a copy is made if needed. This is not really a solution, but it works for many people.


SMTP is the mail protocol of the internet, used for all public mail exchanges. By itself it can't be used to submit mail to such remote destinations.

With some preparation it can be used to relay mail over unreliable connections. There are three different turn commands in SMTP that can be used to start the mail flow; two of them will be covered here.

VPN or dynamic DNS with TLS with or without extended turn (ETRN)

Using SMTP itself is an easy solution. It can relay mail without much hassle, but it needs either a fixed or an authenticated connection.

SMTP can use a fixed connection, usually provided by some sort of VPN. The VPN can be encrypted but does not need to be. This allows the MTA to connect to the downstream server whenever it is available.

The other solution is to authenticate the downstream server. Authentication is available via TLS and X.509 certificates. The server still needs some way to find the downstream host, but with dynamic DNS this is no real problem. Both variants can be combined with the extended turn command.

Extended turn lets a client request a queue flush for a domain. It can be used so that mail is only delivered when the downstream server is actually available, which reduces the load on the MTA.

Authenticated turn (ATRN)

On-demand mail relay is a rarely supported ESMTP feature. The authenticated turn command effectively reverses the SMTP connection and allows, after authentication, the flow of mail from the server to the client via standard SMTP. There are some stand-alone implementations, but no widely used MTA includes one.

POP3/IMAP and fetchmail/getmail

All e-mail can be delivered to mailboxes, retrieved by the end-user's system, and re-injected into the delivery system. Both fetchmail and getmail are able to retrieve mail from POP3 and IMAP servers. The mail is either delivered directly via an MDA like procmail or maildrop, or submitted via the sendmail interface or SMTP and delivered by the MTA.

Neither POP3 nor IMAP has support for metadata like the real recipient or sender.

Mailbox per address

The mail for each address is delivered to its own mailbox. This allows proper identification of the recipient address, but there is still no real record of the sender address. Because one mailbox must be polled per address, this raises the resources needed on both sides dramatically.

Multi-drop mailbox

The mail for all addresses is delivered into one mailbox. The original recipient must be saved in a custom header so that this information can be restored. Only one mailbox needs to be polled for all addresses.
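Restoring the recipient on re-injection might then look like the following sketch; the header name X-Original-To is an assumption about what the remote MTA writes:

```shell
# Extract the original envelope recipient recorded by the remote
# MTA in an X-Original-To header (assumed name); stop at the first
# blank line so the message body is never searched.
original_recipient() {
  sed -n -e '/^$/q' -e 's/^X-Original-To:[[:space:]]*//p' "$1"
}

# Re-injection could then use something like:
#   sendmail -i "$(original_recipient msg)" < msg
```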


One of the oldest transports used for mail is UUCP. It effectively copies a file to a different system and pipes it into a program there. UUCP can be used to transport mail in various ways.


Each mail is copied verbatim to the client. The sender address is saved in the form of a "From" pseudo-header in the mail itself, and the recipient is supplied on the command line. So this variant has access to both the sender and the recipient address.

Batched SMTP

Batched SMTP transfers old-style SMTP transactions over UUCP. The MTA (Exim supports this) or a special tool (in the bsmtpd package) writes these files, and after a given time or size they are dispatched via UUCP to the remote system.

The bsmtpd package was removed from Debian some years ago.

Dovecot synchronization

Dovecot supports bi-directional synchronization of mailbox contents. It holds all data on both sides. The internal log is used to merge changes made on both sides, so it should not lose any data. This synchronization can be used to work with the data on both sides (via Dovecot, of course) or to create backups. It needs shell access to the user owning the data on both sides.


There is no one-size-fits-all solution to this problem. If you administer the remote system, you can implement any of these solutions; if it is managed by someone else, good luck.

Almost none of these solutions support current SMTP features. The one I really miss is DSN, i.e. configurable delivery notices. POP3 and IMAP handle already-delivered mail and have no need for it, and the UUCP variants don't handle it because they are much older anyway. Only SMTP itself supports all of its features.

I still use rmail over UUCP for my mail at home, and it works flawlessly. UUCP itself runs over SSH; it can compress data on the fly and authenticate using private keys.

DDoS against DNS-Servers

by Bastian Blank — last modified Jun 17, 2012 08:55 PM

Today I found out that the DNS server at home was being used for a DDoS against another DNS server. The attackers send a small query ("… ANY" in this case) with a faked sender IP, and the DNS server answers with a much larger packet to the (faked) sender.

Because the domain is not local, the nameserver should only have produced an error. But the BIND config allowed everyone to get answers for cached entries:

allow-query-cache {; ::/0; };

With proper restriction, the server only returns errors now.

Restricting answers for cached entries does not help if the queried nameserver is authoritative for the domain. In that case it can help to drop queries for the ANY type. In ferm this looks like this:

proto udp dport domain mod string from 40 algo bm hex-string '|0000ff0001|' DROP;

The hex string matches the tail of the DNS question section: the terminating empty label (00), QTYPE 255 = ANY (00ff) and QCLASS 1 = IN (0001).

Installing Wheezy on Thinkpad X130e

by Bastian Blank — last modified May 27, 2012 08:55 PM

The Thinkpad X130e is a rather new notebook. It uses UEFI and features an Atheros network card and a new Broadcom wireless card.

I installed a Lenovo Thinkpad X130e with Wheezy.


The X130e does not want to boot via BIOS emulation. The Wheezy D-I installs only the BIOS Grub and does not offer using EFI at all. So the initial installation was BIOS-only, and EFI refused to boot it without any error message; BIOS emulation simply does not work on the X130e.

Partman does not align partitions in GPT. For some reason the partitions created by partman are not aligned in the partition table at all. This is bad for performance, and it also makes it impossible to create an EFI partition that Grub recognizes.

EFI Grub needs the EFI system partition. This partition must be properly aligned, formatted with FAT16 and mounted on /boot/efi. Grub is then installed onto it with the following commands:

apt-get install grub-efi-amd64
grub-install --bootloader-id=GRUB --removable


Grub fails to load the EFI console modules on its own. Grub needs extra modules to allow the kernel to write to the console; otherwise the kernel just hangs without output. They can be loaded via /boot/grub/custom.cfg:

insmod efi_gop
insmod efi_uga


The Radeon card needs firmware; without it, it just produces random output. To get usable output, Radeon modesetting must first be disabled on the kernel command line:


Now the firmware can be installed:

apt-get install firmware-linux

Wireless card

The Broadcom wireless card needs a firmware:

apt-get install firmware-brcm80211

Booting Debian via AoE (ATA over Ethernet) and PXE

by Bastian Blank — last modified Apr 07, 2012 10:35 AM

AoE is one of the protocols supported by Linux to access storage via network. It uses plain Ethernet for communication and includes a discovery mechanism to find all available targets. I use it to provide disk space to VMs running on different machines.

The next step is to boot via AoE. Using AoE in running systems is no real problem, and with some help it is even possible to boot disk-less machines via AoE: the PXE implementation iPXE provides AoE support to boot from. I will describe the necessary parts.

Setup vblade

The AoE target used is vblade.

vblade needs access to raw sockets. As I prefer not to run anything as root unless necessary, I use filesystem capabilities to give it access to the network:

setcap cap_net_raw+ep /usr/sbin/vblade

vblade takes the MAC address of the initiator, the shelf and slot number, the network device and the block device:

/usr/sbin/vblade -m $mac 0 0 eth0 $dev

Setup a tftp server and iPXE

apt-get install atftpd ipxe
ln -s /usr/lib/ipxe/undionly.kpxe /var/lib/tftpboot

Setup ISC dhcpd

The DHCP server needs to be configured to hand out two distinct parameter sets. The first is used to chain-load iPXE from the normal PXE stack; the second is for iPXE itself and sets the root path to the AoE device. They are selected by the iPXE marker in the request:

if exists user-class and option user-class = "iPXE" {
  filename "";
  option root-path "aoe:e0.0";
} else {
  filename "undionly.kpxe";
}
Support AoE via initramfs-tools

The initramfs needs to initialize AoE support. It must bring up the network device used for communication with the AoE server, wait until it is up, then load the aoe module and run aoe-discover. After that all devices should be available.

The root device can now be used like any other normal device. After the AoE device is initialized, it can be found via UUID and all the other ways. So no further modifications are necessary over the usage of local disks. The initramfs finds the device as usual and boots from it.

The initramfs support is still a prototype, but it seems to work. For initramfs-tools it needs a hook to include all the necessary pieces in the initramfs, and a boot script to actually do the work. Both are shown here.



#!/bin/sh
# initramfs-tools hook: include the aoe module and the
# aoe-discover binary in the initramfs
case $1 in
prereqs)
  echo "udev"
  exit 0
  ;;
esac

. /usr/share/initramfs-tools/hook-functions

copy_exec /sbin/aoe-discover

manual_add_modules aoe



#!/bin/sh
# initramfs-tools boot script: bring up the network, load the aoe
# module and discover the available AoE targets
case $1 in
prereqs)
  echo "udev"
  exit 0
  ;;
esac

ifconfig eth0 up
sleep 10
modprobe aoe
/sbin/aoe-discover
udevadm settle --timeout=30


Not all parts of this work 100%, and some parts do not work on all hardware.

  • My old notebook is not able to run Linux with this setup: Grub loads the kernel via AoE, and nothing happens after that.
  • The network may need more time. Especially in large environments with spanning tree enabled, it may take half a minute until any packets flow.

Some of the problems can be addressed later. Some can't.

Magic Lantern on EOS 500D

by Bastian Blank — last modified Dec 20, 2011 03:45 PM

Magic Lantern is a firmware extension for Canon DSLR cameras. It provides many new features for video and liveview mode and also some for photo mode.

Magic Lantern is a firmware extension for several video-capable Canon DSLR cameras. The (no longer) current release 11.11.11 of the unified branch works on the 500D and most newer models, except the 5D Mk2.

The installation on my 500D was pretty easy. The camera needs the correct firmware installed (1.1.1), otherwise Magic Lantern will refuse to run. One modification to the camera is needed: a debugging setting that makes it load the system from an SD card. The software is then loaded from a specially prepared SD card every time the camera boots.

Magic Lantern includes a lot of nice extensions for video and liveview mode. My favorites are histogram overlay, edge detection and exposure display. The histogram overlay shows the histogram in different modes (RGB, luma) of the currently displayed view. Edge detection can be used to find the focus via the liveview output. However a lot more features are available.

Linux 3.0 and Xen

by Bastian Blank — last modified Jun 23, 2011 04:35 PM

Linux 3.0 includes the traditional device backends and supports full Dom0-operation.

It took a long time to get all the parts of the Xen support into the Linux kernel. While rudimentary Dom0 support has been available since 2.6.38, support for the device backends was missing. It was possible to replace the backends with a userspace implementation included in qemu, but I never tested that.

With Linux 3.0, both the traditional block backend and the network backend are available. They are already enabled in the current 3.0-rc3/-rc4 packages in experimental, so the packages can be used as Dom0 and run guests. Right now the backend modules are not loaded automatically, so this still needs some work: the init scripts don't load them, because the names were in flux the last time I worked on this, and the kernel itself does not expose enough information to load them via udev. I think using udev to load the modules is the way to go.

This step marks the end of a five-year journey. Around 2.6.16 the Xen people started to stay really close to Linux upstream. With the 2.6.18 release this stopped, and that tree was pushed in different states into Debian Etch and RHEL 5. After that, Xen upstream ceased work on newer versions completely; only changes to the now-old 2.6.18 tree were made. SuSE started a forward port of the old code base to newer kernel versions, and Debian Lenny released with such a patched 2.6.26. Around that time minimal support for DomU on i386 using paravirt showed up, so Lenny had two different kernels with Xen support. Since 2.6.28 this support has been mature and has worked rather flawlessly. Some time after that a new port of the Dom0 support, now using paravirt, showed up; this tree, based on 2.6.32, was released with Debian Squeeze. After several more rounds of refining and polishing it is now mostly merged into the core kernel.

I don't know what the future brings. Linux now supports two virtualization systems. The first is KVM, which turns the kernel into a hypervisor and runs systems with the help of hardware virtualization. The other is Xen, which runs under a standalone hypervisor and supports both para- and hardware virtualization. Both work. KVM is easier to use and even runs on current System z hardware; it can be used by any user, with hopefully enough of a security margin between them. Xen is more at home on servers, where you have no users at all. Both have advantages and disadvantages, so everyone has to decide what they need; there is no "one size fits all".

New software: python-dvdvideo

by Bastian Blank — last modified Aug 29, 2010 05:20 PM

python-dvdvideo is a library to read DVD-Video images. It includes a tool to dump encrypted DVD-Video images. It is implemented in Python 3.

After a long time I decided to write again, starting with software I wrote for my own use that could be useful for other people. I'll start with python-dvdvideo, a DVD-Video reader written in Python 3, and the reference tool dvd-video-backup-image, a generic DVD-Video dumper. Let's see if this blog will see more postings in the future.


I started writing this software because libdvdread was often unable to decipher my newly purchased video DVDs. libdvdread expects a rather valid structure of the filesystem and other metadata on the disc and forcefully bails out on several error conditions, so I often ended up patching libdvdread to make dvdbackup able to read the new discs.

Usually there are two ways to create backups of such DVDs: as files or as complete images. Dumping them as files has large problems if there are certain defects in the filesystem, like space that is referenced in several titlesets; I have a disc that produces 25 GiB of output during such a dump. So the less problematic way is to dump the complete image, and that is the way I used in the tool built on top of this library.


The software is divided into several parts: first a small UDF reader; on top of it a DVD video reader, which makes use of a libdvdcss wrapper; and finally a small tool that uses all of this to dump whole images. I will describe these parts here.

UDF reader

The UDF reader implements a minimal set of features: only the things I found needed and actually used on the available DVDs. It reads the low-level UDF structures used as the base of all video DVDs.

DVD video reader

The DVD video reader uses the UDF reader to get the necessary information from the disc. Again, this reader is quite small. It trusts the UDF data only for the starts of the titlesets and expects everything else to be listed in the info files. This allows it to read even discs with broken filesystems, which are really common.

libdvdcss wrapper

The libdvdcss wrapper is implemented using ctypes. The ctypes library gives easy access to functions defined in a shared object: it allows calling the functions and maps arguments and return values to Python datatypes. This wrapper lets me read encrypted DVDs as well.

Image dumper

This tool dumps an encrypted video DVD into a file. It tries to detect the encrypted (video/VOB files) and unencrypted (info files, otherwise-used space) parts of the disc. This way it is able to dump anything, as long as it can read the filesystem and the info files. However, some discs contain overlapping areas, which can't be untangled that easily.

The tool includes a small conflict resolver that handles overlapping parts. It uses a set of rules to allow some types to coexist and some to be modified. One of the rules relabels blocks referenced both from an info file and from a title VOB as always unencrypted. With this resolver most of the problems can be handled, and we get a playable result.

License and distribution

This package is licensed GPL 3 or later. For now it is distributed via Alioth.


This tool has allowed me to dump every video DVD I got my hands on recently. It lets me watch the videos on my notebook, which has no optical disc reader of its own. Maybe someone else can use such a tool as well.

USA vacation: day 5 (Mt. Tabor)

by Bastian Blank — last modified Sep 29, 2009 05:20 PM

Today I first got hold of a car and then looked for the new accommodation for the next few days.

Somehow I didn't feel like doing much after that, so I walked up the nearest volcanic crater, Mount Tabor. It is a wooded hill in the middle of the city and apparently also a popular destination for cyclists.


Harvey W. Scott

In a few places you can see quite far.


Downtown seen from Mount Tabor


Mount Hood

USA vacation: day 1 (conference)

by Bastian Blank — last modified Sep 29, 2009 04:30 PM

My completely misaligned internal clock tore me out of sleep at 04:00. My colleague in the room felt exactly the same, so some collective dozing was in order. Sunrise treated us to the following picture.


Sunrise in Portland with Mount Hood

Apart from the too-loud air conditioning, it was a perfectly normal conference. There were BoFs and talks on all kinds of topics. The first BoF was one of the most important: it was about Linux packaging in the distributions and how to connect the people involved a little better.

In the evening there was a reception together with the LinuxCon attendees in a seafood restaurant. The food was great and so were the conversations. I even got to explain to someone what the Oktoberfest is all about.

USA vacation: first impressions

by Bastian Blank — last modified Sep 24, 2009 06:25 PM

I got the opportunity to attend the Linux Plumbers Conf in Portland, Oregon, United States. Naturally I'm not letting the chance slip to take some vacation in surroundings I so far knew only from stories.

My flight went via Atlanta, and a few things already struck me there.

  • Everything is big. The walk from the gate to passport control alone is a few hundred meters, and reaching the gate of my connecting flight took me another ten minutes, including a ride on an automated train.
  • The Americans seem to have an aversion to stairs. In the whole Atlanta airport I encountered just a single staircase; the others were at emergency exits or staff-only. In Portland there was not a single one, only elevators and escalators.

Um 2200 Uhr (0700 Uhr nach der inneren Uhr) war ich dann im Hotel und konnte dann endlich etwas schlafen.

Almighty root

by Bastian Blank — last modified Apr 04, 2009 05:45 PM

I was asked to take a look at a machine where aptitude did not even want to do the upgrade to etch. A first inspection showed some weird repositories in the sources.list file and many daemons no one should ever run on that machine. I was then able to do the upgrade with apt-get.

After some time I was asked about modifications in /etc/exports. It basically contained the following:[1]

/     *(rw,async,no_root_squash)
/home *(rw,async,no_root_squash,nohide)
/usr  *(rw,async,no_root_squash,nohide)
/var  *(rw,async,no_root_squash,nohide)
[1]For those who don't speak NFS: this exports the specified filesystems (/, /home, /usr and /var) to everyone, and accepts whatever the client system says.
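For contrast, a minimal sketch of what a more restrictive export could look like; the subnet below is a made-up example:

```
# /etc/exports: export /home to a single subnet only, and map remote
# root to an unprivileged user (root_squash) instead of trusting it
/home,sync,root_squash,subtree_check)
```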

The machine was taken out of service immediately. We will never be able to answer the question whether this was silliness or intent.

The road, a choice of colors

by Bastian Blank — last modified Apr 02, 2009 12:00 PM

On the advice of the best optician, I am now the happy owner of a pair of sunglasses. Today was also a nice, warm and sunny day, exactly the right weather to wake the motorcycle from hibernation. After some coaxing the motorcycle came back to life, and off I went with proper protective gear and a darkened view.

After a while I noticed a car with a strange paint job; it looked like an effect finish of purple and turquoise, similar to the value print on the large Euro notes. As more and more cars showed this effect I grew unsure, and after the next bend the road itself started to shine in all colors. With the visor open everything looked a bit darker, but in the correct colors. As soon as it was closed again, the surroundings and even the sky appeared in all the colors of the spectrum.

Create version in GenericSetup metadata.xml from the package version

by Bastian Blank — last modified Mar 08, 2009 11:25 PM

Today I asked whether it is possible to automatically update the version in metadata.xml from the (possibly mangled) package version. Nothing turned up, so I wrote an extension of setuptools that does this.

import os.path
from distutils import log
from setuptools.command.egg_info import egg_info as _egg_info

class egg_info(_egg_info):
    def run(self):
        _egg_info.run(self)

        version = self.distribution.metadata.version

        for package in self.distribution.packages:
            path = os.path.join(*(package.split('.') + ['profiles', 'default']))
            if os.path.isdir(path):
                metadata_out = os.path.join(path, 'metadata.xml')
                # the input file name was garbled in the original post;
                # "metadata.xml.in" is assumed from the surrounding text
                metadata_in = os.path.join(path, 'metadata.xml.in')
                if os.path.exists(metadata_in):
                    log.info('writing %s', metadata_out)
                    d = open(metadata_in, 'r').read().replace('@VERSION@', version)
                    open(metadata_out, 'w').write(d)

The command is hooked into setup() via its cmdclass argument:

    setup(..., cmdclass={'egg_info': egg_info})

The version appears as the @VERSION@ placeholder in the input file and is replaced during a normal develop call as done by buildout.
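For illustration, a minimal input template that would match the code above; the element layout follows the GenericSetup profile metadata format, and the exact content is an assumption:

```
<?xml version="1.0"?>
<metadata>
  <version>@VERSION@</version>
</metadata>
```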


UPDATE: This is evil. metadata.xml should list the config/profile version, not the code version.

Don't look a gift horse in the mouth

by Bastian Blank — last modified Nov 15, 2008 11:20 PM

Today I will generously disregard this proverb and report on a gift that is, unfortunately, fairly useless.

It so happened that Pollin Electronic held its semi-annual open house again this weekend. As an extra treat, every visitor was to receive a "free" mobile phone. This turned out to be an LG KP100, at first glance a usable device without a camera or other frills, from Vodafone, bundled with a CallYa prepaid card that was also immediately registered to the visitor.

After unpacking it and giving it some charge, it was time for a test. So I picked the first SIM card at hand, naturally not one from Vodafone, and inserted it. With this card, however, the phone offered nothing beyond an "I don't want to talk to you" and a choice between emergency call and unlock; the dreaded SIM lock had struck.

That kills the plan of using this device as a second phone for environments where cameras are unwelcome, or for when the battery happens to run flat, and I wonder what I am supposed to do with the phone and the card forcibly registered to me; as they are, they have no value for me and are a case for the recycler.

According to the label, the card carries a credit of 10 EUR and was supposed to sell for 19.95 EUR. Since the phone is locked, I have to assume it is worth more than this price, as the subsidies always have to serve as the justification.

Current state: the phone is locked; I am annoyed. Someone who is annoyed does not readily become a customer unless strictly necessary, and since the whole thing was given away, there is no "pressure" in the form of a price tag to actually use it now. Hypothetical state: the phone is not locked, I am not annoyed. Someone who finds one part of the gift, the phone itself, useful is more likely to consider whether the other part, the CallYa card, might be useful as well.

For every customer not won, this campaign costs Vodafone the purchase price of the phone and perhaps whatever costs the card's credit generates. Only by gaining new customers can a profit be made from it.

I have never understood the point of this SIM lock. What is being protected here, and from whom? The "what" is presumably the subsidies that flow into the phones. Apparently these have to be protected from the customer, who has to buy a new phone when going elsewhere instead of simply taking the old one along.

Ubuntu, Ubuntu

by Bastian Blank — last modified Oct 10, 2008 01:20 PM

I was forced to try Ubuntu Hardy in the new university pool. The setup consists of one Linux server dedicated to the pool, one Windows AD for Kerberos authentication, one Windows file server with the user data, and 20 clients. The clients are new HP machines with a Radeon Xpress 200 card.

The first problem was nasty: the X server turned the display black and then crashed, leaving an unusable console behind. Even a blacklist of the radeon module did not work; somehow the Xorg radeon driver loads the module on its own, ignoring the modprobe blacklist. Only a hard blacklist using install radeon /bin/false in the modprobe config was able to prevent this. Let's hope the endeavors to strip most privileges from the X server go well.
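The hard blacklist described above is a small modprobe config fragment; the file name below is an assumption, any file under /etc/modprobe.d/ will do:

```
# /etc/modprobe.d/radeon-blacklist.conf (file name is an assumption)
# "blacklist" only suppresses alias-based autoloading; the "install"
# override also defeats explicit load requests, e.g. from the X driver.
blacklist radeon
install radeon /bin/false
```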

For the homes, two setups were tested:

  • Homes on the Windows server via cifs, mounted via pam_mount using the NTLM password.
  • Homes on the Linux server via nfs version 3.
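The cifs variant can be sketched as a pam_mount volume entry; the server and share names here are assumptions, and %(USER) is pam_mount's user substitution:

```
<!-- in /etc/security/pam_mount.conf.xml -->
<volume user="*" fstype="cifs" server="fileserver" path="homes/%(USER)"
        mountpoint="/home/%(USER)" />
```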

The first one just produced an error that some Gnome component was unable to lock ~/.ICEauthority. This could be worked around in the Xsession. Otherwise it was usable but too slow, which may be a problem with the Windows server.

The latter, plus a reinstallation with fglrx, first produced unresponsive windows and then a completely unusable Gnome desktop. I would not entirely rule out a problem with fglrx or even a broken installation, but the number of problems exceeded the threshold.

Now we will try Lenny and see if this happens there as well. Homes via nfs in particular are not so uncommon that we can leave them broken in a release.

Xen update

by Bastian Blank — last modified Sep 28, 2008 09:10 PM

I found a machine that is not so ancient and did some tests with Xen on it.


First came some tests with different Linux kernels and hypervisors (3.2 and 3.3). I have to say the overall compatibility got better. As an unprivileged domain (DomU), only one of the kernels failed: the one from Etch (2.6.18-6-xen-686) on the x86_64 hypervisor, because of missing setup code.

For operation as the privileged domain (Dom0) it does not look so good. The 2.6.18 from Xen 3.1 mostly works, the Lenny-targeted 2.6.26 is a little picky about the hardware and seems to work better in the 64-bit variant, and the 2.6.18 from 3.3 is old but rock-stable.

Stub domain

Xen 3.3 adds the possibility of moving the qemu that provides the emulated hardware for fully virtualized domains into its own (paravirtualized) domain. The documentation is not really complete and the whole thing is rather fragile. Error messages from the emulation domain are swallowed, and depending on the config it also likes to crash.

It wants a new service, a filesystem backend, which is implemented as a root process in the dom0, even though it is not needed for operation. This service is not configurable, exports anything in /exports, allows writing, and the code is of similar quality to qemu.