The whole browser window but a border

Filling the whole browser window except for a border is surprisingly difficult. Here are three solutions.

    <link rel="stylesheet" type="text/css" href="test.css">
    <div class="outer outer1">
      <div class="inner">
      </div>
    </div>
Instead of outer1, there are also outer2 and outer3. The solutions use different approaches: the calc() function to do math in CSS, absolute positioning with top, left, bottom and right, and the new box-sizing property. Here is the stylesheet:

html, body {
    margin: 0px;
    padding: 0px;
}

.outer {
    background-color: red;
    position: absolute;
    padding: 8px;
}

/* Solution 1: calc() subtracts the padding from the full size. */
.outer1 {
    width: calc(100% - 16px);
    height: calc(100% - 16px);
}

/* Solution 2: stretch the box to all four edges. */
.outer2 {
    top: 0px;
    left: 0px;
    right: 0px;
    bottom: 0px;
}

/* Solution 3: border-box sizing includes the padding in the size. */
.outer3 {
    box-sizing: border-box;
    height: 100%;
    width: 100%;
}

.inner {
    box-sizing: border-box;
    background-color: green;
    width: 100%;
    height: 100%;
}

Docker trouble

Docker containers are great, and the Dockerfile build process is quite good, but there are pitfalls for newbies who come to Docker with a virtualization mindset. Docker containers are not light-weight VMs, because the abstraction happens at a much higher level. Docker is platform-as-a-service, not system-as-a-service. Here is a short list of issues I encountered migrating a couple of services from bare metal to Docker containers:

  • Lack of kernel independence, for example with SELinux, which is controlled by the host.
  • Lack of independence from Docker internals, which leak into the container's mount table.
  • Docker containers have no login session, so there is no TZ, TERM or LC_ALL setting, and changing the system settings in /etc has no effect – this will surprise some users.
  • The UIDs of the container and the host system are shared (this will probably be fixed with UID/GID mapping soon), encouraging users to run all containers as root just to make images shareable. A security failure in the container isolation then leads to privilege escalation.
  • The hostname is randomly generated on each container start (breaking, for example, carbon-daemon metric logging, which includes the hostname), requiring application patching or fixed, imaginary hostnames for reproducible results (see the example after this list).
  • Lack of resource isolation, for example with regard to I/O performance. A container utilizing I/O resources heavily can stall a filesystem sync operation in another container.
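
For the hostname issue, a minimal workaround is to pin the hostname when the container starts, using the --hostname flag of docker run (carbon01 and my-carbon-image are just illustrative names):

# Pin the container's hostname so metrics are logged under a stable name.
docker run -d --hostname carbon01 my-carbon-image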

Some generic issues also arise:

  • Docker images carry a tag, which is an arbitrary label (with the special tag “latest” being the silent default). Many use version numbers as labels, but this is an illusion, as tags are not formally inter-related, so Docker does not know if there is a newer version of an image available. This raises the question of when images should be rebuilt, and how to get notified of base image updates.
  • There is a private Docker registry container to replace Docker Hub, but it does not include the automatic building of images from Dockerfiles and assets in git repositories.
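
At least running that private registry is simple enough (a sketch; myimage is a placeholder):

# Start the registry container and push an image to it.
docker run -d --name registry -p 5000:5000 registry
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage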

Fun with aptly

aptly (by Andrey Smirnov) seems to be a Swiss army knife for Debian/Ubuntu repositories. You can create (partial) mirrors, snapshot them, merge snapshots and push them to an apt-get’able repository. You can also upload packages to a local repository and snapshot and/or publish that, too. Andrey is rocking the Debian world with this, thanks a lot!

To illustrate the work-flows that this tool enables, here is an example that extracts firebird 2.5.1 and its dependencies from Ubuntu precise (12.04) and injects it into a published repository for trusty (14.04) installations (which have only firebird 2.5.2).

# aptly mirror create -filter=firebird2.5-superclassic -filter-with-deps -architectures=amd64 ubuntu-precise-firebird http://archive.ubuntu.com/ubuntu/ precise universe
# aptly mirror update ubuntu-precise-firebird
# aptly snapshot create firebird from mirror ubuntu-precise-firebird

# aptly mirror create -filter=libicu48 -architectures=amd64 ubuntu-precise-libicu48 http://archive.ubuntu.com/ubuntu/ precise main
# aptly mirror update ubuntu-precise-libicu48
# aptly snapshot create libicu48 from mirror ubuntu-precise-libicu48

# aptly snapshot merge firebird2.5.1 firebird libicu48
# aptly publish snapshot -distribution trusty -component firebird firebird2.5.1

Now you can use this repository (assuming the files are available on localhost:80) with

# echo "deb http://localhost:80/ trusty firebird" >> /etc/apt/sources.list
# apt-get install firebird2.5-superclassic= firebird2.5-common= firebird2.5-server-common= firebird2.5-classic-common= firebird2.5-common-doc= libfbembed2.5= libib-util= libfbclient2=

I am not sure why apt needs so much hand-holding; maybe there is an easier way.
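
At least finding the version strings for such pins is straightforward; apt-cache lists the candidates from every configured repository:

# Show all available versions of a package across repositories.
apt-cache madison firebird2.5-superclassic

# Or, including pin priorities:
apt-cache policy firebird2.5-superclassic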

All of this is super-easy to figure out, thanks to the excellent online help and superb diagnostic output. There is also a repository that adds bash tab completion to the commands for easier typing, as there are a lot of options and you might have many mirrors, snapshots and repositories – I’d suggest that Andrey includes it in the official distribution.

If you have anything to do with maintaining a larger set of Debian/Ubuntu installations, check it out!

By the way, here is a condensed Dockerfile that may be useful (aptly.conf is just the default config file in my case):

FROM ubuntu
ENV DEBIAN_FRONTEND noninteractive
# Based on the official aptly installation instructions.
RUN echo "deb http://repo.aptly.info/ squeeze main" > /etc/apt/sources.list.d/aptly.list; \
apt-key adv --keyserver keys.gnupg.net --recv-keys 2A194991; \
apt-get update; \
apt-get install aptly -y
COPY aptly.conf /etc/aptly.conf
# Will contain db, pool and public directories.
VOLUME ["/aptly"]
# Install a basic SSH server (and wget, which is needed below).
RUN apt-get update && apt-get install -y openssh-server wget
RUN mkdir -p /var/run/sshd
# There is no sane way to insert an authorized_keys file by volumes
# due to permission mismatch (ssh requires 0600 root, which docker
# can't read), so for now just add the file in the build.
COPY authorized_keys /root/.ssh/authorized_keys
RUN chmod 0700 /root/.ssh
RUN chmod 0600 /root/.ssh/authorized_keys
# Tab completion is useful.
RUN wget -O /etc/bash_completion.d/aptly
RUN apt-get update && apt-get install -y bash-completion
RUN echo ". /etc/bash_completion" >> /root/.bashrc
# Default signing key for aptly (used automatically).
COPY gnupg-private.txt /root/gnupg-private.txt
RUN gpg --import /root/gnupg-private.txt
# Standard SSH port
CMD ["/usr/sbin/sshd", "-D"]

Query Semantic MediaWiki with Angular through CORS

I have a private MediaWiki with the Semantic MediaWiki extensions, to keep some personal data. Wouldn’t it be nice to query that data from some other server, or from a web app? Semantic MediaWiki has a nice API that allows us to get data in JSON format. But we need to defeat the Same-Origin Policy that protects our servers from evil code. JSONP is a well-known method that works, but only for anonymous requests on public wikis. Here is another approach that works with closed wikis, too.

MediaWiki configuration

In LocalSettings.php, you need to allow CORS. You can use the *-wildcard or a list of allowed domains to query your MediaWiki instance:

$wgCrossSiteAJAXdomains = array( '*' );
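
Or, more restrictively, allow only the origins you expect (matching the local test server used below):

$wgCrossSiteAJAXdomains = array( 'http://localhost:8000' );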

Also, all API requests must include an origin parameter that repeats the domain from which the request came. This is very annoying, but the MediaWiki developers were concerned about implementing caching properly and efficiently, and this is the solution they came up with.

I am running the example code in a local server with

python -m SimpleHTTPServer 8000

so the origin parameter should be http://localhost:8000 and I don’t need to disable strict origin policy checking for file URIs in my browser.
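
For a quick smoke test outside the browser, curl can simulate such a request (wiki.example.com is a placeholder for your wiki):

# The response should carry an Access-Control-Allow-Origin header.
curl -i -H 'Origin: http://localhost:8000' \
  'https://wiki.example.com/api.php?action=query&meta=siteinfo&format=json&origin=http://localhost:8000'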

Angular configuration

Here you need to configure the $http service provider to allow cross-domain requests. We also configure it to send credentials along with every request globally:

app.config(function($httpProvider) {
    $httpProvider.defaults.useXDomain = true;
    $httpProvider.defaults.withCredentials = true;
});

Logging in

To log in, you need to send a POST request to the MediaWiki API, and follow it up with another POST request to confirm (older versions only require one step). I put this in a controller:

/* wiki.example.com is a placeholder for your wiki's api.php endpoint. */
var apiUrl = 'https://wiki.example.com/api.php';

$http.post(apiUrl, '',
           { params: { origin: 'http://localhost:8000',
                       format: 'json',
                       action: 'login',
                       lgname: 'marcus',
                       lgpassword: 'secretpassword' },
             /* Prevent CORS preflight.  */
             headers: { "Content-Type": "text/plain" }
           }).success(function(data) {
             if (data.login.result == 'NeedToken') {
               $http.post(apiUrl, '',
                          { params: { origin: 'http://localhost:8000',
                                      format: 'json',
                                      action: 'login',
                                      lgname: 'marcus',
                                      lgpassword: 'secretpassword',
                                      lgtoken: data.login.token },
                            /* Prevent CORS preflight.  */
                            headers: { "Content-Type": "text/plain" }
                          }).success(function(data) {
                            $http.get(apiUrl,
                                      { params: { origin: 'http://localhost:8000',
                                                  format: 'json',
                                                  action: 'ask',
                                                  query: '[[Category:Contact]]' }
                                      }).success(function(data) {
                                        pim.contacts = data.query.results;
                                      });
                          });
             }
           });

Not the prettiest of code. There is a lot of error handling missing, and so on. But it should get you going. The login process will store session cookies in the http service provider, which are sent in the following API requests. Of course, you can also query wiki pages with the parse action, etc., as normal.

The special Content-Type header prevents the pre-flight OPTIONS requests that are specified by CORS and not supported by MediaWiki. If you see unhandled OPTIONS requests in your network log, take a closer look at the Content-Type header. I don’t know yet if that is a concern for downloading images from the MediaWiki server. If you try it, leave a comment!


Cython Trouble

Here are a couple of things I experienced using Cython to wrap my C++ library grakopp:

Assignment and Coercion

  • I couldn’t find a nice way to wrap boost::variant. Although the direct approach works, an assignment to the variant requires an unsafe cast, which also adds the overhead of a copy. To work around this, I used accessor functions (which requires changing the C++ implementation).
  • operator= is not supported, so custom assignment functions cannot be declared.
  • There is no other way to add custom coercion rules. The support for STL container coercion is hardcoded in Cython/Compiler/. This also makes the builtin coercions less useful.
  • String coercion seems to be unhappy quite often, so you have to cast even constant strings to string or char*.

Imports

  • Relative cimports are not supported.
  • Unintuitively, a corresponding pxd file is automatically included, which cannot be suppressed. So renaming its imports with “as” in a cimport is not possible.
  • There is no sensible place to put shared pxd files. [Wiki] [Forum]

References and Initializers

  • References cannot be initialized with an assignment, so pointers or slow copies have to be used everywhere.
  • Only the default constructor can be used to instantiate stack or instance variables.

Overloading

  • Cython could not resolve function overloading which differed only in the constness of the return type.

Templates

  • It’s not possible to write meta-extension classes for template classes directly – only extension types for specific instantiations. [Forum]
  • It’s not possible to use non-type template arguments (as a workaround, you can use ctypedef int THREE "3"). [Forum]
  • It’s also not possible to instantiate function templates, but there is a similar workaround. [Forum]

Manual symbol version override

I had to deal with a proprietary software library that wouldn’t run on CentOS 6.5, because the library was compiled against a newer glibc (>= 2.14) while CentOS was running on glibc 2.12. Actually, there was only one symbol versioned later than 2.11, which was memcpy@2.14. It turns out that this is due to a well-known optimization: glibc 2.14 introduced a faster memcpy that may copy backwards, so a new symbol version was needed to shield programs that (incorrectly) rely on overlapping copies.

Normally, one would install an appropriate version of the library and set LD_LIBRARY_PATH accordingly, but for libc that ain’t so easy, because you also need a matching runtime linker, and the kernel will always use /lib64/ld-linux-x86-64.so.2 or whatever is found in the INTERP program header of the ELF executable. You can run the matching linker manually, but this only works for the invoked process, not its children. It’s a huge PITA, and short of a chroot filesystem or other virtualization I don’t know a good way to replace the system C library (if you know a way, leave a comment!).
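
For completeness, invoking the matching linker manually looks like this (a sketch, assuming a newer glibc tree was unpacked under /opt/glibc-2.14):

# Run one program against the alternative glibc and its runtime linker.
/opt/glibc-2.14/lib/ld-linux-x86-64.so.2 --library-path /opt/glibc-2.14/lib ./program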

Anyway, I decided to patch the binary. First, I checked the older version of the library, and saw that it required memcpy@2.2.5. So here we go (the proprietary library is called library.so here):

$ readelf -V library.so

Version symbols section '.gnu.version' contains 3627 entries:
 Addr: 000000000003688e  Offset: 0x03688e  Link: 2 (.dynsym)
  000:   0 (*local*)       0 (*local*)       1 (*global*)      1 (*global*)
  d7c:   9 (GLIBC_2.14)    1 (*global*)      1 (*global*)      1 (*global*)

Version needs section '.gnu.version_r' contains 4 entries:
 Addr: 0x00000000000384e8  Offset: 0x0384e8  Link: 3 (.dynstr)
  000000: Version: 1  File:  Cnt: 1
  0x0010:   Name: GCC_3.0  Flags: none  Version: 8
  0x0020: Version: 1  File:  Cnt: 3
  0x0030:   Name: CXXABI_1.3.1  Flags: none  Version: 7
  0x0040:   Name: CXXABI_1.3  Flags: none  Version: 6
  0x0050:   Name: GLIBCXX_3.4  Flags: none  Version: 4
  0x0060: Version: 1  File:  Cnt: 3
  0x0070:   Name: GLIBC_2.14  Flags: none  Version: 9
  0x0080:   Name: GLIBC_2.11  Flags: none  Version: 5
  0x0090:   Name: GLIBC_2.2.5  Flags: none  Version: 3
  0x00a0: Version: 1  File:  Cnt: 1
  0x00b0:   Name: GLIBC_2.2.5  Flags: none  Version: 2

Let’s check if reference 0xd7c (== 3452) really is memcpy:

$ readelf --dyn-syms library.so | grep $((0xd7c))
  3452: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND memcpy@GLIBC_2.14 (9)

We look up the specification for the layout of these .gnu.version sections. The plan is to copy the entry for GLIBC_2.2.5 into the entry for GLIBC_2.14, so that all references to version “9” go to glibc 2.2.5 instead of 2.14.

$ od -j $((0x384e8+0x60)) -N $((0x40)) -Ax -t x1 library.so
038548 01 00 03 00 ed b9 01 00 10 00 00 00 40 00 00 00
038558 94 91 96 06 00 00 09 00 43 ba 01 00 10 00 00 00
038568 91 91 96 06 00 00 05 00 4e ba 01 00 10 00 00 00
038578 75 1a 69 09 00 00 03 00 59 ba 01 00 00 00 00 00

We can see the verneed (“version needed”) entry for libc here, together with three vernaux (“version needed auxiliary”) entries. Each vernaux entry consists of a 4-byte hash value of the version name (for faster comparison than strcmp, here 0x06969194 for GLIBC_2.14), 4 bytes of flags and “other” information (whose low byte holds the version number referenced by the .gnu.version section), a 4-byte offset into the string table with the human-readable version string, and a 4-byte offset to the next vernaux entry (0x10 here, 0 for the last one).

We want to keep the indirectly referenced version number (“9”), so there are no duplicate entries, but copy the hash and string pointer values. Of course, the offset to the next entry stays, too. After editing with a hex editor, we have:

$ od -j $((0x384e8+0x60)) -N $((0x40)) -Ax -t x1 library.so
038548 01 00 03 00 ed b9 01 00 10 00 00 00 40 00 00 00
038558 75 1a 69 09 00 00 09 00 59 ba 01 00 10 00 00 00
038568 91 91 96 06 00 00 05 00 4e ba 01 00 10 00 00 00
038578 75 1a 69 09 00 00 03 00 59 ba 01 00 00 00 00 00

Let’s see if this worked:

$ readelf -V library.so
  0x0060: Version: 1  File:  Cnt: 3
  0x0070:   Name: GLIBC_2.2.5  Flags: none  Version: 9
  0x0080:   Name: GLIBC_2.11  Flags: none  Version: 5
  0x0090:   Name: GLIBC_2.2.5  Flags: none  Version: 3

$ readelf --dyn-syms library.so | grep $((0xd7c))
  3452: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND memcpy@GLIBC_2.2.5 (9)

There is an extra memcpy@@2.14 reference, but no entry for it in the version table. I can get rid of that with strip --strip-unneeded, if I want to.

This seems to work for me just fine, and in fact it would have worked even if there wasn’t a GLIBC_2.2.5 entry already, but an entry for some other version. However, if there are more symbols to deal with, we might need to edit the actual symbol versions in the .gnu.version section (change the “9” into a “3” in this case), or do more complicated editing.
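
For the record, the two four-byte patches (hash and string-table offset) can also be applied non-interactively with dd, using the offsets from the od output above:

# Overwrite the vna_hash of the GLIBC_2.14 vernaux entry with the GLIBC_2.2.5 hash.
printf '\x75\x1a\x69\x09' | dd of=library.so bs=1 seek=$((0x38558)) conv=notrunc

# Overwrite the vna_name string-table offset likewise.
printf '\x59\xba\x01\x00' | dd of=library.so bs=1 seek=$((0x38558 + 8)) conv=notrunc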


Virtualization Technologies

Virtualization technology is moving fast, and what used to be hot yesterday is as cold as ice today. There is a lot of material to digest, and a lot of documentation that seems somewhat relevant but can be out of date. Surely, this blog post will suffer the same fate, but nevertheless, here it is: A quick list of the most relevant and up to date technology that I could find to set up a small cloud.

KVM provides low-level access to hardware virtualization.

Virtio provides I/O paravirtualization to grant guest systems faster, more direct access to host system peripherals.

Qemu has full support for KVM and virtio. As an extra bonus, it also supports legacy I/O virtualization as well as full system emulation (for example, running ARM systems on x86 hardware). It’s the glue that binds things together into a complete virtualized machine (system board and peripherals).

libvirtd is a daemon to manage virtual machine instances and the underlying storage and network devices. This is the productive level for actual user interaction (while the above are building blocks used only indirectly, through the libvirtd interface). libvirtd uses PolicyKit for access control. In addition to the CLI tool virsh, there are many other tools that build on libvirtd, some graphical, such as virt-manager.
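
Day-to-day interaction then happens mostly through virsh (demo is a placeholder domain name):

virsh list --all          # show all defined domains
virsh start demo          # boot a domain
virsh console demo        # attach to its serial console
virsh shutdown demo       # request an ACPI shutdown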

For a small personal cloud, libvirtd with its basic tools, command line and graphical, may be all you need. Of course, enterprisey users may require more complete management interfaces such as OpenStack etc.

As for the operating system images, it is possible to install from scratch, but that is very old style. Today, most vendors provide OpenStack images in qcow2 format, which can also be used with various cloud providers, and which are very convenient to use. These are basically pre-installed systems with the cloud-init utility, which runs at boot time and looks in various places for special configuration files. Besides fancy provisioning servers, it is possible to use a simple ISO image with meta-data and user-data files, as sketched below.
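
A minimal sketch of the ISO approach (cloud-init's NoCloud data source; the file contents and the image file name below are examples):

# NoCloud expects files named meta-data and user-data on a volume labeled "cidata".
cat > meta-data <<EOF
instance-id: demo-01
local-hostname: demo
EOF

cat > user-data <<EOF
#cloud-config
password: secret
chpasswd: { expire: False }
ssh_pwauth: True
EOF

genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

# Boot a vendor cloud image with the seed ISO attached.
virt-install --import --name demo --ram 1024 \
  --disk path=trusty-server-cloudimg-amd64-disk1.img,format=qcow2 \
  --disk path=seed.iso,device=cdrom \
  --network network=default --noautoconsole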

And there you have it: a blueprint for your own personal cloud. Once you have picked up these basic building blocks, understanding integrated solutions such as OpenStack should be much less confusing. Or at least that’s what I am hoping.


Compiling FreeCad on Fedora 20

FreeCad is a very promising free and portable CAD program. Unfortunately, its dependency chain is a bit messy, and building those libraries is not for the faint of heart. Normally, GNU/Linux distributions do a good job on that for you, but in Fedora, the packaging is not quite up to date. The included FreeCad 0.13 works, kinda, but there are crashes and bugs like missing text rendering in Draft mode. As FreeCad is progressing fast, it is useful to build the latest version, and here is how to do just that on Fedora 20.

First, you install the dependencies, except for coin, soqt and python-pivy.

 $ sudo yum install cmake doxygen swig gcc-gfortran gettext dos2unix desktop-file-utils libXmu-devel freeimage-devel mesa-libGLU-devel OCE-devel python python-devel boost-devel tbb-devel eigen3-devel qt-devel qt-webkit-devel ode-devel xerces-c xerces-c-devel opencv-devel smesh-devel freetype freetype-devel libspnav-devel

Then you download and install the Coin3 source package by corsepiu:

 $ rpm -i Coin3-3.1.3-4.fc19.src.rpm

Which you can now build and install:

 $ rpmbuild -bb ~/rpm/SPECS/Coin3.spec
 $ sudo rpm -i ~/rpm/RPMS/x86_64/{Coin3-3.1.3-4.fc20.x86_64.rpm,Coin3-devel-3.1.3-4.fc20.x86_64.rpm}

Note that source packages are installed as normal user, while binary packages are installed as root. Verify that Coin3 is your active alternative for coin-config:

 $ alternatives --display coin-config
 coin-config - status is manual.
  link currently points to /usr/lib64/Coin3/coin-config

If it is not, set it with `alternatives --set coin-config /usr/lib64/Coin3/coin-config`.

Now you have to rebuild and install a bunch of packages depending on Coin2 in Fedora 20. By rebuilding them, you make them depend on Coin3, which is what FreeCad expects.

 $ yumdownloader --source SoQt SIMVoleon python-pivy
 $ rpm -i SoQt-1.5.0-10.fc20.src.rpm SIMVoleon-2.0.1-16.fc20.src.rpm python-pivy-0.5.0-6.hg609.fc20.src.rpm
 $ rpmbuild -bb ~/rpm/SPECS/SoQt.spec
 $ sudo rpm -i ~/rpm/RPMS/x86_64/{SoQt-1.5.0-10.fc20.x86_64.rpm,SoQt-devel-1.5.0-10.fc20.x86_64.rpm}
 $ rpmbuild -bb ~/rpm/SPECS/SIMVoleon.spec
 $ sudo rpm -i ~/rpm/RPMS/x86_64/{SIMVoleon-2.0.1-16.fc20.x86_64.rpm,SIMVoleon-devel-2.0.1-16.fc20.x86_64.rpm}
 $ rpmbuild -bb ~/rpm/SPECS/python-pivy.spec
 $ sudo rpm -i ~/rpm/RPMS/x86_64/python-pivy-0.5.0-6.hg609.fc20.x86_64.rpm

Now you can finally download and build FreeCad:

 $ git clone https://github.com/FreeCAD/FreeCAD.git freecad
 $ mkdir build
 $ cd build
 $ cmake ../freecad
 $ make -j 8

This will take a while. When it has finished, you can start FreeCad with:

 $ bin/FreeCad

Have Fun!


OpenSSH authorized_key options by version

This should be in the official documentation, but for what it’s worth:

All versions of OpenSSH support the following options in authorized_keys:
command="", environment="", from="", no-agent-forwarding, no-port-forwarding, no-pty, no-X11-forwarding.

Starting with version 2.5.2, OpenSSH supports permitopen.

Starting with version 4.3, OpenSSH supports tunnel.

Starting with version 4.9, OpenSSH supports no-user-rc.

Starting with version 5.4, OpenSSH supports cert-authority.

Starting with version 5.6, OpenSSH supports principals.
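
As a combined example, a restricted entry in ~/.ssh/authorized_keys might look like this (the command, network and key are placeholders):

command="/usr/local/bin/backup.sh",from="10.0.0.0/8",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAAB3Nza... backup@client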


Bayesian inference introduction

I wrote a small introduction to Bayesian inference, but because it is pretty heavy on math, I used the format of an IPython notebook. Bayesian inference is an important process in machine learning, with many real-world applications, but if you were born any time in the 20th century, you most likely learned about probability theory from a frequentist point of view. One reason may be that calculating some integrals in Bayesian statistics was too difficult to do without computers, so frequentist statistics was more economical. Today, we have much better tools, and Bayesian statistics seems more feasible. In 2010, the US Food and Drug Administration issued a guidance document explaining some of the situations where Bayesian statistics is appropriate. Overall, it seems there is a big change happening in how we evaluate statistical data, with clearer models and more precise results that make better use of the available data, even in challenging situations.
