Author Archives: pommi

Routed IPTV via a Debian router (XS4ALL or KPN)

At home my FTTH Internet connection is provided by XS4ALL. They provide a FRITZ!Box router to connect to the Internet. Instead of using the FRITZ!Box I’ve always used my own Debian GNU/Linux machine to route traffic to the internet.

The XS4ALL uplink has 2 VLANs:

  • VLAN4: TV (bridged, RFC1483)
  • VLAN6: PPPoE IPv4 + IPv6 internet connection

My XS4ALL uplink is connected to a managed switch. My Motorola 1963 TV Receiver is directly connected to an untagged VLAN4 port on my switch. This way the TV Receiver is directly connected to the TV platform on OSI Layer 2.

Recently I got a letter from XS4ALL saying that this setup is going to change. The TV Receiver can no longer be connected to the TV platform directly, but needs to become part of the internal network. This makes it possible to offer Internet services (like YouTube and Netflix) on the TV Receiver.

Current setup

In my current setup the upstream connection is connected to a managed switch. VLAN4 and VLAN6 are tagged on this switchport. The TV Receiver is connected to an untagged VLAN4 switchport. It can directly communicate with the TV platform. The Debian Router is connected to a tagged VLAN6 switchport for internet access and a tagged VLAN1 switchport for the local network. Devices on the local network connect to the Internet via the Debian Router on VLAN1.

New setup

In the new setup the TV Receiver is no longer in untagged VLAN4. Instead, VLAN4 is now tagged on the switchport of the Debian Router, which will function as a gateway to the TV platform. I created VLAN104 for the TV Receiver to live in. It’s also possible to put the TV Receiver in VLAN1, but my managed switch currently doesn’t support IGMP snooping, so while you are watching TV all other devices in VLAN1 would also receive the IPTV multicast traffic.

Layer 2 / Layer 3 view

In a more detailed view, leaving out the physical hardware, it looks like the diagram below. Local devices on VLAN1 access the Internet through the Debian Router, which routes the traffic to VLAN6. The TV Receiver on VLAN104 accesses the TV Platform through the Debian Router, which routes it to VLAN4. The Debian Router runs an igmpproxy to route multicast traffic (IPTV) from VLAN4 to VLAN104. The red arrow shows that the TV Receiver is now also able to access the Internet for services like YouTube or Netflix.

How is the Debian Router configured?

First of all the Debian Router has 1 physical interface, 4 VLAN interfaces and 1 PPPoE interface. They are configured in /etc/network/interfaces:

auto eth0
iface eth0 inet manual
    up ip link set up dev eth0
    down ip link set down dev eth0

# LAN
auto vlan1
iface vlan1 inet manual
    pre-up ip link add link eth0 name vlan1 type vlan id 1
    up ip link set up dev vlan1
    up ip addr add 10.0.1.1/24 brd + dev vlan1
    down ip addr del 10.0.1.1/24 dev vlan1
    down ip link set down dev vlan1
    post-down ip link delete vlan1

# IPTV
auto vlan4
iface vlan4 inet manual
    pre-up ip link add link eth0 name vlan4 type vlan id 4
    up ip link set up dev vlan4
    post-up dhclient vlan4
    pre-down dhclient -x
    down ip link set down dev vlan4
    post-down ip link delete vlan4

# Internet (PPPoE)
auto vlan6
iface vlan6 inet manual
    pre-up ip link add link eth0 name vlan6 type vlan id 6
    up ip link set up dev vlan6
    down ip link set down dev vlan6
    post-down ip link delete vlan6

# IPTV (Internal)
auto vlan104
iface vlan104 inet manual
    pre-up ip link add link eth0 name vlan104 type vlan id 104
    up ip link set up dev vlan104
    up ip addr add 10.0.104.1/24 brd + dev vlan104
    down ip addr del 10.0.104.1/24 dev vlan104
    down ip link set down dev vlan104
    post-down ip link delete vlan104

# PPPoE
auto xs4all
iface xs4all inet ppp
    provider xs4all

The DHCP client configuration in /etc/dhcp/dhclient.conf will request a subnet-mask (option 1), broadcast-address (option 28), routers (option 3) and Classless Static Routes (option 121) on VLAN4:

option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
interface "vlan4" {
  request subnet-mask, broadcast-address, routers, rfc3442-classless-static-routes;
  send vendor-class-identifier "IPTV_RG";
}

As a result the vlan4 interface gets an IP address, and additional routes are added to the routing table of the Debian Router to be able to reach the TV platform:

# ip addr show dev vlan4
5: vlan4@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1b:21:c3:f8:90 brd ff:ff:ff:ff:ff:ff
    inet 10.86.117.65/21 brd 10.86.119.255 scope global vlan4
       valid_lft forever preferred_lft forever

# ip route | grep vlan4
10.86.112.0/21 dev vlan4 proto kernel scope link src 10.86.117.65
213.75.112.0/21 via 10.86.112.1 dev vlan4
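
igmpproxy is packaged in Debian, so installing it is a one-liner:

apt-get install igmpproxy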

Configure /etc/igmpproxy.conf to forward multicast traffic from VLAN4 to VLAN104:

phyint vlan4 upstream  ratelimit 0  threshold 1
        altnet 213.75.0.0/16
        altnet 217.166.0.0/16

phyint vlan104 downstream  ratelimit 0  threshold 1
        altnet 10.0.104.0/24

Make sure IPv4 forwarding is enabled:

# cat /proc/sys/net/ipv4/ip_forward
1
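
If it prints 0, enable it at runtime and persist it across reboots (the sysctl.d file name below is arbitrary):

sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ipforward.conf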

And configure iptables to allow the required traffic:

# allow igmpproxy traffic to the TV Receiver
iptables -A INPUT -i vlan104 -j ACCEPT
iptables -A OUTPUT -o vlan104 -j ACCEPT

# allow dhclient + igmpproxy traffic to the TV Platform
iptables -A INPUT -i vlan4 -d 224.0.0.0/4 -j ACCEPT
iptables -A OUTPUT -o vlan4 -p udp --dport 68 -j ACCEPT
iptables -A OUTPUT -o vlan4 -p igmp -d 224.0.0.22 -j ACCEPT

# allow TV Receiver traffic to the TV Platform and apply Source NAT
iptables -A FORWARD -i vlan104 -o vlan4 -j ACCEPT
iptables -A FORWARD -i vlan4 -o vlan104 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i vlan4 -o vlan104 -p udp -d 224.0.0.0/4 -j ACCEPT
iptables -t nat -A POSTROUTING -o vlan4 -j MASQUERADE

# allow TV Receiver traffic to the internet
iptables -A FORWARD -i vlan104 -o ppp0 -j ACCEPT
iptables -A FORWARD -i ppp0 -o vlan104 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
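
These rules live in kernel memory only. One way to make them survive a reboot is the iptables-persistent package:

apt-get install iptables-persistent
netfilter-persistent save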

Download “NPO Radio 2 – Top 2000” in mp3 format

In a marathon broadcast from Christmas to New Year’s Eve, NPO Radio 2 airs the so-called “Top 2000”: a list of the 2000 most popular songs of all time. Because I’m not able to listen to all 2000 songs in one go, I like to have them on a USB drive in MP3 format, so that I can listen in my car, for example.

The shell script below downloads the full “Top 2000” of 2018 in MP3 format from the official website. 80 MP3 files, ~12GB in size.

#!/bin/sh
set -e

for i in $(seq 25 31); do
    curl -s https://www.nporadio2.nl/uitzendinggemist?date=$i-12-2018 | grep '/gemist/uitzending' | cut -d'"' -f 2 | xargs -i echo "https://www.nporadio2.nl{}" | tac >> pages
done

for p in $(cat pages); do
    curl -s $p | grep broadcaststream | cut -d '"' -f 2 | xargs -i echo "https:{}" >> mp3
done
# remove the 1st 4 items (00:00-02:00, 02:00-04:00, 04:00-06:00, 06:00-08:00)
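# 'sponge' is part of the moreutils package (apt-get install moreutils)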
tail -n +5 mp3 | sponge mp3

for m in $(cat mp3); do
    curl -OL $m
done

Safe in-place upgrade to a slim Debian stretch running i3

Debian stretch was released last month, so it is time to upgrade my laptop. I’m an i3 window manager user. Previously my procedure was to back up /home, reinstall using the network installer (I don’t like apt-get dist-upgrade, I like to start clean), tick the “Debian Desktop Environment … GNOME” checkbox and, after the installer was done, install i3 and the rest of my tools.

While I use some of the tools from Gnome, like gnome-terminal, network manager, nautilus and Eye of Gnome, I do not really need the complete Gnome desktop environment and the hundreds of software packages that come with it. This time I want a basic system with only the tools I need. And I want to upgrade in-place without losing my Debian jessie install.

Some notes about the installation:

  • It’s a Lenovo Thinkpad T450s
  • I currently have 2 partitions: 1 for /boot and the other is encrypted with luks
  • The encrypted partition contains 3 logical volumes for /, /home and swap
  • A 4th lv (logical volume) will be created for the new root partition for Debian stretch
  • The new lv will be BTRFS formatted and I’ll use a BTRFS subvolume to be able to create snapshots of it
  • The minimal required software will be installed to run the i3 window manager, including some tools I regularly use.

Let’s prepare the root volume:

VG=bento # my lvm2 volume group is called bento
LV=stretch # the new lv will be called stretch
LABEL=stretch # label for btrfs

lvcreate -L10G -n $LV $VG
mkfs.btrfs -L $LABEL /dev/$VG/$LV

mkdir /mnt/$LV
mount /dev/$VG/$LV /mnt/$LV
cd /mnt/$LV
# create the root subvolume
btrfs subvolume create @
cd -
umount /mnt/$LV
# mount the subvolume instead
mount -o subvol=/@ /dev/$VG/$LV /mnt/$LV

The boot partition (/boot) will be reused/shared between the current Debian jessie install and the new stretch install. Because I still use Debian jessie daily for my work, I want to be able to boot jessie as a fallback. To see what happens to /boot/grub, and especially /boot/grub/grub.cfg, I will make a git repository in /boot/grub.

cd /boot/grub
git init
git add -A .
git commit -am 'grub at the time jessie was still installed'

Let’s start with the install:

/usr/sbin/debootstrap --include udev,openssh-server,linux-image-amd64 stretch /mnt/$LV http://deb.debian.org/debian/

# the new system needs to know about the encrypted partition, so copy crypttab
cp /etc/crypttab /mnt/$LV/etc/crypttab

# configure /mnt/$LV/etc/fstab
# example contents (replace $UUID with the uuid of /boot, replace $LABEL with the btrfs label)
UUID=$UUID /boot ext4 defaults 0 2
LABEL=$LABEL / btrfs subvol=/@,defaults,noatime 0 0
LABEL=$LABEL /btrfs-root btrfs subvol=/,defaults,noatime 0 0
# optionally add the existing mountpoint for /home

# The root of the BTRFS filesystem is mounted at `/btrfs-root`. From here we can manage the subvolumes and snapshots
mkdir /mnt/$LV/btrfs-root

# chroot into the new system
mount -o bind /boot /mnt/$LV/boot/
mount -o bind /dev /mnt/$LV/dev/
mount -t proc proc /mnt/$LV/proc
mount -t sysfs sys /mnt/$LV/sys
chroot /mnt/$LV

# set a root password
passwd
# create a normal user account
adduser pommi

# we do not want recommended packages to be installed automatically
echo 'APT::Install-Recommends "false";' > /etc/apt/apt.conf.d/00InstallRecommends

# stable and security updates
cat > /etc/apt/sources.list <<EOT
deb http://deb.debian.org/debian stretch main contrib non-free
deb http://deb.debian.org/debian stretch-updates main contrib non-free
deb http://security.debian.org/ stretch/updates main contrib non-free
EOT
apt-get update
apt-get upgrade

# install some basics
apt-get install cryptsetup lvm2 locales busybox less grub-pc git vim-nox initramfs-tools btrfs-progs

# set the default locale (to for example en_US.UTF-8)
dpkg-reconfigure locales

# set the timezone
dpkg-reconfigure tzdata

The basics are done. Now install the desktop environment. To log in graphically after booting I chose lightdm, a lightweight display manager. Gnome comes with gdm (Gnome Display Manager), but that pulls in ~87 other software packages I don’t want.

# suckless-tools for dmenu, x11-xserver-utils for xrandr
apt-get install lightdm i3 i3status suckless-tools xserver-xorg x11-xserver-utils

# networking, including wifi, gnome-keyring to store wifi passwords
apt-get install network-manager-gnome firmware-iwlwifi firmware-linux gnome-keyring

# and a terminal and a browser
apt-get install gnome-terminal firefox-esr

The system is now ready. Let’s check the changes in /boot/grub before we reboot.

cd /boot/grub
git status
git diff
# commit the changes
git add -A .
git commit -m 'after installing stretch'

In my case I had (at least) 2 kernels present in /boot: an active one for jessie (/vmlinuz-3.16.0-4-amd64) and a new one for stretch (/vmlinuz-4.9.0-3-amd64). When update-grub was executed from stretch just now, it changed all the grub menu items to boot into the new stretch system (every “linux” line now contains root=/dev/mapper/$VG-$LV). To still be able to boot jessie, I revert the changes on the “linux” lines that are supposed to boot the jessie system (the /vmlinuz-3.16.0-4-amd64 kernel).

For example, change this:

linux /vmlinuz-3.16.0-4-amd64 root=/dev/mapper/bento-stretch ro quiet

back to (“root” is my current root lv):

linux /vmlinuz-3.16.0-4-amd64 root=/dev/mapper/bento-root ro quiet

And commit the result:

git add grub.cfg
git commit -m 'boot jessie with the old kernel'

Time to reboot to stretch

# exit the chroot
exit
umount /mnt/$LV/sys
umount /mnt/$LV/proc
umount /mnt/$LV/dev/
umount /mnt/$LV/boot/
umount /mnt/$LV
reboot

Lightdm will start and show you a login screen. Log in using the normal user you just created and i3 will start.

Additional software and configuration

Lock screen (Ctrl+Alt+l and after 5 minutes):

apt-get install i3lock xautolock
echo 'exec xautolock -time 5 -locker i3lock' >> .config/i3/config
echo 'bindsym Control+$alt+l exec xautolock -locknow' >> .config/i3/config
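
The bindsym line above assumes an $alt variable in your i3 config; if it is not defined yet, set it to Mod1 (the Alt key) first:

echo 'set $alt Mod1' >> .config/i3/config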

Start Network Manager Applet on startup

echo 'exec --no-startup-id nm-applet' >> .config/i3/config

Open urls from gnome-terminal in firefox:

apt-get install xdg-utils
xdg-settings get default-web-browser
xdg-settings set default-web-browser firefox-esr.desktop

Collectd Graph Panel v1

v1 is here. CGP is finished 😆

Joking aside, it has been requested multiple times, so let’s get it over with. The last release was more than 3.5 years ago. This will be the last tagged version of CGP. Every commit in the master branch after this release can be considered a new release. 😉

Use git and “git pull” to keep up-to-date or download the latest version here.

Notable Changes since v0.4.1:

  • mobile support (responsive design)
  • automatic support for all plugins (markup/styling in json)
  • hybrid graph type (canvas graph on detail page, png on the others)
  • svg graph support
  • support for newer PHP versions
  • deprecate support for collectd 4

Special thanks for this version go to Peter Wu for improving security, Manuel Luis for maintaining jsrrdgraph and Vincent Brillault for his many contributions.

GitHub: https://github.com/pommi/CGP
Download: https://github.com/pommi/CGP/archive/master.zip
Git: git clone https://github.com/pommi/CGP.git

Nagios notifications via Telegram

This post shows you how to use Telegram for Nagios notifications. First create a Telegram Bot by talking to the BotFather. The Telegram Bot will be the sender of the Nagios alerts.

[image: telegram-botfather]

You’ll receive an API token that also includes the UserID of the Bot:

  • Token: 200194008:AAEG6djWC9FENEZaVIo3y3vZm24P3GTMetw
  • UserID: 200194008

Download the telegram_nagios.py script that will send the alerts via Telegram:

wget -O /usr/local/bin/telegram_nagios.py https://raw.githubusercontent.com/pommi/telegram_nagios/master/telegram_nagios.py
chmod 755 /usr/local/bin/telegram_nagios.py

This is the configuration you need in Nagios (of course replace the token with your own):

# commands to send host/service notifications
define command {
  command_name     notify-host-by-telegram
  command_line     /usr/local/bin/telegram_nagios.py --token 200194008:AAEG6djWC9FENEZaVIo3y3vZm24P3GTMetw --object_type host --contact "$CONTACTPAGER$" --notificationtype "$NOTIFICATIONTYPE$" --hoststate "$HOSTSTATE$" --hostname "$HOSTNAME$" --hostaddress "$HOSTADDRESS$" --output "$HOSTOUTPUT$"
}
define command {
  command_name     notify-service-by-telegram
  command_line     /usr/local/bin/telegram_nagios.py --token 200194008:AAEG6djWC9FENEZaVIo3y3vZm24P3GTMetw --object_type service --contact "$CONTACTPAGER$" --notificationtype "$NOTIFICATIONTYPE$" --servicestate "$SERVICESTATE$" --hostname "$HOSTNAME$" --servicedesc "$SERVICEDESC$" --output "$SERVICEOUTPUT$"
}

# 2 example contact definitions
define contact {
  contact_name                    John Doe
  pager                           12345678
  service_notification_commands   notify-service-by-telegram
  host_notification_commands      notify-host-by-telegram
}
define contact {
  contact_name                    Telegram Group Chat
  pager                           -23456789
  service_notification_commands   notify-service-by-telegram
  host_notification_commands      notify-host-by-telegram
}

The Telegram Nagios plugin is able to send alerts to a single contact or to a group chat. As you can see Telegram GroupIDs are negative numbers.

How to get your UserID or GroupID?

Download and install this Telegram CLI: https://github.com/vysheng/tg. The CLI makes it easier to discover your UserID and GroupIDs.

$ telegram-cli
...
> get_self
User John Doe @johndoe (#12345678):
        phone: XXXXXXXXXXX
        offline (was online [2016/03/15 11:57:46])

There is your UserID (#12345678). First start a conversation with the Bot you just created to be able to receive messages (Nagios alerts) from the Bot and to be able to invite it to a Telegram group chat.

To receive Nagios alerts in a Telegram group chat, create a group chat and invite the Bot. You need at least 2 other users in the group.

$ telegram-cli
...
> create_group_chat "Nagios Alerts" user#200194008 user#12345678 user#33333333
[21:28]  Nagios Alerts John Doe created chat Nagios Alerts. 3 users


> chat_info Nagios_Alerts
Chat Nagios Alerts updated photo admin members
Chat Nagios Alerts (id 23456789) members:
                Nagios Bot invited by John Doe at [2016/03/08 21:28:59]
                ...
                John Doe invited by John Doe at [2016/03/08 21:28:59] admin

There is the GroupID (id 23456789) of the Nagios Alerts group chat which needs to be configured in the Nagios configuration as a negative number (-23456789).
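
Alternatively, the Bot API itself can reveal chat IDs without installing telegram-cli: send a message to the Bot (or in the group chat it is a member of) and query the getUpdates method (the token is the example one from above):

curl -s "https://api.telegram.org/bot200194008:AAEG6djWC9FENEZaVIo3y3vZm24P3GTMetw/getUpdates"

The JSON response contains a "chat" object whose "id" (negative for groups) is the value to configure as the pager number.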

Let’s send some test messages!

telegram_nagios.py --token 200194008:AAEG6djWC9FENEZaVIo3y3vZm24P3GTMetw --object_type service --contact "-23456789" --servicestate "OK" --hostname "hostname.domain.tld" --servicedesc "load" --output "OK - load average: 0.02 0.01 0.01"
telegram_nagios.py --token 200194008:AAEG6djWC9FENEZaVIo3y3vZm24P3GTMetw --object_type service --contact "-23456789" --servicestate "WARNING" --hostname "hostname.domain.tld" --servicedesc "load" --output "WARNING - load average: 3.48 4.19 2.74"
telegram_nagios.py --token 200194008:AAEG6djWC9FENEZaVIo3y3vZm24P3GTMetw --object_type service --contact "-23456789" --servicestate "CRITICAL" --hostname "hostname.domain.tld" --servicedesc "load" --output "CRITICAL - load average: 233.29 154.35 15.05"
telegram_nagios.py --token 200194008:AAEG6djWC9FENEZaVIo3y3vZm24P3GTMetw --object_type host --contact "-23456789" --hoststate "UNREACHABLE" --hostname "hostname.domain.tld" --hostaddress "2001:DB8::1" --output "Network Unreachable (hostname.domain.tld)"
telegram_nagios.py --token 200194008:AAEG6djWC9FENEZaVIo3y3vZm24P3GTMetw --object_type host --contact "-23456789" --hoststate "DOWN" --hostname "hostname.domain.tld" --hostaddress "2001:DB8::1" --output "PING CRITICAL - Packet loss = 100%"
telegram_nagios.py --token 200194008:AAEG6djWC9FENEZaVIo3y3vZm24P3GTMetw --object_type host --contact "-23456789" --hoststate "UP" --hostname "hostname.domain.tld" --hostaddress "2001:DB8::1" --output "PING OK - Packet loss = 0%, RTA = 3.74 ms"

And here is the result 😎

[image: telegram-nagios]

Upgrade Oracle Java without interrupting a Mendix App

In the “Mendix Cloud” we are hosting thousands of Mendix Apps. All these Apps are running on top of the Oracle Java Runtime Environment (JRE) in Debian Linux environments. We use java-package to package the Oracle JRE to be able to easily redistribute it to all our servers.

After packaging and putting the Debian package in our local apt repository the Oracle JRE can be easily installed via apt-get.

# apt-get install oracle-java8-jre

When there is an update available of the Oracle JRE, we again package the new version and put it in our local apt repository. The update will now be available to all our Debian Linux environments.

# apt-get -V upgrade
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  oracle-java8-jre (8u40 => 8u45)
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 39.4 MB of archives.
After this operation, 26.6 kB of additional disk space will be used.
Do you want to continue [Y/n]?

But wait… apt doesn’t warn you about anything here, but do you remember these screens from Windows or Mac OS X?

[images: javaupdate-windows, javaupdate-mac]

This doesn’t mean it doesn’t apply to Linux. 😉 On Linux it’s also required to restart all Java processes after an update. In case of an Oracle JRE update this meant we had to plan maintenance windows and restart all Mendix Apps while rolling out the update.

A new approach

It would be much nicer if we could roll out updates without having to think about the Mendix Apps that are currently using the installed Java version. In the Linux universe this is not an unfamiliar issue. Look at the Linux kernel, for example. The kernel that is currently running cannot simply be replaced or uninstalled either: you would run into all kinds of issues with kernel modules and libraries that have been changed or removed. Therefore the packaging system keeps the last X Linux kernels installed, including the one you are currently running.

Since Debian 8.0 (Jessie) the apt package (since version 0.9.9.1) contains the file “/etc/kernel/postinst.d/apt-auto-removal”. This file is executed after the installation (during “postinst”) of each “linux-image*” package. The “apt-auto-removal” script lists all installed kernels and creates an “APT::NeverAutoRemove” list in “/etc/apt/apt.conf.d/01autoremove-kernels” containing the 3 most recent versions plus the one that is currently in use. “linux-image*” packages that are not on that list may be “AutoRemoved”.

For the Oracle JRE we can use exactly the same procedure. There are a few requirements:

  1. java-package needs to create versioned packages so we can install multiple versions at the same time.
  2. The oracle-java8uXX-jre package must run an apt-auto-removal script after installation to update an APT::NeverAutoRemove list.
  3. The apt-auto-removal script needs to be in a separate package, because it’s already required on installation of an oracle-java8uXX-jre package.
  4. We need an oracle-java8-jre-latest dependency package that installs the latest oracle-java8uXX-jre package. This also marks oracle-java8uXX-jre as automatically installed, so it can be removed using apt-get autoremove when it’s no longer on the APT::NeverAutoRemove list.

[image: system2]

Versioned packages with java-package

java-package needed to be patched to produce versioned packages. Instead of “oracle-java8-jre” we needed to have “oracle-java8uXX-jre” where XX is the update version number, for example “oracle-java8u45-jre“.

Besides the package name, the package content needed to be installed in a different place. With “oracle-java8-jre” all files are installed in “/usr/lib/jvm/jre-8-oracle-x64/“. This needed to change to “/usr/lib/jvm/jre-8uXX-oracle-x64/“.

Changing 4 lines of bash gave the expected result (github.com/mendix/java-package):

diff --git a/lib/jdk.sh b/lib/jdk.sh
index cd41772..bc981e1 100644
--- a/lib/jdk.sh
+++ b/lib/jdk.sh
@@ -57,8 +57,8 @@ j2sdk_run() {
     echo
     diskfree "$j2se_required_space"
     read_maintainer_info
-    j2se_package="$j2se_vendor-java$j2se_release-jdk"
-    j2se_name="jdk-$j2se_release-$j2se_vendor-$j2se_arch"
+    j2se_package="$j2se_vendor-java${j2se_release}u$j2se_update-jdk"
+    j2se_name="jdk-${j2se_release}u$j2se_update-$j2se_vendor-$j2se_arch"
     local target="$package_dir/$j2se_name"
     install -d -m 755 "$( dirname "$target" )"
     extract_bin "$archive_path" "$j2se_expected_min_size" "$target"
diff --git a/lib/jre.sh b/lib/jre.sh
index ecd6d41..b209fcb 100644
--- a/lib/jre.sh
+++ b/lib/jre.sh
@@ -42,8 +42,8 @@ j2re_run() {
     echo
     diskfree "$j2se_required_space"
     read_maintainer_info
-    j2se_package="$j2se_vendor-java$j2se_release-jre"
-    j2se_name="jre-$j2se_release-$j2se_vendor-$j2se_arch"
+    j2se_package="$j2se_vendor-java${j2se_release}u$j2se_update-jre"
+    j2se_name="jre-${j2se_release}u$j2se_update-$j2se_vendor-$j2se_arch"
     local target="$package_dir/$j2se_name"
     install -d -m 755 "$( dirname "$target" )"
     extract_bin "$archive_path" "$j2se_expected_min_size" "$target"

Now we were able to install multiple Oracle JRE versions alongside each other. I thought it was also nice to have a “/usr/bin/java8” symlink, which always points to the latest version. This was also easily implemented:

diff --git a/lib/oracle-jdk.sh b/lib/oracle-jdk.sh
index adb3dc2..bdd2b91 100644
--- a/lib/oracle-jdk.sh
+++ b/lib/oracle-jdk.sh
@@ -124,6 +124,10 @@ fi
 install_no_man_alternatives $jvm_base$j2se_name/jre/lib $oracle_jre_lib_hl
 install_alternatives $jvm_base$j2se_name/bin $oracle_bin_jdk
 
+if [[ -f "$jvm_base$j2se_name/bin/java" ]]; then
+    update-alternatives --install "/usr/bin/java$j2se_release" "java$j2se_release" "$jvm_base$j2se_name/bin/java" $j2se_priority
+fi
+
 # No plugin for ARM architecture yet
 if [ "${DEB_BUILD_ARCH:0:3}" != "arm" ]; then
 plugin_dir="$jvm_base$j2se_name/jre/lib/$DEB_BUILD_ARCH"
@@ -148,6 +152,8 @@ fi
 remove_alternatives $jvm_base$j2se_name/jre/lib $oracle_jre_lib_hl
 remove_alternatives $jvm_base$j2se_name/bin $oracle_bin_jdk
 
+update-alternatives --remove "java$j2se_release" "$jvm_base$j2se_name/bin/java"
+
 # No plugin for ARM architecture yet
 if [ "${DEB_BUILD_ARCH:0:3}" != "arm" ]; then
 plugin_dir="$jvm_base$j2se_name/jre/lib/$DEB_BUILD_ARCH"
diff --git a/lib/oracle-jre.sh b/lib/oracle-jre.sh
index 3958ea7..fcc2287 100644
--- a/lib/oracle-jre.sh
+++ b/lib/oracle-jre.sh
@@ -96,6 +96,10 @@ install_alternatives $jvm_base$j2se_name/bin $oracle_jre_bin_jre
 install_no_man_alternatives $jvm_base$j2se_name/bin $oracle_no_man_jre_bin_jre
 install_no_man_alternatives $jvm_base$j2se_name/lib $oracle_jre_lib_hl
 
+if [[ -f "$jvm_base$j2se_name/bin/java" ]]; then
+    update-alternatives --install "/usr/bin/java$j2se_release" "java$j2se_release" "$jvm_base$j2se_name/bin/java" $j2se_priority
+fi
+
 plugin_dir="$jvm_base$j2se_name/lib/$DEB_BUILD_ARCH"
 for b in $browser_plugin_dirs;do
     install_browser_plugin "/usr/lib/\$b/plugins" "libjavaplugin.so" "\$b-javaplugin.so" "\$plugin_dir/libnpjp2.so"
@@ -114,6 +118,8 @@ remove_alternatives $jvm_base$j2se_name/bin $oracle_jre_bin_jre
 remove_alternatives $jvm_base$j2se_name/bin $oracle_no_man_jre_bin_jre
 remove_alternatives $jvm_base$j2se_name/lib $oracle_jre_lib_hl
 
+update-alternatives --remove "java$j2se_release" "$jvm_base$j2se_name/bin/java"
+
 plugin_dir="$jvm_base$j2se_name/lib/$DEB_BUILD_ARCH"
 for b in $browser_plugin_dirs;do
     remove_browser_plugin "\$b-javaplugin.so" "\$plugin_dir/libnpjp2.so"

And the last part regarding java-package was to execute “/etc/oracle-java/postinst.d/apt-auto-removal” after installation:

diff --git a/lib/oracle-jre.sh b/lib/oracle-jre.sh
index fcc2287..ebebb1f 100644
--- a/lib/oracle-jre.sh
+++ b/lib/oracle-jre.sh
@@ -104,6 +104,10 @@ plugin_dir="$jvm_base$j2se_name/lib/$DEB_BUILD_ARCH"
 for b in $browser_plugin_dirs;do
     install_browser_plugin "/usr/lib/\$b/plugins" "libjavaplugin.so" "\$b-javaplugin.so" "\$plugin_dir/libnpjp2.so"
 done
+
+if [ -d "/etc/oracle-java/postinst.d" ]; then
+    run-parts --report --exit-on-error --arg=$j2se_vendor-java${j2se_release}u$j2se_update-jre /etc/oracle-java/postinst.d
+fi
 EOF
 }

apt-auto-removal and APT::NeverAutoRemove

To generate the “APT::NeverAutoRemove” list, we’ve taken the “apt-auto-removal” script from the apt package and modified it to support oracle-java packages:

#!/bin/sh
set -e

# Author: Pim van den Berg <pim.van.den.berg@mendix.com>
#
# This is a modified version of the /etc/kernel/postinst.d/apt-auto-removal
# script from the apt package to mark kernel packages as NeverAutoRemove.
#
# Mark as not-for-autoremoval those oracle-java packages that are currently in use.
#
# We generate this list and save it to /etc/apt/apt.conf.d instead of marking
# packages in the database because this runs from a postinst script, and apt
# will overwrite the db when it exits.

eval $(apt-config shell APT_CONF_D Dir::Etc::parts/d)
test -n "${APT_CONF_D}" || APT_CONF_D="/etc/apt/apt.conf.d"
config_file=${APT_CONF_D}/01autoremove-oracle-java

eval $(apt-config shell DPKG Dir::bin::dpkg/f)
test -n "$DPKG" || DPKG="/usr/bin/dpkg"

if [ ! -e /bin/fuser ]; then
	echo "WARNING: /bin/fuser is missing, could not generate reliable $config_file"
	exit
fi

java_versions=""

for java_binary in /usr/lib/jvm/*/bin/java; do
	if /bin/fuser $java_binary > /dev/null 2>&1; then
		java_versions="$java_versions
$(dpkg -S $java_binary | sed 's/: .*//')"
	fi
done

versions="$(echo "$java_versions" | sort -u | sed -e 's#\.#\\.#g' )"

generateconfig() {
	cat <<EOF
// DO NOT EDIT! File autogenerated by $0
APT::NeverAutoRemove
{
EOF
	for version in $versions; do
		echo "   \"^${version}$\";"
	done
	echo '};'
}
generateconfig > "${config_file}.dpkg-new"
mv "${config_file}.dpkg-new" "$config_file"

The “apt-auto-removal” script goes through all “/usr/lib/jvm/*/bin/java” files and checks whether they are in use, using the “/bin/fuser” command. When a java binary is in use, the package it is part of is added to the “APT::NeverAutoRemove” list. This list is written to /etc/apt/apt.conf.d/01autoremove-oracle-java.
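
With one Java version in use, the generated file would look something like this (package name illustrative):

// DO NOT EDIT! File autogenerated by /etc/oracle-java/postinst.d/apt-auto-removal
APT::NeverAutoRemove
{
   "^oracle-java8u45-jre$";
};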

Great improvement 😀

That’s it. We are now able to upgrade Oracle Java while the Mendix App keeps running. Once the Mendix App is stopped and started again by the customer, it will start using the new version of Java. Once another Oracle Java update is installed or the “apt-auto-removal” script is run, the “APT::NeverAutoRemove” list is updated. After that, the Oracle Java version that was in use by the Mendix App before it stopped can be “AutoRemoved”. 😀

Installing elementary OS on my late 2006 MacBook 2,1

In January 2007 I bought my first OS X device, a white 13.3-inch MacBook, running OS X 10.4 Tiger on a 2.0 GHz Intel Core 2 Duo processor with 1GB of RAM. Along the way I upgraded it to 2GB of RAM and gave it a fantastic boost by replacing the HDD with an Intel 320 Series SSD. I also upgraded OS X to 10.5 Leopard, 10.6 Snow Leopard and in the end to 10.7 Lion. Around the same time (2008) YouTube started offering 720p HD videos, and by now almost all videos are available in 720p or higher. What always frustrated me a bit is that this MacBook wasn’t fully capable of playing 720p YouTube videos. They were watchable, but with annoying frame drops here and there.

Lately I stumbled upon the “Linux Sucks” YouTube video, which showed the enormous growth of elementary OS on distrowatch.com over the past years. I was intrigued. Normally I use Debian with i3 or sometimes Gnome3, but I was interested in this lightweight Ubuntu-based OS as a replacement for OS X on my MacBook.

I’d like to explain how I installed elementary OS on my MacBook including full disk encryption.

Creating a MacBook compatible bootable USB stick

First of all I downloaded the 32-bit ISO of the latest elementary OS release (Freya). To be able to boot this ISO from a USB stick on a MacBook, you have to create a FAT32 formatted USB stick containing an EFI/BOOT/ folder with two boot files in it.

Installing elementary OS

Now you can boot from the USB stick. After a minute or two the live system has started.

[image: elementary1]

Start the installation by clicking the bottom-right CD icon and follow the wizard.

[images: elementary2, elementary3, elementary4, elementary5]

When you get a question about unmounting /dev/sdb, just say “No”. /dev/sdb is your USB device. At the “Installation type” screen choose “Something else”.

elementary6

The interesting part starts now. This screen shows the partition layout of the recognized devices. /dev/sda here is the internal hard drive; in my case it also says INTEL SSD at the bottom. Again, ignore /dev/sdb, this is your USB device.

When OS X is installed you have a couple of partitions on the internal hard drive:

  • /dev/sda1: EFI partition, required for booting
  • /dev/sda2: HFS+ partition containing Mac OS X
  • /dev/sda3: An optional ~650MB recovery partition (since OS X 10.7 Lion)

Remove /dev/sda2 and /dev/sda3. Now create 2 new partitions on /dev/sda:

  • a 256MB ext2 partition, this will be the /boot partition
  • fill up the rest with a partition that will be used as “physical volume for encryption”

[images: elementary7, elementary8]

The installer now tries to be smart by marking sda3_crypt to be formatted as ext4. Change this partition and configure it not to be formatted. After that, “Quit” the installer.

We quit the installer because we want to create 3 partitions inside the encrypted sda3_crypt, for the root partition (/), the swap partition and the home partition (/home), using LVM2. It is not possible to configure this via the installer.

[images: elementary9b, elementary10b]

Open the terminal app. As you can see, /dev/sda3 is encrypted and referred to as /dev/mapper/sda3_crypt. Now execute the pvcreate, vgcreate and lvcreate commands, as sketched below. We’ll create a 10GB root partition, a 4GB swap partition, and the rest is for the home partition. You’ll see that this also creates some device symlinks in /dev/mapper.
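
The commands themselves were only visible in the screenshots. Roughly, they look like this, assuming the volume group is named apple (matching the /dev/mapper/apple-* names used below):

pvcreate /dev/mapper/sda3_crypt
vgcreate apple /dev/mapper/sda3_crypt
lvcreate -L 10G -n root apple
lvcreate -L 4G -n swap apple
lvcreate -l 100%FREE -n home apple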

Now run through the installer once again. At the “Installation type” screen choose “Something else” once again.

Now you see that the installer sees the partitions we just created. Configure the partitions:

  • /dev/mapper/apple-home: btrfs partition mounted at /home
  • /dev/mapper/apple-root: ext4 partition mounted at /
  • /dev/mapper/apple-swap: swap partition
  • /dev/sda2: ext2 partition mounted at /boot

[images: elementary11, elementary12, elementary13]

Continue clicking “Install Now” and click “Continue” to confirm to write the changes to disk.

[images: elementary14, elementary15, elementary16, elementary17]

Follow the wizard until it starts installing elementary OS and wait for a while.

[images: elementary18, elementary19]

It will fail to install the bootloader. Continue without a bootloader. After that the installation from the wizard is complete. Choose “Continue Testing” here.

[images: elementary20b, elementary21b]

Now we have to fix the bootloader manually. Open the Terminal app, mount the required partitions and chroot into the new elementary OS installation. Install grub-efi-ia32 and run grub-install. Copy a grub.mo file to /boot/grub/locale/en.mo and run grub-mkconfig to generate the grub configuration.
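
A rough sketch of those steps from the live environment (the paths are assumptions based on the partition layout above, so verify them before running):

mount /dev/mapper/apple-root /mnt
mount /dev/sda2 /mnt/boot
mount -o bind /dev /mnt/dev
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
chroot /mnt
apt-get install grub-efi-ia32
grub-install /dev/sda
mkdir -p /boot/grub/locale
cp /usr/share/locale/en@quot/LC_MESSAGES/grub.mo /boot/grub/locale/en.mo
grub-mkconfig -o /boot/grub/grub.cfg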

[image: elementary22b]

The initial ramdisk (/boot/initrd.img) has to be made aware that /dev/sda3 is an encrypted partition: put the desired configuration in /etc/crypttab and update the initial ramdisk.
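
A minimal sketch of that, assuming blkid is available to look up the UUID of /dev/sda3:

echo "sda3_crypt UUID=$(blkid -s UUID -o value /dev/sda3) none luks" >> /etc/crypttab
update-initramfs -u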

You can now reboot the MacBook. The funny thing is that elementary OS is snappier than OS X on this MacBook. And it now plays 720p videos flawlessly. 😀

Debian Jessie: bye bye bind9 + dnssec-tools, hello PowerDNS

I recently upgraded my DNS server to Debian Jessie. In fact I reinstalled it from scratch and used puppet to install and configure all the required components. This DNS server, running bind9, is the authoritative nameserver for nethuis.nl.

nethuis.nl uses DNSSEC. To apply DNSSEC I used dnssec-tools, which gives you tools like zonesigner, rollerd and donuts to sign, roll and check your DNSSEC enabled zones. Two years ago I had a hard time setting this up, hitting various bugs in dnssec-tools 1.13-1 from Debian Wheezy. I ended up running a quite stable setup after packaging dnssec-tools 1.14 and using a patched version of zonesigner that didn’t increase the serial of the zone.

While installing the same setup on Debian Jessie, I noticed that dnssec-tools wasn’t in Jessie because of a bug in rollerd. I decided to install the dnssec-tools 1.14 package I had used before on Debian Wheezy. This all seemed fine until I received this email from my daily donuts run:

undefined method Net::DNS::RR::new_from_hash at /usr/lib/x86_64-linux-gnu/perl5/5.20/Net/DNS/RR.pm line 791.
Net::DNS::RR::AUTOLOAD("Net::DNS::RR", "rname", "hostmaster.nethuis.nl", "serial", 2014081039, "class", "IN", "expire", 1814400, ...) called at /usr/share/perl5/Net/DNS/ZoneFile/Fast.pm line 201
Net::DNS::ZoneFile::Fast::parse("file", "nethuis.nl.signed", "origin", "nethuis.nl.", "soft_errors", 1, "on_error", CODE(0x4ec0698)) called at /usr/sbin/donuts line 338

This thread indicated there are more related issues in the dnssec-tools package.

Time to re-evaluate. Debian Jessie is frozen, dnssec-tools didn’t get in, and there is not much conversation going on in bug report #754704, which kicked dnssec-tools out of testing. I also can’t update the signed zones as long as this is broken, and the current signed zone is only valid for another 3 weeks. 😥

OpenDNSSEC looked like an alternative. I could have also used the tools that come with bind9 to sign, roll and check my zones. But I liked to try something new, PowerDNS.

# apt-get install pdns-server

As a previous bind9 user, the easiest way was to put all zone configuration from my original named.conf in /etc/powerdns/bindbackend.conf. I was amazed. It just worked. 😀
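
The bind backend understands plain named.conf syntax, so an entry in /etc/powerdns/bindbackend.conf looks something like this (the zone file path is an example):

zone "nethuis.nl" {
    type master;
    file "/etc/powerdns/nethuis.nl.signed";
};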

The nethuis.nl zone was still a pre-signed DNSSEC zone. While reading the PowerDNS documentation I found out that PowerDNS is able to do “Front-signing”, which is an amazing feature. PowerDNS does the signing part on-the-fly. There is no need to re-sign the zone every time you make a change to the zone.

First of all I changed the filename in /etc/powerdns/bindbackend.conf to the unsigned one. After that I created a database to manage the DNSSEC keys, added a line to the PowerDNS configuration to use this database and restarted PowerDNS.

# pdnssec create-bind-db /var/lib/powerdns/bind-dnssec-db.sqlite3
# echo "bind-dnssec-db=/var/lib/powerdns/bind-dnssec-db.sqlite3" >> /etc/powerdns/pdns.d/pdns.simplebind.conf
# systemctl restart pdns

I liked to keep the KSK and ZSKs I was already using for my zone, so I imported those.

# pdnssec import-zone-key nethuis.nl Knethuis.nl.+008+00754.private KSK
# pdnssec import-zone-key nethuis.nl Knethuis.nl.+008+43743.private ZSK
# pdnssec import-zone-key nethuis.nl Knethuis.nl.+008+63186.private ZSK
# pdnssec deactivate-zone-key nethuis.nl 3
# pdnssec rectify-zone nethuis.nl
# dig +short +dnssec nethuis.nl SOA
ns1.nethuis.nl. hostmaster.nethuis.nl. 2014081039 28800 3600 1814400 600
SOA 8 2 600 20150115000000 20141225000000 43743 nethuis.nl. lqH6nrHf6YPcLv2TgQgC4gOI4gOGORsmfj/LDJAhu+GpWpiFTnQGtj08 I2TocYQ0jwkoar370quZyvKNAyjTBGNUw6rOxdjbxAn8DhMpBPi7TMfq PP7NXJLkxbx2aIW9r1C0iMk5WAYbi01bEsJY014WiX+s+QdRDPwWaanZ zFI=

That’s it. I’m really happy PowerDNS integrated DNSSEC into its product instead of shipping an additional toolset to manage DNSSEC pre-signed zones.

Update

On January 19th, 20:39:59 UTC, it got completely out of hand. The images below from dnsviz.net showed me the nethuis.nl zone was expired on all the Authoritative DNS slaves.

[image: nethuis.nl-dnssec-issues]

Hovering with my mouse over the purple lines showed me the expired status:

[image: nethuis.nl-dnssec-issues2]

While the nethuis.nl zone hosted on the Authoritative DNS master was completely fine:

[image: nethuis.nl-dnssec-issues3]

What was going on here? 😕

It was clear that the slaves didn’t transfer the zone after it was re-signed by the Authoritative DNS master. According to RFC 1996 the serial in the SOA record must be increased if you want the Authoritative DNS slaves to update their zones, and that was clearly not happening in my case.

I found the SOA-EDIT setting. My current SERIAL is configured in the YYYYMMDDSS format, so I configured the SOA-EDIT setting to use INCEPTION-INCREMENT.

# pdnssec set-meta nethuis.nl SOA-EDIT INCEPTION-INCREMENT

This overrules the SERIAL that is configured in the on-disk zone file. Every Thursday, after the zone is re-signed, the SERIAL is automatically increased and all Authoritative DNS slaves will transfer the new zone.

Linux bcache SSD caching statistics using collectd

In October 2012 I started using bcache as an SSD caching solution for my Debian Linux server, and I’ve been very happy with it so far. Back then I used a manually compiled 3.2 Linux kernel based on the bcache-3.2 git branch provided by Kent (which has since been removed). This patch needed to be applied to make bcache work with grsecurity. I also created a Debian package of the bcache-tools userspace tools to be able to create the bcache setup.

At the start of this year I moved to a 3.12 kernel, also manually compiled. It’s quite a relief that bcache has been included in mainline since the 3.10 kernel. 🙂

This is my setup:

  1. 500GB backing device – 20GB caching device (qcow2 images)
  2. 1.3TB backing device – 36GB caching device (file storage)

The past year I’ve definitely noticed the performance difference using bcache. But I was still curious about when and how bcache uses the attached SSD. Is it using the write-back cache a lot? How often can bcache read its data from the SSD cache instead of accessing the HDD?

I created a python script to collect all kinds of bcache statistics (parts of the code in this script are copied from bcache-status). This script outputs the statistics to STDOUT in a collectd exec plugin compatible way. The collectd exec plugin can be configured in collectd.conf this way:

<Plugin exec>
    Exec "user:group" "/path/to/collectd-bcache"
</Plugin>
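
The exec plugin expects the script to print metrics on STDOUT using collectd’s plain-text protocol, one PUTVAL line per value. A hypothetical example of such a line (the identifier depends on how the script names things):

PUTVAL "myhost/bcache-sdc/cache_ratio" interval=10 N:98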

To visualize the collected data I created a bcache plugin for CGP. This is the result:

[images: bcache-cache-hit-ratio, bcache-access, bcache-usage, bcache-bypassed]

Write-back to HDD throttled

At some point I noticed that, in my case, flushing data from the write-back cache to the HDD was somehow rate-limited to ~3 MB/s. You can see this nicely in these graphs:

[images: bcache-dirty-data, bcache-throughput]

These threads on the bcache mailing list mention the same thing.

Kent explained that this is managed by the PD controller in bcache. The PD controller has been rewritten in the 3.13 Linux kernel, so I’m very interested to see whether this behavior has changed. I haven’t upgraded my kernel to 3.13 yet because I’m very cautious about it; a lot of development is still going on in the bcache project. But I’m looking forward to upgrading to 3.13, 3.14 or probably 3.15.

collectd-bcache: github.com/pommi/collectd-bcache
CGP bcache plugin: github.com/pommi/CGP/…/bcache.json

Mendix shipped in a Docker container

Imagine… Imagine if you could set up a new Mendix hosting environment in seconds, anywhere. A lightweight, secure and isolated environment where you just have to talk to a RESTful API to deploy your MDA (Mendix Deployment Archive) and start your App.

Since the 2nd quarter of this year a great piece of software has become very popular that helps achieve this goal: Docker. Docker provides a high-level API on top of Linux Containers (LXC), a lightweight virtualization solution that runs processes in isolation.

[image: Mendix on Docker]

tl;dr

Run a Mendix App in a Docker container in seconds:

root@host:~# docker run -d mendix/mendix
root@host:~# curl -XPOST -F model=@project.mda http://172.17.0.5:5000/upload/
File uploaded.
root@host:~# curl -XPOST http://172.17.0.5:5000/unpack/
Runtime downloaded and Model unpacked.
root@host:~# curl -XPOST -d "DatabaseHost=172.17.0.4:5432" -d "DatabaseUserName=docker" -d "DatabasePassword=docker" -d "DatabaseName=docker" http://172.17.0.5:5000/config/
Config set.
root@host:~# curl -XPOST http://172.17.0.5:5000/start/
App started. (Database updated)
root@host:~#

Docker

There has been a lot of buzz around Docker since its start in March 2013. Being able to create an isolated environment once, package it up, and run it everywhere makes it very exciting. Docker provides easy-to-use features like Filesystem isolation, Resource isolation, Network isolation, Copy-on-write, Logging, Change management and more.

For more details about Docker, please read “The whole story”. We’d like to go on with the fun stuff.

Mendix on Docker

Once a month a so-called FedEx Day (Research Day, ShipIt Day, Hackathon) is organized at Mendix. On that day Mendix developers have the freedom to work on whatever they want. We played with Docker a couple of Research Days ago, just to see how it works, that kind of stuff. But this time we really wanted to create something we’d potentially use in production: a proof of concept of running Mendix on Docker.

The plan:

  1. Create a Docker Container containing all software to run Mendix
  2. Create a RESTful API to upload, start and stop a Mendix App within that container

What about the database, you may be wondering? We’ll just use a Docker container that provides us a PostgreSQL service! You can also build your own PostgreSQL container or use an existing PostgreSQL server in your network.

Start off with an image:

[image: mendix-docker]

This is what we are building. A Docker container containing:

  • All required software to run a Mendix App, like the Java Runtime Environment and the m2ee library
  • A RESTful API (m2ee-api) to upload, start and stop an App (listening on port 5000)
  • A webserver (nginx), to serve static content and proxy App paths to the Mendix runtime (listening on port 7000)
  • When an App is deployed the Mendix runtime will be listening on port 8000 locally

Building the base container

Before we can start to install the software, we need a base image: a minimal install of an operating system like Debian GNU/Linux, Ubuntu, Red Hat, CentOS or Fedora. You could download a base container from the Docker Index, but because this is so basic and we’d like to create a Mendix container we can trust 100% (a 3rd-party base image could contain backdoors), we created one ourselves.

A Debian GNU/Linux Wheezy image:

debootstrap wheezy wheezy http://cdn.debian.net/debian
tar -C wheezy -c . | docker import - mendix/wheezy

That’s all! Let’s show the image we’ve just created:

root@host:~# docker images
REPOSITORY       TAG       IMAGE ID       CREATED           VIRTUAL SIZE
mendix/wheezy    latest    1bee0c7b9ece   6 seconds ago     218.6 MB
root@host:~#

Building the Mendix container

On top of the base image we just created, we can start to install all required software to run Mendix. Creating a Docker container can be done using a Dockerfile. It contains all instructions to provision the container and information like what network ports to expose and what executable to run (by default) when you start using the container.

There is an extensive manual available about how to run Mendix on GNU/Linux. We’ve used this to create our Dockerfile. This Dockerfile also installs files like /home/mendix/.m2ee/m2ee.yaml, /home/mendix/nginx.conf and /etc/apt/sources.list. They must be in your current working directory when running the docker build command. All files have been published to GitHub.

To create the Mendix container run:

docker build -t mendix/mendix .

That’s it! We’ve created our own Docker container! Let’s show it:

root@host:~# docker images
REPOSITORY       TAG       IMAGE ID       CREATED           VIRTUAL SIZE
mendix/mendix    latest    c39ee75463d6   10 seconds ago    589.6 MB
mendix/wheezy    latest    1bee0c7b9ece   3 minutes ago     218.6 MB
root@host:~#

Our container has been published to the Docker Index: mendix/mendix

The RESTful API

When you look at the Dockerfile, it shows you it’ll start the m2ee-api on startup. This API will listen on port 5000 and currently supports a limited set of actions:

GET  /about/        # about m2ee-api
GET  /status/       # app status
GET  /config/       # show configuration
POST /config/       # set configuration
POST /upload/       # upload a new MDA
POST /unpack/       # unpack the uploaded MDA
POST /start/        # start the app
POST /stop/         # stop the running app
POST /terminate/    # terminate the running app
POST /kill/         # kill the running app
POST /emptydb/      # empty the database

Usage

Now that we’ve created the container and published it to the Docker Index, we can start using it. And not only us: everyone can!

Pull the container and start it.

root@host:~# docker pull mendix/mendix
Pulling repository mendix/mendix
c39ee75463d6: Download complete
eaea3e9499e8: Download complete
...
855acec628ec: Download complete
root@host:~# docker run -d mendix/mendix
bd7964940dfc61449da79cddd1c0e8845d61f6ec1092b466e8e2e582726a5eea
root@host:~# docker ps
CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS              PORTS                NAMES
bd7964940dfc        mendix/mendix:latest       /bin/su mendix -c /u   19 seconds ago      Up 18 seconds       5000/tcp, 7000/tcp   tender_hawkings
root@host:~# docker inspect bd7964940dfc | grep IPAddress | awk '{ print $2 }' | tr -d ',"'
172.17.0.5
root@host:~#

In this container the RESTful API started and is now listening on port 5000. We can for example ask for its status or show its configuration.

root@host:~# curl -XGET http://172.17.0.5:5000/status/
The application process is not running.
root@host:~# curl -XGET http://172.17.0.5:5000/config/
{
"DatabaseHost": "127.0.0.1:5432",
"DTAPMode": "P",
"MicroflowConstants": {},
"BasePath": "/home/mendix",
"DatabaseUserName": "mendix",
"DatabasePassword": "mendix",
"DatabaseName": "mendix",
"DatabaseType": "PostgreSQL"
}
root@host:~#

To run an App in this container, we first need a database server. Pull a PostgreSQL container from the Docker Index and start it.

root@host:~# docker pull zaiste/postgresql
Pulling repository zaiste/postgresql
0e66fd3d6a6f: Download complete
27cf78414709: Download complete
...
046559147c70: Download complete
root@host:~# docker run -d zaiste/postgresql
9ba56a7c4bb132ef0080795294a077adca46eaca5738b192d2ead90c16ac2df2
root@host:~# docker ps
CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS              PORTS                NAMES
9ba56a7c4bb1        zaiste/postgresql:latest   /bin/su postgres -c    22 seconds ago      Up 21 seconds       5432/tcp             jolly_darwin
bd7964940dfc        mendix/mendix:latest       /bin/su mendix -c /u   30 seconds ago      Up 29 seconds       5000/tcp, 7000/tcp   tender_hawkings
root@host:~# docker inspect 9ba56a7c4bb1 | grep IPAddress | awk '{ print $2 }' | tr -d ',"'
172.17.0.4
root@host:~#

Now configure Mendix to use this database server.

root@host:~# curl -XPOST -d "DatabaseHost=172.17.0.4:5432" -d "DatabaseUserName=docker" -d "DatabasePassword=docker" -d "DatabaseName=docker" http://172.17.0.5:5000/config/
Config set.
root@host:~# curl -XGET http://172.17.0.5:5000/config/
{
"DatabaseHost": "172.17.0.4:5432",
"DTAPMode": "P",
"MicroflowConstants": {},
"BasePath": "/home/mendix",
"DatabaseUserName": "docker",
"DatabasePassword": "docker",
"DatabaseName": "docker",
"DatabaseType": "PostgreSQL"
}
root@host:~#

Upload, unpack and start an MDA:

root@host:~# curl -XPOST -F model=@project.mda http://172.17.0.5:5000/upload/
File uploaded.
root@host:~# curl -XPOST http://172.17.0.5:5000/unpack/
Runtime downloaded and Model unpacked.
root@host:~# # set config after unpack (unpack will overwrite your config)
root@host:~# curl -XPOST -d "DatabaseHost=172.17.0.4:5432" -d "DatabaseUserName=docker" -d "DatabasePassword=docker" -d "DatabaseName=docker" http://172.17.0.5:5000/config/
Config set.
root@host:~# curl -XPOST http://172.17.0.5:5000/start/
App started. (Database updated)
root@host:~#

Check if the application is running:

root@host:~# curl -XGET http://172.17.0.5:7000/
-- a lot of html --
root@host:~# curl -XGET http://172.17.0.5:7000/xas/
-- a lot of html --
root@host:~#

Great success! We’ve deployed our Mendix App in a completely new environment in seconds.

Reflection

Docker is a very powerful tool to deploy lightweight, secure and isolated environments. The addition of a RESTful API makes it very easy to deploy and start Apps.

One of the limitations after finishing this proof of concept is that the App isn’t reachable from the outside world. Docker’s port redirection feature can be used for that. To run more Mendix containers on one host there must be some kind of orchestrator on the Docker host that administrates the containers and keeps track of what is running where.
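
For example, Docker’s port redirection can expose the container’s webserver port on the host when starting the container:

docker run -d -p 7000:7000 mendix/mendix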

The RESTful API provides a limited set of features in comparison with m2ee-tools. When you start your App using m2ee-tools and your database already contains data, the CLI will kindly ask you what to do. Currently the m2ee-api will just try to upgrade the database schema if needed and start the App without notice.