Linux bcache SSD caching statistics using collectd

In October 2012 I started using bcache as an SSD caching solution for my Debian Linux server. I’ve been very happy about it so far. Back then I used a manually compiled 3.2 Linux kernel based on the bcache-3.2 git branch provided by Kent (which has been removed). This patch needed to be applied to make bcache work with grsecurity. I also created a Debian package of the bcache-tools userspace tools to be able to create the bcache setup.

At the start of this year I moved to a 3.12 kernel, also manually compiled. It's quite a relief that bcache has been included in mainline since the 3.10 kernel. :)

This is my setup:

  1. 500GB backing device – 20GB caching device (qcow2 images)
  2. 1.3TB backing device – 36GB caching device (file storage)

Over the past year I've definitely noticed the performance difference using bcache. But I was still curious about when and how bcache was using the attached SSD. Is it using the write-back cache a lot? How many times can bcache read its data from the SSD cache instead of accessing the HDD?

I created a Python script to collect all kinds of bcache statistics (parts of the code in this script are copied from bcache-status). The script writes the statistics to STDOUT in a format the collectd exec plugin understands. The exec plugin can be configured in collectd.conf like this:

<Plugin exec>
Exec "user:group" "/path/to/collectd-bcache"
</Plugin>
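
For reference, this is roughly what such an exec script has to print. A minimal sketch (not the actual collectd-bcache script), assuming a single /dev/bcache0 device and reading a few raw counters from sysfs:

#!/bin/sh
# Minimal collectd exec script sketch: read bcache counters from sysfs and
# print them in the PUTVAL format the exec plugin expects.
# COLLECTD_HOSTNAME and COLLECTD_INTERVAL are set by the exec plugin.
HOST="${COLLECTD_HOSTNAME:-$(hostname -f)}"
INTERVAL="${COLLECTD_INTERVAL:-60}"
SYSFS=/sys/block/bcache0/bcache

while sleep "$INTERVAL"; do
    hits=$(cat "$SYSFS/stats_total/cache_hits")
    misses=$(cat "$SYSFS/stats_total/cache_misses")
    ratio=$(cat "$SYSFS/stats_total/cache_hit_ratio")
    echo "PUTVAL \"$HOST/bcache-bcache0/derive-cache_hits\" interval=$INTERVAL N:$hits"
    echo "PUTVAL \"$HOST/bcache-bcache0/derive-cache_misses\" interval=$INTERVAL N:$misses"
    echo "PUTVAL \"$HOST/bcache-bcache0/gauge-cache_hit_ratio\" interval=$INTERVAL N:$ratio"
done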

To visualize the collected data I created a bcache plugin for CGP. This is the result:

[Graphs: bcache cache hit ratio, cache access, cache usage, bypassed traffic]

Write-back to HDD throttled

At some point I noticed that, in my case, flushing data from the write-back cache to the HDD was somehow rate-limited to ~3 MB/s. You can see this nicely in these graphs:

[Graphs: bcache dirty data, bcache throughput]

These threads on the bcache mailing list mention the same thing:

Kent explained that this is managed by the PD controller in bcache. The PD controller has been rewritten in the 3.13 Linux kernel, so I'm very interested to see whether this behavior has changed. I haven't upgraded my kernel to 3.13 yet because I'm very cautious about it; there is still a lot of development going on in the bcache project. But I'm looking forward to upgrading to 3.13, 3.14 or probably 3.15.
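
For anyone who wants to check this on their own machine: the rate the PD controller settles on is visible in sysfs. A quick sketch (sda[X] being the backing-device partition; file names may differ slightly between kernel versions):

# current background writeback rate limit chosen by the PD controller
cat /sys/block/sda/sda[X]/bcache/writeback_rate
# percentage of dirty data the controller tries to keep in the cache
cat /sys/block/sda/sda[X]/bcache/writeback_percent
# detailed controller state (rate, dirty data, target, ...)
cat /sys/block/sda/sda[X]/bcache/writeback_rate_debug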

collectd-bcache: github.com/pommi/collectd-bcache
CGP bcache plugin: github.com/pommi/CGP/…/bcache.php

Mendix shipped in a Docker container

Imagine… Imagine if you could set up a new Mendix hosting environment in seconds, anywhere. A lightweight, secure and isolated environment where you just have to talk to a RESTful API to deploy your MDA (Mendix Deployment Archive) and start your App.

Since the second quarter of this year, a great piece of software has become very popular that helps achieve this goal: Docker. Docker provides a high-level API on top of Linux Containers (LXC), which provides a lightweight virtualization solution that runs processes in isolation.

Mendix on Docker

tl;dr

Run a Mendix App in a Docker container in seconds:

root@host:~# docker run -d mendix/mendix
root@host:~# curl -XPOST -F model=@project.mda http://172.17.0.5:5000/upload/
File uploaded.
root@host:~# curl -XPOST http://172.17.0.5:5000/unpack/
Runtime downloaded and Model unpacked.
root@host:~# curl -XPOST -d "DatabaseHost=172.17.0.4:5432" -d "DatabaseUserName=docker" -d "DatabasePassword=docker" -d "DatabaseName=docker" http://172.17.0.5:5000/config/
Config set.
root@host:~# curl -XPOST http://172.17.0.5:5000/start/
App started. (Database updated)
root@host:~#

Docker

There has been a lot of buzz around Docker since its start in March 2013. Being able to create an isolated environment once, package it up, and run it anywhere makes it very exciting. Docker provides easy-to-use features like filesystem isolation, resource isolation, network isolation, copy-on-write, logging, change management and more.
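
Most of these features are a single docker run flag away. Purely as an illustration (flags as they exist in current Docker releases; the exact options may differ per version):

# resource isolation: cap the memory and the relative CPU share of a container
docker run -d -m 512m -c 512 mendix/mendix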

For more details about Docker, please read “The whole story”. We’d like to go on with the fun stuff.

Mendix on Docker

Once a month a so-called FedEx Day (Research Day, ShipIt Day, hackathon) is organized at Mendix. On that day, Mendix developers have the freedom to work on whatever they want. We played with Docker a couple of Research Days ago, just to see how it works, that kind of stuff. But this time we really wanted to create something we'd potentially use in production: a proof of concept of how to run Mendix on Docker.

The plan:

  1. Create a Docker Container containing all software to run Mendix
  2. Create a RESTful API to upload, start and stop a Mendix App within that container

What about the database, you may be wondering? We'll just use a Docker container that provides us with a PostgreSQL service! You can also build your own PostgreSQL container or use an existing PostgreSQL server in your network.

Start off with an image:

[Diagram: Mendix on Docker]

This is what we are building. A Docker container containing:

  • All required software to run a Mendix App, like the Java Runtime Environment and the m2ee library
  • A RESTful API (m2ee-api) to upload, start and stop an App (listening on port 5000)
  • A webserver (nginx), to serve static content and proxy App paths to the Mendix runtime (listening on port 7000)
  • When an App is deployed the Mendix runtime will be listening on port 8000 locally

Building the base container

Before we can start to install the software, we need a base image: a minimal install of an operating system like Debian GNU/Linux, Ubuntu, Red Hat, CentOS, Fedora, etc. You could download a base container from the Docker Index. But because this is so basic, and because we'd like to create a Mendix container we can trust 100% (a third-party base image could contain backdoors), we created one ourselves.

A Debian GNU/Linux Wheezy image:

debootstrap wheezy wheezy http://cdn.debian.net/debian
tar -C wheezy -c . | docker import - mendix/wheezy

That’s all! Let’s show the image we’ve just created:

root@host:~# docker images
REPOSITORY       TAG       IMAGE ID       CREATED           VIRTUAL SIZE
mendix/wheezy    latest    1bee0c7b9ece   6 seconds ago     218.6 MB
root@host:~#

Building the Mendix container

On top of the base image we just created, we can start to install all required software to run Mendix. Creating a Docker container can be done using a Dockerfile. It contains all instructions to provision the container and information like what network ports to expose and what executable to run (by default) when you start using the container.

There is an extensive manual available about how to run Mendix on GNU/Linux. We’ve used this to create our Dockerfile. This Dockerfile also installs files like /home/mendix/.m2ee/m2ee.yaml, /home/mendix/nginx.conf and /etc/apt/sources.list. They must be in your current working directory when running the docker build command. All files have been published to GitHub.

To create the Mendix container run:

docker build -t mendix/mendix .

That’s it! We’ve created our own Docker container! Let’s show it:

root@host:~# docker images
REPOSITORY       TAG       IMAGE ID       CREATED           VIRTUAL SIZE
mendix/mendix    latest    c39ee75463d6   10 seconds ago    589.6 MB
mendix/wheezy    latest    1bee0c7b9ece   3 minutes ago     218.6 MB
root@host:~#

Our container has been published to the Docker Index: mendix/mendix

The RESTful API

When you look at the Dockerfile, it shows you it’ll start the m2ee-api on startup. This API will listen on port 5000 and currently supports a limited set of actions:

GET  /about/        # about m2ee-api
GET  /status/       # app status
GET  /config/       # show configuration
POST /config/       # set configuration
POST /upload/       # upload a new MDA
POST /unpack/       # unpack the uploaded MDA
POST /start/        # start the app
POST /stop/         # stop the running app
POST /terminate/    # terminate the running app
POST /kill/         # kill the running app
POST /emptydb/      # empty the database

Usage

Now that we've created the container and published it to the Docker Index we can start using it. And not just us; everyone can!

Pull the container and start it.

root@host:~# docker pull mendix/mendix
Pulling repository mendix/mendix
c39ee75463d6: Download complete
eaea3e9499e8: Download complete
...
855acec628ec: Download complete
root@host:~# docker run -d mendix/mendix
bd7964940dfc61449da79cddd1c0e8845d61f6ec1092b466e8e2e582726a5eea
root@host:~# docker ps
CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS              PORTS                NAMES
bd7964940dfc        mendix/mendix:latest       /bin/su mendix -c /u   19 seconds ago      Up 18 seconds       5000/tcp, 7000/tcp   tender_hawkings
root@host:~# docker inspect bd7964940dfc | grep IPAddress | awk '{ print $2 }' | tr -d ',"'
172.17.0.5
root@host:~#

In this container the RESTful API started and is now listening on port 5000. We can for example ask for its status or show its configuration.

root@host:~# curl -XGET http://172.17.0.5:5000/status/
The application process is not running.
root@host:~# curl -XGET http://172.17.0.5:5000/config/
{
"DatabaseHost": "127.0.0.1:5432",
"DTAPMode": "P",
"MicroflowConstants": {},
"BasePath": "/home/mendix",
"DatabaseUserName": "mendix",
"DatabasePassword": "mendix",
"DatabaseName": "mendix",
"DatabaseType": "PostgreSQL"
}
root@host:~#

To run an App in this container, we first need a database server. Pull a PostgreSQL container from the Docker Index and start it.

root@host:~# docker pull zaiste/postgresql
Pulling repository zaiste/postgresql
0e66fd3d6a6f: Download complete
27cf78414709: Download complete
...
046559147c70: Download complete
root@host:~# docker run -d zaiste/postgresql
9ba56a7c4bb132ef0080795294a077adca46eaca5738b192d2ead90c16ac2df2
root@host:~# docker ps
CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS              PORTS                NAMES
9ba56a7c4bb1        zaiste/postgresql:latest   /bin/su postgres -c    22 seconds ago      Up 21 seconds       5432/tcp             jolly_darwin
bd7964940dfc        mendix/mendix:latest       /bin/su mendix -c /u   30 seconds ago      Up 29 seconds       5000/tcp, 7000/tcp   tender_hawkings
root@host:~# docker inspect 9ba56a7c4bb1 | grep IPAddress | awk '{ print $2 }' | tr -d ',"'
172.17.0.4
root@host:~#

Now configure Mendix to use this database server.

root@host:~# curl -XPOST -d "DatabaseHost=172.17.0.4:5432" -d "DatabaseUserName=docker" -d "DatabasePassword=docker" -d "DatabaseName=docker" http://172.17.0.5:5000/config/
Config set.
root@host:~# curl -XGET http://172.17.0.5:5000/config/
{
"DatabaseHost": "172.17.0.4:5432",
"DTAPMode": "P",
"MicroflowConstants": {},
"BasePath": "/home/mendix",
"DatabaseUserName": "docker",
"DatabasePassword": "docker",
"DatabaseName": "docker",
"DatabaseType": "PostgreSQL"
}
root@host:~#

Upload, unpack and start an MDA:

root@host:~# curl -XPOST -F model=@project.mda http://172.17.0.5:5000/upload/
File uploaded.
root@host:~# curl -XPOST http://172.17.0.5:5000/unpack/
Runtime downloaded and Model unpacked.
root@host:~# # set config after unpack (unpack will overwrite your config)
root@host:~# curl -XPOST -d "DatabaseHost=172.17.0.4:5432" -d "DatabaseUserName=docker" -d "DatabasePassword=docker" -d "DatabaseName=docker" http://172.17.0.5:5000/config/
Config set.
root@host:~# curl -XPOST http://172.17.0.5:5000/start/
App started. (Database updated)
root@host:~#

Check if the application is running:

root@host:~# curl -XGET http://172.17.0.5:7000/
-- a lot of html --
root@host:~# curl -XGET http://172.17.0.5:7000/xas/
-- a lot of html --
root@host:~#

Great success! We’ve deployed our Mendix App in a completely new environment in seconds.

Reflection

Docker is a very powerful tool to deploy lightweight, secure and isolated environments. The addition of a RESTful API makes it very easy to deploy and start Apps.

One of the limitations after finishing this is that the App isn't reachable from the outside world; Docker's port redirection feature can be used for that. To run more Mendix containers on one host, there must be some kind of orchestrator on the Docker host that administers the containers and keeps track of what is running where.
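
As a sketch of that port redirection (not part of this proof of concept): the container's nginx port can be published on the Docker host when starting it.

# map port 80 on the Docker host to port 7000 (nginx) inside the container
docker run -d -p 80:7000 mendix/mendix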

The RESTful API provides a limited set of features in comparison with m2ee-tools. When you start your App using m2ee-tools and your database already contains data, the CLI will kindly ask you what to do. Currently the m2ee-api will just try to upgrade the database schema if needed and start the App without notice.

Collectd Graph Panel v0.4

After 2.5 years and about 100 commits, I've tagged version 0.4 of Collectd Graph Panel.

This version includes a new interface with a sidebar for plugin selection.

The JavaScript library jsrrdgraph has been integrated. Graphs will be rendered in the browser using JavaScript and an HTML5 canvas by setting the "graph_type" configuration option to "canvas". This saves a lot of CPU power on the server. jsrrdgraph has some nice features: when rendered, you can move through time by dragging the graph left or right, and zoom in and out by scrolling on the graph.

Demo:

The Collectd compatibility setting has been changed to Collectd 5. If you're still using Collectd 4, please set the "version" configuration setting to "4", otherwise the graphs of a couple of plugins (like interface, df and users) won't be shown correctly.

In this version of CGP, total values have been added to the legend of I/O graphs, and colors are now generated from a rainbow palette instead of 9 predefined colors. Please read the changelog or git log for more information about the changes.

New plugins:

Special thanks for this version go to Manuel Luis, who developed jsrrdgraph, xian310 for the new interface, Manuel CISSÉ, Rohit Bhute, Matthias Viehweger, Erik Grinaker, Peter Chiochetti, Karol Nowacki, Aurélien Rougemont, Benjamin Dupuis, yur, Philipp Hellmich, Jonathan Huot, Neptune Ning and Nikoli for their contributions.

I've been using GitHub for a while now. You can download or check out this version of CGP from the GitHub URLs below. If you have improvements or fixes for CGP, don't hesitate to send in a Pull Request on GitHub!

v0.4.1 update

I just removed the dependency on mod_rewrite when using jsrrdgraph to draw the graphs. This may solve the JavaScript error: Invalid RRD: "Wrong magic id." undefined.

GitHub: https://github.com/pommi/CGP
Download: https://github.com/pommi/CGP/archive/v0.4.1.zip
Git: git clone https://github.com/pommi/CGP.git

SSD caching using Linux and bcache

A couple of manufacturers are selling solutions to speed up your big HDD using a relatively small SSD. There are techniques like:

But there is also a lot of development on the Linux front. There is Flashcache developed by Facebook, dm-cache by VISA and bcache by Kent Overstreet. The last one is interesting, because it’s a patch on top of the Linux kernel and will hopefully be accepted upstream some day.

Hardware setup

In my low power home server I use a 2TB Western Digital Green disk (5400 RPM). To give bcache a try I bought a 60GB Intel 330 SSD. Some facts about these drives: the 2TB WD can do about 110 MB/s of sequential reads/writes, and this traditional HDD does about 100 random operations per second. The 60GB Intel 330 can sequentially read about 500 MB/s and write about 450 MB/s; random reads are done at about 42,000 operations per second, random writes at about 52,000. The SSD is much faster!

The image below shows the idea of SSD caching. Frequently accessed data is cached on the SSD to gain better read performance. Writes can be cached on the SSD using the writeback mechanism.

Prepare Linux kernel and userspace software

To be able to use bcache, two things are needed:

  1. A bcache patched kernel
  2. bcache-tools for commands like make-bcache and probe-bcache

I used the latest available 3.2 kernel. The bcache-3.2 branch from Kent’s git repo merged successfully. Don’t forget to enable the BCACHE module before compiling.

On my low power home server I use Debian. Since there is no bcache-tools Debian package available yet, I created my own. Fortunately damoxc already packaged bcache-tools for Ubuntu once.

Debian package: pommi.nethuis.nl/…/bcache/
Git web: http://git.nethuis.nl/?p=bcache-tools.git;a=summary
Git: http://git.nethuis.nl/pub/bcache-tools.git

Bcache setup

Unfortunately bcache isn't plug-and-play. You can't use bcache with an existing formatted partition. First you have to create a caching device (SSD) and a backing device (HDD) on top of two existing devices. Those devices can be attached to each other to create a /dev/bcache0 device, which can then be formatted with your favourite filesystem. Creating a caching and a backing device is necessary because this is a software implementation: bcache needs to know what is going on, for example which devices to attach to each other at boot. The commands for this procedure are shown in the image below.
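
In outline, the procedure looks like this. A sketch with illustrative device names (/dev/sda2 as the HDD partition, /dev/sdb1 as the SSD partition); the exact output and UUIDs will differ:

# format the HDD partition as backing device and the SSD partition as caching device
make-bcache -B /dev/sda2
make-bcache -C /dev/sdb1        # prints the cache set UUID

# register both with the kernel (normally done by udev/probe-bcache at boot)
echo /dev/sda2 > /sys/fs/bcache/register
echo /dev/sdb1 > /sys/fs/bcache/register

# attach the caching device to the backing device using the cache set UUID
echo [SSD bcache UUID] > /sys/block/sda/sda2/bcache/attach

# /dev/bcache0 now exists: format and mount it like any other block device
mkfs.ext4 /dev/bcache0
mount /dev/bcache0 /mnt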

After this I had a working SSD caching setup. Frequently used data is stored on the SSD. Accessing and reading frequently used files is much faster now. By default bcache uses writethrough caching, which means that only reads are cached. Writes are being written directly to the backing device (HDD).

To speed up writes you have to enable writeback caching. But bear in mind that there is a risk of losing data when using a writeback cache, for example when there is a power failure or when the SSD dies. Bcache uses a fairly simple journalling mechanism on the caching device; in case of a power failure bcache will try to recover the data, but there is a chance you will end up with corruption.

echo writeback > /sys/block/sda/sda[X]/bcache/cache_mode

When writes are cached on the caching device, the cache is called dirty. The cache is clean again when all cached writes have been written to the backing device. You can check the state of the writeback cache via:

cat /sys/block/sda/sda[X]/bcache/state

To detach the caching device from the backing device run the command below (/dev/bcache0 will still be available). This can take a while when the write cache contains dirty data, because it must be written to the backing device first.

echo 1 > /sys/block/sda/sda[X]/bcache/detach

Attach the caching device again (or attach another caching device):

echo [SSD bcache UUID] > /sys/block/sda/sda[X]/bcache/attach

Unregister the caching device (can be done with or without detaching) (/dev/bcache0 will still be available because of the backing device):

echo 1 > /sys/fs/bcache/[SSD bcache UUID]/unregister

Register the caching device again (or register another caching device):

echo /dev/sdb[Y] > /sys/fs/bcache/register

Attach the caching device:

echo [SSD bcache UUID] > /sys/block/sda/sda[X]/bcache/attach

Stop the backing device (after unmounting /dev/bcache0 it will be stopped and removed, don’t forget to unregister the caching device):

echo 1 > /sys/block/sda/sda[X]/bcache/stop

Benchmark

To benchmark this setup I used two different tools. Bonnie++ and fio – Flexible IO tester.

 

Bonnie++

Unfortunately Bonnie++ isn’t that well suited to test SSD caching setups.

This graph shows that I’m hitting the limit on sequential input and output in the HDD-only and SSD-only tests. The bcache test doesn’t show much difference to the HDD-only test in this case. Bonnie++ isn’t able to warm up the cache and all sequential writes are bypassing the write cache.

In the File metadata tests the performance improves when using bcache.

 

fio

The Flexible IO tester is much better suited to benchmarking these situations. For these tests I used the ssd-test example jobfile and modified the size parameter to 8G.
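
If I remember the example jobfile correctly, it boils down to four 4k read/write jobs (sequential and random) run back to back. Roughly the same test can be expressed directly on the command line; a sketch with an illustrative target directory, not the exact jobfile I used:

# one of the four jobs; use --rw=read, randread, write or randwrite for the others
fio --name=rand-write --directory=/mnt/test --size=8G \
    --bs=4k --ioengine=libaio --iodepth=4 --direct=1 \
    --rw=randwrite --runtime=60

The results of the actual runs: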

HDD-only:
seq-read: io=4084MB, bw=69695KB/s, iops=17423, runt= 60001msec
rand-read: io=30308KB, bw=517032B/s, iops=126, runt= 60026msec
seq-write: io=2792MB, bw=47642KB/s, iops=11910, runt= 60001msec
rand-write: io=37436KB, bw=633522B/s, iops=154, runt= 60510msec

SSD-only:
seq-read: io=6509MB, bw=110995KB/s, iops=27748, runt= 60049msec
rand-read: io=1896MB, bw=32356KB/s, iops=8088, runt= 60001msec
seq-write: io=2111MB, bw=36031KB/s, iops=9007, runt= 60001msec
rand-write: io=1212MB, bw=20681KB/s, iops=5170, runt= 60001msec

bcache:
seq-read: io=4127.9MB, bw=70447KB/s, iops=17611, runt= 60001msec
rand-read: io=262396KB, bw=4367.8KB/s, iops=1091, runt= 60076msec
seq-write: io=2516.2MB, bw=42956KB/s, iops=10738, runt= 60001msec
rand-write: io=2273.4MB, bw=38798KB/s, iops=9699, runt= 60001msec

In these tests the SSD is much faster at random operations. With bcache, random operations are done a lot faster in comparison to the HDD-only tests. It's interesting that I'm not able to hit the sequential I/O limits of the HDD and SSD in these tests. I think this is because my CPU (Intel G620) isn't powerful enough; fio does hit the I/O limits of the SSD on another machine with an Intel i5 processor.

Less CPU overhead with Qemu-KVM from Debian Wheezy

An interesting thing happened last week when I upgraded qemu-kvm from version 0.12.5 (Debian Squeeze) to 1.1.2 (Debian Wheezy). After a reboot (shutdown and start) of all my VMs, they use less CPU in total! I noticed this from the stats Collectd is collecting for me.

I'm running about 5 Debian Linux VMs on my low power home server (Intel G620, 8G DDR3, DH67CF). Most of the time the VMs are idle. As you can see in the graph below, the CPU usage dropped, in particular the System CPU usage. The Wait-IO usage is mostly from Collectd, saving all the stats.

Looking a bit further, I also noticed that the Local Timer Interrupts and Rescheduling Interrupts have halved.
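
These counters come straight from /proc/interrupts (which, if I'm not mistaken, is also where collectd's irq plugin gets them), so the change is easy to verify by hand:

# LOC = local timer interrupts, RES = rescheduling interrupts
watch -n 1 "egrep 'LOC:|RES:' /proc/interrupts"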

They’ve done a nice job at Qemu-KVM!

Testing a D-Link Green Switch

For a while now I've been monitoring the power consumption of devices in my home using a power meter from BespaarBazaar.nl (recommended by Remi). It's a good power meter because it is very precise: it starts measuring at 0.2 watts.

I needed an Ethernet switch to connect my TV, NMT and PS3 to my home network. While searching for a proper switch, I came across DLinkGreen.com. It looked promising. The Green Calculator, an 8.7 MB Flash app that uses a lot of CPU (hello D-Link! Is this green?! What about HTML5?), showed me I could save 70.98% of energy (2h use, 1-5 ports: 28.7 Wh D-Link Green vs. 99 Wh conventional per day) using D-Link's Green technology.

I couldn't find any green switches from other manufacturers, so I gave it a try and bought a D-Link DGS-1005D. It's a 5-port unmanaged Gigabit Ethernet switch, supporting IEEE 802.3az (Energy Efficient Ethernet), IEEE 802.3x (Flow Control), 9000-byte Jumbo Frames and IEEE 802.1p QoS (4 queues).

So I did some tests using the power meter. As a reference I used an HP Procurve 408 (an 8-port 100 Mbit switch). The results are displayed below.

HP Procurve 408

Ports 1-5 (in use)    Watt   Wh/day (24h at load)   Wh/day (2h load + 22h idle)   kWh/year
Adapter only           1.4    33.6                    33.6                          12.264
No ports connected     4.4   105.6                   105.6                          38.544
m                      4.9   117.6                   106.6                          38.909
m m                    5.4   129.6                   107.6                          39.274
m m m                  5.9   141.6                   108.6                          39.639
m m m m                6.4   153.6                   109.6                          40.004
m m m m m              6.8   163.2                   110.4                          40.296

D-Link DGS-1005D

Ports 1-5 (in use)    Watt   Wh/day (24h at load)   Wh/day (2h load + 22h idle)   kWh/year
Adapter only           0.0     0.0                     0.0                           0.0 :-)
No ports connected     1.1    26.4                    26.4                           9.636
g                      1.6    38.4                    27.4                          10.001
g m                    1.8    43.2                    27.8                          10.147
g m g                  2.1    50.4                    28.4                          10.366
g m g g                2.5    60.0                    29.2                          10.658
g m g g g              2.9    69.6                    30.0                          10.950
g m m g g              2.7    64.8                    29.6                          10.804
g m m m g              2.5    60.0                    29.2                          10.658
g m m m m              2.3    55.2                    28.8                          10.512

m = 100 Mbit, g = 1 Gbit

First of all it's interesting to see that the HP power adapter already uses 1.4 watts on its own. Besides that, it's nice to know that a 100 Mbit port uses less energy than a Gigabit port. The Green Calculator is quite right in my case: comparing the annual numbers (for example 38.909 vs. 10.001 kWh with one port in use), I'm saving about 72-74% of energy.

Bonnie++ to Google Chart

Bonnie++ is the perfect tool to benchmark disk I/O performance. When you want to run heavy disk-I/O-consuming applications on a server, you need to know what the disk I/O limits are to get an indication of what you can run on it. Bonnie++ gives you information about read, write and rewrite block I/O performance, and about the performance of file creation and deletion.

Below is a sample piece of Bonnie++ output. It is hard to read, and when you’ve done a couple of runs on different hosts, it’s hard to compare these results.

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
host3           16G  1028   0 94051   0 40554   0  1668   0 121874   0 380.3   0
Latency             16101us    7694ms    7790ms   13804us     124ms   76495us
Version  1.96       ------Sequential Create------ --------Random Create--------
host3               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                256 50115   0 332943   0  2338   0 51591   0 +++++ +++  1189   0
Latency               312ms    1088us   10380ms     420ms      15us   15792ms
1.96,1.96,host3,1,1329840761,16G,,1028,0,94051,0,40554,0,1668,0,121874,0,380.3,0,256,,,,,50115,0,332943,0,2338,0,51591,0,+++++,+++,1189,0,16101us,7694ms,7790ms,13804us,124ms,76495us,312ms,1088us,10380ms,420ms,15us,15792ms

Inspired by bonnie-to-chart, I've developed a tool called bonnie2gchart. With the CSV data Bonnie++ provides after a run, bonnie2gchart can visualize that information in a graph. It uses the Google Chart API to create the graphs. It's compatible with Bonnie++ 1.03 and 1.96, and it can also show the results of both versions in one graph.
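
Collecting input for it is just a matter of keeping the CSV line Bonnie++ prints at the end of each run. A sketch, with flags matching the 16G / 256-file run shown above:

# -q (quiet mode) sends the machine-readable CSV to stdout and the
# human-readable output to stderr; -m sets the machine name
bonnie++ -d /data -s 16G -n 256 -m host3 -q >> bonnie-results.csv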

[Example chart]

Demo: http://demo.nethuis.nl/bonnie2gchart/
Download: https://github.com/…/bonnie2gchart.tar.gz
Git: git://github.com/pommi/bonnie2gchart.git

OTRS the RT way

I have used RT (Request Tracker) for quite a while, starting with RT 2.0.15 + MySQL. Later I upgraded that installation manually (with a couple of nasty PHP scripts) to RT 3.6.x + PostgreSQL. It worked and stayed consistent, but I've never been very happy with RT. It now contains about 80K tickets and it is slow. I have read a lot of forum/mailing-list threads about migrating from OTRS to RT and from RT to OTRS, because one or the other is supposed to be faster, but unfortunately I haven't found a real conclusion.

While starting a new project I gave OTRS a try, simply because I think it doesn't really matter whether you use RT or OTRS. I liked the way some things worked in RT, so I created some patches to make OTRS behave a little bit like RT.

Empty body check

First of all, the body field in OTRS is required for some reason, which is annoying. For example, when a ticket is resolved and a customer replies to thank you, you just want to close the ticket; or sometimes you just want to change the owner of a ticket. This patch disables the empty body check:

Download: http://pommi.nethuis.nl/…/otrs-3.0.4-remove-empty-body-check.patch

RT-like Dashboard

In RT you are responsible for your own tickets. The RT 3.6 dashboard shows you the most recent open tickets you own, and the most recent unowned new tickets. OTRS shows all open tickets, instead of only the ones you own, and it also shows all new tickets, including the ones that are already locked.

This patch adds a new module called DashboardTicketRT. It’s a customized DashboardTicketGeneric module to make your dashboard work like RT. After you have applied the patch, you can use this module with the following steps:

  1. Login as admin
  2. Go to Sysconfig
  3. Go to Frontend::Agent::Dashboard (Group: Ticket)
  4. In TicketNew change:
    • Attributes: StateType=new; to StateType=new;OwnerIDs=1;
    • Module: Kernel::Output::HTML::DashboardTicketGeneric to Kernel::Output::HTML::DashboardTicketRT
  5. In TicketOpen change:
    • Attributes: StateType=open; to StateType=open;StateType=new;OwnerIDs=UserID;
    • Module: Kernel::Output::HTML::DashboardTicketGeneric to Kernel::Output::HTML::DashboardTicketRT
    • Title: Open Tickets / Need to be answered to My Tickets / Need to be answered
  6. Restart your server

Download: http://pommi.nethuis.nl/…/otrs-3.0.5-dashboardticketrt.patch

Instant Close / Spam

To work a little bit more efficiently, this patch adds an InstaClose and an InstaSpam link to each ticket. With a single click you can close a ticket or move it to the Junk queue. The patch adds a new module called AgentTicketInsta. You must add this code to Kernel/Config.pm to enable it:

$Self->{'Frontend::Module'}->{'AgentTicketInsta'} = {
 Group       => [ 'admin', 'users' ],
 Description => 'Instant Ticket Actions',
 Title       => 'Insta',
};

Download: http://pommi.nethuis.nl/…/otrs-3.0.4-instant-close-spam.patch

Collectd Graph Panel v0.3

It has been a while and many people were already using one of the development versions of CGP. Time to release a new version of CGP: v0.3.

Special thanks for this version go to Manuel CISSÉ, Jakob Haufe, Edmondo Tommasina, Michael Stapelberg, Julien Rottenberg and Tom Gallacher for sending me patches to improve CGP. Also thanks to everyone that replied after the previous release.

In this version there are a couple of minor improvements and some new settings to configure. But most importantly: support has been added for 13 more Collectd plugins, bringing the total CGP supports to 26. A couple of existing plugins have also been updated. Please read the changelog or git log for more information about the changes.

New plugins:

Download the .tgz package, the patch file to upgrade from v0.2 or checkout the latest version from the git repository.

Download: http://pommi.nethuis.nl/…/cgp-0.3.tgz
Patch: http://pommi.nethuis.nl/…/cgp-0.2-0.3.patch.gz
Git: http://git.nethuis.nl/pub/cgp.git

Collectd Graph Panel v0.2

Version 0.2 of Collectd Graph Panel (CGP) has been released. This version has some interesting new features and changes since version 0.1.

A new interface is introduced, based on Daniel Von Fange's styling. A little bit of web 2.0 Ajax is used to expand and collapse plugin information on the host page. The width and height of a graph are configurable, including the bigger detailed one. UnixSock flush support has been added so collectd can be told to write cached data to the RRD file before an up-to-date graph is generated. CPU support for the Linux 2.4 kernel and swap I/O support have been added, plus some changes under the hood.

Download the .tgz package, the patch file to upgrade from v0.1 or checkout the latest version from my public git repository.

[Screenshots: CGP overview page, CGP server page, CGP detailed page]

Download: http://pommi.nethuis.nl/…/cgp-0.2.tgz
Patch: http://pommi.nethuis.nl/…/cgp-0.1-0.2.patch.gz
Git: http://git.nethuis.nl/pub/cgp.git