Author Archives: pommi

Less CPU overhead with Qemu-KVM from Debian Wheezy

An interesting thing happened last week when I upgraded qemu-kvm from version 0.12.5 (Debian Squeeze) to 1.1.2 (Debian Wheezy). After a reboot (shutdown and start) of all my VMs, they use less CPU in total! I noticed this in the stats Collectd is collecting for me.

I’m running about 5 Debian Linux VMs on my low-power home server (Intel G620, 8 GB DDR3, DH67CF). Most of the time the VMs are idle. As you can see in the graph below, the CPU usage dropped, in particular the System CPU usage. The Wait-IO usage is mostly caused by Collectd saving all the stats.

Looking a bit further, I also noticed that the Local Timer Interrupts and Rescheduling Interrupts have halved.
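
These interrupt counters can also be read directly from /proc/interrupts on the host; a quick sketch (the 10-second interval and the grep pattern are just an example):

# Sample the local timer (LOC) and rescheduling (RES) interrupt counters
# twice, 10 seconds apart, to see how fast they grow per CPU.
grep -E 'LOC|RES' /proc/interrupts
sleep 10
grep -E 'LOC|RES' /proc/interrupts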

They’ve done a nice job at Qemu-KVM!

Testing a D-Link Green Switch

For a while now I’ve been monitoring the power consumption of devices in my home using a power meter from BespaarBazaar.nl (recommended by Remi). It is a good power meter because it is very precise: it starts measuring at 0.2 watts.

I needed an Ethernet switch to connect my TV, NMT and PS3 to my home network. While searching for a proper switch, I came across DLinkGreen.com. It looked promising. The Green Calculator, an 8.7 MB Flash app that uses a lot of CPU (hello D-Link! is this Green?!? what about HTML5?), showed me I could save 70.98% of energy (2 hours of use on 1-5 ports: 28.7 Wh with D-Link Green vs. 99 Wh conventional per day) using D-Link’s Green technology.

I couldn’t find any green switches from other manufacturers, so I gave it a try and bought a D-Link DGS-1005D. It’s a 5-port unmanaged Gigabit Ethernet switch, supporting IEEE 802.3az (Energy Efficient Ethernet), IEEE 802.3x (flow control), 9000-byte jumbo frames and IEEE 802.1p QoS (4 queues).

So I did some tests using the power meter. As a reference I used an HP ProCurve 408 (an 8-port 100 Mbit switch).

HP ProCurve 408

Ports 1-5          Watt   Wh/24h   Wh/day (2h+22h idle)   kWh annually
adapter only        1.4     33.6                   33.6         12.264
idle (no links)     4.4    105.6                  105.6         38.544
m                   4.9    117.6                  106.6         38.909
m m                 5.4    129.6                  107.6         39.274
m m m               5.9    141.6                  108.6         39.639
m m m m             6.4    153.6                  109.6         40.004
m m m m m           6.8    163.2                  110.4         40.296

D-Link DGS-1005D

Ports 1-5          Watt   Wh/24h   Wh/day (2h+22h idle)   kWh annually
adapter only        0.0      0.0                    0.0            0.0 🙂
idle (no links)     1.1     26.4                   26.4          9.636
g                   1.6     38.4                   27.4         10.001
g m                 1.8     43.2                   27.8         10.147
g m g               2.1     50.4                   28.4         10.366
g m g g             2.5     60.0                   29.2         10.658
g m g g g           2.9     69.6                   30.0         10.950
g m m g g           2.7     64.8                   29.6         10.804
g m m m g           2.5     60.0                   29.2         10.658
g m m m m           2.3     55.2                   28.8         10.512

m = 100 Mbit link, g = 1 Gbit link (listed left to right for ports 1-5). The “Wh/24h” column assumes the switch runs at that load all day; “Wh/day (2h+22h idle)” assumes 2 hours at that load plus 22 hours at the idle wattage, matching the Green Calculator’s 2-hour scenario; “kWh annually” is that daily figure times 365.

First of all, it’s interesting to see that the HP power adapter already uses 1.4 watts on its own. It’s also good to know that a 100 Mbit port uses less energy than a Gigabit port. The Green Calculator is quite right in my case: I’m saving about 72-74% of energy.
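
The derived columns and the savings figure can be reproduced with a bit of shell arithmetic; a quick sketch using the fully loaded measurements from the tables above:

# Daily energy for 2h at full load plus 22h idle, and the annual total,
# for the HP (6.8 W load, 4.4 W idle) and the D-Link (2.9 W load, 1.1 W idle).
awk 'BEGIN {
    hp    = 2*6.8 + 22*4.4;   # 110.4 Wh/day
    dlink = 2*2.9 + 22*1.1;   #  30.0 Wh/day
    printf "HP:     %.1f Wh/day, %.3f kWh/year\n", hp, hp*365/1000
    printf "D-Link: %.1f Wh/day, %.3f kWh/year\n", dlink, dlink*365/1000
    printf "Saving: %.0f%%\n", (1 - dlink/hp) * 100
}'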

Bonnie++ to Google Chart

Bonnie++ is the perfect tool for benchmarking disk I/O performance. When you want to run heavily I/O-bound applications on a server, you need to know its disk I/O limits to get an indication of what it can handle. Bonnie++ reports block read, write and rewrite performance, as well as the performance of file creation and deletion.

Below is a sample piece of Bonnie++ output. It is hard to read, and once you’ve done a couple of runs on different hosts, the results are hard to compare.

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
host3           16G  1028   0 94051   0 40554   0  1668   0 121874   0 380.3   0
Latency             16101us    7694ms    7790ms   13804us     124ms   76495us
Version  1.96       ------Sequential Create------ --------Random Create--------
host3               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                256 50115   0 332943   0  2338   0 51591   0 +++++ +++  1189   0
Latency               312ms    1088us   10380ms     420ms      15us   15792ms
1.96,1.96,host3,1,1329840761,16G,,1028,0,94051,0,40554,0,1668,0,121874,0,380.3,0,256,,,,,50115,0,332943,0,2338,0,51591,0,+++++,+++,1189,0,16101us,7694ms,7790ms,13804us,124ms,76495us,312ms,1088us,10380ms,420ms,15us,15792ms

Inspired by bonnie-to-chart, I’ve developed a tool called bonnie2gchart. It takes the CSV data Bonnie++ prints after a run and visualizes it in a graph using the Google Chart API. It is compatible with Bonnie++ 1.03 and 1.96, and it can even show the results of both versions in one graph.
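
For example, a result set like the one above could be produced and collected along these lines (the test directory, data set size and results file name are just examples; how you feed the CSV to bonnie2gchart depends on your setup):

# -d: directory to test in, -s: data set size in MiB, -n: size of the file
# creation test (in multiples of 1024 files), -m: machine name in the results.
# -q sends the human-readable report to stderr, leaving only the CSV on stdout.
bonnie++ -q -d /mnt/test -s 16384 -n 256 -m host3 >> bonnie-results.csv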

Example chart

Demo: http://demo.nethuis.nl/bonnie2gchart/
Download: https://github.com/…/bonnie2gchart.tar.gz
Git: git://github.com/pommi/bonnie2gchart.git

OTRS the RT way

I have used RT (Request Tracker) for quite a while, starting with RT 2.0.15 on MySQL, which I later upgraded manually (with a couple of nasty PHP scripts) to RT 3.6.x on PostgreSQL. It worked and stayed consistent, but I’ve never been very happy with RT. It now contains about 80K tickets and it is slow. I have read a lot of forum and mailing-list threads about migrating from OTRS to RT and from RT to OTRS because one or the other is supposed to be faster, but unfortunately I’ve not found a real conclusion.

When starting a new project I gave OTRS a try, simply because I think it doesn’t really matter whether you use RT or OTRS. I liked the way some things worked in RT, so I created a few patches to make OTRS behave a little more like RT.

Empty body check

First of all, the message body is required in OTRS for some reason, which is annoying. For example, when a ticket is resolved and a customer replies to say thanks, you just want to close the ticket; or sometimes you only want to change the owner of a ticket. This patch disables the empty-body check:

Download: http://pommi.nethuis.nl/…/otrs-3.0.4-remove-empty-body-check.patch

RT-like Dashboard

OTRS: RT-like Dashboard

In RT you are responsible for your own tickets. The RT 3.6 dashboard shows you the most recent open tickets you own, as well as the most recent unowned new tickets. OTRS, on the other hand, shows all open tickets instead of only the ones you own, and it shows all new tickets, including the ones that are already locked.

This patch adds a new module called DashboardTicketRT. It’s a customized version of the DashboardTicketGeneric module that makes your dashboard work like RT. After you have applied the patch, you can enable the module with the following steps:

  1. Login as admin
  2. Go to Sysconfig
  3. Go to Frontend::Agent::Dashboard (Group: Ticket)
  4. In TicketNew change:
    • Attributes: StateType=new; to StateType=new;OwnerIDs=1;
    • Module: Kernel::Output::HTML::DashboardTicketGeneric to Kernel::Output::HTML::DashboardTicketRT
  5. In TicketOpen change:
    • Attributes: StateType=open; to StateType=open;StateType=new;OwnerIDs=UserID;
    • Module: Kernel::Output::HTML::DashboardTicketGeneric to Kernel::Output::HTML::DashboardTicketRT
    • Title: Open Tickets / Need to be answered to My Tickets / Need to be answered
  6. Restart your server

Download: http://pommi.nethuis.nl/…/otrs-3.0.5-dashboardticketrt.patch

Instant Close / Spam

OTRS: InstaClose link

To work a little more efficiently, this patch adds an InstaClose and an InstaSpam link to each ticket. With a single click you can close a ticket or move it to the Junk queue. The patch adds a new module called AgentTicketInsta. You must add this code to Kernel/Config.pm to enable it:

# Register the AgentTicketInsta frontend module for the admin and users groups.
$Self->{'Frontend::Module'}->{'AgentTicketInsta'} = {
    Group       => [ 'admin', 'users' ],
    Description => 'Instant Ticket Actions',
    Title       => 'Insta',
};

Download: http://pommi.nethuis.nl/…/otrs-3.0.4-instant-close-spam.patch

Collectd Graph Panel v0.3

It has been a while, and many people were already using one of the development versions of CGP, so it’s time to release a new version: v0.3.

Special thanks for this version go to Manuel CISSÉ, Jakob Haufe, Edmondo Tommasina, Michael Stapelberg, Julien Rottenberg and Tom Gallacher for sending me patches to improve CGP. Also thanks to everyone who replied after the previous release.

This version contains a couple of minor improvements and some new settings to configure, but most importantly, support has been added for 13 more Collectd plugins, bringing the total to 26. A couple of existing plugins have also been updated. Please read the changelog or the git log for more information about the changes.

New plugins:

Download the .tgz package, the patch file to upgrade from v0.2, or check out the latest version from the git repository.

Download: http://pommi.nethuis.nl/…/cgp-0.3.tgz
Patch: http://pommi.nethuis.nl/…/cgp-0.2-0.3.patch.gz
Git: http://git.nethuis.nl/pub/cgp.git

Collectd Graph Panel v0.2

Version 0.2 of Collectd Graph Panel (CGP) has been released. This version has some interesting new features and changes since version 0.1.

A new interface has been introduced, based on Daniel Von Fange’s styling, and a little bit of web 2.0 Ajax is used to expand and collapse plugin information on the host page. The width and height of a graph are configurable, including the bigger detailed one. UnixSock flush support has been added, so Collectd can be told to write its cached data to the RRD files before an up-to-date graph is generated. CPU support for Linux 2.4 kernels and Swap I/O support have been added, plus some changes under the hood.
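
The flush goes through Collectd’s unixsock plugin: before reading the RRD files, CGP asks Collectd to flush its cached values. You can do the same by hand, roughly like this (the socket path and the value identifier are just examples and must match your collectd.conf):

# Collectd side, in collectd.conf:
#   LoadPlugin unixsock
#   <Plugin unixsock>
#     SocketFile "/var/run/collectd-unixsock"
#   </Plugin>
#
# Ask Collectd to flush the cached values for one metric to its RRD file:
echo "FLUSH plugin=rrdtool identifier=host3/load/load" | \
    socat - UNIX-CONNECT:/var/run/collectd-unixsock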

Download the .tgz package, the patch file to upgrade from v0.1, or check out the latest version from my public git repository.

Screenshots: CGP Overview Page, CGP Server Page, CGP Detailed Page

Download: http://pommi.nethuis.nl/…/cgp-0.2.tgz
Patch: http://pommi.nethuis.nl/…/cgp-0.1-0.2.patch.gz
Git: http://git.nethuis.nl/pub/cgp.git

Collectd Graph Panel v0.1

Collectd Graph Panel (CGP) is a graphical web front-end for Collectd, written in PHP. I’ve developed CGP because I wasn’t really happy with the existing web front-ends. What I like to see in a front-end for Collectd is a clear overview of all the hosts you are managing and an easy way to get detailed information about a plugin.

This first release is a very basic version of what I have in mind. Not all plugins are supported yet and the user interface could be better. There is still a lot to work on!

Please give it a try! It should work out of the box. Download the tgz package or check out the latest version from my public git repository.

Download: http://pommi.nethuis.nl/…
Git: http://git.nethuis.nl/pub/cgp.git

Postgrey p0f patch

Postgrey is a Postfix policy server implementing greylisting. This patch adds p0f support to Postgrey based on p0fq.pl from p0f, like the patch by Fedux. P0f is a passive fingerprinting tool that identifies the remote operating system. The difference between Fedux’s patch and mine is that his patch requires p0f-analyzer. See the image below.

p0f-analyse-postgrey

My patch uses p0f’s query socket, created with the ‘-Q’ option. Because Postgrey doesn’t know the sender’s source port, p0f must also be run with the ‘-0’ option, which makes source port 0 act as a wildcard in queries.

my patch

Example usage:

# p0f
p0f -u postgrey -Q /var/run/p0f-sock -0 -N -i eth0 'tcp dst port 25'

# postgrey options
--p0f --p0f-service=/var/run/p0f-sock --p0f-ip=<ip-of-eth0>

Postgrey version: 1.32
Download: pommi.nethuis.nl/..

ext3 overhead on 1TB storage

Recently I bought a portable hard drive from Western Digital. The Western Digital Elements (WDE1U10000E) holds 1 terabyte and connects via USB 2.0: 1 TB for just 99.00 euros (2008-12-27). According to Wikipedia and the SI standard the drive should contain 1,000,000,000,000 bytes (1 TB). fdisk shows us:

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes

So that is correct.

The disk comes preformatted with FAT32. After mounting it, df shows that 976283280 KiB (931 GiB) is actually available, which is about 999714078720 bytes. It looks like 490807296 bytes (468 MiB) are gone, presumably used by the File Allocation Tables.

Because FAT32 is old and crappy, and I use Linux and want a journaling filesystem, I reformatted the device with ext3 after setting the right partition type via fdisk.

$ mkfs.ext3 -m0 /dev/sda1
mke2fs 1.41.3 (12-Oct-2008)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
61054976 inodes, 244190000 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
7453 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

After 10 minutes or so (yeah! USB 2.0) it was all done. First of all, 2646016 bytes (2584 KiB) are not formatted at all (1000204886016 - 244190000 * 4096). After mounting the disk, df shows that this time 961432072 KiB (917 GiB) is available. That is less than with FAT32, but now we have a journaling filesystem. 15327928 KiB (976760000 - 961432072) is used for it. But why and how?
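
The numbers above can be reproduced with a bit of shell arithmetic (all values are taken straight from the fdisk, mkfs and df output):

# Bytes of the partition that don't fit into a whole 4 KiB filesystem block
echo $(( 1000204886016 - 244190000 * 4096 ))   # 2646016 bytes (2584 KiB)

# Filesystem size in KiB versus what df reports as available
echo $(( 244190000 * 4 ))                      # 976760000 KiB
echo $(( 244190000 * 4 - 961432072 ))          # 15327928 KiB of overhead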

dumpe2fs /dev/sda1 shows us:

dumpe2fs 1.41.3 (12-Oct-2008)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          0bdd2888-06fc-4b22-a6e5-987ac65236ee
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              61054976
Block count:              244190000
Reserved block count:     0
Free blocks:              240306876
Free inodes:              61054965
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      965
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Sun Dec 28 14:22:22 2008
Last mount time:          Sun Dec 28 14:35:20 2008
Last write time:          Sun Dec 28 14:35:20 2008
Mount count:              1
Maximum mount count:      36
Last checked:             Sun Dec 28 14:22:22 2008
Check interval:           15552000 (6 months)
Next check after:         Fri Jun 26 15:22:22 2009
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      0bda5622-6cc2-4a1a-8135-c3f810580d43
Journal backup:           inode blocks
Journal size:             128M
  • /dev/sda1 has 7453 block groups.
  • Inode size is 256 bytes.
  • 8192 inodes for each block group.

7453 * 256 * 8192 makes 15630073856 bytes (15263744 KiB) for inode space.

15327928 – 15263744 = 64184 unexplained KiB left

Besides the primary superblock, 18 backup superblocks are stored on the disk. A superblock itself is only 1 KiB, but each copy occupies a full 4 KiB block: 19 * 4096 makes 77824 bytes (76 KiB).

64184 – 76 = 64108 unexplained KiB left

If someone has an explanation for it, please leave a reply.

61054976 inodes means over 61 million files can be stored on the formatted 917 GiB. That is way too much for me; 10% of that would be more than enough, and fewer inodes also means less space spent on inode tables. Formatting the disk with the option -i 131072 should suit me better.
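
A minimal sketch of that reformat (the -i value is the bytes-per-inode ratio, so one inode is created per 128 KiB of disk space; the savings figure in the comment is only an estimate):

# One inode per 128 KiB instead of the default one per 16 KiB gives roughly
# 7.6 million inodes instead of 61 million, freeing on the order of 13 GB of
# inode table space; -m0 again reserves no blocks for root.
mkfs.ext3 -m0 -i 131072 /dev/sda1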