Keep the light green

Sometimes even a simple wall display (like the Jenkins Walldisplay plugin) showing the current status of a Jenkins build is too complicated to understand, and you want to boil the current project/build status down to a simple traffic light with a red, yellow or green signal (management loves these simple three-color status reports, and you can guess why ;-)).

For the cost of a Raspberry Pi, an SD card and a Cleware USB traffic light (all together ~80€) you can build a device that shows the status of a Jenkins job as a traffic light like this:


Step 1: Install minibian on the SD card

I like to use the minibian distribution for the Raspberry Pi, a stripped-down version of the original Raspbian distribution that also fits on smaller SD cards.

Connect the Raspberry Pi to power and network

The first boot takes some time, but once the network comes up the Raspberry Pi fetches an IP address via DHCP and tries to register itself in the network as raspberrypi. If your network supports this, you should be able to SSH into your Raspberry Pi after a few minutes using:
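A minimal sketch, assuming your network resolves the hostname (minibian's default login is root with password raspberry):

```shell
# the hostname only works if your router registers DHCP names in DNS;
# otherwise use the IP address from your router's DHCP lease table
ssh root@raspberrypi
```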

Install clewarecontrol

To control the traffic light we will use clewarecontrol, which unfortunately has no prebuilt packages.

Download and extract sources

Install prerequisites

Build and install
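The three steps above might look like this; the version number, download URL and package names are assumptions, so check the clewarecontrol homepage for the current release and its build dependencies:

```shell
# download and extract the sources (version and URL are examples)
wget https://www.vanheusden.com/clewarecontrol/files/clewarecontrol-2.4.tgz
tar xzf clewarecontrol-2.4.tgz

# install build prerequisites (Debian/Raspbian package names)
apt-get update
apt-get install build-essential

# build and install
cd clewarecontrol-2.4
make
make install
```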

Test clewarecontrol

The next step is to plug in the traffic light and test if clewarecontrol recognizes it:
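If the device is detected it shows up with its serial number, and you can switch the lamps manually. The lamp indices below (0 = red, 1 = yellow, 2 = green) are an assumption for the traffic light device:

```shell
# list connected Cleware devices
clewarecontrol -l

# switch the red lamp on and off again
clewarecontrol -c 1 -as 0 1
clewarecontrol -c 1 -as 0 0
```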

Connect to Jenkins job

I created a Bash script that (when given the URL of a Jenkins job) tries to detect a local Cleware traffic light device and displays the job status on it:
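The full script was embedded in the original post; its core logic, mapping the job's “color” field from the Jenkins JSON API to a lamp, can be sketched like this (a minimal sketch, not the actual script):

```shell
# Map a Jenkins job "color" (from <job-url>/api/json) to a lamp index:
# 0 = red, 1 = yellow, 2 = green. The *_anime variants indicate a
# running build and are mapped like their base color.
map_color() {
  case "$1" in
    blue*)         echo 2 ;;  # success -> green
    yellow*)       echo 1 ;;  # unstable -> yellow
    red*|aborted*) echo 0 ;;  # failed/aborted -> red
    *)             echo 1 ;;  # unknown/not built -> yellow
  esac
}

map_color blue   # prints 2
```

The chosen lamp is then switched via clewarecontrol.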

Usage is as follows:

Autodetect device

Use device with serial number

Autodetect device and poll Jenkins every n seconds
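The three invocation styles above might look like this; the script name and the option flags are placeholders, since the exact commands were embedded in the original post:

```shell
# autodetect a connected traffic light
./jenkins-traffic-light.sh http://jenkins.example.com/job/my-job/

# use the device with a specific serial number (hypothetical -s flag)
./jenkins-traffic-light.sh -s 901234 http://jenkins.example.com/job/my-job/

# autodetect and poll Jenkins every 30 seconds (hypothetical -p flag)
./jenkins-traffic-light.sh -p 30 http://jenkins.example.com/job/my-job/
```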

By |June 3rd, 2015|Uncategorized|

Monitoring Mango applications

A frequently asked question after deploying an application is in the best case “How is the application performing?” and in the (more common) worst case “Why is it not performing as expected?”.

To answer these questions, application metrics are always useful to get an idea of what is going on inside your application. This post will outline Mango's built-in monitoring capabilities and show how to use them in real life.

Metrics (mind the gap)

Most of the credit for the monitoring backend goes to the (very good) Metrics library, which lets you easily create gauges/meters/timers/histograms and health checks. To illustrate how new metrics are created, have a look at the example below:
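A minimal sketch using the Dropwizard Metrics API; the queue field is a hypothetical example, and the registry is injected as a Spring bean:

```java
@Autowired
private MetricRegistry registry;

private final Queue<Request> queue = new LinkedList<>();

public void init() {
    // gauge: an instantaneous measurement, here the current queue size
    registry.register("queue.size", (Gauge<Integer>) () -> queue.size());

    // meter: the rate of an event, here the queue's processing rate
    Meter requests = registry.meter("queue.requests");
    requests.mark(); // call once per processed request
}
```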

Here two metrics are created: first a gauge, which represents an instantaneous measurement of a value (in this case the size of a queue, stored under the key “queue.size”). The other metric is a meter, which represents the rate of an event (here the processing rate of a queue) and can be reached via the key “queue.requests”. Because Mango also uses the Metrics library internally, you don't have to create the registry instance yourself but rather inject it like any other Spring bean.

So even if you don't define any metrics of your own, you will always get Mango's internal measurements.

Watching your measurements

To view the actual values, Metrics comes with a variety of predefined ways to display them. Mango automatically exports the values via a web servlet that dumps them in JSON format, and as JMX beans.

Example output of the metrics servlet


Example JMX bean output


Logging your metrics

Of course, refreshing the metrics servlet every few seconds or having JConsole running all the time is not a feasible way to keep an eye on your performance.

Take the following code example, where the create method of the Entity4 DAO is delayed for a certain (configurable) time when a flag is set.

Code example

In reality you can think of a new feature that may be activated at runtime, where you want to track the impact of this feature on the whole application. Out of the box, Metrics supports Ganglia and Graphite as reporting backends. Graphite is directly supported by Mango; to activate it, just pass your Graphite host and port as system parameters:

Setting graphite.metrics.enabled=true enables the Graphite backend; the Carbon port is configured via graphite.carbon.port, together with a corresponding system property for the Carbon host. Additionally, when an events API URL is configured via graphite.eventsapi.url, all configuration changes that occur during the application's runtime are recorded as events in Graphite.
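Starting the application could then look like this; the name of the host property (graphite.carbon.host) is an assumption, since the post only names graphite.metrics.enabled, graphite.carbon.port and graphite.eventsapi.url:

```shell
# graphite.carbon.host is an assumed property name; the jar name is
# a placeholder for your Mango application
java -Dgraphite.metrics.enabled=true \
     -Dgraphite.carbon.host=graphite.example.com \
     -Dgraphite.carbon.port=2003 \
     -Dgraphite.eventsapi.url=http://graphite.example.com/events/ \
     -jar application.jar
```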

So if you change the configuration of parameters that are used in the create method of the code example above:


You see that the property that enabled the new “feature” changed from false to true:


If you increase the wait time using the property UI, another event is recorded and we see that the mean execution time of the create method rises a bit more:


Now let's finally turn that silly new feature off, and we see that the execution times normalize:


By |May 28th, 2015|Uncategorized|

Graphite, Carbon and Grafana 2.x Vagrant box

For testing purposes I needed a local Grafana 2.x installation to play with. Google gives you several ready-to-use Vagrant/Docker setups, but most of them are based on Grafana 1.x. So if you need a local Grafana 2.x installation, have a look at the Vagrant box I created.

To create the box check out the code and fire up Vagrant:
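A sketch of those steps; the repository URL was linked in the original post and is a placeholder here:

```shell
git clone <repository-url> grafana-vagrant
cd grafana-vagrant
vagrant up
```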

Point your browser to http://localhost:9100/grafana/ to see Grafana running (or http://localhost:9100/ for plain Graphite)


By |May 16th, 2015|Uncategorized|

Managing and testing your Jenkins environment using Ansible and Vagrant

Due to many migration problems with my Jenkins build infrastructure (from one cloud provider to another), mostly originating from long-forgotten configuration tweaks on the original server, this time I took some extra time to transfer all this knowledge into a proper configuration management system, so that the next migration will run more smoothly.

Why Ansible and not xxx?

For my setup I chose Ansible over Puppet/Salt/Chef because I was looking for a small agentless solution for my handful of servers. Ansible basically only needs SSH access to the target machines and you are good to go. On top of that, I had never worked with Ansible before, so the following descriptions and explanations are the observations of a first-time Ansible user and may be helpful for anyone starting with Ansible.

The goal

The overall goal of the whole process is to automate the setup of a Jenkins CI server behind an nginx reverse proxy, so that Jenkins will be reachable through the proxy. To allow Jenkins to send mails, the system shall be configured to use an external mail server as SMTP relay (including SMTP auth). As an additional bonus, the whole setup will be tested in a local virtualized environment using Vagrant before it is rolled out to the live system.

The playbook

We will start by having a look at the playbook that will hold the description for our system.

Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.

Luckily we don’t have to write every configuration step by hand because Ansible has the concept of includes/roles that are described in the documentation as follows:

Roles in Ansible build on the idea of include files and combine them to form clean, reusable abstractions – they allow you to focus more on the big picture and only dive down into the details when needed.

Playbooks are text based, and if you are familiar with the widely used YAML syntax you should have no problem reading them. One of the most appealing advantages of Ansible is that for the most common tasks (like installing Jenkins, configuring nginx, …) there are ready-made roles available that you can use in your playbook. The roles that we will use are:

The Ansible best practices document (a definite must-read) proposes storing the roles in a folder called roles (duh), so to make these roles usable, create the folder and check out the three roles into it:
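For example; the role repositories are placeholders, since the actual roles were linked in the original post:

```shell
mkdir roles
git clone <postfix-role-repo> roles/postfix
git clone <jenkins-role-repo> roles/jenkins
git clone <nginx-role-repo>   roles/nginx
```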

The first role

Let's dive into our playbook and see how these roles are used in practice. Best practice suggests that the main playbook file is named site.yml and that the playbooks for the individual hosts are stored in $hostname.yml files included from site.yml; for this example (and the sake of simplicity) we will store everything in site.yml.

The first line hosts: zoidberg simply says that the following instructions should be applied to the host or host group named zoidberg (we will see later how host names and physical machines are matched and how hosts can be arranged in host groups). I like to use Futurama character names for my machines, and because this example is from an actual running setup (if you are interested, the repository is publicly available), the name for the host in this example is zoidberg.


Now let's have a look at the first role definition. - { role: postfix … } advises Ansible to apply the postfix role to the host (or, again, host group) named zoidberg, followed by parameters that determine how exactly the postfix instance on zoidberg will be configured (most roles provide a readme file that describes the possible configuration variables for the role). To avoid spreading this information around the playbook, we make use of Ansible's variable system. Variables can be defined inside the playbook itself or in external files, which is what happens in this case. For each host, Ansible automatically looks for a file named after the host inside the directory host_vars. In our case, inside host_vars lies a file called zoidberg with the content:
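A minimal sketch of such a file; the actual values are assumptions:

```yaml
# host_vars/zoidberg
hostname: zoidberg.example.com
```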

which is automatically included, so the variable hostname can be used for setting the postfix hostname. The SMTP auth information contains a username/password combination for the SMTP relay host, which (like the name of the SMTP relay itself) is sensitive information that I don't want in my public GitHub repository. For these variables I created a separate file smtp_relay.yml that is stored in my user home on my local machine and included using the vars_files statement.

Jenkins and reverse proxy

The credit for the setup of Jenkins and nginx goes to the authors of the corresponding roles which set up everything just fine. The configuration for the Jenkins reverse proxy is nearly identical to the setup described in the Jenkins wiki.


The three roles from above will take care of our server setup, so the last thing left for us to do is to check if everything went correctly.

For the whole playbook, have a look at my GitHub infrastructure repository

By |May 8th, 2015|development, linux|

Deploying a web application (war) into a vagrant container using ansible

At the moment, the showcase application for my web application framework is hosted in the AWS cloud using the AWS Beanstalk service (as described in this post). Unfortunately my free usage period for the Amazon services ran out, and despite the really great and easy way to deploy web applications using Beanstalk, the monthly fee of about ~70$ for a micro EC2 instance and a micro MySQL instance is too high just to host a small showcase application. Luckily, at home I have an HP ProLiant MicroServer that already runs 24/7 as a NAS and has enough power to host the showcase. To increase portability and to make it possible to play around with the showcase on your local machine, the new solution is based on a Vagrant box (using VirtualBox as backend) and Ansible to provision the system.

The Vagrantfile defining the box is pretty straightforward: an Ubuntu 14.10 base system (Tomcat 8 is only available in the Ubuntu repositories from 14.10 on) and a port forward to make Tomcat available from the outside. The synced_folder directive suppresses the default shared folder that is not needed here, and the vb.customize statements increase the memory and CPU values for the box. The interesting part starts with the declaration of the provisioner for this box. I chose Ansible this time (previous versions of the showcase were deployed with Salt and Puppet) because of its simple and agentless architecture, which leads to relatively fast results (in this simple use case).

The playbook file is the entry point for Ansible to look for its configuration. The idea of this particular playbook is to install Tomcat 8, using a custom configuration file that includes the JNDI link definitions for my web application.

The server.xml looks like this:

Now that Tomcat is installed we need the war file and of course the JDBC driver for the JNDI link (in this case an embedded Derby database). Because everything is available in Maven repositories and I really don't like Maven, I use Gradle to fetch all dependencies and place them in a single folder. This is how the Gradle build file looks:
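A sketch of such a build file; the artifact coordinates and the output folder are assumptions:

```groovy
apply plugin: 'base'

repositories {
    mavenCentral()
}

configurations {
    deps
}

dependencies {
    // JDBC driver for the embedded Derby database
    deps 'org.apache.derby:derby:10.11.1.1'
    // the showcase war itself (hypothetical coordinates)
    // deps 'com.example:showcase:1.0@war'
}

// copy all resolved dependencies into a single folder
task copyDeps(type: Copy) {
    from configurations.deps
    into 'deps'
}
```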

In the ansible playbook the gradle build file as well as the gradle wrapper are copied into the vagrant box and executed.

The wrapper frees us from installing gradle:

The Gradle Wrapper (henceforth referred to as the “wrapper”) is the preferred way of starting a Gradle build. The wrapper is a batch script on Windows, and a shell script for other operating systems. When you start a Gradle build via the wrapper, Gradle will be automatically downloaded and used to run the build.

Now that Gradle has fetched all needed dependencies, we just copy the war file and the JDBC jar into the right folders:

and restart tomcat:
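A shell equivalent of these playbook steps; the file names and the default Ubuntu Tomcat 8 paths are assumptions:

```shell
cp deps/showcase.war /var/lib/tomcat8/webapps/
cp deps/derby-*.jar  /usr/share/tomcat8/lib/
service tomcat8 restart
```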

And after a few moments Tomcat happily serves our web application on port 8080. For completeness, here is the full playbook (the installation of the python-apt package is a hack due to a bug in the base image and is normally not necessary):

So to deploy a new version of the showcase, a simple vagrant destroy -f && vagrant up recreates the box, which is then reachable again at the forwarded port.

The only downside of this solution is, that the hosting moved from here:


to here:


But on the upside, the NSA will never find my server =).

By |April 10th, 2015|linux|

I still don’t get it…

Because I had to explain to a few people what exactly the goal of the framework I'm working on is, I created a small “product page” that sums up the most important features. Feel free to contact me on Hangouts/Google+/mail if you have more questions.

By |April 6th, 2015|development|

New Feature: Application Properties

Mango's properties feature enables you to equip your application with configuration options. Parameterization is supported from the command line using Java system properties, via Spring properties from the application context, and of course from the database. Properties are defined using a Java-based DSL and support features like fallback to other properties if undefined, default values, and a web interface to change properties live in the frontend.

See below for some code examples or here for full documentation.

Simple string database property

Setting default values

Useful defaults for your properties can be defined using the default(...) method.

Fallback to another backend

Fallback to another backend using the same property key is configured using the database(), system() or spring() methods.

Fallback to another property

It is also possible to fallback to another property (of the same type) using the fallback(...) method.

Adding properties to the web UI

To activate the web UI add the module to your navigation tree.

Then, in your client code, add your properties to the PropertyProvider. Properties can be grouped logically into categories, which will display them in separate tabs of a tab folder.


By |April 5th, 2015|mango|

Jenkins walldisplay version 0.6.29

It has been some time since the last Jenkins Walldisplay plugin release; thanks to numerous contributors, a bunch of new features and bugfixes have been added in the meantime:

  • Bug fix – JENKINS-26873: Sorting by status does not work correctly
  • Bug fix – JENKINS-26745: Fix JavaScript error on undefined object
  • New feature – The Gravatar URL can now be configured in the plugin configuration
  • New feature – The JUnit result display can now be switched off
  • New feature – Jenkins' display name feature is used for job name display

A screenshot of the configuration page for the plugin shows most of the current features:

See here for a live Demo or the screenshot below for the current state:


By |March 25th, 2015|development, jenkins|

Jenkins GitHub issue updater

For a GitHub-hosted project that is continuously built with Jenkins, I needed the build number in every GitHub issue that was mentioned in the commits that were part of the build; or in other words:

“Scan the commit comments for numbers and check on GitHub if these numbers refer to an issue. If so comment the issue with build information from the running build.”
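The scanning step can be sketched as follows; this is a simplified sketch with a hypothetical helper name, while the real script also queries the GitHub API to verify that each number refers to an existing issue:

```shell
# Extract "#<number>" issue references from a commit message.
extract_issue_numbers() {
  grep -oE '#[0-9]+' <<< "$1" | tr -d '#'
}

extract_issue_numbers "Fix parser NPE, closes #42, relates to #7"
# prints:
# 42
# 7
```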

Quite unusually for Jenkins, I couldn't find a plugin performing this task, but thanks to Jenkins' and GitHub's public APIs the problem was easy to solve. The following script needs information about the Jenkins server (which build, what build number, etc.) and the GitHub user and repository to match against. Because the script is meant to be run within a Jenkins build, it tries to determine the Jenkins server URL, job name and build number from the well-known Jenkins environment variables (JENKINS_URL, JOB_NAME and BUILD_NUMBER). An optional security token as well as all GitHub information can be passed as command line parameters; call the script with --help for an overview of the parameters:

help output

If invoked, the script will dump its configuration (without the security tokens) and log some information while parsing the commit messages:

example script run

The source code is available on GitHub in the form of a gist:

By |March 9th, 2015|development, jenkins|

Mango persistence services

Because Mango's persistence services may not be self-explanatory at first glimpse, this post tries to shed some light on the topic:

Two low-level services provide basic persistence functions for entities/value objects: the IBaseEntityDAO for entities and the IBaseVODAO for value objects (these two are nearly identical; in fact they are derived from the same basic service interface, just with different generic types for entities/value objects).

The IBaseVODAO internally copies the data from/to the entity/value object using the copy service. Although on the server you are free to use entities, value objects or both, it is advisable to always use the value-object-based IBaseEntityService, as you would from any client-side code. The IBaseEntityService provides some higher-level persistence functions as well as validation based on the datatype metadata.


As IBaseEntityDAO and IBaseVODAO provide generic persistence functionality for all entities and value objects, you may want to add specialized persistence behavior for your own entities or value objects. This can be achieved by registering an entity/value-object-specific EntityDAO/VODAO.

For each entity a basic Base{entity name}EntityDAO/Base{entity name}VODAO is generated. This default implementation simply falls back to the normal entity/value object DAO behavior. You can override this default implementation to add your own business logic.

entity DAO example

To create a new entity specific DAO extend the generated BaseDAO for the entity:
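A hedged sketch; the class names follow the generated-naming scheme described above, while the overridden create method and its signature are assumptions used only as a plausible example:

```java
// specialize the generated default DAO for an entity named Entity4
public class Entity4EntityDAO extends BaseEntity4EntityDAO {

    @Override
    public Entity4 create(Entity4 entity) {
        // custom business logic goes here, then delegate to the default
        return super.create(entity);
    }
}
```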

and register the DAO in your Spring application context:
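A sketch of the registration; the bean id and package are assumptions, and the exact mechanism by which Mango picks up the specialized DAO may differ:

```xml
<bean id="entity4EntityDAO" class="com.example.dao.Entity4EntityDAO"/>
```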

By |March 5th, 2015|development, gwt, mango|