php-zalando, a PHP client to interact with the Zalando API

Again, it’s been a while since I created a public PHP library to interact with another API, but here we go. For a project we needed to interact with Zalando, but unfortunately their API didn’t have a PHP wrapper/library to interact with it. Until now!

I present to you php-zalando (Packagist link), based on the Zalando API.

API endpoints covered so far

  • {articleId}
  • {articleId}/media
  • {articleId}/reviews
  • {articleId}/reviews-summary
  • {articleId}/units
  • {articleId}/units/{unitId}
  • {articleIds}

Nearly all API endpoints are covered, but there are still some items to do:
– achieve 100% code coverage (34% at the moment)
– add all optional filters/parameters, which differ per endpoint


Via Composer

Add php-zalando to your composer.json, or create a new composer.json:

{
    "require": {
        "cschalenborgh/php-zalando": "dev-master"
    }
}

Now tell composer to download the library by running the command:

php composer.phar install

That’s it! Because of Composer’s autoloading you should now be able to use this library. Don’t forget to include the namespace.
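
Here is a minimal usage sketch. Note that the namespace, class and method names below are assumptions for illustration only; check the package’s README on Packagist/GitHub for the actual API.

<?php
require __DIR__ . '/vendor/autoload.php';

// hypothetical namespace/class name, see the package's README for the real one
use Cschalenborgh\Zalando\Zalando;

$client = new Zalando();

// hypothetical method, e.g. fetching a single article by its id
$article = $client->getArticle('SOME-ARTICLE-ID');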

Behind the scenes @ Coolblue HQ

What is Coolblue?

Last week I was invited to visit Coolblue behind the scenes in Rotterdam, NL. Coolblue has always been one of my favorite Dutch/Belgian online webshops because of their great customer service, their clear but simple websites, and most importantly, their great range of products at very affordable prices.

They have roughly 50k unique products spread over 150-200 different webshops (they launch 2 new webshops per week on average), all of which they keep in stock at all times in one of their huge warehouses. Their biggest warehouse is more than 40,000 square meters, which is about 10 to 20 FIFA football fields. How big is that! Ordering before 23.59 means it’s delivered to your doorstep for free the next day, or the same day if you pay additional shipping costs. Even on Sundays!

Why would they organise ‘Behind the scenes’ events?

The main idea behind this ‘Behind the scenes’ event is that they are looking for talented programmers. They’re looking for roughly 100 new developers. They have about 40 right now, so it seems they have big plans. Are they going international? Who knows! But their CEO didn’t really deny it. In total they employ about 750 people.

Since I’m a freelancer I won’t be able to work for them, so why did I go to this event? Well, because their main dev talk was about scalability. I find this a very interesting and challenging topic, so getting insights from one of the biggest webshops in the Benelux was a great opportunity to learn.

So what did I learn at Coolblue?

  1. use microservices / hypermedia APIs to flatten out your infrastructure. This allows you to easily isolate bugs and/or maintain those APIs without having to retest stable services on every deploy.
  2. every microservice uses an isolated datastore. CouchDB for customer data, PostgreSQL for order+payment information, ElasticSearch for the product catalog.
  3. RabbitMQ as a central state change notification mechanism. This is the glue between your microservices (see the sketch after this list).
  4. use git pull requests to review & validate your team members’ changes
  5. CentOS RPM packages to distribute everything
  6. Puppet labs or Chef to install & maintain your (virtual) servers
  7. Statsd + Graphite for advanced reporting
  8. Nagios for alerting
  9. Create a “Chaos Monkey”. Its single purpose is to kill your live environment once in a while. Your devs should then come up with solutions to auto-fix these downtimes. This is how Netflix stayed online during the massive AWS outage a while ago.
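
To illustrate point 3, here is a hedged sketch (not Coolblue’s actual code) of how a microservice could publish a state change to RabbitMQ using the php-amqplib library. The exchange name and payload are made up for illustration.

<?php
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

// connect to a local RabbitMQ broker with the default credentials
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// declare a durable fanout exchange so every interested service gets a copy
$channel->exchange_declare('state_changes', 'fanout', false, true, false);

// publish a JSON state-change notification (example payload)
$message = new AMQPMessage(
    json_encode(array('entity' => 'order', 'id' => 1234, 'status' => 'paid')),
    array('content_type' => 'application/json')
);
$channel->basic_publish($message, 'state_changes');

$channel->close();
$connection->close();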

Have a look at the slides from their software architect.

Why should you attend one of these meetings?

  1. If you want a developer job @ Coolblue obviously.
  2. If you want to learn more about scalability.
  3. If you want to network with hundreds of other devs.

Inform yourself on this page:

Here’s a nice Dutch article about the same event:

QNAP 412 Turbo Nas – no more disk space

I have a QNAP 412 Turbo NAS and ran out of disk space. No problem, you say?
Well, apparently QNAPs are very fragile when it comes to 100% disk usage.

Symptoms could be:

  • console: ls freezes, rm freezes
  • WinSCP/FTP/web admin: cannot delete files (times out)
  • reboot: the NAS becomes unavailable, doesn’t show up in your network anymore, and QFinder can’t find it
  • even my Windows 7 box became unstable because I had mapped a few network drives to my shares


I’ve tried upgrading my firmware (remove your disks first! use one old HDD) but this doesn’t help. As soon as you insert your old drives, it returns to slow-motion mode (i.e. timeouts).

I then contacted QNAP via their support forum and here’s their solution:

  1. Turn off your QNAP NAS
  2. Pull out all HDDs
  3. Restart the QNAP without the HDDs
  4. Use QFinder to detect its IP
  5. Put the HDDs back in
  6. Don’t follow the (web) installation wizard! Instead, connect via SSH (using PuTTY, for example); the login credentials will probably be admin/admin
  7. Execute these commands (to mount the RAID):


# assemble the RAID arrays
mdadm -A /dev/md9 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm -A /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

# create the mount points
mkdir /mnt/HDA_ROOT
mkdir /share/MD0_DATA

# mount the system volume and the data volume
mount /dev/md9 /mnt/HDA_ROOT
mount /dev/md0 /share/MD0_DATA -t ext4
# (or: mount /dev/md0 /share/MD0_DATA -t ext3)
  8. Your shares should now be accessible again in /share/MD0_DATA, so you can delete some data. QNAP recommends at least 3 GB of free space.
  9. Reboot your NAS with the command ‘reboot’.
  10. Your NAS should now be accessible again.


So, all in all, this fixes the problem and I still have all my data. But I do think QNAP should add an extra fail-safe to prevent this from happening. We’re not all technical people, and most people won’t even know what SSH is.

Raspberry Pi PLC/Domotica testcase

Here’s a quick tutorial on how to build a hardware on/off switch which sends its signals to a RESTful web API, using a Raspberry Pi running Raspbian. This is in fact a small PLC test case (proof of concept). The possibilities are endless!
I’m planning to use this to monitor certain events around my house. E.g. is a door open/closed? Is a device on/off?

Download & install Raspbian

Download the latest version of Raspbian onto your Raspberry Pi’s SD card:

Updates & dependencies

Do some updates + install the extra dependencies:

apt-get update
apt-get install python python-pycurl

Setup the hardware

In order to use the GPIO pins you’ll have to find the right input/output pins. Here’s a map:

Connect your Raspberry Pi’s GPIO header (the big black serial-looking thing) to some switch or toggle. Here’s how I did it (test case):


We actually need 3 pins. One for I/O, one for power, and one for grounding (safety first!). Make sure you solder the right cable to the right GPIO pin (see map above).

You might find the naming of these pins confusing. That’s because there are three types of naming conventions in use here:

Pin number   RPi.GPIO   Raspberry Pi name   BCM2835
P1_01        1          3V3
P1_02        2          5V0
P1_03        3          SDA0                GPIO0
P1_04        4          DNC
P1_05        5          SCL0                GPIO1
P1_06        6          GND
P1_07        7          GPIO7               GPIO4
P1_08        8          TXD                 GPIO14
P1_09        9          DNC
P1_10        10         RXD                 GPIO15
P1_11        11         GPIO0               GPIO17
P1_12        12         GPIO1               GPIO18
P1_13        13         GPIO2               GPIO21
P1_14        14         DNC
P1_15        15         GPIO3               GPIO22
P1_16        16         GPIO4               GPIO23
P1_17        17         DNC
P1_18        18         GPIO5               GPIO24
P1_19        19         SPI_MOSI            GPIO10
P1_20        20         DNC
P1_21        21         SPI_MISO            GPIO9
P1_22        22         GPIO6               GPIO25
P1_23        23         SPI_SCLK            GPIO11
P1_24        24         SPI_CE0_N           GPIO8
P1_25        25         DNC
P1_26        26         SPI_CE1_N           GPIO7

Anyway, let’s move on and try to catch the GPIO input using Python.

Read GPIO signals using Python (daemon script)

import RPi.GPIO as GPIO
import time
import os

buttonPin = 7

# use the board pin numbering (the RPi.GPIO column in the table above)
GPIO.setup(buttonPin, GPIO.IN)

while True:
    if GPIO.input(buttonPin):
        # launch the reporting script below
        os.system("sudo python /home/pi/")
        # print "button called"
    time.sleep(0.1)

import time
import RPi.GPIO as GPIO
import datetime
import pycurl

buttonPin = 7

# use the board pin numbering and configure the button pin as an input
GPIO.setup(buttonPin, GPIO.IN)

# reset state
last_state = -1

while True:
    input = GPIO.input(buttonPin)
    now =

    # check if the value changed
    if input != last_state:
        print "Button state changed:", input, " @ ", now

        # post the new state to the web API (fill in your own endpoint URL)
        api_url = ""
        data = "location_id=1&status=%s" % input
        c = pycurl.Curl()
        c.setopt(pycurl.URL, api_url)
        c.setopt(pycurl.POST, 1)
        c.setopt(pycurl.POSTFIELDS, data)
        c.perform()
        c.close()

    # update previous input
    last_state = input

    # slight pause to debounce
    time.sleep(0.2)
You can run this script like this:

sudo python /home/pi/

Or add it to /etc/rc.local (so it runs after each reboot):

python /home/pi/ &
exit 0


Here’s a quick (and unsafe) ‘API’ script for receiving the Raspberry Pi’s signals:

<?php
header('Cache-Control: no-cache, must-revalidate');
header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');
header('Content-type: application/json');

$dbh = new PDO('mysql:host=localhost;dbname=database', 'username', 'password');

// default response
$response = array(
    'status' => 'nok'

if (isset($_POST['status'], $_POST['location_id'])) {
    $status      = $_POST['status'];
    $location_id = $_POST['location_id'];

    // create log entry
    $sql = "INSERT INTO status_log (location_id, status, created_at, updated_at) VALUES (:location_id, :status, NOW(), NOW())";
    $q = $dbh->prepare($sql);
    $q->execute(array(':location_id' => $location_id,
                      ':status'      => $status));

    // update location
    $sql = "UPDATE location SET status=:status, updated_at=NOW() WHERE id=:location_id";
    $q = $dbh->prepare($sql);
    $q->execute(array(':location_id' => $location_id,
                      ':status'      => $status));

    // output
    $response = array(
        'status' => 'ok'
    );
}

echo json_encode($response);

Now I’m very curious what sort of applications you are building with this Raspberry Pi “PLC implementation”. Feel free to post them in the comments section.

Useful links


Facebook: the numbers (infographic)


Google Universal Analytics: the next big thing?

Today I came across a video about Universal Analytics, Google’s self-proclaimed next big thing.

But is it really? In short? Yes! Here’s why.

What is the difference between Google Analytics and Google Universal Analytics?

Universal Analytics tracks much more than just visits to a website like Google Analytics does. Where Google Analytics (based on Urchin) tracks visits, Universal Analytics tracks visitors. This allows us to do cross-device tracking, which means even more detailed statistics!


Offline conversions

With the Measurement Protocol you’re now able to ‘plug in’ extra user data provided by offline/external appliances such as RFID chips, apps, sensors, etc. Have a look at the YouTube video in my introduction. As a geek, I absolutely love this. The more data the merrier.
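
As an illustration of what this looks like in practice, here is a hedged sketch of sending an offline event from PHP via the Measurement Protocol. Only the endpoint and the v/tid/cid/t/ec/ea parameter names come from the Measurement Protocol; the tracking ID, client ID and event names are placeholders.

<?php
// build the hit payload
$hit = http_build_query(array(
    'v'   => 1,            // protocol version
    'tid' => 'UA-XXXXX-Y', // your Universal Analytics tracking ID (placeholder)
    'cid' => '35009a79-1a05-49d7-b876-2b884d0f825b', // anonymous client ID (placeholder)
    't'   => 'event',      // hit type
    'ec'  => 'offline',    // event category (placeholder)
    'ea'  => 'rfid_scan',  // event action (placeholder)

// POST the hit to the Measurement Protocol collection endpoint
$ch = curl_init('');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $hit);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);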

Customisable session timeouts

You can now alter the session timeout yourself. Previously this limit was fixed at 30 minutes.


Ga.js becomes analytics.js. The include code also changes.


Custom dimensions/metrics

Custom dimensions/metrics allow you to create specific metrics in combination with your CRM’s database. Think of them as custom variables 2.0.

External cost structure

External input, external output. We’re now able to import external cost structures so we can calculate our conversions and ROI even more accurately. Back in the old days we could only import Google AdWords costs.

And much, much more..

Here are some more useful and interesting links I found about Google Universal Analytics:

Did I forget to mention something very important about Universal Analytics? Please leave a note in the comments 🙂


Popular websites 10 years ago

Here are some screenshots of popular websites (LinkedIn, Facebook, Google, Twitter, eBay, YouTube, Wikipedia, Amazon, Hotmail, Blogger, Apple), but from 10 years ago. Pretty neat how things have evolved, right? Who still remembers these?


Deploying PHP projects with Jenkins on OS X


Continuous deployment, automated unit testing, code analysis & reports, git repos, Laravel 4 using Composer. This must be a dream project, right? Well, yes, if you have it working correctly. It took me a while to get everything working together, but now it works like a charm. I chose to deploy via my MacBook, but some people might find it handier to install this setup on a public web server so they can tie their commits to automated deployments, use it as a team, and so on.
Anyway, this tutorial will be about deployment for PHP projects (Laravel 4 in particular) on OS X. I’m assuming you already have a web server up & running (I’m using MAMP).

1) Install Jenkins

Go to and install Jenkins for Mac OS X; make sure to use a separate new user. Here’s a great tutorial about this:

– Jenkins in your dock:
– JenkinsMobi (iOS app):

2) Configure Jenkins

I strongly recommend following this tutorial, as it has almost everything documented for deploying PHP apps through Jenkins. My setup is actually based on this documentation.

Download the Jenkins plugins listed on the website. I’m using these:
– Jenkins Mailer Plugin
– External Monitor Job Type Plugin
– Ant Plugin
– Static Analysis Utilities
– Checkstyle Plug-in
– Credentials Plugin
– Jenkins CVS Plug-in
– Duplicate Code Scanner Plug-in
– Jenkins Email Extension Plugin
– FTP publisher plugin
– Jenkins GIT client plugin
– Jenkins GIT plugin
– GitHub API Plugin
– Github plugin
– HTML Publisher plugin
– Jenkins JDepend Plugin
– Plot plugin
– PMD Plug-in
– Publish Over FTP
– xUnit Plugin

3) Configure a new project

Install the job template from the jenkins-php tutorial. When you want to create a new project, simply copy the job template project and modify it as you want.

4) Connect your git repo

Connect your git repo to Jenkins. Make sure to use the git protocol ( if you set up key pairs. If you’re using the http:// GitHub URL, it will keep asking for credentials even though your key pairs are correctly installed.

5) Install Composer

Use this tutorial to install Composer:

And put it in your bin directory so every user can use this great piece of software:

cp composer.phar /usr/local/bin/composer

If you’re using Composer, you will at some point notice that it clones your dependency repos using git every time you build your application. And this gave me some Jenkins problems. Even though a normal ‘composer update’ worked like a charm under my jenkins user, Jenkins itself was giving problems.

[exec] [RuntimeException]
[exec] Failed to clone, git was not found, check that it is installed and in your PATH env.
[exec] sh: git: command not found

more @

To fix this, go to http://localhost:8080/configure -> Global configuration -> Environment variables
And add this:
name = PATH
value = /usr/local/git/bin:$PATH

As you can see in the following picture, composer is now fully working in our build process:

6) Install some additional PHP packages

Now of course we want code statistics, automated unit testing, auto-generated API documentation, coverage reports, etc., so we need to install some extra tools:

pear (
Follow this quick guide to install pear:

phpdox (

sudo pear config-set auto_discover 1
sudo pear install

phpunit (

sudo pear channel-discover
sudo pear install phpunit/PHPUnit

phploc (

sudo pear config-set auto_discover 1
sudo pear install

pdepend (

sudo pear channel-discover
sudo pear install pdepend/PHP_Depend-beta


phpcb (PHP_CodeBrowser)

sudo pear channel-discover
sudo pear install --alldeps phpqatools/PHP_CodeBrowser

7) Create your build file

Next we create our build.xml file and we make sure it’s executed correctly by Jenkins (check Project -> Building steps). Put this file in your /jobs/PROJECT directory.

Here’s mine:

Note: you can test the build file manually by executing the command below, so there’s no need to build via Jenkins and download 24 Composer repos before it starts executing the Ant build file.

ant -f build.xml -v

You will also need some additional XML config files:

PROJECT/phpcs.xml ->

PROJECT/phpdox.xml ->

PROJECT/phpmd.xml ->

You will also have to alter your Laravel project’s phpunit.xml file (the one in your root folder):

8) Build your application

Now try to build your application through Jenkins. Enjoy!

Here’s how it should look:

And some fancy screenshots:


These are just some Jenkins screenshots. Our build file is also generating documentation, code coverage, etc., which you can find in PROJECT/workspace/build/ and PROJECT/build.

Interesting links: