php-zalando, a PHP client to interact with the Zalando API

Again, it’s been a while since I created a public PHP library to interact with another API, but here we go. For a project we needed to interact with Zalando, but unfortunately their API didn’t have a PHP wrapper/library. Until now!

I present to you php-zalando (Packagist link), based on the Zalando API.

API endpoints covered so far

  • https://api.zalando.com/article-reviews
  • https://api.zalando.com/article-reviews/{reviewId}
  • https://api.zalando.com/article-reviews-summaries
  • https://api.zalando.com/article-reviews-summaries/{articleModelId}
  • https://api.zalando.com/articles
  • https://api.zalando.com/articles/{articleId}
  • https://api.zalando.com/articles/{articleId}/media
  • https://api.zalando.com/articles/{articleId}/reviews
  • https://api.zalando.com/articles/{articleId}/reviews-summary
  • https://api.zalando.com/articles/{articleId}/units
  • https://api.zalando.com/articles/{articleId}/units/{unitId}
  • https://api.zalando.com/brands
  • https://api.zalando.com/brands/{key}
  • https://api.zalando.com/categories
  • https://api.zalando.com/categories/{key}
  • https://api.zalando.com/domains
  • https://api.zalando.com/facets?{filters}
  • https://api.zalando.com/filters
  • https://api.zalando.com/filters/{name}
  • https://api.zalando.com/recommendations/{articleIds}

Nearly all API endpoints are covered, but there are still some items to do:
– achieve 100% code coverage (34% at the moment)
– add all optional filters/parameters (they differ per endpoint)

Installation

Via Composer

Add php-zalando in your composer.json or create a new composer.json:

{
     "require": {
          "cschalenborgh/php-zalando": "dev-master"
     }
}

Now tell Composer to download the library by running:

php composer.phar install

That’s it! Because of Composer’s autoloading you should now be able to use this library. Don’t forget to include the namespace.
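As a rough illustration, usage then boils down to requiring Composer’s autoloader and importing the client class. Note that the namespace and class name below are placeholders; check the php-zalando README for the actual ones.

<?php
 
// Composer's autoloader makes the library's classes available.
require 'vendor/autoload.php';
 
// Placeholder namespace/class name: look up the real ones in the php-zalando README.
use Cschalenborgh\Zalando\ZalandoClient;
 
$zalando = new ZalandoClient();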

phpMyAdmin on Laravel Homestead

What is Laravel Homestead?

“Laravel Homestead is an official, pre-packaged Vagrant “box” that provides you a wonderful development environment without requiring you to install PHP, HHVM, a web server, and any other server software on your local machine. No more worrying about messing up your operating system! Vagrant boxes are completely disposable. If something goes wrong, you can destroy and re-create the box in minutes!” – laravel.com

That’s great!

But how can I reach my databases via phpMyAdmin?

Unfortunately it doesn’t come with phpMyAdmin out of the box. You’re forced to set up a local PMA install, or use an external application like Sequel Pro.
In this short blog post I’ll show you how to set up phpMyAdmin on your Laravel Homestead box. Assuming you followed all the instructions on laravel.com to set up your box, start by SSH’ing into it:

homestead ssh

Install phpMyAdmin via apt-get:

sudo apt-get install phpmyadmin

(do NOT select apache2 or lighttpd. Just continue without them).

Next, make a symlink between the PMA directory and your web root:

sudo ln -s /usr/share/phpmyadmin/ /home/vagrant/Sites/phpmyadmin
cd ~/Sites && serve phpmyadmin.app /home/vagrant/Sites/phpmyadmin

Go back to your local environment and modify the hosts file like this:

127.0.0.1  phpmyadmin.app

phpMyAdmin is now reachable via http://phpmyadmin.app:8000

Or like this:

192.168.10.10  phpmyadmin.app

phpMyAdmin is now reachable via http://phpmyadmin.app

That’s it!

Pushing Laravel logs to Loggly


Laravel uses the Monolog library for logging; however, by default it saves all logs to a local directory. That’s not very useful in a production environment.

That’s where Loggly comes into play. Loggly acts as a central cloud platform that can receive logs from a multitude of sources: PHP frameworks (Laravel being one of them), operating systems, their own Loggly API, and lots of other frameworks and services. In this short tutorial I’ll show you how to integrate Loggly into your multi-environment Laravel project.

1) Update composer

First we need to update Composer so we’re 100% sure we’re using the latest Monolog version, because older versions don’t support Loggly.

composer self-update
composer update

2) Setup Loggly account

Go to https://www.loggly.com and create a (free) account.

3) Add Loggly credentials to your Laravel environment

Now, the nice thing about Loggly is that it’s pretty easy to configure. Go to https://www.loggly.com/tokens and set up a token for your project; use a separate token per project to keep things separated. Got a big project? Then you might want to use source groups as well.
Next, create a new file in your Laravel folder, /app/config/services.php, and add this:

<?php

return array(
	// credentials for loggly.com
	'loggly' => array(
		'key'	=> 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx',
		'tag'	=> 'ProjectName_' . strtolower(App::environment()),
	),
);

If you’re using Laravel’s environment configuration, do this instead:

<?php

return array(
	// credentials for loggly.com
	'loggly' => array(
		'key'	=> getenv('services.loggly.key'),
		'tag'	=> 'ProjectName_' . strtolower(App::environment()),
	),
);

And then store it in a similar way in your .env.local.php file, or on Laravel Forge.
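A minimal sketch of what that .env.local.php could look like for this setup (the key name matches the getenv() call above; the value itself is obviously a placeholder):

<?php
 
return array(
	// picked up by getenv('services.loggly.key') in app/config/services.php
	'services.loggly.key' => 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx',
);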

4) Install the logging code

Now this is where the magic happens. Open up /app/start/global.php and find the “Application Error Logger” section. Replace it with this code:

/*
|--------------------------------------------------------------------------
| Application Error Logger
|--------------------------------------------------------------------------
|
| Here we will configure the error logger setup for the application which
| is built on top of the wonderful Monolog library. By default we will
| build a rotating log file setup which creates a new file each day.
|
*/
 
$logFile = 'log-'.php_sapi_name().'.txt';
Log::useDailyFiles(storage_path().'/logs/'.$logFile);
 
/*
 * Setup Loggly Handler
 */
$handler = new \Monolog\Handler\LogglyHandler(Config::get('services.loggly.key'),\Monolog\Logger::DEBUG);
$handler->setTag(Config::get('services.loggly.tag'));
 
$logger = Log::getMonolog();
$logger->pushHandler($handler);

This way, every environment will have its own tag, and you can easily see in the Loggly dashboard whether something happened on your dev/staging environment or on live. I also strongly advise setting up alerts within Loggly so you get notified as soon as a few 500 errors start coming in.
Another nice thing about Loggly is that you can go back in time and easily see in the stats how many errors occurred in the past 24 hours; good luck doing that with log files! This is especially useful for keeping a close eye on cronjobs and incoming API calls.

5) Testing

Throw some 404 errors by opening a page in your project and appending some random characters to the URL. Wait a couple of minutes and there we go: the errors should show up in your Loggly dashboard.

Also worth noting: all other Laravel logging calls will push to Loggly as well.
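So a manual call like the one below (just an example message) ends up in Loggly too:

Log::warning('Payment provider response was slow', array('duration' => 5.2));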

More info about Laravel’s logging features: http://laravel.com/docs/4.2/errors#logging
More info about Loggly: https://www.loggly.com

php-invoiceocean, a PHP client to interact with the InvoiceOcean.com API

It’s been a while since I created a public PHP library to interact with another API but here we go.

php-invoiceocean is a PHP client to communicate with the InvoiceOcean.com API.

What is InvoiceOcean?

“The easiest way to online invoicing”. Over 70 000 companies are using the InvoiceOcean software. The application’s simplicity and intuitive interface is aimed at quick and efficient invoice issuing. Because of the SaaS environment, your data is securely stored in the Cloud and available to access from anywhere in the world. Whether you are a small or medium business owner or an individual entrepreneur, InvoiceOcean will make your work easier.

A few facts

– it’s RESTful
– the client works with JSON exclusively (they also offer XML APIs, but I like JSON more)
– it’s pretty smart: you only have to define the method names, no parameter or HTTP method checking
– works with Composer (obviously)

Clone this repo @ https://github.com/kryap/php-invoiceocean

How to use?

composer require kryap/php-invoiceocean
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
- Installing kryap/php-invoiceocean (dev-master a8ccf4d)
Cloning a8ccf4dfd3b8daa611087822f27da2a773c073ce
 
Writing lock file
Generating autoload files
Generating optimized class loader

Here’s how you could use it:

<?php
 
// load Composer's autoloader
require 'vendor/autoload.php';
 
$io = new InvoiceOceanClient('username', 'api_token_goes_here');
$clients = $io->getClients();
 
var_dump($clients);
 
?>

Need more documentation about the InvoiceOcean API?

Visit these links:

Behind the scenes @ Coolblue HQ

What is Coolblue?

Last week I was invited to visit Coolblue behind the scenes in Rotterdam, NL. Coolblue has always been one of my favorite Dutch/Belgian webshops because of their great customer service, their clear but simple websites and, most importantly, their great range of products at very affordable prices.

They have roughly 50k unique products spread over 150-200 different webshops (they launch 2 new webshops per week on average), all of which they keep in stock at all times in one of their huge warehouses. Their biggest warehouse is more than 40,000 square meters, which is about 10-20 FIFA football pitches! How big is that! Ordering before 23.59 means it’s delivered to your doorstep for free the next day, or the same day if you pay additional shipping costs. Even on Sundays!

Why would they organise behind the scenes events?

The main idea behind these ‘Behind the scenes’ events is that they are looking for talented programmers. They’re looking for roughly 100 new developers. They have about 40 right now, so it seems they have big plans. Are they going international? Who knows! But their CEO didn’t really deny it. Their total employee base is about 750 people.

Since I’m a freelancer I won’t be able to work for them, so why did I go to this event? Well, because their main dev talk was about scalability. I find this a very interesting and challenging topic, so getting insights from one of the biggest webshops in the Benelux was a great opportunity to learn.

So what did I learn at Coolblue?

  1. Use microservices / hypermedia APIs to flatten out your infrastructure. This allows you to isolate bugs and maintain those APIs without having to retest stable services on every deploy.
  2. Every microservice uses an isolated datastore: CouchDB for customer data, PostgreSQL for order and payment information, Elasticsearch for the product catalog.
  3. Use RabbitMQ as a central state-change notification mechanism. This is the glue between your microservices.
  4. Use git pull requests to review and validate your team members’ changes.
  5. Use CentOS RPM packages to distribute everything.
  6. Use Puppet or Chef to install and maintain your (virtual) servers.
  7. Use StatsD + Graphite for advanced reporting.
  8. Use Nagios for alerting.
  9. Create a “Chaos Monkey”. Its sole purpose is to kill parts of your live environment once in a while; your devs should then come up with ways to recover from these outages automatically. This is how Netflix stayed online during the massive AWS outage a while ago: http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html

Have a look at the slides from their software architect.

Why should you attend one of these meetings?

  1. If you want a developer job @ Coolblue obviously.
  2. If you want to learn more about scalability.
  3. If you want to network with hundreds of other devs.

Inform yourself on this page: http://www.coolblue.nl/behindthescenes

Here’s a nice Dutch article about the same event: http://www.dailybits.be/item/coolblue-behind-scenes/

QNAP 412 Turbo NAS – no more disk space

I have a QNAP 412 Turbo NAS and ran out of disk space. No problem, you say?
Well, apparently QNAPs are very fragile when it comes to 100% disk usage.

Symptoms could be:

  • console: ls freezes, rm’s freeze
  • WinSCP/FTP/web admin: cannot delete files (times out)
  • reboot: the NAS becomes unavailable, doesn’t show up in your network anymore, QFinder can’t find it
  • even my Windows 7 box became unstable because I had mapped a few network drives to my shares


I tried upgrading my firmware (remove your disks first! use one old spare HDD), but that doesn’t help. As soon as you insert your old drives, it returns to slow-motion mode (a.k.a. timeouts).

I then contacted QNAP via their support forum and here’s their solution:

  1. Turn off your QNAP NAS
  2. Pull out all HDD’s
  3. Restart QNAP without the HDD’s
  4. Use QFinder to detect the IP
  5. Put back the HDD’s
  6. Don’t follow the (web) installation wizard! Instead, connect via SSH (PuTTY, for example); the login credentials will probably be admin/admin
  7. Execute these commands (mount the raid):

 

mdadm -A /dev/md9 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm -A /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
 
# under /mnt
cd /mnt
mkdir HDA_ROOT
 
# under /share
cd /share
mkdir MD0_DATA
 
mount /dev/md9 /mnt/HDA_ROOT
mount /dev/md0 /share/MD0_DATA -t ext4
# (or: mount /dev/md0 /share/MD0_DATA -t ext3)
  8. Your shares should now be accessible again in /share/MD0_DATA, so you can delete some data. QNAP recommends at least 3GB of free space.
  9. Reboot your NAS with this command: ‘reboot’
  10. Your NAS should now be accessible again.

 

So, all in all, this fixed the problem and I still have all my data. But I do think QNAP should add an extra fail-safe to prevent this from happening. We’re not all technical people, and most users won’t even know what SSH is.

Raspberry Pi PLC/Domotica testcase

Here’s a quick tutorial on how to build a hardware on/off switch that sends its signals to a RESTful web API, using a Raspberry Pi running Raspbian. This is in fact a small PLC test case (proof of concept), and the possibilities are endless!
I’m planning to use this to monitor certain events around my house. E.g. is a door open/closed? Is a device on/off?

Download & install Raspbian

Download the latest version of Raspbian onto your Raspberry Pi SD card:
http://www.raspberrypi.org/downloads

Updates & dependencies

Do some updates and install the extra dependencies:

sudo apt-get update
sudo apt-get install python python-pycurl

Set up the hardware

In order to wire up the switch you’ll have to find out which GPIO pins are the input/output pins. Here’s a pin map:

[image: Raspberry Pi GPIO pin map]
Connect your Raspberry Pi’s GPIO (the big black pin header) to some switch or toggle. Here’s how I did it for this test case:

[photo: a simple switch wired to the Raspberry Pi’s GPIO header]

We actually need 3 pins. One for I/O, one for power, and one for grounding (safety first!). Make sure you solder the right cable to the right GPIO pin (see map above).

Now you might find the naming of these pins confusing. That’s because there are three different naming conventions in use here:

Pin Numbers   RPi.GPIO   Raspberry Pi Name   BCM2835
P1_01         1          3V3                 -
P1_02         2          5V0                 -
P1_03         3          SDA0                GPIO0
P1_04         4          DNC                 -
P1_05         5          SCL0                GPIO1
P1_06         6          GND                 -
P1_07         7          GPIO7               GPIO4
P1_08         8          TXD                 GPIO14
P1_09         9          DNC                 -
P1_10         10         RXD                 GPIO15
P1_11         11         GPIO0               GPIO17
P1_12         12         GPIO1               GPIO18
P1_13         13         GPIO2               GPIO21
P1_14         14         DNC                 -
P1_15         15         GPIO3               GPIO22
P1_16         16         GPIO4               GPIO23
P1_17         17         DNC                 -
P1_18         18         GPIO5               GPIO24
P1_19         19         SPI_MOSI            GPIO10
P1_20         20         DNC                 -
P1_21         21         SPI_MISO            GPIO9
P1_22         22         GPIO6               GPIO25
P1_23         23         SPI_SCLK            GPIO11
P1_24         24         SPI_CE0_N           GPIO8
P1_25         25         DNC                 -
P1_26         26         SPI_CE1_N           GPIO7

Anyway, let’s move on and try to catch the GPIO input using Python.

Read GPIO signals using Python

plc.py (daemon script)

import RPi.GPIO as GPIO
import time
import os
 
buttonPin = 7
GPIO.setmode(GPIO.BOARD)
GPIO.setup(buttonPin, GPIO.IN)
 
while True:
  if (GPIO.input(buttonPin)):
    os.system("sudo python /home/pi/plc_handle.py")
    #print "button called"
  time.sleep(0.5)  # small pause so the polling loop doesn't hog the CPU

plc_handle.py

import time
import RPi.GPIO as GPIO
import datetime
import pycurl
 
buttonPin = 7
GPIO.setmode(GPIO.BOARD)
GPIO.setup(buttonPin, GPIO.IN)
 
# reset state
last_state = -1
 
while True:
  input = GPIO.input(buttonPin)
  now = datetime.datetime.now()
 
  # check if the value changed
  if (input != last_state):
    print "Button state changed:", input, " @ ", now
 
    # push the new state to the web API
    api_url = "http://webserver.com/api/input.php"
    data = "location_id=1&status=%s" % input
    c = pycurl.Curl()
    c.setopt(pycurl.URL, api_url)
    c.setopt(pycurl.POST, 1)
    c.setopt(pycurl.POSTFIELDS, data)
    c.perform()
    c.close()
 
  # remember the previous input
  last_state = input
 
  # slight pause to debounce
  time.sleep(1)

You can run the script like this:

sudo python /home/pi/plc.py

Or add it to /etc/rc.local so it starts after each reboot. Make sure to background it with &, otherwise it will block the rest of the boot process:

python /home/pi/plc.py &
exit 0

Web API

Here’s a quick (and insecure) ‘API’ script for receiving the Raspberry Pi’s signals:
input.php

<?php
header('Cache-Control: no-cache, must-revalidate');
header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');
header('Content-type: application/json');
 
$dbh = new PDO('mysql:host=localhost;dbname=database', 'username', 'password');
 
$response = array(
    'status'    => 'nok'
);
 
if (!empty($_POST['location_id']) && isset($_POST['status']))
{
    $status = $_POST['status'];
    $location_id = $_POST['location_id'];
 
    // create log
    $sql = "INSERT INTO status_log (location_id, status, created_at, updated_at) VALUES (:location_id, :status, NOW(), NOW())";
    $q = $dbh->prepare($sql);
    $q->execute(array(':location_id' => $location_id,
                      ':status'      => $status));
 
    // update location  
    $sql = "UPDATE location SET status=:status, updated_at=NOW() WHERE id=:location_id";
    $q = $dbh->prepare($sql);
    $q->execute(array(':location_id' => $location_id,
                      ':status'      => $status));
 
    // output
    $response = array(
        'status'    => 'ok'
    ); 
}
 
echo json_encode($response);
?>
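For reference, here’s a minimal sketch of the two tables this script expects. The table and column names come from the queries above; the column types are my own assumptions:

<?php
 
// Sketch only: table/column names match input.php above, column types are assumptions.
$dbh = new PDO('mysql:host=localhost;dbname=database', 'username', 'password');
 
$dbh->exec("CREATE TABLE IF NOT EXISTS location (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    status TINYINT NOT NULL DEFAULT 0,
    updated_at DATETIME NOT NULL
)");
 
$dbh->exec("CREATE TABLE IF NOT EXISTS status_log (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    location_id INT UNSIGNED NOT NULL,
    status TINYINT NOT NULL,
    created_at DATETIME NOT NULL,
    updated_at DATETIME NOT NULL
)");
?>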

 
Now I’m very curious what sort of applications you guys are building with this Raspberry Pi “plc implementation”. Feel free to post them in the comments section.

Useful links

 

A/B split testing with PHP

What is split testing?

A/B split testing is the art of setting up multiple random variants/tests in a controlled experiment. Example tests are different calls to action, different types/locations of buttons, different images, etc.
Your goal? Find the best-converting variant, learn from it, and roll it out further across your website, thus getting more and more conversions out of your existing visitors. Learn why people do and don’t click.
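To make this concrete, here’s a minimal hand-rolled sketch of a cookie-based 50/50 split in plain PHP. The variant names, the example buttons and the conversion-tracking helper are all assumptions for illustration, not taken from any particular library:

<?php
 
// Assign the visitor to a variant once and remember it in a cookie,
// so they keep seeing the same version on every visit.
if (isset($_COOKIE['ab_variant'])) {
    $variant = $_COOKIE['ab_variant'];
} else {
    $variant = (mt_rand(0, 1) === 0) ? 'A' : 'B';
    setcookie('ab_variant', $variant, time() + 60 * 60 * 24 * 30, '/');
}
 
// Show a different call to action per variant (example content).
if ($variant === 'A') {
    echo '<a class="btn" href="/signup">Start your free trial</a>';
} else {
    echo '<a class="btn btn-big" href="/signup">Sign up now, it\'s free</a>';
}
 
// On the signup "thank you" page you would then log the conversion together
// with the variant (hypothetical helper), so you can compare conversion rates.
// track_conversion($_COOKIE['ab_variant']);
?>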


Why should you use split testing?

When using split testing, you’ll learn how your visitors think, what works best on your site, and how you can get the most out of all visitors. Optimising websites takes time, but that time may well be cheaper than putting another $1000 into advertising. First optimise, then do some more advertising.

How can you split test in PHP?

There are several PHP libraries you can use for split testing, and I tested most of them. Here are my favourites:

Must we use PHP libraries?

Nope. There are several other ways to integrate split testing into your projects. Here are some more:

What should we test? Example tests:

And here are some more articles to help you get started:

 

Any feedback or tips & tricks regarding split testing are more than welcome in the comment section. Good luck!