Apply ACL changes without reloading the whole configuration

This afternoon I was looking for a workaround in a web environment that would let me modify an ACL (adding or deleting an IP) without reloading the whole configuration, and I was lucky enough to find it in HAProxy!

Thierry FOURNIER described this workaround here. He suggests combining an ACL that matches integers with data fetched from a map file.

frontend input-pool
        default_backend output-pool
        acl abuser src,map_ip_int(/etc/haproxy/abusers.lst,0) -m int eq 1
        http-request tarpit if abuser

backend output-pool
        balance roundrobin
        server  app1_1 :81 cookie app1inst1 check inter 2000 rise 2 fall 5
        server  app1_2 :80 cookie app1inst2 check inter 2000 rise 2 fall 5
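The map file itself is plain text with one "key value" pair per line; with map_ip_int(...,0), any source IP not listed resolves to the default 0, so listed IPs map to 1 and trigger the ACL. A sketch of the file (the IPs are illustrative, not from the original post):

```
# /etc/haproxy/abusers.lst — one "<ip> <value>" pair per line 1 1
```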

Finally, with the stats socket enabled, we launch these commands:

## Block http request from
echo "add map /etc/haproxy/abusers.lst 1" | socat - unix:/tmp/haproxy 
## Allow http request from
echo "del map /etc/haproxy/abusers.lst 1" | socat - unix:/tmp/haproxy

This is another useful command:

echo "show stat"| socat unix-connect:/tmp/haproxy stdio
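Because show stat emits CSV, standard tools can trim its output; for instance, the health state is the 18th field, so proxy name, server name and status can be extracted with cut. The sample line below is fabricated for illustration:

```shell
# show stat CSV fields: 1=pxname, 2=svname, 18=status (health state)
sample='output-pool,app1_1,0,0,1,2,,3,4,5,,6,,7,8,9,10,UP'
echo "$sample" | cut -d, -f1,2,18   # prints: output-pool,app1_1,UP
# real use:
# echo "show stat" | socat unix-connect:/tmp/haproxy stdio | cut -d, -f1,2,18
```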

I've attached to this post a little GIF showing me testing this behavior in my Vagrant lab.
(Round-robin balancing is defined in a VirtualBox instance running Ubuntu 14 LTS; HAProxy opens the socket defined above and balances the HTTP requests.)
Demo: dynamic ACL map

A useful tool: socat

Kibana helps us analyze CDN logs

Log analysis is very useful for knowing the state, and understanding the behaviors and trends, of every component in our platform. Furthermore, it allows us to fix mistakes, prevent failures and improve the product. Splunk is the best solution I have tested; its main disadvantage is its price.

Due to the high cost of Splunk, I've chosen Kibana+Elasticsearch+Logstash to analyze the logs from my company's CDNs, Akamai and Amazon CloudFront. The main goal of this post is to show a cheaper alternative.

Firstly, we're going to import the Akamai logs. This is the log format from the official documentation:



This is the logstash filter:

filter {
    grok {
      type => "esw3c_waf"
      match => { "message" => "%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] (?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest}) %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{QS:cookies} \"%{WORD:WafPolicy}\|%{DATA:WafAlertRules}\|%{DATA:WafDenyRules}\"" }
    }
    date {
      type => "esw3c_waf"
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
      locale => "en"
    }
}

We can see the CloudFront log format here:

#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes
07/01/2012 01:13:11 FRA2 182 GET /view/my/file.html 200 Mozilla/4.0%20(compatible;%20MSIE%205.0b1;%20Mac_PowerPC) - zip=98101 RefreshHit MRVMF7KydIvxMWfJIglgwHQwZsbG2IhRJ07sn9AkKUFSHS9EXAMPLE== http -

On the one hand, logstash has an S3 input type to read gzipped log files directly from an S3 bucket. On the other hand, this is the filter applied:

filter {
    grok {
        type => "aws"
        pattern => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x-edge-location}\t(?:%{NUMBER:sc-bytes}|-)\t%{IPORHOST:c-ip}\t%{WORD:cs-method}\t%{HOSTNAME:cs-host}\t%{NOTSPACE:cs-uri-stem}\t%{NUMBER:sc-status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User-Agent}\t%{GREEDYDATA:cs-uri-query}\t%{GREEDYDATA:cookies}\t%{WORD:x-edge-result-type}\t%{NOTSPACE:x-edge-request-id}\t%{HOSTNAME:x-host-header}\t%{URIPROTO:cs-protocol}\t%{INT:cs-bytes}"
    }
    mutate {
        type => "aws"
        add_field => [ "listener_timestamp", "%{date} %{time}" ]
    }
    date {
        type => "aws"
        match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
    }
}
Also, I recommend enabling a lifecycle rule on the S3 bucket to purge old logs; they grow very quickly!
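The S3 input mentioned above could look like the following sketch; the bucket name, prefix and region are placeholders, credentials options are omitted, and option names vary between logstash versions, so check the s3 input documentation for your release:

```
input {
  s3 {
    bucket => "my-cloudfront-logs"   # hypothetical bucket name
    prefix => "cdn/"
    region => "eu-west-1"
    type   => "aws"
  }
}
```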

Well, the next step is installing Elasticsearch 1.0. It has been released recently, and I'm proud that it was announced to me by Honza Kral at FOSDEM 2014.

I want to highlight two essential plugins that sysadmins will like. The first one is HEAD and the other one is MARVEL.

HEAD plugin screenshot, one index per day/CDN

Marvel plugin screenshot:

Here are some useful links:
GROK patterns list
Tool to build expressions

Finally, we use Kibana, a JavaScript application that reads data from the Elasticsearch instance, to create reports and charts. My first step was creating two basic dashboards.

AWS Cloudfront Dashboard

Akamai Dashboard

A cheap web balancer: Nginx+HAProxy+Pacemaker


I'm going to explain how you can build a very cheap and efficient web balancer. I've rolled it out in production and non-production environments, publishing all the web applications of multiple environments from continuous integration. I used this solution in our production environment after benchmarking it with the "ab" and "siege" tools; I'll cover the tuning parameters and those benchmarks in another post.

I've published the websites of all environments by dividing the main domain into several subdomains
(e.g. one subdomain per environment, etc.)

Furthermore, in production environments we can use this segmentation to split by country, platform or content type.

The main goals:

  • Low cost: I've used open-source technologies such as CentOS, Nginx, HAProxy, Pacemaker and Corosync.
  • High availability: I've used virtualization with the HA feature enabled, where the virtual machines run on different physical machines; we also have high availability at the service level through the pacemaker and corosync daemons.
  • SSL offloading: we centralize all HTTPS negotiation in the nginx daemon; behind nginx, all traffic is plain HTTP.
  • Flexibility and control: haproxy lets us customize the balancing algorithm to take load averages, free memory, database connection status, etc. into account.
  • Security: we can use different VLANs, with all traffic between them filtered by a firewall, and we use HTTPS for secure transactions.
  • Technologies used: Nginx, HAProxy, Pacemaker, Corosync.

    Look at the diagram of this solution:

    I'll explain it briefly, describing each component.
    Firstly, browsers request content from the CDN or from our origins. All requests coming into our origin are filtered by the firewall. Once filtered, traffic goes to nginx, which splits it by domain and handles the HTTPS negotiation. Traffic is then sent to haproxy, which balances it across the different web servers. To secure this infrastructure we define three VLANs: DMZ, frontend and backend, and all traffic between them is filtered by the firewall. Two virtual machines running CentOS are deployed in the DMZ; they run Nginx and HAProxy in active-passive mode, with pacemaker and corosync managing this behavior. The web servers are deployed in the frontend VLAN, and the databases and shared filesystems (mysql, postgres, cassandra, mongo, cifs, hfs, nfs, etc.) are deployed in the backend VLAN.

    Let me explain it better using just one environment as an example; I'll describe it top-down, in a bit more technical detail. I hope this diagram helps you understand the architecture more easily.

    Logical diagram

    First of all, nginx and haproxy run in the DMZ VLAN with two virtual IP pools, one for each. I rolled this out on VMware virtual infrastructure, but you can use cheaper solutions such as XenServer, KVM, etc. Tip: you must ensure that each virtual machine runs on a different physical machine; add this rule to your virtual infrastructure. Requests arrive at the nginx daemon, which runs as a reverse proxy splitting traffic across multiple virtual hosts. Furthermore, nginx does the HTTPS negotiation and uses plain HTTP to transfer traffic to haproxy, adding an HTTP header when the transaction was originally HTTPS. Apache recognizes this header, and we set the proper variable to mask it from the application.
    Tip: use an SSL certificate with a wildcard in the Common Name; it makes management much easier.
    Another tip: set the parameters of this equation properly: max_clients = worker_processes * worker_connections / 4.
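As a worked example of that equation (the values below are illustrative assumptions, not from the original post):

```shell
# max_clients = worker_processes * worker_connections / 4
# assuming 4 worker processes and 1024 connections per worker:
worker_processes=4
worker_connections=1024
echo $(( worker_processes * worker_connections / 4 ))   # prints 1024
```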

    All web traffic is split by nginx and transferred to haproxy. We define an "upstream" pool in the haproxy configuration, one per environment. Haproxy lets us select the best balancing algorithm; I commonly use round robin. We also define IP ranges for every environment.
    Tip: define generously large IP ranges with plenty of free IPs; if we run into performance trouble, this lets us deploy more frontend servers without restarting the haproxy daemon.
    In this example, haproxy checks the health status of every web server by requesting a PHP script that returns the string "OK" only if the node has an acceptable load average, every database connection succeeds, and there is free memory. Note that haproxy can also do sticky balancing by analyzing HTTP headers such as JSESSIONID (Java apps), PHPSESSID (PHP apps) and ASPSESSIONID (ASP apps). Tip: knowing that each request needs about 17 KB, you can derive the maxconn parameter.
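From the ~17 KB per request figure you can sketch the maxconn calculation; the 1 GB memory budget here is an assumption for illustration:

```shell
# memory reserved for connections, in KB (hypothetical budget: 1 GB)
mem_kb=$(( 1024 * 1024 ))
per_request_kb=17
echo $(( mem_kb / per_request_kb ))   # prints 61680, an upper bound for maxconn
```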

    Finally, a varnish-cache instance receives all the requests on each server; non-cacheable and expired content is requested from Apache on the same node. These daemons deserve a dedicated explanation, which I'll give in another post.

    Look at the configuration files below.
    Nginx configuration:

    upstream http-example-int01 {
        keepalive 16;
    }
    server {
        server_name ~^.*-int01\.example\.com$;
        location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $host;
                proxy_pass http://http-example-int01/;
                proxy_redirect off;
        }
    }
    server {
        listen ssl;
        server_name ~^.*-int01\.example\.com$;
        ssl on;
        ssl_certificate /etc/nginx/ssl/crt/concat.pem;
        ssl_certificate_key /etc/nginx/ssl/key/example.key;
        location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header Host $host;
                proxy_pass http://http-example-int01/;
                proxy_redirect off;
        }
    }

    Haproxy configuration

    frontend example-int01
        default_backend example-int01
    backend  example-int01
            option forwardfor
            option httpchk GET /healthcheck.php
            http-check expect string OK
            server  web01 x.y.z.w:80 check inter 2000 fall 3
            server  web02 x.y.z.w:80 check inter 2000 fall 3
            server  web03 x.y.z.w:80 check inter 2000 fall 3
            server  web04 x.y.z.w:80 check inter 2000 fall 3
            server  web05 x.y.z.w:80 check inter 2000 fall 3
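The healthcheck.php script itself isn't reproduced in the post; here is the same idea sketched in shell (the load threshold is an illustrative assumption, and a real check would also verify database connectivity and free memory, as described above):

```shell
#!/bin/sh
# Sketch of the health check logic: print "OK" only when the node is healthy.
# The threshold (8.0) is illustrative; a real check would also test database
# connections and free memory before answering "OK".
load=$(cut -d ' ' -f 1 /proc/loadavg)
awk -v l="$load" 'BEGIN { print (l + 0 < 8.0) ? "OK" : "KO" }'
```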

    Apache configuration

        DocumentRoot "/srv/www/example/fa-front/public"
       <Directory "/srv/www/example/fa-front/public">
          Options -Indexes FollowSymLinks
          AllowOverride None
          Allow from All
          Order Allow,Deny
          RewriteEngine On
          RewriteCond %{HTTP:X-Forwarded-Proto} https
          RewriteRule .* - [E=HTTPS:on]
          RewriteCond %{REQUEST_FILENAME} -s [OR]
          RewriteCond %{REQUEST_FILENAME} -l [OR]
          RewriteCond %{REQUEST_FILENAME} -d
          RewriteRule ^.*$ - [NC,L]
          RewriteRule ^.*$ index.php [NC,L]
       </Directory>
       SetEnv APPLICATION_ENV int01
       DirectoryIndex index.php
       LogFormat "%v %{Host}i %h %l %u %t \"%r\" %>s %b %{User-agent}i" marc.int01
       CustomLog /var/log/httpd/cloud-example-front.log example

    Pacemaker configuration

    node balance01
    node balance02
    primitive nginx lsb:nginx \
            op monitor interval="1s" \
            meta target-role="Started"
    primitive haproxy lsb:haproxy \
            op monitor interval="1s" \
            meta target-role="Started"
    primitive lb1-vip ocf:heartbeat:IPaddr2 \
            params ip="x.x.x.x" iflabel="nginx-vip" cidr_netmask="32" \
            op monitor interval="1s"
    primitive lb2-vip ocf:heartbeat:IPaddr2 \
            params ip="y.y.y.y" iflabel="haproxy-vip" cidr_netmask="32" \
            op monitor interval="1s"
    group haproxy_cluster lb2-vip haproxy \
            meta target-role="Started"
    group nginx_cluster lb1-vip  nginx \
            meta target-role="Started"
    property $id="cib-bootstrap-options" \
            dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
            cluster-infrastructure="openais" \
            expected-quorum-votes="2" \
            stonith-enabled="false" \
            last-lrm-refresh="1355137974" \
    rsc_defaults $id="rsc-options" \

    Tmux, the best choice!

    We're going to talk about a terminal multiplexer. We're talking about TMUX!!


    A few months ago everyone in the systems department started working with this terminal multiplexer, and we are so satisfied that I simply had to write this post!

    I also remember with nostalgia when I started to use "screen", a very useful tool I used to:

  • run long processes, combined with the time command (/usr/bin/time --format="%E %S %U %P" --output="log file"); appending output with "&>>" helps a lot too
  • run critical processes, such as deploying new code to the production environment
  • run scripts unattended, like a daemon; it can be the first step before daemonizing them
  • share scripts or processes with other colleagues, or resume jobs across different working shifts
  • Moreover, Terminator (Linux) and iTerm (Mac) have been my consoles of choice for typing simultaneously into arbitrary groups of terminals, for tasks such as deploying new code or changing configurations. Furthermore, you can arrange the terminals of each tab in a grid, which is really useful!!!

    If the iTerm/Terminator features are that useful, imagine combining them with the screen features: that is TMUX!! A few weeks ago, on a very rainy Sunday afternoon, I learned many of its features while customizing my work environment. It was so productive that I recommend it to any system administrator.

    Here we go:

  • use multiple tabs, grouping the managed servers by role/environment
  • split each tab into different panels
  • send the same command or signal to all panels of a tab
  • resize each panel quickly
  • switch between panels or tabs easily
  • no need for X11, awesome!
  • define the buffer size of each panel
  • customize profiles like templates, where we define tabs, panels and sizes, and use them as scripts
  • customize the shortcut for any action
  • I've attached my tmux configuration profile (~/.tmux.conf); let me comment on my favourite parameters:
    File: tmux.conf
    Path: $HOME/.tmux.conf

    # Alert on activity in any panel
    set -g visual-activity on
    # Set buffer size of any terminal in 10000 lines
    set -g buffer-limit 10000
    # C^B+r reload the tmux configuration without close/open tmux instances
    unbind r
    bind r source-file ~/.tmux.conf  \; display "Reloaded!"
    #C^B+a :(All) write or send signal simultaneously to all panels in a tab
    unbind a
    bind a setw synchronize-panes on
    # C^B+o :(One) write or send signal just one panel
    unbind o
    bind o setw synchronize-panes off
    # index panel and index tab start in 1, default is 0
    set -g base-index 1
    setw -g pane-base-index 1
    # Custom status bar
    # Powerline symbols: ⮂ ⮃ ⮀ ⮁ ⭤
    set -g status-utf8 on
    set -g status-left-length 32
    set -g status-right-length 150
    set -g status-interval 2
    set -g status-left '#[fg=colour15,bg=colour238,bold] #S #[fg=colour238,bg=colour234,nobold]⮀'
    set -g status-right '#[fg=colour245]⮃ %R ⮃ %d %b #[fg=colour254,bg=colour234,nobold]⮂#[fg=colour16,bg=colour254,bold] #h '
    set -g window-status-format "#[fg=white,bg=colour234] #I #W "
    set -g window-status-current-format "#[fg=colour234,bg=colour39]⮀#[fg=colour16,bg=colour39,noreverse,bold] #I ⮁ #W #F #[fg=colour39,bg=colour234,nobold]⮀"

    Here is a script to make it easy to use, very powerful in an emergency when you need to manage several servers.
    The previous GIF shows how I launch the script: I connected to 8 nodes with the same role (the magenta web servers). As you can see, the script adds a new tmux tab, splits it into 8 panels and connects each one to a different node. You must run it from inside a tmux session.
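The script itself isn't reproduced here; a minimal sketch of the same idea follows. The host names are illustrative, and the sketch prints the tmux commands instead of running them, so you can review them (swap echo for direct execution inside a session):

```shell
#!/bin/sh
# Print the tmux commands that would open one ssh pane per host in a new
# window, tile the layout, and synchronize input. Hosts are example names.
HOSTS="web01 web02 web03 web04"
set -- $HOSTS
echo "tmux new-window -n cluster 'ssh $1'"
shift
for host in "$@"; do
  echo "tmux split-window -t cluster 'ssh $host'"
  echo "tmux select-layout -t cluster tiled"
done
echo "tmux set-window-option -t cluster synchronize-panes on"
```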

    Finally, the sources I've read:
    Zooming Tmux Panes
    Simple Remote Pairing with Wemux
    Splitting terminal with tmux
    Tmux Documentation

    Projects Links:
    Tmux Project
    Wemux Project

    WordPress running on RaspberryPi

    This is the first post of my fresh WordPress installation. I've just finished installing WordPress, and I'm going to document all the steps. I chose Debian as the operating system, and I use the nginx, php-fpm and mysql daemons to run WordPress. These are the steps:

      • I installed berryboot on the SD card and then installed Debian as the operating system. More info in this link
      • Run "apt-get update" to update the repository sources; I also installed my favourite editor.
      • I installed all the daemons needed:
    $sudo apt-get install nginx php5-fpm php5-cgi php5-cli php5-common php5-curl php5-gd php5-mcrypt php5-mysql mysql-server
    • Set php-fpm to work with the nginx daemons: we use a unix socket file for the communication with nginx, then configure the nginx virtual host and set a specific global variable for PHP.

    File: /etc/nginx/sites-available/

    server {
            listen   80; ## listen for ipv4; this line is default and implied
            root /usr/share/nginx/www;
            index index.php;
            location / {
                    rewrite  ^/?$  /blog/  redirect;
                    try_files $uri $uri/ /index.php;
            }
            location /blog/ {
                    try_files $uri $uri/ /blog/index.php?$args;
            }
            location ~ \.php$ {
                    fastcgi_split_path_info ^(.+\.php)(/.+)$;
                    fastcgi_pass unix:/var/run/php5-fpm.sock;
                    fastcgi_index index.php;
                    include fastcgi_params;
            }
            location = /favicon.ico {
                    log_not_found off;
                    access_log off;
            }
            location = /robots.txt {
                    allow all;
                    log_not_found off;
                    access_log off;
            }
            location ~ /\.ht {
                    deny all;
            }
            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                    expires max;
                    log_not_found off;
            }
    }

    We must check that php5-fpm is listening on the proper unix socket file:

    $ grep listen /etc/php5/fpm/pool.d/www.conf
    listen = /var/run/php5-fpm.sock

    Finally, we set cgi.fix_pathinfo to 0 in the file /etc/php5/fpm/php.ini:

    $ grep cgi.fix_pathinfo /etc/php5/fpm/php.ini
    cgi.fix_pathinfo=0
    • First install the PHP files, then prepare the mysql database, and finally set the database credentials.

    Download the WordPress installation files and unzip them:

    $ cd /usr/share/nginx/www/
    $ wget
    $ unzip
    $ mv wordpress blog
    $ rm

    We prepare the mysql database:

    mysql> CREATE DATABASE wordpress;
    Query OK, 1 row affected (6.58 sec)
    mysql> GRANT ALL PRIVILEGES ON wordpress.* TO "wordpress"@"localhost" IDENTIFIED BY "wordpress";
    Query OK, 0 rows affected (0.01 sec)
    mysql> flush privileges;
    Query OK, 0 rows affected (0.02 sec)
    mysql> exit;

    Set the database credentials in the WordPress application:

    cp wp-config-sample.php wp-config.php
    vim wp-config.php

    File /usr/share/nginx/www/blog/wp-config.php

    /** The name of the database for WordPress */
    define('DB_NAME', 'wordpress');
    /** MySQL database username */
    define('DB_USER', 'wordpress');
    /** MySQL database password */
    define('DB_PASSWORD', 'wordpress');
    /** MySQL hostname */
    define('DB_HOST', 'localhost');

    Fix the file permissions and restart all daemons:

    $sudo chown -R www-data.www-data /usr/share/nginx/www/
    $sudo service nginx restart 
    $sudo service php5-fpm restart

    Open any browser and make the first HTTP request; you will see the installation wizard.
    Set the blog admin credentials, and the wizard will create the mysql data structures.

    I recommend using friendly URLs; they improve the user experience, for example this URL:
    For that, we add these lines to the nginx virtual host configuration.
    File: /etc/nginx/sites-available/

            location /blog/ {
                    try_files $uri $uri/ /blog/index.php?$args;
            }
    • Finally, add and configure the WordPress plugins; here are the plugins I like:

    SyntaxHighlighter Evolved
    WP to Twitter
    NextScripts: Social Networks Auto-Poster
    Author Spotlight (Widget)
    ExtraWatch Live Stats and Visitor Counter FREE
    Google Analytics
    Google Analytics for WordPress
    Social Login for wordpress
    User Photo

      • Create categories and define the menu tabs
      • Select, download and install a theme, and add the menu

    I wrote this first post, and my blog is ready to collect my technical experiences!!!