Azure Application Gateway issues with wildcard certificates

Within my current company, we are using Azure Application Gateway to host the application. It is a nice load balancer, but with a Let's Encrypt wildcard certificate I kept getting the following message on the "Backend health" page when trying to use HTTPS from the application gateway to the backend server (keeping things secure is always nice 🙂 )

The Common Name (CN) of the backend server certificate does not match the host header entered in the health probe configuration (v2 SKU) or the FQDN in the backend pool (v1 SKU). Verify if the hostname matches with the CN of the backend server certificate. To learn more visit - https://aka.ms/backendcertcnmismatch.

The main problem with using a wildcard certificate (e.g. for *.example.com) and a listener pointing to a single subdomain (e.g. test.example.com) is that the "Health probes" have to be set up to check that the backend server is actually hosting test.example.com, instead of probing example.com.

So, the nginx daemon on the Linux server was serving test.example.com with the wildcard Let's Encrypt certificate (which we generate via Azure Functions), and a custom health probe was then set up on the application gateway with its host set to test.example.com.
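For reference, here is a minimal sketch of the kind of nginx server block involved (the certificate paths and web root are illustrative, not our exact setup):

server {
    listen 443 ssl;
    server_name test.example.com;

    # wildcard certificate covering *.example.com (paths are illustrative)
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /var/www/test;
    index index.html;
}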

With the probe host matching a name covered by the wildcard certificate, the health probe starts to work instead of giving the error above, and the backend health page shows the probes with a status of 200.

If the above doesn’t help, just shout.

Morse code

As a test, I was asked to write a Morse code decoder where some of the signals could not be determined from the input. In this case, instead of having either a dot (.) or a dash (-), we have a question mark (?), which could be either a dot or a dash.

Here is an image of the Morse code alphabet of dots / dashes, and here is a link to a view of the "tree" of the Morse code dichotomic search, which is basically a binary search over the Morse alphabet.

So, we are only going down 3 levels of the tree; below are some tests to confirm that the process is working fine.

  • .- = A
  • . = E
  • ... = S
  • -.- = K
  • ? = ET (either E or T)
  • -? = NM (either N or M)

So, I approached this test by thinking that I need an "item" that stores the current value (i.e. the character) and pointers to its left / right items; here is my item.h definition.

#ifndef ITEM_H
#define ITEM_H

// a node in the Morse "tree": the decoded character plus left (dot) / right (dash) children
class item {
    private:
        char value;
        item *left;
        item *right;
    public:
        item(char value, item *left, item *right);
        item(char value);
        item(item *left, item *right);

        item();

        char getValue();

        item* leftItem();
        item* rightItem();
};

#endif // ITEM_H
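The constructors and getters are straightforward; a sketch of how a matching item.cpp could look (the full code is in the zip linked below; using ' ' as a placeholder value for nodes without a character is my assumption here):

#include "item.h"

// constructors simply store the character and the child pointers
item::item(char value, item *left, item *right) : value(value), left(left), right(right) {}
item::item(char value) : value(value), left(nullptr), right(nullptr) {}
item::item(item *left, item *right) : value(' '), left(left), right(right) {}  // ' ' is a placeholder (assumption)
item::item() : value(' '), left(nullptr), right(nullptr) {}

char item::getValue() { return value; }

item* item::leftItem() { return left; }
item* item::rightItem() { return right; }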

And then to build the 3 levels of the search tree

    item* head = new item(
        new item('e',
            new item('i', 
                new item('s'),
                new item('u')
            ),
            new item('a',
                new item('r'),
                new item('w')
            )
        ),
        new item('t',
            new item('n', 
                new item('d'),
                new item('k')
            ),
            new item('m',
                new item('g'),
                new item('o')
            )
        )
    );

and then the last part was literally reading in the signal to be processed and going down the tree, either left / right, or down both branches for an undetermined (?) signal.

string output(string signal, item* codes) {
    string ret = "";
    // no more signals left: the current node holds the decoded character
    if (signal.size() == 0)
        return string(1, codes->getValue());
    for (string::size_type i = 0; i < signal.size(); i++) {
        if ('?' == signal[i]) {
            // undetermined signal: try both a dot and a dash and concatenate the results
            ret += output("." + signal.substr(i + 1), codes);
            ret += output("-" + signal.substr(i + 1), codes);
            return ret;
        } else if ('.' == signal[i]) {
            // dot: go down the left branch
            return output(signal.substr(i + 1), codes->leftItem());
        } else if ('-' == signal[i]) {
            // dash: go down the right branch
            return output(signal.substr(i + 1), codes->rightItem());
        } else {
            throw invalid_argument(string("Invalid character at this point of the string ").append(signal));
        }
    }
    return ret;
}
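To tie it together, here is a quick sketch of a main() that runs the tests listed above; it assumes the item class and the output() function from this post are compiled alongside it:

#include <iostream>
#include <string>
#include "item.h"
using namespace std;

int main() {
    // build the 3-level tree exactly as in the snippet above
    item* head = new item(
        new item('e', new item('i', new item('s'), new item('u')),
                      new item('a', new item('r'), new item('w'))),
        new item('t', new item('n', new item('d'), new item('k')),
                      new item('m', new item('g'), new item('o'))));

    cout << output(".-", head) << endl;  // a
    cout << output("...", head) << endl; // s
    cout << output("?", head) << endl;   // et (either e or t)
    cout << output("-?", head) << endl;  // nm (either n or m)
    return 0;
}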

If you want to view the whole code, here is the link to the zip file of the morse code.

Docker Compose – postgres / pgadmin / nginx / php-fpm with different PHP versions

This post uses the pre-defined official Docker images; the second post will use an Alpine mini root filesystem as a base and then add the additional packages for each part of this setup.

Folder listings

My folder listing for the setup below.

./nginx:
conf.d  Dockerfile  nginx.conf  sites

./nginx/conf.d:
default.conf

./nginx/sites:
default.conf

./php-fpm:
Dockerfile  xdebug.ini

(the extras are for the logs and the postgres volume-mounted data)
./logs

./postgres-data

Environment file

To start with, I like to create an env(ironment) file that holds the internal settings used within the Docker builds. The file below sets the PHP version that I wish to use, and since we are using php-fpm in this development environment, we might as well enable xdebug as well.

Below is the local.env file (please save it under that name as well); I shall include a git repo below.

PHP_VERSION=8.0

POSTGRES_PASSWORD=example
POSTGRES_USER=postgres

PGADMIN_DEFAULT_PASSWORD=example
PGADMIN_DEFAULT_EMAIL=ian@codingfriends.com

XDEBUG_PORT=9000

This starts off with the PHP version, follows with the postgres default details and the pgAdmin default login details, and ends with the XDEBUG_PORT definition.

Docker-compose file

The next part is the docker-compose file; this describes how the containers rely on each other (depends_on within the compose file), which networks to use and also which ports to expose. As a side note, I always found the port mappings to be a funny way around: it is <outside of the container>:<internal to the container>, so 8080:80 exposes the internal port 80 (websites that aren't using SSL run on this port, for example) to the outside world on port 8080, e.g. on the host you can go to http://localhost:8080 to view the container's port 80 service.
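In compose terms that mapping is literally just:

ports:
  - 8080:80   # host port 8080 -> port 80 inside the container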

Also, if you are not altering the base images that are being pulled down from the Docker repository (I shall do another post about switching this to a local repo or AWS ECS), then there is no build section, as in the db service description below; but for php-fpm / nginx there are some extra build steps required for this demo, hence the Dockerfile(s) in those folders.

# Use postgres/example user/password credentials
version: '3.1'

services:

  db:
    image: postgres
    restart: always
    hostname: postgresDB
    container_name: postgresDB
    environment:
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      POSTGRES_USER: $POSTGRES_USER
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    networks:
      - iKnowNW

  pgadmin4:
    image: dpage/pgadmin4
    hostname: pgadmin4
    container_name: pgadmin4
    depends_on:
      - db 
    restart: always
    environment:
      PGADMIN_DEFAULT_PASSWORD: $PGADMIN_DEFAULT_PASSWORD
      PGADMIN_DEFAULT_EMAIL: $PGADMIN_DEFAULT_EMAIL
    ports:
      - 8080:80
    networks:
      - iKnowNW

  php-fpm:
    build:
      context: ./php-fpm
      args:
          - PHP_VERSION=${PHP_VERSION}
          - XDEBUG_PORT=${XDEBUG_PORT}
    depends_on:
      - db
    environment:
      - XDEBUG_CONFIG=client_port=${XDEBUG_PORT}
    volumes:
      - ../src:/var/www
      - ./logs:/var/logs
      - ./php-fpm/xdebug.ini:/usr/local/etc/php/conf.d/xdebug.ini
    networks:
      - iKnowNW      

  nginx:
    build: 
      context: ./nginx
    volumes:
      - ../src:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf.d/:/etc/nginx/conf.d
      - ./nginx/sites/:/etc/nginx/sites-available
      - ./logs:/var/log
    depends_on:
      - php-fpm
    ports:
      - 8081:80
    networks:
      - iKnowNW        

networks:
  iKnowNW:
      driver: bridge

PHP-FPM

So the main things within the php-fpm configuration above are:

args:
    - PHP_VERSION=${PHP_VERSION}

environment:
    - XDEBUG_CONFIG=client_port=${XDEBUG_PORT}

Both of these are important: the args entries pass build arguments into the php-fpm Dockerfile build process, and the environment entry passes a runtime environment variable into the container. So let's start with the build; below is the php-fpm Dockerfile.


ARG PHP_VERSION

FROM php:${PHP_VERSION}-fpm-alpine

# build args declared before FROM are out of scope after it, so re-declare the one we need here
ARG XDEBUG_PORT

RUN apk --update --no-cache add git postgresql-dev
RUN apk add --no-cache $PHPIZE_DEPS
RUN pecl install xdebug
RUN docker-php-ext-install pdo pdo_pgsql
RUN docker-php-ext-enable xdebug

WORKDIR /var/www
EXPOSE ${XDEBUG_PORT}

So the ARG variables are what were passed in from the docker-compose file; in this instance PHP_VERSION would be 8.0, denoting the PHP version to use. One gotcha is that an ARG declared before the FROM line is only available to the FROM line itself, which is why XDEBUG_PORT is declared after FROM, so that the EXPOSE line (which publishes the xdebug port of the container) can use it.
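If you want to sanity-check the php-fpm image build on its own, outside of compose, something like this should work (the image tag is just an example):

docker build --build-arg PHP_VERSION=8.0 --build-arg XDEBUG_PORT=9000 -t php-fpm-dev ./php-fpm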

Additional xdebug settings are included in the xdebug.ini file below; this is mounted into the container via the volumes entry in the docker-compose file above.

xdebug.start_with_request=yes
xdebug.mode=debug
xdebug.log=/var/logs/xdebug/xdebug.log
xdebug.discover_client_host=1

Nginx

This one is the biggest folder, as the following things need to be put in place:

  • Mount the nginx.conf file (the service configuration) into the container
  • Mount the default site configuration, which talks to the php-fpm container using the FastCGI protocol
  • Mount the php-upstream configuration that points at the php-fpm container

Let's start with the nginx.conf file; it just describes the nginx service (worker connections, logs) and where the http configurations are placed, etc.

user  nginx;
worker_processes  4;
daemon off;

error_log  /var/log/nginx/error.log debug;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    access_log  /var/log/nginx/access.log;
    sendfile        on;
    keepalive_timeout  65;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-available/*.conf;
}

The default.conf below is the default site; as shown above, it is mounted into the /etc/nginx/sites-available folder.

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name localhost;
    root /var/www/public;
    index index.php index.html index.htm;

    location / {
         try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_read_timeout 600;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}

The most important part here is the "root" key/value pair above; this is where the local PHP folder needs to be mounted for the site to work.

The last part is the php-upstream definition (conf.d/default.conf):

upstream php-upstream {
    server php-fpm:9000;
}

Literally, the php-fpm value above matches the service name from the docker-compose file (Docker resolves it over the shared network), and 9000 is php-fpm's default listening port.

And then the actual Dockerfile is very small!!

FROM nginx:alpine
WORKDIR /var/www
CMD ["nginx"]
EXPOSE 80

It literally describes which base image to pull (the nginx image with the alpine tag), defines the WORKDIR (the working directory the container is "cd"'d into), and runs nginx as the command (CMD). Since the nginx.conf above sets daemon off;, nginx stays in the foreground and keeps the container running.

Final part!

Create the subdirectories

  • logs
  • postgres-data

These store the containers' data so it isn't lost after the containers have been stopped / killed (containers are, in theory, ephemeral, i.e. short-lived). Since the nginx.conf above logs to /var/log/nginx/, you may also need an nginx sub-folder inside logs for the error / access logs to land in.

The last step is to run the build process and then view the code that you have within your ../src directory (this is where the hosted PHP code will live; I am using a sub-folder within there called public, e.g. ../src/public/ is where the web-viewable code, controllers etc. end up after a Symfony creation script).
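If you don't have a Symfony project to hand yet, a throwaway ../src/public/index.php like the sketch below is a quick way to confirm that nginx, php-fpm and postgres are all talking to each other (the credentials come from local.env above):

<?php
// ../src/public/index.php -- a quick sanity check, not part of the real project
// 'db' is the postgres service name from the docker-compose file, resolved over the shared network
$pdo = new PDO('pgsql:host=db;port=5432;dbname=postgres', 'postgres', 'example');
echo 'Connected to: ' . $pdo->query('SELECT version()')->fetchColumn();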

To view the pgAdmin page, just go to http://localhost:8080/

To view the hosted PHP code, just go to http://localhost:8081/

Have fun, and if there are any issues, please contact me!! Here is my start.sh script to build / run the container setup above.

#!/bin/bash
## place your local environment file name here
ENV=local.env

case "$1" in
start)
   echo "STARTING"
   docker-compose --env-file=$ENV -f docker-compose.yml up
   ;;
stop)
   echo "STOPPING"
   docker-compose --env-file=$ENV -f docker-compose.yml down
   ;;
restart)
   $0 stop
   $0 start
   ;;
rebuild)
   echo "REBUILD"
   docker-compose --env-file=$ENV -f docker-compose.yml up --build
   ;;
upgrade)
   echo "UPGRADING CONTAINERS"
   docker-compose pull
   ;;

*)
   echo "Usage: $0 {start|stop|restart|rebuild|upgrade}"
esac

exit 0 
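Usage is then just (assuming the script sits next to docker-compose.yml):

chmod +x start.sh
./start.sh start     # bring the whole stack up
./start.sh rebuild   # rebuild the images after changing a Dockerfile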

Operating Systems

Operating systems are the layer that sits between the user and the hardware.

There are a few operating systems that cater for different people:

  1. Windows (business, personal) the main one at present
  2. Linux (business, personal) the one trying to get open source out there and make it a very interesting alternative to Windows
  3. Solaris (business)
  4. OpenSolaris (personal, business)
  5. Unix (business)

There are of course a few more out there, but I would say that these are the main ones trying to gain your usage. I am interested in all of them and shall demo installs, usage tips, development help and a lot more things that are of interest.

Linux – Steam – Half-Life 2

I have started to use Arch Linux as my main Linux distro, and to be honest it is very close to Gentoo with regards to customising what type of setup you want, but without the pain of compiling.

You do lose some of the configuration options compared to Gentoo, but to be honest not enough to be worth spending all that time compiling!

So I am using KDE, along with Steam to play some games :), but Half-Life 2 kept crashing, and upon ssh'ing onto the box there was the following error within dmesg:

[ 2334.498295] INFO: task hl2_linux:4442 blocked for more than 120 seconds.
[ 2334.498298]       Not tainted 4.8.13-1-ARCH #1
[ 2334.498300] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2334.498303] hl2_linux       D ffff8807cdeeba88     0  4442      1 0x20020006
[ 2334.498308]  ffff8807cdeeba88 00ff8807cdeebb50 ffff88081af15580 ffff8807c8169c80
[ 2334.498312]  0000000000000000 ffff8807cdeec000 ffff880817c1c580 ffff880817c1c548
[ 2334.498317]  ffff880817c18000 ffff8807641ef828 ffff8807cdeebaa0 ffffffff815f40ec
[ 2334.498321] Call Trace:
[ 2334.498324]  [<ffffffff815f40ec>] schedule+0x3c/0x90
[ 2334.498359]  [<ffffffffa0144ab8>] amd_sched_entity_fini+0x68/0x100 [amdgpu]
[ 2334.498364]  [<ffffffff810c0450>] ? wake_atomic_t_function+0x60/0x60
[ 2334.498396]  [<ffffffffa010c2dd>] amdgpu_ctx_fini+0xcd/0x110 [amdgpu]
[ 2334.498427]  [<ffffffffa010cb65>] amdgpu_ctx_mgr_fini+0x65/0xa0 [amdgpu]
[ 2334.498454]  [<ffffffffa00e520e>] amdgpu_driver_postclose_kms+0x3e/0xd0 [amdgpu]
[ 2334.498465]  [<ffffffffa0004703>] drm_release+0x203/0x380 [drm]
[ 2334.498469]  [<ffffffff8120b42f>] __fput+0x9f/0x1e0
[ 2334.498472]  [<ffffffff8120b5ae>] ____fput+0xe/0x10
[ 2334.498475]  [<ffffffff8109a0d0>] task_work_run+0x80/0xa0
[ 2334.498479]  [<ffffffff810806e2>] do_exit+0x2c2/0xb50
[ 2334.498483]  [<ffffffff810b4155>] ? put_prev_entity+0x35/0x8c0
[ 2334.498487]  [<ffffffff81080feb>] do_group_exit+0x3b/0xb0
[ 2334.498490]  [<ffffffff8108be08>] get_signal+0x268/0x640
[ 2334.498494]  [<ffffffff8102d0f7>] do_signal+0x37/0x6b0
[ 2334.498498]  [<ffffffff815f7241>] ? do_nanosleep+0x91/0xf0
[ 2334.498501]  [<ffffffff810ecdb0>] ? hrtimer_init+0x120/0x120
[ 2334.498504]  [<ffffffff815f720a>] ? do_nanosleep+0x5a/0xf0
[ 2334.498508]  [<ffffffff81003651>] exit_to_usermode_loop+0xa1/0xc0
[ 2334.498511]  [<ffffffff81003df7>] do_fast_syscall_32+0x157/0x170
[ 2334.498515]  [<ffffffff815f987b>] entry_SYSCALL_compat+0x3b/0x40

which was to do with the AMDGPU-PRO graphics driver.

So with some digging around, you are able to alter the video settings within Half-Life 2: go to "Options -> Video", and the screen below will be shown.

Then click on "Advanced", which will show the following:

If you set "Multicore Rendering" to Disabled and "Wait for vertical sync" to Enabled, this should stop Half-Life 2 from crashing.

Happy gaming!!

PHP 7 – generators – yield

PHP has generators (since PHP 5.5), where you are able to return (yield) results from a function one at a time, and only the ones that you will need; PHP 7 also lets you declare Generator as the return type.

// declare the return type as Generator -- i.e. this function yields results; it still works without the return type
function yieldSomeNumbers() : Generator {  
    yield 10;
    yield 13;
}
 
foreach (yieldSomeNumbers() as $v) {
    var_dump($v);
}

Will output

int(10)
int(13)

The ': Generator' return type at the end of the function is not actually needed: because the function contains yield, PHP treats it as a generator and it returns a Generator object regardless.
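A quick way to confirm that (a small check using the same function as above, just without the return type declared):

function yieldSomeNumbers() {   // no ": Generator" declared this time
    yield 10;
    yield 13;
}

var_dump(get_class(yieldSomeNumbers()));   // string(9) "Generator"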

For example, let's say that you are searching through the numbers 1-100 for the value 10; before generators, the code would have been something like:

function generateNumbers() {
    return range(1,100);   // load up an array of values 1-100 e.g. 1,2,3,4,5...
}
 
foreach (generateNumbers() as $v){
    var_dump($v);
    if ($v == 10) {
        var_dump("FOUND");
        break;
    }
}

The foreach (generateNumbers() ...) loop will be using the full array, whereas

function generateSomeNumbers() : Generator {
    foreach (range(1,100) as $v) {
        yield $v;
    }
}
 
foreach (generateSomeNumbers() as $v){
    var_dump($v);
    if ($v == 10) {
        var_dump("FOUND");
        break;
    }
}

will only produce each yielded value upon request, so once the value 10 is found, the remaining numbers are never generated.
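That lazy behaviour also means a generator can describe a sequence that would be far too big (or endless) to build as an array; a small sketch to illustrate:

function naturalNumbers() : Generator {
    $i = 1;
    while (true) {
        yield $i++;   // each value is produced only when the foreach asks for it
    }
}

foreach (naturalNumbers() as $v) {
    if ($v > 10) {
        break;        // stop pulling values; nothing beyond this point is ever generated
    }
    var_dump($v);
}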