Laravel Websockets, some gotchas!

There is a really nice websockets Composer package for Laravel, but there are a couple of issues that you may hit along the way! The package is beyondcode/laravel-websockets. A really nice package, but here are some gotchas that I came across.

  1. 403 Forbidden error on the auth URL
  2. 401 Unauthorized error on the auth URL
  3. Using namespaces within the Echo listen method

So, the first two are kind of covered within the manual, but I didn't read it and in turn ran into the issues! I am going to split the three issues into separate areas below.

403 Forbidden error on the auth URL

When you aren't running the websocket locally, you may hit a 403 request error on the auth URL. This is because of the gate defined within the beyondcode class WebSocketsServiceProvider, in the method called registerDashboardGate:

protected function registerDashboardGate()
{
    Gate::define('viewWebSocketsDashboard', function ($user = null) {
        return app()->environment('local');
    });
}

As you can tell, this is checking if the current application is running within the "local" environment setting, which is why you didn't notice it whilst testing locally. The problem is within the config/websockets.php file:

'middleware' => [
    'web',
    Authorize::class,
],

where Authorize::class is the beyondcode class calling the above method, so we just need to replace it with our own authorization middleware, e.g. jwt.auth.
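For example, if your app authorizes broadcast requests with JWT, the middleware entry could end up looking something like this (a sketch; jwt.auth stands in for whatever auth middleware your application already registers):

'middleware' => [
    // jwt.auth is an assumption - use the auth middleware your own app registers
    'jwt.auth',
],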

401 Unauthorized error on the auth URL

The next issue I noticed was an error similar to

$key cannot obtain from null

This is because the AuthenticateDashboard method has the following:

$app = App::findById($request->header('x-app-id'));
$broadcaster = new PusherBroadcaster(new Pusher($app->key,

The x-app-id header wasn't being passed in with the auth request, so we alter the Laravel Echo JavaScript object creation to include the following authorizer (please note I changed the websockets.php path configuration to "websockets"):

window.Echo = new Echo({
    // ... other Echo options
    authorizer: (channel, options) => {
        return {
            authorize: (socketId, callback) => {
                axios.post('/websockets/auth', {
                    socket_id: socketId,
                    channel_name: channel.name,
                },
                { headers: { 'x-app-id': '<your ID from within the websockets.php configuration file, normally this is apps>' } })
                .then((response) => {
                    // error report
                    callback(false, response.data)
                })
                .catch((error) => {
                    // error report
                    callback(true, error)
                    // throw new Error(error)
                })
            },
        }
    },
})

Using namespaces within the Echo listen method

The last part was the namespace issue within the Echo listen method, where I don't want to define the namespace of the broadcast every time, like below (where App.Events was the namespace within PHP):

window.Echo.private(`dashboard.${dashboardID}`)
    .listen('App.Events.DashboardUpdate', (e) => {
        e.message.value.forEach(function (value) {
            vm.$set(vm.data, value.key, value.value)
        })
    })

So, like the above fix, we just need to add an option to the new Echo object within the JavaScript:

window.Echo = new Echo({
    // ... other Echo options
    namespace: 'App.Events',
})
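With that in place, the listen call no longer needs the full namespace; a minimal sketch of the same listener as above:

window.Echo.private(`dashboard.${dashboardID}`)
    .listen('DashboardUpdate', (e) => {
        // handled exactly as before; Echo prefixes App.Events automatically
        e.message.value.forEach(function (value) {
            vm.$set(vm.data, value.key, value.value)
        })
    })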

Those were the things that caused me to stop and think, so I thought it may be a good idea to post about them in case anyone else has the same issues. If you need more information on each, please say!

Use the Azure Key Vault store of a wildcard certificate created via certbot to update the local nginx webserver and Azure Application Gateway

A follow-on from the Azure wildcard certbot certificate with Azure DNS TXT updates post, this part takes the wildcard SSL certificate that certbot created and placed into Azure Key Vault, and puts it to use.

I am going to break this down into two parts: the first is a script to use on your Linux server, and the second is via the Application Gateway. The script will:

  1. clean out any old processing of the certificates
  2. pull down the certificate from the azure key vault store
  3. convert the PFX certificate into the
    1. full chain
    2. private key
  4. update the nginx hosting certificate files
  5. reload nginx

So here are the steps in full. I am using a subdirectory called "certs" to process the certificate.

echo "clean out the certs"
rm certs/*

Change the following to your Azure Key Vault and wildcard certificate name; please note we have to use base64 encoding:

az keyvault secret download --file certs/wild.pfx --id https://<azure key vault name>.vault.azure.net/secrets/<certificate name> --encoding base64
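One assumption worth flagging: the Azure CLI on the server needs to be logged in for the download to work. If the server has a managed identity with access to the Key Vault (as per the related posts), this should do it first:

az login --identity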

And now just convert that PFX file into the private key and then the full certificate file (I am chaining the client certificates and the CA certificates into the full.crt file):

openssl pkcs12 -in certs/wild.pfx -passin pass: -nocerts -nodes -out certs/priv.key
openssl pkcs12 -in certs/wild.pfx -passin pass: -clcerts -nokeys -out certs/full.crt
openssl pkcs12 -in certs/wild.pfx -passin pass: -cacerts -nokeys -chain >> certs/full.crt

And then just push the certs folder into the SSL directory that your nginx SSL configuration points at:

cp certs/* /etc/nginx/ssl/<domain name>/

The final part: reload nginx to use the new certificate!

systemctl reload nginx
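Putting the five steps together, here is a minimal sketch of the whole server-side script, with the same placeholder Key Vault name, certificate name and domain to swap in:

#!/bin/bash
# 1. clean out any old processing of the certificates
echo "clean out the certs"
rm certs/*

# 2. pull down the certificate (a base64-encoded PFX) from the Azure Key Vault store
az keyvault secret download --file certs/wild.pfx \
    --id https://<azure key vault name>.vault.azure.net/secrets/<certificate name> \
    --encoding base64

# 3. convert the PFX into the private key and the full certificate chain
openssl pkcs12 -in certs/wild.pfx -passin pass: -nocerts -nodes -out certs/priv.key
openssl pkcs12 -in certs/wild.pfx -passin pass: -clcerts -nokeys -out certs/full.crt
openssl pkcs12 -in certs/wild.pfx -passin pass: -cacerts -nokeys -chain >> certs/full.crt

# 4. update the nginx hosting certificate files
cp certs/* /etc/nginx/ssl/<domain name>/

# 5. reload nginx
systemctl reload nginx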

To use it on the Application Gateway is very simple: go to the Application Gateway -> Listeners -> choose the listener you want to update, and then choose the certificate from the Key Vault.
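If you would rather script that step too, the Azure CLI can point a gateway certificate at the Key Vault secret; a sketch (the names are placeholders, and it assumes the gateway's managed identity can read the vault):

az network application-gateway ssl-cert create \
    --resource-group <resource group> \
    --gateway-name <application gateway name> \
    --name <certificate name> \
    --key-vault-secret-id https://<azure key vault name>.vault.azure.net/secrets/<certificate name>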

AND THAT IS IT 🙂 — if you need any more advice on certain areas please say!

Use Azure Application Gateway with the certbot wildcard certificate stored within the Azure Key Vault

A follow-on from the Azure wildcard certbot certificate with Azure DNS TXT updates post, this part is to store the wildcard SSL certificate that certbot created and place it into Azure Key Vault.

So in essence the process is to

  1. Check for new certificates via the certbot renewal process
  2. If there is a new certificate create a PFX
  3. Upload this PFX file into the key store

So, to start with, let's obtain a new certificate (if there is one!!). Please change the domain name to the domain name that you are using; the preferred challenge should be whatever you have set up as the default process for this certificate. I am using the Azure DNS zone, so I am using the certbot-dns-azure certbot plugin:

certbot certonly --manual -d '*.<domain name>' --preferred-challenges=dns

The next part is the most important part! It is creating the PFX file from the new certificate. Please note you have to pass in the whole certificate chain; fullchain.pem contains the certificate plus the chain (change the domain name again to what you are using):

openssl pkcs12 -export -out <domain name>.pfx -inkey /etc/letsencrypt/live/<domain name>/privkey.pem -in /etc/letsencrypt/live/<domain name>/fullchain.pem

And then the last part is to just upload that certificate into the Azure Key Vault (you have to have enabled Key Vault access for the managed identity of the service you are using):

az keyvault certificate import --vault-name "<key vault name>" -n "<certificate common name in the file store e.g. my domain>" -f <domain name>.pfx
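For completeness, the three steps above could be wired into one script run on a schedule; a minimal sketch, assuming the same placeholders (certbot renew only fetches a new certificate when one is due, so otherwise the export/import simply re-uploads an unchanged certificate):

#!/bin/bash
# 1. check for new certificates via the certbot renewal process
certbot renew

# 2. create a PFX from the current live key and full chain (empty export password)
openssl pkcs12 -export -out <domain name>.pfx -passout pass: \
    -inkey /etc/letsencrypt/live/<domain name>/privkey.pem \
    -in /etc/letsencrypt/live/<domain name>/fullchain.pem

# 3. upload the PFX file into the key vault
az keyvault certificate import --vault-name "<key vault name>" \
    -n "<certificate common name>" -f <domain name>.pfx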

AND THAT IS IT 🙂 — if you need any more advice on certain areas please say!

Azure wildcard certbot certificate with Azure DNS TXT updates

The requirement was to set up a wildcard certificate on Azure, so I used a nice tool called certbot that can generate single subdomain certificates or a wildcard certificate.

For the creation of a wildcard certificate, we need to be able to alter the DNS TXT records, and via Azure we can achieve this with the following:

  1. Install certbot onto a server (I am using a Linux server)
  2. Create a user role that only allows DNS TXT record updates
  3. Allow the managed identity (in essence, the server that is altering the DNS TXT records) to use that role

So, to start things off, let's install certbot. I am using the snap package manager, and nginx as the web server.

snap install core
snap refresh core
snap install --classic certbot
ln -s /snap/bin/certbot /usr/bin/certbot

The certbot plugin to converse with Azure DNS is certbot-dns-azure. As this isn't part of the official packages, we have to use the "--edge" option below and also allow certbot to trust this plugin.

snap set certbot trust-plugin-with-root=ok
snap install --edge certbot-dns-azure
snap connect certbot:plugin certbot-dns-azure

For the certbot-dns-azure plugin to function automatically, we need to create a file within the .azure folder called dns.ini (e.g. /root/.azure/dns.ini or ~/.azure/dns.ini), which is picked up when the renewal runs. You can test it with a dry run:

certbot renew --dry-run

The options to fill in are as below (small hint! the subscription ID + resource group name can be obtained from the URL when you go to the DNS zone within Azure that you want to use!):

  1. linux server ID = the managed identity client ID (I got this after the creation of the role access)
  2. dns cloud name = the Azure resource name of your DNS zone
  3. subscriptionID = your subscription ID
  4. resource groupname = the resource group of the DNS zone

dns_azure_msi_client_id = <linux server ID>

dns_azure_zone1 = <dns cloud name>:/subscriptions/<subscriptionID>/resourceGroups/<resource groupname>/providers/Microsoft.Network/dnsZones/<dns cloudname>

The last part of this solution is to allow the managed identity (e.g. the Linux server) to update the DNS TXT records for the Azure DNS zone.

If you go to your subscription -> Access control (IAM) (on the left menu) -> Add (top bar) -> Custom role.

In the new custom role, please define the basics, e.g. the name, but within the permissions use "add permission" and include the ones below (Microsoft.Network/dnszones/TXT read and write).

The JSON would be something akin to:

{ "id": "/subscriptions/<subscriptionID>/providers/Microsoft.Authorization/roleDefinitions/<roleID>", 
"properties": { "roleName": "DNS TXT Contributor", "description": "User role only allows DNS TXT updates.", "assignableScopes": [ "/subscriptions/<subscriptionID>" ], "permissions": [ { "actions": [ "Microsoft.Network/dnszones/TXT/read", "Microsoft.Network/dnszones/TXT/write" ], "notActions": [], "dataActions": [], "notDataActions": [] } ] }}

Then, within the DNS zone (your DNS configuration), go to Access control (IAM) and click "Add role assignment", where the role will be the one that you created above,

and then we just need to associate it (under Members) with your managed identities (e.g. your webserver!).
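The CLI equivalent of the assignment would be something along these lines (a sketch; the principal ID of the server's managed identity is a placeholder):

az role assignment create \
    --assignee <managed identity principal ID> \
    --role "DNS TXT Contributor" \
    --scope /subscriptions/<subscriptionID>/resourceGroups/<resource groupname>/providers/Microsoft.Network/dnsZones/<dns cloud name>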

The next post (once I have created it, there will be a link here!) will be about how to create the certificate and then convert it into a PFX file for uploading into the Azure Key Vault storage.

AND THAT IS IT 🙂 — if you need any more advice on certain areas please say!

Azure devops git (ssh) config on linux

I am using Fedora Linux as my development environment OS. I love it, but when you are working with Azure, which mainly believes you are using Windows, you have to make some changes to your configuration files that Windows (may??) do for you.

So, after I created my SSH key for development, I kept on getting an issue pulling/pushing my local git repo, where the CLI would just hang or error out. So I ran pull in -v (verbose) mode:

git pull -v

The issue highlighted itself with the following error message

Unable to negotiate with 51.104.26.0 port 22: no matching host key type found. Their offer: ssh-rsa

So, all I did was update ~/.ssh/config (the .ssh config(uration) file in your local user's home directory). Please note the last bits, the HostkeyAlgorithms and PubkeyAcceptedKeyTypes:

Host ssh.dev.azure.com
       PreferredAuthentications publickey
       IdentityFile ~/your/key
       UpdateHostKeys no
       IdentitiesOnly yes
       HostkeyAlgorithms +ssh-rsa
       PubkeyAcceptedKeyTypes +ssh-rsa,rsa-sha2-256,rsa-sha2-512
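After saving the config, you can check the SSH handshake on its own before trying git again; Azure DevOps does not give shell access, so a short message to that effect (rather than the hang above) means it worked:

ssh -T git@ssh.dev.azure.com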

After that, there were no issues. I suppose it is the classic case of reading the error message!

Black holes and my idea of them

I have been watching the different programs about black holes and universe creation theories, so here is my theory on the matter.

How about if a black hole is the creation of a universe, the starting point? As in, the matter that it is sucking in around a wormhole is forced into another space (and/or time), and that is where all of the matter for the new universe comes from, which would fit with the string theory of 10/11 dimensional space.

So, in a bigger explanation: a black wormhole is created in one universe and sucks the matter around it to such a high speed that it forces it into another space (and/or time) through a wormhole, and this is where the "big bang" happens for the new universe. Then, after an x amount of time, the black wormhole collapses onto itself, and thus no more matter will come through, which is why the present universe that we are in is constantly expanding from a point. The new universe then has time to create itself before more black wormholes appear to transfer matter from one universe to another (which is why there are galaxies where there could have been small black wormholes that just deposited smallish amounts of matter into that area).

This is just my theory on this; hey, I wonder if it will be true?

About me

About the site

I created this site for a few reasons. The main reason was that I found that many tutorial sites are closed and because I prefer the idea of openness, of exchanging and sharing information, I designed this site to give people the chance to share their knowledge and to talk about their different ideas.

About me

I also like to learn about programming, and I would have needed a massive notebook to try and keep all the information about the different languages etc. I have come across! I love to code / DevOps, and I love solving any PC-related problem that involves coding, for example c/c++/php/python/java/sql/CI,CD etc… (any new language as well). These are my main interests, so if you have something that you want custom-built (an application/website/class), or if you want your company to have a Linux setup for a stable environment which saves lots of money, please feel free to email me for a chat!

What can I say, I prefer the Linux OS because it is more fun; in my honest opinion, the development environment aspects are better, as is the server/stable environment. Well, I suppose that covers the major aspects of any OS. Personally, I prefer openness, and Linux fits that role, even though I am at home within a Windows environment as well.

My philosophy of life is to enjoy it! I don’t get hung up on minor issues and above all, I just try to be true to myself 🙂

So, there you have it, the inspiration that created this site – enjoy!!

If you wish to email me, for any reason, please feel free to do so (not spam though!!!) via the contact me page.