[{"content":"Many people type www in front of the domain name out of habit. At the same time, others omit it. This could cause problems.\nAs a webmaster, it is my job to make sure that neither of these groups run into an error. However, I don\u0026rsquo;t want to support both versions for every single page search engine optimization reasons.\nMultiple domains for the same content is bad for Your SEO ranking. Search engines count links pointing to Your URLs. However, they don\u0026rsquo;t know whether www.crossquiz.net and crossquiz.net are the same website, so they treat them as different entities. If You have 5 backlinks to www and 5 backlinks to `non-www, another site with 7 backlinks will rank higher than Your two URLs, although You actually have 10 total. According to Semrush, each subdomain has its own ranking.\nThe solution to this problem is to redirect users from one version to the other with an \u0026ldquo;HTTP 301 redirect\u0026rdquo; status code. This will A) cause the browser to change the URL so when they copy the URL, they will copy the same one and B) will tell the search engine that these two links are identical.\nIn order to do this with the NGINX Ingress Controller You can use the from-to-www-redirect annotation in your ingress YAML file:\nnginx.ingress.kubernetes.io/from-to-www-redirect: \u0026#34;true\u0026#34; Keep in mind that You will still need an HTTPS certificate for both versions. However, in Your ingress YAML file, its enough to just define the path that You want. If You define both paths, then the redirect won\u0026rsquo;t work.\nSo in my example, I want people who enter www.crossquiz.net to be redirected to the domain crossquiz.net. 
Here is my full ingress rule:\ningress.yaml\napiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: crossquiz-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: \u0026#34;true\u0026#34;
    nginx.ingress.kubernetes.io/from-to-www-redirect: \u0026#34;true\u0026#34;
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - crossquiz.net
        - www.crossquiz.net
      secretName: tls-secret
  rules:
    - host: crossquiz.net
      http:
        paths:
          - backend:
              service:
                name: crossquiz
                port:
                  number: 80
            path: /(.*)
            pathType: ImplementationSpecific
This might be a bit different in your case, but the important parts are: the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: \u0026quot;true\u0026quot; must be set, both HTTPS hosts must be listed so that the certificate can be acquired (I use Let\u0026rsquo;s Encrypt), and the host that you want people to use must be set in the rules (in my case crossquiz.net). Now whenever someone visits www.crossquiz.net, they are redirected to crossquiz.net without noticing, and their browser shows the URL accordingly.\n","permalink":"https://en.quisl.de/posts/kubernetes-redirecting-www-to-nonwww/","summary":"Many people type www in front of the domain name out of habit. At the same time, others omit it. This can cause problems.\nAs a webmaster, it is my job to make sure that neither of these groups runs into an error. However, for search engine optimization reasons I don\u0026rsquo;t want to serve both versions of every single page.\nServing the same content under multiple domains is bad for your SEO ranking.","title":"Redirect www to non www with K8s"},{"content":"This article shows you how to install MySQL with an Azure Files Storage backend on Azure Kubernetes Service (AKS). Azure Files Storage is a highly available and scalable file storage that provides network shares in the cloud. 
A major benefit of Azure Files over Azure Managed Disks is that multiple containers can access the same storage.\nThis article walks you through the deployment step by step so that your installation of MySQL with Azure Files Storage and AKS runs smoothly. We will create the needed resources in Kubernetes, i.e. run the MySQL Docker image as a Deployment, configure it, expose it as a Service and test it at the end.\nHave fun!\nPreparation To keep these instructions clear, I use a separate .yaml file for each Kubernetes resource. Of course you can also put them all in the same file if you separate them with three dashes ---.\nIn principle, we need five different types of resources:\nStorage Class ConfigMap PersistentVolumeClaim Deployment Service Therefore, we first create the following five files: storageclass.yaml, configmap.yaml, pvc.yaml, deploy.yaml and service.yaml. These will contain the complete resource descriptions for k8s.\nI use a new namespace test-ns for the test files. If you don\u0026rsquo;t already have one, you can create it like this:\nkubectl create namespace test-ns Storage Class AKS comes with a few predefined StorageClasses, including the azurefile-csi StorageClass. Normally it works perfectly. Unfortunately, we cannot use it with MySQL, for the following reason.\nThe problem with azurefile-csi MySQL tries to change the read and write permissions as well as the owner of its files. MySQL also tries to lock some files for InnoDB tables. The problem is that the Azure Files file system does not store this information at all.\nIf you try with azurefile-csi, you might see errors like this in the logs:\n2023-01-10T16:04:17.986024Z 0 [ERROR] [MY-012574] [InnoDB] Unable to lock ./#innodb_redo/#ib_redo0 error: 13 2023-01-10T16:04:17.991042Z 0 [ERROR] [MY-012894] [InnoDB] Unable to open \u0026#39;./#innodb_redo/#ib_redo0\u0026#39; (error: 11). 
It is also possible that the container does not start at all:\nBack-off restarting failed container Solution As a workaround, we create our own StorageClass. To do this, we fill the corresponding file with the following content.\nstorageclass.yaml\napiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysql-azurefile
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=999
  - gid=999
  - mfsymlinks
  - nobrl
  - cache=strict
  - nosharesock
parameters:
  skuName: Standard_LRS
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
Owner and group must be set to 999; then MySQL is happy and doesn\u0026rsquo;t try to change anything. Important: some versions may behave differently. With the MySQL containers 5.7.16 and 8.0 this worked with 999.\nuid=999 gid=999 Next we tell Azure Files Storage to set the permissions to 0777. This gives the containers that mount this storage the illusion of having full rights.\ndir_mode=0777 file_mode=0777 We can now roll this file out on our Kubernetes cluster.\nkubectl apply -f storageclass.yaml Note that a StorageClass in Kubernetes is not tied to a namespace. If you already have a \u0026ldquo;mysql-azurefile\u0026rdquo; class, you have to change this name. You can check with kubectl get storageclass.\nConfig Map A ConfigMap is a Kubernetes resource that stores configuration data as key-value pairs. You can use them for configuring containers in a Kubernetes cluster. 
ConfigMaps allow you to inject environment variables, files, and other configuration into containers without packaging it directly into the image.\nThis allows us to later use the original MySQL image with our own configuration.\nconfigmap.yaml\napiVersion: v1
kind: ConfigMap
metadata:
  name: mysqld-cnf
data:
  mysqld.cnf: |
    [mysqld]
    pid-file = /var/run/mysqld/mysqld.pid
    socket = /var/run/mysqld/mysqld.sock
    datadir = /var/lib/mysql
    # log-error = /var/log/mysql/error.log
    # By default we only accept connections from localhost
    #bind-address = 127.0.0.1
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0
    max_allowed_packet=500M
And roll it out.\nkubectl apply -f configmap.yaml -n test-ns Persistent Volume Claim A Persistent Volume Claim (PVC) is a request to the Kubernetes cluster to provision a Persistent Volume (PV). PVCs are used by Kubernetes pods to reserve storage for data they need to keep even after a restart. They can also be used to make data accessible to several different pods.\nIn our scenario we just have to make sure that we use the mysql-azurefile StorageClass defined above.\npvc.yaml\napiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: mysql-azurefile
  resources:
    requests:
      storage: 10Gi
Instead of \u0026ldquo;storage: 10Gi\u0026rdquo; (~10 gibibytes) you can of course enter any other size. According to the documentation, the maximum is currently 5 TiB, or 100 TiB with the large file shares feature.\nThanks to the ReadWriteMany access mode, several pods can mount the storage at the same time.\nActivate it.\nkubectl apply -f pvc.yaml -n test-ns Deployment A Deployment provisions a desired number of pods that run a specific application or service. It can also pin a specific version of an application or service. 
You can also use Deployments to update an existing application or service.\nHere we define which container we want to use. I use the label \u0026ldquo;prj: mysqltest\u0026rdquo; here to be able to connect it to the Service afterwards.\ndeploy.yaml\napiVersion: apps/v1
kind: Deployment
metadata:
  name: batchest-mysql
  labels:
    prj: mysqltest
spec:
  selector:
    matchLabels:
      prj: mysqltest
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        prj: mysqltest
    spec:
      securityContext:
        runAsUser: 999
        runAsGroup: 999
      containers:
        - image: mysql:8.0
          resources:
            requests:
              memory: \u0026#34;250Mi\u0026#34;
              cpu: \u0026#34;100m\u0026#34;
            limits:
              memory: \u0026#34;400Mi\u0026#34;
              cpu: \u0026#34;1000m\u0026#34;
          name: mysql
          # For 5.7 you may want to use these args:
          #args:
          #  - \u0026#34;--ignore-db-dir\u0026#34;
          #  - \u0026#34;lost+found\u0026#34;
          env:
            - name: MYSQL_DATABASE
              value: mysql_db_name # creates a database called mysql_db_name
            # For 5.7 you may want to set MYSQL_USER to root:
            # - name: MYSQL_USER
            #   value: root
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysqlsecret
                  key: MYSQL_DB_PASSWORD
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - mountPath: /etc/mysql/conf.d
              readOnly: true
              name: mysqld-cnf
            - name: azurefile-pv
              mountPath: /var/lib/mysql
              subPath: sql
      volumes:
        - name: mysqld-cnf
          configMap:
            name: mysqld-cnf
            items:
              - key: mysqld.cnf
                path: meinsql.cnf
        - name: azurefile-pv
          persistentVolumeClaim:
            claimName: azurefile-pvc
Instead of the mysql:8.0 image you can of course also use the still widespread version 5.7. You may also need to check and possibly increase the RAM/CPU limits.\nNote that I mount 2 volumes here: one for the ConfigMap mysqld-cnf and one for the PersistentVolumeClaim azurefile-pvc.\nHowever, it doesn\u0026rsquo;t work quite yet. I didn\u0026rsquo;t want to write the password directly into the configuration. That\u0026rsquo;s why the Deployment states that the password lives in a Secret. 
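Keep in mind that a Kubernetes Secret stores its values merely base64-encoded, not encrypted, so anyone with read access to the Secret can recover the password. A quick sketch of what that encoding means:

```python
import base64

password = b"ABC123"
encoded = base64.b64encode(password).decode()
print(encoded)  # QUJDMTIz -- this is what is stored in the Secret manifest

# decoding is trivial, so base64 is encoding, not protection:
print(base64.b64decode(encoded))  # b'ABC123'
```

kubectl create secret does this encoding for you when you pass --from-literal.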
Of course, you also have to generate this Secret:\nkubectl create secret generic mysqlsecret --from-literal=MYSQL_DB_PASSWORD=ABC123 -n test-ns This configuration sets up the MySQL user root with the password ABC123.\nThe pod is started by the Deployment after the following command.\nkubectl apply -f deploy.yaml -n test-ns Services A Service in Kubernetes is part of the network infrastructure and can be viewed as the entry and exit point for client requests. On the one hand, it allows the pods within the cluster to communicate with each other and provides a unified, logical address at which multiple pods can be reached. On the other hand, a Service can also allow access to pods from outside the cluster if you configure it accordingly.\nIn this example, I\u0026rsquo;m using an internal service because I don\u0026rsquo;t want my MySQL server to be reachable from outside the cluster.\nservice.yaml\napiVersion: v1
kind: Service
metadata:
  name: mysqlservice
  labels:
    prj: mysqltest
spec:
  ports:
    - port: 3306
  selector:
    prj: mysqltest
  clusterIP: None
kubectl apply -f service.yaml -n test-ns Now our container can be reached under the name mysqlservice.\nFinal test Here\u0026rsquo;s a quick test to see if everything works.\nList Pods kubectl get pods -n test-ns NAME READY STATUS RESTARTS AGE batchest-mysql-858cb8f7-l9tm4 1/1 Running 0 5m Connect to Pod kubectl exec -it batchest-mysql-858cb8f7-l9tm4 -n test-ns -- /bin/sh sh-4.4$ Log in to MySQL mysql -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \\g. Your MySQL connection id is 8 Server version: 8.0.31 MySQL Community Server - GPL Copyright (c) 2000, 2022, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type \u0026#39;help;\u0026#39; or \u0026#39;\\h\u0026#39; for help. Type \u0026#39;\\c\u0026#39; to clear the current input statement. 
mysql\u0026gt; ","permalink":"https://en.quisl.de/posts/mysql-azure-files-kubernetes/","summary":"This article shows you how to install MySQL with an Azure Files Storage backend on Azure Kubernetes Service (AKS). Azure Files Storage is a highly available and scalable file storage that provides network shares in the cloud. A major benefit of Azure Files over Azure Managed Disk is that multiple containers can access the same storage.\nThis article will walk you through the deployment step-by-step so that you can ensure your installation of MySQL with Azure Files Storage and AKS runs smoothly.","title":"Mysql on Azure Files with Kubernetes"},{"content":"Twitch is currently the largest streaming platform. Due to the high number of viewers, many streamers not only use human moderation teams, but also moderation bots.\nBots can be useful when moderating Twitch chats as they can help you enforce rules and keep the chat organized. Here are a few tasks that bots can handle:\nFiltering out inappropriate or spam messages Issuing warnings to users who break the rules Answering frequently asked questions or commands from users Providing entertainment, for example through a question-and-answer quiz Connecting external systems such as subscriber alerts or games with chat integration By automating these tasks, bots can help your moderators keep chat clean and focused, giving you more time to interact with your viewers. Unlike human moderators, bots don\u0026rsquo;t get tired.\nIn this tutorial you will learn how to implement a simple chatbot in Python. We use the TwitchIO module because it is quick to get started with and can be extended modularly.\nTwitch chat interfaces Twitch uses Internet Relay Chat (IRC) for its chat functionality. IRC is a protocol that allows users to communicate with each other. 
It is designed for group communication in channels, but also allows one-to-one communication via private messages.\nTwitch chat servers use a modified version of the IRC protocol that includes additional functions and features specific to Twitch. To connect and interact with Twitch chat, you need to use an IRC library or client compatible with the Twitch IRC server.\nAlternatively, you can use the Twitch API to access chat data and features. The Twitch API uses HTTP (Hypertext Transfer Protocol) to communicate with the Twitch servers and provides a way to programmatically access data and functionality on the Twitch platform.\nIn this tutorial we use the TwitchIO module, which handles the communication for us. So we don\u0026rsquo;t have to worry about the protocol details.\nWhat is TwitchIO TwitchIO is an asynchronous Python wrapper around both the Twitch API and IRC, with a powerful command extension for creating Twitch chat bots. TwitchIO covers almost the entire new Twitch API and offers support for Commands, PubSub, Webhooks and EventSub.\nThe latest version is currently 2.5.0.\nInstall TwitchIO Like most Python modules, you can easily install TwitchIO from PyPI.\npip install twitchio Preparation If you want to operate a Twitch bot, you must first create a new account for it on Twitch. You then have to generate an OAuth token. TwitchIO needs this token to connect to the Twitch chat.\nCreate a new Twitch account You can register the account on Twitch just like your own. If you are already logged in, you must of course log out first.\nBy the way, in your profile settings you will find an option that allows you to create multiple accounts with the same email address or phone number.\nGenerate login token Once you have created your new account, all you need is a login token. 
You can generate this yourself according to the documentation or simply use the generator by swiftyspiffy for test purposes.\nTwitch bot base Here is an example of a Twitchbot that responds to the command \u0026ldquo;!hello\u0026rdquo; with \u0026ldquo;Hello\u0026rdquo; + name.\nfrom twitchio.ext import commands token = \u0026#34;\u0026#34; # put your token here class Bot(commands.Bot): def __init__(self): super().__init__( token=token, prefix=\u0026#34;!\u0026#34;, #prefix for commands initial_channels=[\u0026#34;nymn\u0026#34;, \u0026#34;forsen\u0026#34;], # list of channels ) async def event_ready(self): print(f\u0026#34;Logged in as | {self.nick}\u0026#34;) print(f\u0026#34;User id is | {self.user_id}\u0026#34;) @commands.command() async def hello(self, ctx: commands.Context): # example command \u0026#34;hello\u0026#34; # Send a hello back! await ctx.send(f\u0026#34;Hello {ctx.author.name}!\u0026#34;) async def event_message(self, message): # this function will be executed on every chat message if message.echo: #message.echo are the bots own messages return # we ignore those m = message print( # print on console f\u0026#34;#{message.author.channel.name}-{message.author.name}({message.timestamp}): {message.content}\u0026#34; ) await self.handle_commands(message) # go through commands bot = Bot() # initialize bot bot.run() # execute bot As you can see you can put all the configuration in the Bot class.\nPrefix for commands When initializing, you must also specify a prefix in addition to the channels to which your bot should connect. This prefix prevents the bot from recognizing each line as a command. Instead, only lines beginning with this prefix are considered.\nIn the example, the prefix is an exclamation mark.\nAny methods you decorate with the @commands.command() decorator are commands. For example the hello() method from the example. 
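The prefix routing that the command extension performs can be pictured with this sketch; parse_command is a hypothetical helper, not the TwitchIO API:

```python
def parse_command(message: str, prefix: str = "!"):
    """Return (command, args) if the message starts with the prefix, else None."""
    if not message.startswith(prefix):
        return None  # ordinary chat line, not a command
    name, *args = message[len(prefix):].split()
    return name, args

print(parse_command("!hello"))         # ('hello', [])
print(parse_command("just chatting"))  # None
```

TwitchIO then looks up the parsed name among the decorated methods and calls the matching one.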
hello() is executed every time someone writes !hello in the chat.\nEvent message The event_message() method is executed for every message event in one of the connected chats. In the example we simply print the content of incoming chat messages to the console.\nEvent ready The event_ready() method is always executed after login.\nDocs Of course, this was just a small example; TwitchIO can do much more. You can find the full range of functions in the official documentation.\n","permalink":"https://en.quisl.de/posts/twitchio-bot/","summary":"Twitch is currently the largest streaming platform. Due to the high number of viewers, many streamers not only use human moderation teams, but also moderation bots.\nBots can be useful when moderating Twitch chats as they can help you enforce rules and keep the chat organized. Here are a few tasks that bots can do:\nFilter out inappropriate or spam messages Issuing warnings to users who break the rules Answers to frequently asked questions or commands from users Entertainment through for example a question and answer quiz Connect external systems such as subscriber alarms or games with chat integration By automating these tasks, bots can help your moderators keep chat clean and focused, giving you more time to interact with your viewers.","title":"How to program a Twitch bot with Python"},{"content":"Cookie banners are used to obtain consent for the use of cookies from your website visitors. This is a legal requirement in many countries, including the European Union and its member states - including Germany - as well as other countries around the world.\nThey help protect user privacy and give users control over how their information is collected and used by your site.\nIn this post, I want to show you the tool that I used to create the cookie banner for my blog: Cookieconsent.\nWhat are cookies Cookies are small pieces of data that are stored on a user\u0026rsquo;s device when they visit a website. 
These cookies can be used to track user behavior as they navigate the website. They can also be used to store information that the user enters on the website, such as credentials or settings.\nWeb beacons and similar tracking methods also require explicit consent unless they are absolutely necessary for the operation of the website.\nWhen do I need consent Once you use cookies or web beacons for tracking on your website, you may only do so after the visitor has consented. In addition, there are some external services that track your visitors even if you don\u0026rsquo;t want them to\u0026hellip;\nOf course, these services may also only be integrated after informing the user and obtaining approval. Here are a few examples of services that you may only activate after approval:\nGoogle Analytics Google AdSense Google Fonts embedded YouTube video embedded Twitch stream embedded image from Imgur Facebook like button and many more\u0026hellip; Especially with non-European services you have to be careful and read their data protection declaration before you integrate them.\nAt the moment you don\u0026rsquo;t need cookie consent for purely personal websites. But keep in mind that blogs with ads or affiliate links are a business and not personal.\nWhat should the cookie banner look like In Germany, the requirements for cookie banners are regulated in the Telemedia Act (TMG) and in the Federal Data Protection Act (BDSG). I\u0026rsquo;m not a lawyer, but broadly speaking, these laws require websites to inform users about the use of cookies and to obtain their consent to that use.\nThe banner must\u0026hellip;\n\u0026hellip; be clearly visible and easy to understand: The banner should be prominently displayed on the website and use language that is easy to understand even for users who are not familiar with technical terms.\n\u0026hellip; inform about the use of cookies: The banner should explain what cookies are and how they are used on the website. 
It should also inform users about the types of cookies that are used, e.g. whether they are used for tracking or to store user preferences.\n\u0026hellip; obtain user consent: The banner should prompt the user to consent to the use of cookies on the site. This consent should be obtained through an active action, such as clicking a button or ticking a box.\n\u0026hellip; contain a link to the site\u0026rsquo;s privacy policy: The banner should contain a link to the site\u0026rsquo;s privacy policy, which should provide more detailed information about the site\u0026rsquo;s use of cookies and other data collection practices.\nYou should note that German law requires websites to give users the option to opt out of cookies used for tracking purposes. This should be made clear in the cookie banner and privacy policy. The best thing you can do is speak to your attorney about all of this.\nHow to use the cookieconsent framework There are several frameworks that you can use to implement such a banner. Not all of them comply with the above guidelines.\nI use cookieconsent by the GitHub user orestbida. It\u0026rsquo;s available under the MIT license, which means you can use it for both personal and commercial purposes.\nBefore you can include it, you need to make the following three files accessible as static files. It is best to copy them together into a cookieconsent folder.\ncookieconsent.js cookieconsent.css cookieconsent-init.js File cookieconsent.js This is the framework itself.\nYou should not include this file directly from GitHub! For one thing, you don\u0026rsquo;t know whether it will be deleted one day. For another, you would hand the IP addresses of your website visitors to GitHub before they have had a chance to object.\ndownload\nFile cookieconsent.css This is the file where you can change the design and colors to fit the banner to your website. If you don\u0026rsquo;t want to change anything, the original colors look very chic. 
Both the light color scheme and dark mode are already included.\ndownload\ncookieconsent-init.js file The init file is where you will need to make most of the changes. This is where the full text and settings of your particular cookie banner go. In this post I only cover the necessary settings. You can find more information on GitHub.\nThis file consists of at least the initCookieConsent() and the run() calls.\nAs an example, here is my current cookieconsent-init.js file with comments.\n// obtain plugin
const cc = initCookieConsent();

// the papermod theme uses \u0026#34;dark\u0026#34; to determine dark mode while
// cookieconsent uses c_darkmode, so we sync them:
var bodyclasses = document.body.classList
if (bodyclasses.contains(\u0026#34;dark\u0026#34;)) {
    bodyclasses.add(\u0026#39;c_darkmode\u0026#39;);
} else {
    bodyclasses.remove(\u0026#39;c_darkmode\u0026#39;);
}

// run plugin with your configuration
cc.run({
    current_lang: \u0026#39;en\u0026#39;,
    autoclear_cookies: true,                  // default: false
    page_scripts: true,                       // default: false

    // mode: \u0026#39;opt-in\u0026#39;                        // default: \u0026#39;opt-in\u0026#39;; value: \u0026#39;opt-in\u0026#39; or \u0026#39;opt-out\u0026#39;
    // delay: 0,                              // default: 0
    // auto_language: null                    // default: null; could also be \u0026#39;browser\u0026#39; or \u0026#39;document\u0026#39;
    // autorun: true,                         // default: true
    force_consent: true,                      // default: false
    // hide_from_bots: false,                 // default: false
    // remove_cookie_tables: false            // default: false
    // cookie_name: \u0026#39;cc_cookie\u0026#39;,             // default: \u0026#39;cc_cookie\u0026#39;
    // cookie_expiration: 182,                // default: 182 (days)
    // cookie_necessary_only_expiration: 182  // default: disabled
    // cookie_domain: location.hostname,      // default: current domain
    // cookie_path: \u0026#39;/\u0026#39;,                     // default: root
    // cookie_same_site: \u0026#39;Lax\u0026#39;,              // default: \u0026#39;Lax\u0026#39;
    // use_rfc_cookie: false,                 // default: false
    // revision: 0,                           // default: 0

    onFirstAction(user_preferences, cookie) {
        // callback triggered only once
    },

    onAccept(cookie) {
        // ...
    },

    onChange(cookie, changed_preferences) {
        location.reload();
    },

    gui_options: {
        consent_modal: {
            layout: \u0026#39;cloud\u0026#39;,        // box/cloud/bar
            position: \u0026#39;top center\u0026#39;, // bottom/middle/top + left/right/center
            transition: \u0026#39;slide\u0026#39;,    // zoom/slide
            swap_buttons: true,      // enable to invert buttons
        },
        settings_modal: {
            layout: \u0026#39;box\u0026#39;,          // box/bar
            position: \u0026#39;left\u0026#39;,       // left/right
            transition: \u0026#39;zoom\u0026#39;,     // zoom/slide
        },
    },

    languages: {
        en: {
            consent_modal: {
                title: \u0026#39;We use cookies! \u0026lt;img src=\u0026#34;/cookieconsent/cookies.webp\u0026#34;\u0026gt;\u0026lt;/img\u0026gt;\u0026#39;,
                description: \u0026#39;Hi, this website uses essential cookies to ensure its proper operation and tracking cookies and comparable technologies like web beacons to understand how you interact with this website and to provide you with targeted ads. The latter will be set only after consent. \u0026lt;br\u0026gt;\u0026lt;button type=\u0026#34;button\u0026#34; data-cc=\u0026#34;c-settings\u0026#34; class=\u0026#34;cc-link\u0026#34;\u0026gt;Let me choose\u0026lt;/button\u0026gt;\u0026#39;,
                primary_btn: {
                    text: \u0026#39;Accept all\u0026#39;,
                    role: \u0026#39;accept_all\u0026#39;,       // \u0026#39;accept_selected\u0026#39; or \u0026#39;accept_all\u0026#39;
                },
                secondary_btn: {
                    text: \u0026#39;Reject all\u0026#39;,
                    role: \u0026#39;accept_necessary\u0026#39;, // \u0026#39;settings\u0026#39; or \u0026#39;accept_necessary\u0026#39;
                },
            },
            settings_modal: {
                title: \u0026#39;Cookie preferences\u0026#39;,
                save_settings_btn: \u0026#39;Save settings\u0026#39;,
                accept_all_btn: \u0026#39;Accept all\u0026#39;,
                reject_all_btn: \u0026#39;Reject all\u0026#39;,
                close_btn_label: \u0026#39;Close\u0026#39;,
                cookie_table_headers: [
                    { col1: \u0026#39;Name\u0026#39; },
                    { col2: \u0026#39;Domain\u0026#39; },
                    { col3: \u0026#39;Expiration\u0026#39; },
                    { col4: \u0026#39;Description\u0026#39; },
                ],
                blocks: [
                    {
                        title: \u0026#39;Cookie usage 📢\u0026#39;,
                        description: \u0026#39;We use cookies and comparable technologies like web beacons to ensure the basic functionalities of the website and to enhance your online experience. You can choose for each category to opt-in/out whenever you want. For more details related to cookies and other sensitive data, please read the full \u0026lt;a href=\u0026#34;/privacy\u0026#34; class=\u0026#34;cc-link\u0026#34;\u0026gt;privacy policy\u0026lt;/a\u0026gt;.\u0026#39;,
                    },
                    {
                        title: \u0026#39;Strictly necessary cookies\u0026#39;,
                        description: \u0026#39;These cookies are essential for the proper functioning of my website. Without these cookies, the website would not work properly\u0026#39;,
                        toggle: {
                            value: \u0026#39;necessary\u0026#39;,
                            enabled: true,
                            readonly: true, // cookie categories with readonly=true are all treated as \u0026#34;necessary cookies\u0026#34;
                        },
                        cookie_table: [ // list of all expected cookies
                            {
                                col1: \u0026#39;cc_cookie\u0026#39;,
                                col2: \u0026#39;batchest.com\u0026#39;,
                                col3: \u0026#39;6 months\u0026#39;,
                                col4: \u0026#39;Stores your answers to this cookie consent tool.\u0026#39;,
                            },
                        ],
                    },
                    {
                        title: \u0026#39;Advertisement and Targeting cookies and web beacons\u0026#39;,
                        description: \u0026#39;These cookies collect information about how you use the website, which pages you visited and which links you clicked on. \u0026#39;,
                        toggle: {
                            value: \u0026#39;ads\u0026#39;,
                            enabled: false,
                            readonly: false,
                        },
                    },
                    {
                        title: \u0026#39;Performance and Analytics cookies and web beacons\u0026#39;,
                        description: \u0026#39;These cookies and web beacons can collect information about you (IP address, browser information etc.) or about which pages you visited and which links you clicked.\u0026#39;,
                        toggle: {
                            value: \u0026#39;analytics\u0026#39;, // your cookie category
                            enabled: false,
                            readonly: false,
                        },
                    },
                    {
                        title: \u0026#39;External resources\u0026#39;,
                        description: \u0026#39;This website does not use external resources that might misuse your data. However if you don\\\u0026#39;t want us to load external resources under any circumstances, feel free to disable this checkbox. This might impact your experience on this website as it will disable several things such as images (like Imgur), videos (like YouTube) or embedded streams (like Twitch).\u0026#39;,
                        toggle: {
                            value: \u0026#39;analytics\u0026#39;, // your cookie category
                            enabled: false,
                            readonly: false,
                        },
                    },
                    {
                        title: \u0026#39;More information\u0026#39;,
                        description: \u0026#39;For any queries in relation to our policy on cookies and your choices, please \u0026lt;a class=\u0026#34;cc-link\u0026#34; href=\u0026#34;/\u0026#34;\u0026gt;contact us\u0026lt;/a\u0026gt;.\u0026#39;,
                    },
                ],
            },
        },
    },
});
It is best to store this file together with the other two so that you can integrate all of them into your website.\nInclude cookieconsent files Once you have gone through all the settings and have the three files ready, you can include them in the \u0026lt;body\u0026gt; or \u0026lt;head\u0026gt; tag of your website\u0026rsquo;s HTML code, preferably before any other scripts.\n\u0026lt;body\u0026gt;
  \u0026lt;link rel=\u0026#34;stylesheet\u0026#34; href=\u0026#34;/cookieconsent/cookieconsent.css\u0026#34; media=\u0026#34;print\u0026#34; onload=\u0026#34;this.media=\u0026#39;all\u0026#39;\u0026#34;\u0026gt;
  \u0026lt;script defer src=\u0026#34;/cookieconsent/cookieconsent.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt;
  \u0026lt;script defer src=\u0026#34;/cookieconsent/cookieconsent-init.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt;
  ...
\u0026lt;/body\u0026gt;
Now the banner should appear when you refresh the webpage.\nScript execution only after approval The aim of the exercise is that scripts are only executed if the user has given consent.\nIn order to have cookieconsent start scripts as soon as the user has consented to a certain cookie category, you must give them two special attributes:\ntype data-cookiecategory type=\u0026quot;text/plain\u0026quot; prevents the initial execution when the page is loaded. 
Through data-cookiecategory cookieconsent learns that it should manage this script and which category it should use.\nExample Google Analytics This is how you could run the Google Analytics script only after agreeing to the \u0026ldquo;analytics\u0026rdquo; cookie category:\n\u0026lt;!-- Google tag (gtag.js) --\u0026gt; \u0026lt;script async src=\u0026#34;https://www.googletagmanager.com/gtag/js?id=UA-123456789-0\u0026#34; type=\u0026#34;text/plain\u0026#34; data-cookiecategory=\u0026#34;analytics\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script type=\u0026#34;text/plain\u0026#34; data-cookiecategory=\u0026#34;analytics\u0026#34;\u0026gt; window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag(\u0026#39;js\u0026#39;, new Date()); gtag(\u0026#39;config\u0026#39;, \u0026#39;UA-123456789-0\u0026#39;); \u0026lt;/script\u0026gt; Example Google Adsense This is how you could run the Google Adsense script only after agreeing to the \u0026ldquo;ads\u0026rdquo; cookie category:\n\u0026lt;script async type=\u0026#34;text/plain\u0026#34; data-cookiecategory=\u0026#34;ads\u0026#34; src=\u0026#34;https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-1234567891234567\u0026#34; crossorigin=\u0026#34;anonymous\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; Change cookie settings You should give your visitors the option to change their consent afterwards. For example, some users flatly refuse everything. Later they might notice that they have to do without integrated external content such as a Twitch stream.\nYou can implement a link to call up the cookie settings menu as follows:\n\u0026lt;a href=\u0026#34;#\u0026#34; style=\u0026#34;color:silver\u0026#34; data-cc=\u0026#34;c-settings\u0026#34;\u0026gt;Cookie Settings\u0026lt;/a\u0026gt; Of course, this link only works after the cookieconsent-init.js script is loaded. 
I have integrated such a button into the footer of the website.\n","permalink":"https://en.quisl.de/posts/cookie-banner-with-cookie-consent/","summary":"Cookie banners are used to obtain consent for the use of cookies from your website visitors. This is a legal requirement in many countries, including the European Union and its member states - including Germany - as well as other countries around the world.\nThey help protect user privacy and give them control over how their information is collected and used by your site.\nIn this post, I want to give you a tool that I used to create the cookie banner for my blog: Cookieconsent.","title":"How to create a perfect cookie banner for your website with JavaScript"},{"content":"If you want to offer your website not only with HTTP, but also with the secure HTTPS protocol, you need a signed SSL certificate. Here are some reasons why you need HTTPS\u0026hellip;.\nSecurity: HTTPS helps secure the connection between a client and a server by encrypting the data transmitted between them.\nTrust: HTTPS gives your website a certain level of trust and credibility. Today\u0026rsquo;s browsers usually display a warning if a website only offers HTTP or the certificate is not signed by a certification authority such as Let\u0026rsquo;s Encrypt.\nSEO: Search engines use SSL/TLS certificates as a ranking factor. This means that an SSL/TLS certificate can help improve your website\u0026rsquo;s search engine rankings.\nRoughly speaking, HTTPS is the state of the art that every website should support.\nYou get the signed certificate from a certification authority (CA). In this article we use Let\u0026rsquo;s Encrypt.\nTheory To be honest, you could ignore the theory and jump straight to practice because the cert manager and the ingress controller do everything for you. But for possible troubleshooting here is the theory.\nWhat is Let\u0026rsquo;s Encrypt Let\u0026rsquo;s Encrypt is a free and open Certificate Authority (CA). 
It offers Domain Validation (DV) SSL/TLS certificates used to secure and encrypt data sent between a client (e.g. a web browser) and a server (e.g. a website).\nIts big advantage is that the certificate obtaining process with Let\u0026rsquo;s Encrypt is completely automatic. This means that you only have to set up the system once and then never have to worry about it again.\nYou don\u0026rsquo;t need an account with Let\u0026rsquo;s Encrypt because all communication between your Kubernetes cluster and the Let\u0026rsquo;s Encrypt website is handled automatically.\nHow does the certificate obtaining process work with Let\u0026rsquo;s Encrypt In order to obtain an SSL/TLS certificate from Let\u0026rsquo;s Encrypt, your system must go through two steps: 1. Domain Validation and 2. Certificate Issuance.\nThese certificates are usually valid for 90 days and are then automatically renewed.\nDomain Validation In the first step, your Kubernetes cluster must prove that it owns the domain for which it wants the certificate. This works like this:\nKubernetes asks Let\u0026rsquo;s Encrypt for domain validation.\nLet\u0026rsquo;s Encrypt issues a challenge* and sends a nonce to sign.\nKubernetes solves the challenge and signs the nonce with its private key.\nLet\u0026rsquo;s Encrypt checks if the challenges have been solved and verifies the signature on the nonce.\nIf everything worked, the key pair used by Kubernetes is now an authorized key pair.\n* A challenge can be providing a DNS record, or simply hosting a file and path chosen by Let\u0026rsquo;s Encrypt.\nCertificate issuance With the authorized key pair, your Kubernetes cluster can request, renew, and revoke certificates for your domain. It works like this:\nKubernetes issues a \u0026ldquo;PKCS#10\u0026rdquo; certificate signing request (CSR).\nThis CSR is signed by the authorized key pair.\nLet\u0026rsquo;s Encrypt verifies both signatures. 
If everything looks good, they issue a certificate for the requested domain with the CSR\u0026rsquo;s public key and send it back.\nKubernetes can now use this certificate for SSL communication.\nPractice I\u0026rsquo;m using Azure Kubernetes Service (AKS) version 1.23. It should work the same way on a self-hosted Kubernetes cluster; please give me feedback!\nThe goal of this exercise is that you can apply for a certificate using ingress rules.\nIn principle, your Kubernetes cluster needs 3 things:\nthe Cert-Manager\na ClusterIssuer resource\nan ingress controller\nInstall Cert-Manager The Cert-Manager adds certificates and certificate issuers as resource types to Kubernetes and ensures that certificates are valid and up-to-date, and tries to renew certificates before they expire.\nIn principle, it can communicate with different systems such as Hashicorp Vault or Venafi. But we just need it for Let\u0026rsquo;s Encrypt.\nI am currently using the Cert Manager in the latest version 1.10.1. This version supports the Kubernetes versions 1.20 to 1.26. Check version incompatibilities here.\nYou can easily roll it out on your Kubernetes server using the Helm chart from the Jetstack Repository.\nhelm repo add jetstack https://charts.jetstack.io\nhelm repo update\nhelm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.10.1 --set installCRDs=true\nThese commands create a new namespace called \u0026ldquo;cert-manager\u0026rdquo;. Due to the --set installCRDs=true flag, additional custom resource definitions are created.\nYou will notice that there are 3 new pods running.\nInstall ClusterIssuer Now that Cert-Manager is installed, we need to tell it that we want to use Let\u0026rsquo;s Encrypt as the Certificate Authority (CA). This can be done by creating a ClusterIssuer resource. 
Simply create it with a .yaml file.\nclusterissuer.yaml\napiVersion: cert-manager.io/v1\nkind: ClusterIssuer\nmetadata:\n  name: letsencrypt\nspec:\n  acme:\n    server: https://acme-v02.api.letsencrypt.org/directory\n    email: YOUR@EMAIL.COM\n    privateKeySecretRef:\n      name: letsencrypt\n    solvers:\n      - http01:\n          ingress:\n            class: nginx\nReplace YOUR@EMAIL.COM with your email address. Let\u0026rsquo;s Encrypt uses it to warn you about expiring certificates if the automatic renewal did not work. You don\u0026rsquo;t need an account with Let\u0026rsquo;s Encrypt or anything like that.\nDeploy this YAML as usual on your K8s cluster.\nkubectl apply -f clusterissuer.yaml\nInstall Ingress Controller If you don\u0026rsquo;t have an ingress controller, now is the ideal time to install one. We will use the NGINX ingress controller.\nAn ingress controller is the system that receives incoming requests and forwards them to your internal services based on ingress rules.\nHere\u0026rsquo;s how to install it with Helm into the ingress-basic namespace.\nhelm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx\nhelm repo update\nhelm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace ingress-basic --set controller.service.annotations.\u0026#34;service\\.beta\\.kubernetes\\.io/azure-load-balancer-health-probe-request-path\u0026#34;=/healthz --set controller.replicaCount=3 --set controller.service.externalTrafficPolicy=Local\nNote that I set \u0026ldquo;replicaCount=3\u0026rdquo; here. The larger the replica number, the more requests your cluster can accept at the same time. However, each replica also needs some CPU and RAM. A good value is 1 replica per node.\nWith controller.service.externalTrafficPolicy=Local, the client IP addresses are forwarded to your pods. You may not need this in your setup. 
Usually this shouldn\u0026rsquo;t cause any problems.\nThis launches a few pods as well.\nIngress Rules: Practical Example Since last week I\u0026rsquo;m hosting my German blog on Azure Blob as Static Website instead of WordPress.\nThere are two reasons for this. On the one hand, WordPress was quite slow with the growing number of plugins and, on the other hand, I always had to pay for the complete managed disk, regardless of how many gigabytes I used. With Blob Storages I only pay for what I actually use.\nThe Blob Storage is located on www.quisl.de using a DNS CNAME entry. But I would like the blog to be accessible directly on quisl.de (without www). This is where the Kubernetes server comes into play as a forwarder.\nFirst I create a service that points to www.quisl.de as an ExternalName.\nservice.yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: blog\nspec:\n  type: ExternalName\n  externalName: www.quisl.de\n  ports:\n    - port: 80\n      targetPort: 80\n      name: http\n      protocol: TCP\n    - port: 443\n      targetPort: 443\n      name: https\n      protocol: TCP\nAnd now I can write an ingress rule that forwards all traffic from quisl.de to this service.\ningress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: blog-ingress\n  annotations:\n    kubernetes.io/ingress.class: nginx\n    nginx.ingress.kubernetes.io/rewrite-target: /$1\n    nginx.ingress.kubernetes.io/use-regex: \u0026#34;true\u0026#34;\n    cert-manager.io/cluster-issuer: letsencrypt\n    nginx.ingress.kubernetes.io/backend-protocol: https\n    nginx.ingress.kubernetes.io/upstream-vhost: www.quisl.de\nspec:\n  tls:\n    - hosts:\n        - quisl.de\n      secretName: tls-secret\n  rules:\n    - host: quisl.de\n      http:\n        paths:\n          - backend:\n              service:\n                name: blog\n                port:\n                  number: 443\n            path: /(.*)\n            pathType: ImplementationSpecific\nI pushed the whole thing to my cluster.\nkubectl apply -f .\\service.yaml\nkubectl apply -f .\\ingress.yaml\nCert-Manager takes care of the certificate in the background and quisl.de was reachable within a minute.\nConclusion Now everything should work. 
If not, I can recommend the Troubleshooting Website from Cert-Manager.\nWith the help of ingress rules, you can request a certificate for every URL. Of course, each URL must have an A record that points to the IP of your Kubernetes cluster. This way the Cert Manager can complete the challenge and receive a Let\u0026rsquo;s Encrypt SSL certificate.\n","permalink":"https://en.quisl.de/posts/k8s-with-letsencrypt/","summary":"If you want to offer your website not only with HTTP, but also with the secure HTTPS protocol, you need a signed SSL certificate. Here are some reasons why you need HTTPS\u0026hellip;.\nSecurity: HTTPS helps secure the connection between a client and a server by encrypting the data transmitted between them.\nTrust: HTTPS gives your website a certain level of trust and credibility. Today\u0026rsquo;s browsers usually display a warning if a website only offers HTTP or the certificate is not signed by a certification authority such as Let\u0026rsquo;s Encrypt.","title":"Automatic SSL certificate handling with Let's Encrypt on Kubernetes"},{"content":"Kubernetes is a powerful tool for managing containerized applications, but sometimes resources can become stuck and difficult to remove. This can happen for a variety of reasons, such as conflicts with other resources, problems with the resource itself, or issues with the Kubernetes cluster.\nHere are some things you can try to delete a resource\u0026hellip;\nThe regular way Delete a single resource:\nkubectl delete RESOURCETYPE RESOURCENAME -n NAMESPACE\nThe -n NAMESPACE flag can be omitted if the resource is in the default namespace or if this resource type is not namespaced, like Cluster Roles, Storage Classes or Namespaces.\nDelete all resources in a namespace:\nkubectl delete -n wordpress --all\nNow you have to wait for a while. 
But if these commands do not work there might be a problem. You can still try to force the deletion\u0026hellip;\nEdit the YAML to remove finalizers Finalizers tell Kubernetes what to do before a resource can be deleted. Sometimes things get stuck here. So one thing you can try is removing the finalizers from the stuck resources by hand.\nYou can edit the YAML of the stuck resource like this:\nkubectl edit RESOURCETYPE RESOURCENAME -n NAMESPACE\nThis will open your editor with a YAML file.\nSearch for the key \u0026ldquo;finalizers\u0026rdquo;:\nspec:\n  finalizers:\n    - xxxxxxxxxxxxxxx\n  blablabla: asodijoi\nDelete the \u0026ldquo;finalizers:\u0026rdquo; line and the lines that belong to it, but not the unrelated lines that follow. In this example \u0026ldquo;blablabla\u0026rdquo; must remain but \u0026ldquo;- xxxxxxxxxxxxxxx\u0026rdquo; must be deleted.\nForce delete Resource You can try the force deletion with --force --grace-period=0.\nkubectl delete RESOURCETYPE RESOURCENAME -n NAMESPACE --force --grace-period=0\nDelete a stuck namespace If the resource is of type Namespace then you can check if it\u0026rsquo;s actually empty:\nkubectl get -n NAMESPACENAME all\nThis should show nothing. Otherwise You might want to try to delete those resources first:\nkubectl delete -n NAMESPACENAME --all\n","permalink":"https://en.quisl.de/posts/delete-stuck-k8s-resources/","summary":"Kubernetes is a powerful tool for managing containerized applications, but sometimes resources can become stuck and difficult to remove. 
This can happen for a variety of reasons, such as conflicts with other resources, problems with the resource itself, or issues with the Kubernetes cluster.\nHere are some things you can try to delete a resource\u0026hellip;\nThe regular way Delete a single resource:\nkubectl delete RESOURCETYPE RESOURCENAME -n NAMESPACE While -n NAMESPACE can be omitted if the resource is in the default namespace or if resources of this resource type can\u0026rsquo;t be in namespaces.","title":"How To Delete Stuck Kubernetes Resources"},{"content":"As a website operator or blogger, you usually want to improve the visibility and ranking of your website or blog in search engines such as Google. To do this, you should consider using search engine optimization (SEO) techniques.\nSEO is the process of making your website more attractive to search engines so that it ranks higher on the results page when people search for keywords related to your content.\nThis can be accomplished through a variety of tactics. By implementing these techniques, you can increase the amount of organic traffic your blog receives from search engines, which can lead to more readers and potential marketing opportunities.\nHere are my 8 tips\u0026hellip;\nUse good keywords Strategically use interesting keywords in your page titles, headings, and content to improve your search engine rankings.\nOne way to find good keywords for your blog is to use a keyword research tool like Google\u0026rsquo;s Keyword Planner. This tool allows you to enter a keyword or phrase related to your blog\u0026rsquo;s content and it will provide a list of related keywords and their search volume. You can use this information to identify keywords that are popular with your target audience.\nAnother way to come up with good keywords is to think about the words and phrases your potential readers might use when searching for content like yours. 
For example, if you\u0026rsquo;re a fashion blogger, your readers might search for keywords like \u0026ldquo;fashion tips\u0026rdquo; or \u0026ldquo;style advice.\u0026rdquo;\nYou can also use Google\u0026rsquo;s autocomplete feature to see what keywords and phrases are commonly searched for related to your blog\u0026rsquo;s topic.\nYou can then integrate the found keywords into your blog posts and page titles.\nHave fast loading times A web response time between 200 milliseconds and 1 second is usually considered acceptable as users are unlikely to notice the delay, especially on static blogs.\nOptimize your site\u0026rsquo;s loading time by reducing the number of large images and other elements that could slow down your site.\nTo see if your website is fast enough, you can use a service like PageSpeed Insights.\nUse social media and backlinks Backlinks are links that point to your website. They are an important factor in how search engines like Google determine the ranking of your website. Quality backlinks pointing to your website can help improve your search engine rankings and increase your website\u0026rsquo;s visibility to potential customers.\nOne way to get good backlinks for your website is to create quality, original content that people want to link to. This can include blog posts, articles, infographics, videos, and other types of content that add value. You can then promote that content on social media, forums, and other online platforms to get other people to link to it.\nAnother way to get backlinks is to contact other websites and ask them to link to your content. This can include websites relevant to your industry as well as websites that have high domain authority, i.e. websites that already have a high ranking on Google. A link from such a site passes some of its authority on to your site. You can also consider guest posting on other blogs and building backlinks to your own website there.\nNote that backlink quality is more important than quantity. 
It\u0026rsquo;s better to have a few high quality backlinks from reputable websites than a lot of low quality backlinks from spammy or irrelevant websites.\nOverall, building good backlinks for your site requires a combination of creating quality content, promoting that content, and reaching out to other sites to ask for backlinks. In this way, you can help improve your search engine rankings and increase your website\u0026rsquo;s visibility to potential customers or readers.\nCreate good content Create unique, high-quality content that provides value to your visitors and differentiates your website from the competition.\nThis is of course easier said than done. To find good topics to write about, you can start by thinking about your target audience\u0026rsquo;s interests and passions. What would they like to learn more about? What problems do they have that your blog could help solve?\nI write this blog about software development and IT\u0026hellip; So about things I had to learn myself at some point. Back then, I would have been happy to find posts like mine. So take a look at your recent Google searches which didn\u0026rsquo;t lead to results\u0026hellip; Do you have a solution for any of these yet? Then write about it!\nAdditionally, you can check out what other bloggers are writing about in your niche and consider covering similar topics from a unique perspective to offer new information and insights.\nYou can also use social media and online forums to see what people are talking about and what questions they are asking about your blog topic. All of these can be sources of inspiration for finding good topics to write about.\nIntegrate sitemaps A sitemap is most commonly an XML file that you can use to provide information about pages, videos, and other files on your site, and the relationships between those files. It is like the table of contents of your website. Search engines like Google look at this file to know which pages might be indexed. 
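As an illustration of the XML structure just described, a minimal sitemap could look like this (the URLs and dates are made-up placeholders, not taken from this article):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page; <lastmod> tells crawlers when it changed -->
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2023-02-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/posts/my-first-post/</loc>
    <lastmod>2023-01-15</lastmod>
  </url>
</urlset>
```

Many static site generators (Hugo, for example) create such a file automatically, so you often only need to link it.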
You can also use the sitemap to provide information about which pages and files on your website you consider particularly important. It provides additional information, such as when a page was last updated, or information on alternative language versions of the page.\nIf you offer such a sitemap, you increase the chance that the search engines will index your pages at all. In Google\u0026rsquo;s documentation they explain how indexing works.\nSitemaps are usually found at \u0026ldquo;EXAMPLE.COM/sitemap.xml\u0026rdquo;. As an example you can look at the sitemap of my blog: https://en.quisl.de/sitemap.xml\nRegister with search engines Once you have created your sitemap, you can wait until the Google crawler finds your website. This can sometimes take a long time. Especially if you don\u0026rsquo;t have that many backlinks yet.\nTo speed up the process, you should submit your sitemap to Google. This can be done in the Google Search Console. There you have to create an account or sign in with your Google account and confirm ownership of your website. You can then submit your sitemap via the menu item Indexing\u0026ndash;\u0026gt;Sitemaps.\nWrite alt tags for images Use alt tags to describe your images and make them more search engine friendly.\nFirst, alt tags provide a textual description of the image that allows search engines to understand the content of the image. This can help your blog\u0026rsquo;s pages rank higher for relevant keywords, since search engines can now see that the page contains those keywords in the alt tags.\nSecond, alt tags can improve your blog\u0026rsquo;s accessibility for users who are visually impaired and rely on screen readers to access the content. Screen readers can read the alt tags of images aloud, providing a better experience for those users.\nFinally, alt tags can also be useful for users browsing your blog with a slow internet connection or whose web browsers have images disabled. 
In these cases, the alt tags are displayed instead of the images, so users can still understand the content of the images.\nOverall, using image alt tags on your blog can help with search engine optimization, improve accessibility, and provide a better experience for your readers.\nStay up to date Stay up to date with the latest SEO trends and best practices to ensure your website stays competitive and your search engine rankings continue to improve. Good sources are mostly the search engines and their employees themselves.\n","permalink":"https://en.quisl.de/posts/seo-strategies/","summary":"As a website operator or blogger, you usually want to improve the visibility and ranking of your website or blog in search engines such as Google. To do this, you should consider using search engine optimization (SEO) techniques.\nSEO is the process of making your website more attractive to search engines so that it ranks higher on the results page when people search for keywords related to your content.\nThis can be accomplished through a variety of tactics.","title":"Increase traffic and search engine ranking"},{"content":"I recently showed a way to host static websites on Azure Blob for cheap. You can use absu to update the content of Your website. Everything is great\u0026hellip; Right?\nHowever, sometimes the static website does not update immediately: You upload Your new files to the blob storage but when You access Your website, it still loads the old files. Updating the blob files works, but updating the static website appears not to. Even deleting and reuploading does not help. This is because both Your browser AND Azure tend to cache things.\nSolution What to do if You update the static website and it doesn\u0026rsquo;t deliver the new content? It is possible to force a cache refresh!\nRefresh Azure Cache After uploading new content to Your website, You can purge the CDN cache in the web interface of the Azure portal. 
This can be accomplished in the resource page of the CDN endpoint by clicking on the \u0026ldquo;Purge\u0026rdquo; button.\nRefresh Browser Cache Most browsers can enforce a cache refresh with [CTRL] + [F5] OR [Shift] + [F5] on Windows and Linux. If You use Mac then try using [Command] + [R].\n","permalink":"https://en.quisl.de/posts/refresh_azure_blob_website/","summary":"I recently showed a way to host static websites on Azure Blob for cheap. You can use absu to update the content of Your website. Everything is great\u0026hellip; Right?\nHowever, sometimes the static website does not update immediately: You upload Your new files to the blob storage but when You access Your website, it still loads the old files. Updating the blob files works, but updating the static website appears not to.","title":"Refresh static Azure Blob Website"},{"content":"As humans, we are involved in many projects\u0026hellip; Some are private, while others are business related. But we do all of them because there is some kind of goal or reward. But usually there is also some kind of hardship involved.\nworking in a business can reward You with money, but it takes up all of Your time\ndrinking at parties can be fun, but sobering up the next morning is the opposite\nplaying an instrument can endear you to other people, but practicing is often demotivating because of the difficulty\nThese hardships can make You feel like quitting. These feelings are not bad. In fact, they are necessary to improve Your quality of life! It\u0026rsquo;s better to quit than to keep doing something unhealthy or bad for the rest of Your life.\nSteven Bartlett invented a \u0026ldquo;quitting-framework\u0026rdquo; that can help You to decide whether it\u0026rsquo;s time to quit something. Check the cover image of this post. I found out about this in an Ali Abdaal video. It goes as follows\u0026hellip;\nFirst, You ask Yourself the question: \u0026ldquo;why am I thinking about quitting?\u0026rdquo;. 
The most common answers are just these two: Either it sucks, or it\u0026rsquo;s too hard.\nIt\u0026rsquo;s too hard If it\u0026rsquo;s too hard, then You ask Yourself if the challenge is worth the reward. If the answer is yes, then You should keep going. But if the answer is no, then You definitely should quit.\nThis is the case with the instrument example from above. How bad do You want to play the instrument? If Your answer is \u0026ldquo;it would be nice, but I don\u0026rsquo;t really need it\u0026rdquo;, then maybe it\u0026rsquo;s time to quit.\nBut if playing that instrument is Your dream, then don\u0026rsquo;t let anything stop You. Go practice!\nIt sucks What if practicing sucks? Well, that\u0026rsquo;s a different problem. Ask Yourself, \u0026ldquo;what can I do to make it suck less\u0026rdquo;. Here are some tips for the musical instrument example:\nchange the teacher\npractice more slowly\nhave goals\ncreate an atmosphere\nhave joy\nplay for Your parents\nThink about your dream. If none of the things above are worth the effort, then You should really use Your time doing something else. Playing an instrument is really not the only way to get people\u0026rsquo;s attention. Become a good speaker? A comedian? A football player? Or maybe You can become the best strawberry cake baker in the world? Who knows\u0026hellip;\nYes I know\u0026hellip; it\u0026rsquo;s really straightforward, but sometimes we think in too much detail.\n","permalink":"https://en.quisl.de/posts/quitting/","summary":"As humans, we are involved in many projects\u0026hellip; Some are private, while others are business related. But we do all of them because there is some kind of goal or reward. 
But usually there is also some kind of hardship involved.\nworking in a business can reward You with money, but it takes up all of Your time\ndrinking at parties can be fun, but sobering up the next morning is the opposite\nplaying an instrument can endear you to other people, but practicing is often demotivating because of the difficulty\nThese hardships can make You feel like quitting.","title":"When is it time to quit"},{"content":"Some awesome tools that I use for my streams on Twitch. Feel free to check them out!\nOpen Broadcast Software OBS is a screencasting and streaming tool. It\u0026rsquo;s probably the most important tool that You ever need for streaming on Twitch right now. It can merge multiple video sources like cameras, videos, screen recording or even a browser and arrange them.\nStreamElements Overlay and alerting tool with tons of features for stream monetizing. StreamElements integrates into OBS via a plugin\u0026hellip;\noverlays via browser source\ndonation analytics\nalerts on\u0026hellip; subscriptions, follows, donations, bits, raids\nchatbot (predefined answers)\nThere is even a marketplace to connect sponsors and Twitch streamers. Also, it gives you the possibility to open your own merch store. I haven\u0026rsquo;t tested it yet, tho.\nEmote tools Emote tools for handling third party emotes.\nEmote Platforms Third party emote providers 7TV, FFZ and BTTV.\nSevenTV 7TV gives you 300 free channel emote slots for Twitch and YouTube, a clean website and an animated profile picture.\nIf you want special emotes for a season like Christmas or Easter, you can use Emote sets, which allow a quick swapping of all emotes.\nThe only downside is that it\u0026rsquo;s pretty new and still not as widely accepted as BTTV or FrankerFaceZ.\nIn order to see the emotes you will need to download a browser extension, a mobile app or a chatting tool like Chatterino7.\nFrankerFaceZ FrankerFaceZ might be the underdog among the 3 emote providers. 
Still, you get 50 free emote slots and pretty much every chatting extension can display FFZ emotes.\nWhere FrankerFaceZ really shines is the Chrome add-on, which provides many Twitch improvements on its own:\nenhanced Twitch UI\naudio compressor\nlatency compensation\nmoderation tools\nword highlighting\nplugin manager\nview 7TV, BTTV and FFZ emotes\n\u0026hellip;\nBetterTwitchTV BTTV is the older brother of FrankerFaceZ. It will give you 50 free emote slots. It\u0026rsquo;s still the most widely used and accepted third party emote platform. If a Twitch tool supports third party emotes, it will definitely support BTTV emotes. But to be sure, BTTV also provides its own Chrome plugin.\nEmote Checker The Emote Checker is a tool that I wrote as my first JavaScript project. It can show if you have emotes in any of the three emote providers with overlapping names. This can help you free up some of those valuable emote slots!\nGempbot Gempbot creates a channel points reward button in your chat that viewers can use to activate a 7TV or BTTV emote in your channel. Gempbot will never overwrite emotes that you have already manually activated. You tell Gempbot how many emote slots it can control. When all of its slots are used up, it swaps the oldest emote for the next redeemed one.\nChatbots Chatbots are technical users in Your chat and help with moderating or just provide fun.\nYarp bot Yarp bot is a chatbot that supposedly uses AI to find out what a viewer asked in chat and then reacts to it appropriately. 
It can detect things like: greetings, goodbyes, brbs, lurks or questions for\u0026hellip;\nthe game / subject\nschedule\nchat rules\nsocials / discord\ndrops\n\u0026hellip;\nSadly, it\u0026rsquo;s just available for English and there is no plan to support other chat languages yet.\nBotbear Botbear is a chat bot with many fancy chat features like\u0026hellip;\ntrivia quiz in chat\nshow latest emote changes\nping viewers on category changes (on demand)\nand many more chat gimmicks.\nSupibot Supibot is probably the most versatile chatbot out there with a ton of commands including\u0026hellip;\nnews\nchat statistics\ncurrency conversions\nimage generation\ncheckups (definitions, google, urban dict, Wikipedia etc.)\njokes\n\u0026hellip;\nAnd many more gimmicks. The developer streams the whole development of this bot. You can watch him here.\nWith the alias command, you can combine several commands in a modular way to create your own commands.\nChatterino Chatterino is a great chatting tool, especially for offline chatters. It even allows You to see chat messages from before you joined the chat, use filters and be in multiple channels at once. With Chatterino7 You can see emotes from all of these platforms.\n","permalink":"https://en.quisl.de/posts/twitch-tools/","summary":"Some awesome tools that I use for my streams on Twitch. Feel free to check them out!\nOpen Broadcast Software OBS is a screencasting and streaming tool. It\u0026rsquo;s probably the most important tool that You ever need for streaming on Twitch right now. It can merge multiple video sources like cameras, videos, screen recording or even a browser and arrange them.\nStreamElements Overlay and alerting tool with tons of features for stream monetizing.","title":"Top Twitch Tools for Streamers"},{"content":"Redis is a simple but powerful message broker that You can use as a communication medium for a distributed micro-service environment with multiple replicas. 
In this tutorial, we will use its \u0026ldquo;key value\u0026rdquo; and \u0026ldquo;task queue\u0026rdquo; features to store and access data.\nRedis server Install and execute Docker Desktop.\nUse the following command to start a single Redis 7 instance within a Docker container and make it listen to port 9999 on localhost.\ndocker run -d -p 9999:6379 redis:7-alpine It will require approximately 8 MiB of RAM when it\u0026rsquo;s empty. Of course, this will increase as You store more data.\nPython client Install Python Install Redis for Python pip install redis==4.3.4 Redis key value client You can use key value pairs as a shared memory for Your replicas/nodes. For example, to store the configuration of a program.\nWrite This creates the key \u0026ldquo;keyname\u0026rdquo; and gives it the value \u0026ldquo;Value123\u0026rdquo;.\nimport redis r = redis.Redis(host=\u0026#34;localhost\u0026#34;, port=9999, db=0) r.set(\u0026#34;keyname\u0026#34;,\u0026#34;Value123\u0026#34;) Read This reads the key \u0026ldquo;keyname\u0026rdquo; and prints its value.\nimport redis r = redis.Redis(host=\u0026#34;localhost\u0026#34;, port=9999, db=0) key = r.get(\u0026#34;keyname\u0026#34;) print(key.decode(\u0026#34;utf-8\u0026#34;)) Value123 Note that the variable \u0026ldquo;key\u0026rdquo; will be a bytes class object. Therefore we need to convert it to a str class object (utf-8 string) with the decode() method in order to print it.\nRedis task queue client A queue is a list where entries disappear once they are read. This means that any entry can be read just once. You can use task queues to distribute workload between workers.\nBy pushing to the right and reading from the left we will do FIFO (first-in-first-out) scheduling. 
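The FIFO behaviour can be sketched with a plain Python list standing in for the Redis queue (a simplified stand-in that needs no running server; the real client uses rpush() and lpop() as in the examples below):

```python
# Simulate rpush (append on the right) and lpop (pop on the left):
# reading from the opposite end of where we push gives FIFO order.
queue = []
queue.append("value123")  # like r.rpush("queuename", "value123")
queue.append("value456")  # like r.rpush("queuename", "value456")
print(queue.pop(0))  # like r.lpop("queuename") -> value123 (first in, first out)
print(queue.pop(0))  # -> value456
```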
However, You could switch this to LIFO (last-in-first-out) scheduling by reading from the same end where You push (lpop() and lpush(), or rpop() and rpush()).\nWrite This creates the queue \u0026ldquo;queuename\u0026rdquo; and adds the entry \u0026ldquo;value123\u0026rdquo; to the right end.\nimport redis r = redis.Redis(host=\u0026#34;localhost\u0026#34;, port=9999, db=0) r.rpush(\u0026#34;queuename\u0026#34;,\u0026#34;value123\u0026#34;) r.rpush(\u0026#34;queuename\u0026#34;,\u0026#34;value456\u0026#34;) FYI: The rpush() method will return the current number of entries in the queue.\nRead This will read a value from the left of the queue.\nimport redis r = redis.Redis(host=\u0026#34;localhost\u0026#34;, port=9999, db=0) entry1 = r.lpop(\u0026#34;queuename\u0026#34;) print(entry1.decode(\u0026#34;utf-8\u0026#34;)) entry2 = r.lpop(\u0026#34;queuename\u0026#34;) print(entry2.decode(\u0026#34;utf-8\u0026#34;)) value123 value456 Note: If a queue is empty and You read from it, the lpop() and rpop() methods will return None. Calling decode() on None then raises an AttributeError, so be sure to handle that case.\n","permalink":"https://en.quisl.de/posts/redis-python/","summary":"Redis is a simple but powerful message broker that You can use as a communication medium for a distributed micro-service environment with multiple replicas. In this tutorial, we will use its \u0026ldquo;key value\u0026rdquo; and \u0026ldquo;task queue\u0026rdquo; features to store and access data.\nRedis server Install and execute Docker Desktop.\nUse the following command to start a single Redis 7 instance within a Docker container and make it listen to port 9999 on localhost.","title":"Redis with Python"},{"content":"Here is an idea on how to host a blog or any other static website for very cheap on Azure Blob Storage.\nThe price will vary with traffic and storage of course\u0026hellip; At the moment it\u0026rsquo;s € 0.0208 for one GB/month for storage and € 0.0063 for 10,000 read operations. 
So you can\u0026rsquo;t get much cheaper than this (excluding free webspace that always has a catch like ads or expiry dates).\nYou will need an Azure account for this (click)\nCreate static files You could create the static HTML pages manually\u0026hellip; But if you prefer to write in Markdown or other easier formats, you should use a tool for static website creation like these: HUGO, MkDocs, Jekyll, NEXT.JS\nI created this very blog with HUGO by the way.\nPush static files to Azure After you have created the static files, you need to\u0026hellip;\ncreate an Azure Blob Storage create a container called $web upload your static files to $web You can use the tool absu to do these steps automatically. It\u0026rsquo;s also great for updating!\nTip: Make sure to keep an offline version of these files. I personally use a private repository on GitHub.\nSetup Azure Blob Storage enable static website (\u0026ldquo;Static website\u0026rdquo; in the storage settings page) set index document name (usually your index.html) set error document path (the file that is displayed on a 404 error, 404.html in HUGO) After that, your website will be accessible on the web endpoint of your Azure Blob Storage. (\u0026lsquo;https://*.*.web.core.windows.net/\u0026rsquo;)\nSetup custom domain If you want to use a custom domain, you need to connect it to an Azure CDN (\u0026ldquo;Security + network\u0026rdquo; \u0026ndash;\u0026gt; \u0026ldquo;Azure CDN\u0026rdquo;) and then create an endpoint within it.\nIn order for this endpoint to work, you will need to set up a CNAME entry at your domain name registrar that points to your Azure CDN endpoint (make it point to: \u0026lsquo;https://*.azureedge.net\u0026rsquo;).\nSometimes the domain registration process takes a few minutes (or even hours). Finally, you can enable \u0026ldquo;Custom domain HTTPS\u0026rdquo; in your CDN endpoint so that Azure validates your domain and gives you a valid SSL certificate. 
This domain validation process can also take just as long.\nOnce that is done, you should be able to connect to your new website with your custom domain!\nRedirect HTTP to HTTPS You can do that in the rules engine of your endpoint:\nThe enforcehttps rule will do the redirect, while the global rule will make sure that your cache resets when you upload a new version so that users get the newest data.\n","permalink":"https://en.quisl.de/posts/blog-on-azure-storage/","summary":"Here is an idea on how to host a blog or any other static website for very cheap on Azure Blob Storage.\nThe price will vary with traffic and storage of course\u0026hellip; At the moment it\u0026rsquo;s € 0.0208 for one GB/month for storage and € 0.0063 for 10,000 read operations. So you can\u0026rsquo;t get much cheaper than this (excluding free webspace that always has a catch like ads or expiry dates).","title":"Host a blog on Azure Storage"},{"content":"Here is a neat one-liner in Python 3 to quickly check if entries from one Python list are available in another Python list. This can also be used to apply other functions.\nCode list1 = [\u0026#34;a\u0026#34;,\u0026#34;b\u0026#34;,\u0026#34;c\u0026#34;,\u0026#34;d\u0026#34;] list2 = [\u0026#34;b\u0026#34;,\u0026#34;c\u0026#34;,\u0026#34;d\u0026#34;] list3 = [\u0026#34;a\u0026#34;,\u0026#34;x\u0026#34;,\u0026#34;z\u0026#34;] # Check if all entries of list2 are in list1 print(all(value in list1 for value in list2)) # Check if all entries of list3 are in list1 print(all(value in list1 for value in list3)) True False Explanation all(x) is a built-in function that returns True if all entries in the Python list x are True.\nThe list x is generated with an inline for loop (a generator expression):\n(value in list1 for value in list2) This actually creates a generator instead of a list, but it can be used in the all() function since it has the __iter__() method implemented. 
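A side benefit of passing a generator: all() short-circuits, so the membership checks stop at the first False and no intermediate list is built. A small illustration (noisy_check() is a hypothetical helper added here just to log each test):

```python
def noisy_check(value, container):
    # hypothetical helper that logs each membership test
    print("checking", value)
    return value in container

list1 = ["a", "b", "c", "d"]
list3 = ["a", "x", "z"]

# all() stops at the first False ("x"), so "z" is never checked
print(all(noisy_check(v, list1) for v in list3))  # False
```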
If you wanted, you could also convert it to a list by putting square brackets around it:\nlist1 = [\u0026#34;a\u0026#34;,\u0026#34;b\u0026#34;,\u0026#34;c\u0026#34;,\u0026#34;d\u0026#34;] list2 = [\u0026#34;b\u0026#34;,\u0026#34;c\u0026#34;,\u0026#34;d\u0026#34;] print(type(value in list1 for value in list2)) print(type([value in list1 for value in list2])) \u0026lt;class \u0026#39;generator\u0026#39;\u0026gt; \u0026lt;class \u0026#39;list\u0026#39;\u0026gt; This would be the long version:\nlist1 = [\u0026#34;a\u0026#34;,\u0026#34;b\u0026#34;,\u0026#34;c\u0026#34;,\u0026#34;d\u0026#34;] list2 = [\u0026#34;b\u0026#34;,\u0026#34;c\u0026#34;,\u0026#34;d\u0026#34;] results1 = [] for value in list2: results1.append(value in list1) list3 = [\u0026#34;a\u0026#34;,\u0026#34;x\u0026#34;,\u0026#34;z\u0026#34;] results2 = [] for value in list3: results2.append(value in list1) print(all(results1)) print(all(results2)) True False ","permalink":"https://en.quisl.de/posts/python-find-list-entries-in-other-list/","summary":"Here is a neat one-liner in Python 3 to quickly check if entries from one Python list are available in another Python list. 
This can also be used to apply other functions.\nCode list1 = [\u0026#34;a\u0026#34;,\u0026#34;b\u0026#34;,\u0026#34;c\u0026#34;,\u0026#34;d\u0026#34;] list2 = [\u0026#34;b\u0026#34;,\u0026#34;c\u0026#34;,\u0026#34;d\u0026#34;] list3 = [\u0026#34;a\u0026#34;,\u0026#34;x\u0026#34;,\u0026#34;z\u0026#34;] # Check if all entries of list2 are in list1 print(all(value in list1 for value in list2)) # Check if all entries of list3 are in list1 print(all(value in list1 for value in list3)) True False Explanation all(x) is a built-in function that returns True if all entries in the Python list x are True.","title":"Python find list entries in other list"},{"content":"There is a bug in the current Philips Hue: Lights addon (version 1.4) for Stream Deck by Elgato which occurs occasionally after starting the Stream Deck.\nWhen trying to connect your Stream Deck to the Philips Hue Bridge, it says \u0026lsquo;Unable to discover bridges\u0026rsquo; after a few milliseconds. It seems like it\u0026rsquo;s not even trying to connect.\nWorkaround get the IP address of your Philips Hue Bridge manually - I got mine from my router download the Stream Deck Philips Hue Plugin version 1.5 from the official repository on Github (*.streamDeckPlugin file ending) double click the file while Stream Deck is running to install it add new Bridge in Stream Deck notice there is a new button for adding a bridge manually, click that button. enter the IP Address of your Philips Hue Bridge Your Stream Deck should work now.\nExplanation The \u0026ldquo;unable to discover bridges\u0026rdquo; error occurs because https://discovery.meethue.com blocks requests. I\u0026rsquo;m not exactly sure why; either the plugin or the discovery service is poorly programmed. However, this prevents the plugin from using auto-discovery.\nThe Plugin shop within Stream Deck just offers version 1.4. 
However, the new version 1.5 allows you to manually add the Bridge via IP address, so it does not rely on auto-discovery.\nI think in the future they will add this version to the shop so you no longer have to download it manually.\n","permalink":"https://en.quisl.de/posts/philips-hue-unable-to-discover-bridges/","summary":"There is a bug in the current Philips Hue: Lights addon (version 1.4) for Stream Deck by Elgato which occurs occasionally after starting the Stream Deck.\nWhen trying to connect your Stream Deck to the Philips Hue Bridge, it says \u0026lsquo;Unable to discover bridges\u0026rsquo; after a few miliseconds. It seems like its not even trying to connect.\nWorkaround get the IP address of your Philips Hue Bridge manually - I got mine from my router download the Stream Deck Philips Hue Plugin version 1.","title":"Stream Deck + Philips Hue error: 'Unable to Discover Bridges'"},{"content":"Our website can usually be used without providing any personal data. Insofar as personal data (e.g. name, address or e-mail addresses) is collected on our website, this is always done on a voluntary basis as far as possible. This data will not be passed on to third parties without your express consent. We would like to point out that data transmission on the Internet (e.g. when communicating by e-mail) can have security gaps. A complete protection of the data against access by third parties is not possible. The use of contact data published as part of the legal notice obligation by third parties for the purpose of sending unsolicited advertising and information material is hereby expressly prohibited. The site operators expressly reserve the right to take legal action in the event of unsolicited advertising being sent, such as spam e-mails.\nGoogle AdSense This website uses Google AdSense. This is a service provided by Google Inc., 1600 Amphitheatre Parkway, Mountain View, CA 94043, USA, for the integration of advertisements. Google AdSense uses cookies. 
These are files stored on your PC that allow Google to analyze the data relating to your use of our website. In addition, Google AdSense also uses web beacons, invisible graphics that enable Google to analyze clicks on this website, traffic on it, and similar information. The information obtained via cookies and web beacons, your IP address and the delivery of advertising formats are transmitted to a Google server located in the USA and stored there. Google may pass this collected information on to third parties if this is required by law or if Google commissions third parties to process the data. However, Google will not merge your IP address with the other stored data. By making the appropriate settings in your Internet browser or our cookie consent tool, you can prevent the cookies mentioned from being stored on your PC. However, this means that the content of this website can no longer be used to the same extent. By using this website, you consent to the processing of your personal data by Google in the manner and for the purposes set out above. Please check the advertising information provided by Google for a list of cookies that can be set by using this service.\n","permalink":"https://en.quisl.de/datapolicy/","summary":"Our website can usually be used without providing any personal data. Insofar as personal data (e.g. name, address or e-mail addresses) is collected on our website, this is always done on a voluntary basis as far as possible. This data will not be passed on to third parties without your express consent. We would like to point out that data transmission on the Internet (e.g. when communicating by e-mail) can have security gaps.","title":"Data Policy"},{"content":"According to § 5 TMG\nJonas Rabe Stuttgarter Platz 2 10627 Berlin USt-IdNr. DE326766854\nRepresented by: Jonas Rabe E-Mail: quisl (at) outlook.de\nResponsible for the content according to § 18 Abs. 
2 MStV: Jonas Rabe Stuttgarter Platz 2 10627 Berlin\nDisclaimer:\nLiability for content: The contents of our pages were created with great care. However, we cannot guarantee that the content is correct, complete or up-to-date. As a service provider, we are responsible for our own content on these pages according to Section 7, Paragraph 1 of the German Telemedia Act (TMG). According to §§ 8 to 10 TMG, however, we as a service provider are not obliged to monitor transmitted or stored third-party information or to investigate circumstances that indicate illegal activity. Obligations to remove or block the use of information according to general laws remain unaffected. However, liability in this regard is only possible from the point in time at which we become aware of a specific infringement of the law. As soon as we become aware of any violations of the law, we will remove this content immediately.\nLiability for links: Our offer contains links to external third-party websites, over whose content we have no influence. Therefore we cannot assume any liability for this external content. The respective provider or operator of the pages is always responsible for the content of the linked pages. The linked pages were checked for possible legal violations at the time of linking. Illegal content was not recognizable at the time of linking. However, a permanent control of the content of the linked pages is not reasonable without concrete evidence of an infringement. As soon as we become aware of legal violations, we will remove such links immediately.\nCopyright: The content and works on these pages created by the site operators are subject to German copyright law. The duplication, editing, distribution and any kind of exploitation outside the limits of copyright require the written consent of the respective author or creator. Downloads and copies of this site are only permitted for private, non-commercial use. 
Insofar as the content on this site was not created by the operator, the copyrights of third parties are observed. In particular, third-party content is marked as such. Should you nevertheless become aware of a copyright infringement, we ask that you inform us accordingly. As soon as we become aware of legal violations, we will remove such content immediately.\n","permalink":"https://en.quisl.de/impressum/","summary":"According to § 5 TMG\nJonas Rabe Stuttgarter Platz 2 10627 Berlin USt-IdNr. DE326766854\nRepresented by: Jonas Rabe E-Mail: quisl (at) outlook.de\nResponsible for the content according to § 18 Abs. 2 MStV: Jonas Rabe Stuttgarter Platz 2 10627 Berlin\nDisclaimer:\nLiability for content: The contents of our pages were created with great care. However, we cannot guarantee that the content is correct, complete or up-to-date. As a service provider, we are responsible for our own content on these pages according to Section 7, Paragraph 1 of the German Telemedia Act (TMG).","title":"Impressum"}]