Anycast - Checker


When is an anycast ip an anycast ip ?

that's a question i was asked recently. as an LTNN (Long Term Networking Nerd), i'm aware of Unicast, Multicast, Broadcast and also Anycast. so, let's have a look into this.

hint: this article is not about how to set up your own anycast network. that may follow soon ?!?

Terminology

Unicast 1:1

Sending a message from one sender to one recipient

Multicast 1:many

Sending a message from one sender to multiple recipients

Broadcast 1:all

Sending a message from one sender to all recipients on a network

Anycast 1:closest

Sending a message to one of several receivers, based on which receiver is closest
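anycast can be modeled as a simple min() over routing costs: every receiver announces the same ip, and the network delivers each packet to the receiver that is cheapest to reach. a toy model in python (path costs made up):

```python
# toy model: all receivers share one address, the network picks the
# receiver with the lowest routing cost (e.g. fewest BGP hops)
receivers = {"fra": 5, "ewr": 12, "syd": 40}  # hypothetical path costs

closest = min(receivers, key=receivers.get)
print(closest)  # fra
```

this is exactly why anycast latency looks "too good to be true" from every vantage point: each sender talks to its own nearest instance.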

Terraform

a tool for building and changing infrastructure across multiple clouds or on-prem

Ansible

a tool for managing infrastructure like servers, network gear, firewalls and more. it basically defines the final state of a resource and automates the configuration management process

Docker

a platform for developers to build, package and deploy applications in containers.

Docker Swarm

a container orchestration tool that allows you to manage and scale docker containers. comparable to, but much simpler than, kubernetes

Vultr

a cloud infrastructure provider that offers virtual machines and other services around the world. it’s cheaper and way simpler compared with aws, gcp or azure.

Let’s Start with the Setup

we build a few virtual machines, distributed around the world and managed centrally, of course.

Install Software

as usual, i'm doing this on an openbsd vm. you can easily adapt this to linux, mac or whatever ;)

which terraform || doas pkg_add terraform
which vultr-cli || doas pkg_add vultr-cli
which ansible || doas pkg_add ansible
mkdir samplesetup; cd samplesetup

Vultr

the vm's will run on vultr. you can adapt this to other cloud providers as well. with vultr, you need an api key which you can create in your account.

export your API key and you're ready to go! terraform expects the key as well, so just set it as a second variable:

export VULTR_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export TF_VULTR_API_KEY=$VULTR_API_KEY

Terraform

We start with Terraform. Our Machines will be fired up in 5 Locations:

["ewr", "lax", "syd", "fra", "sgp"]

you can add or remove locations to your own needs. 'vultr-cli regions list' will show you all 30 regions where vultr is present.

main.tf

here, you create the main.tf with the provider, the regions and the instances:

cat << 'EOF' > main.tf
terraform {
  required_providers {
    vultr = {
      source = "vultr/vultr"
      version = "2.12.1"
    }
  }
}

resource "vultr_ssh_key" "my_ssh_key" {
  name = "my-ssh-key"
  ssh_key = "YOUR_SSH_KEY"
}

variable "regions" {
  type    = list(string)
  default = ["ewr", "lax", "syd", "fra", "sgp"]
}

resource "vultr_instance" "server" {
  for_each = toset(var.regions)
  region = each.value
  os_id = 477
  plan  = "vc2-1c-1gb"
  hostname = "debian-${each.value}-1"
  label = "debian-${each.value}-1"
  ssh_key_ids = ["SSH_KEY_ID"]
  enable_ipv6 = true
  backups = "disabled"
  activation_email = false
}

output "server_ips" {
  value = {
    for server in vultr_instance.server :
    server.region => server.main_ip
  }
}
EOF
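for_each over the region list creates one instance per region, and the hostname pattern makes it easy to predict what will come up. the same expansion in python:

```python
# mirror of the terraform for_each: one instance per region,
# hostname "debian-<region>-1"
regions = ["ewr", "lax", "syd", "fra", "sgp"]
hostnames = [f"debian-{r}-1" for r in regions]
print(hostnames)
```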

SSH Keys

i don't know exactly why i have to provide both the SSH key and the SSH key ID. just run 'vultr-cli ssh-key list' and grab the appropriate key id.

sed -i 's/YOUR_SSH_KEY/ssh-ed25519 AAAAC3Nzxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/' main.tf
sed -i 's/SSH_KEY_ID/6ed5xxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/' main.tf

Run Terraform

let’s run terraform and build our machines. the output is printed on the screen and redirected to a file as well

terraform init
terraform plan
terraform apply -auto-approve |tee output

two minutes later …

vultr-cli instance list |grep debian
7c54ec3e-2637-4121-adf7-7234df486acf	66.135.xx.y debian-ewr-1	Debian 11 x64 (bullseye)	active	ewr	1	1024	25	1		[]
22ecc093-334b-4cd6-862d-11a3886dd420	45.77.xxx.y debian-syd-1	Debian 11 x64 (bullseye)	active	syd	1	1024	25	1		[]
9adf1eb7-2ca6-4527-9e10-ab22eb8071a4	45.32.xxx.y debian-sgp-1	Debian 11 x64 (bullseye)	active	sgp	1	1024	25	1		[]
0735baac-7fd8-4658-91ad-297816aa4bc1	45.76.xx.y  debian-lax-1	Debian 11 x64 (bullseye)	active	lax	1	1024	25	1		[]
506a5c4c-da2c-4500-ac85-980787d7a97f	136.244.x.y debian-fra-1	Debian 11 x64 (bullseye)	active	fra	1	1024	25	1		[]

… we have 5 debian servers. each machine got its own hostname with the location in it. the vm's cost around 5 USD / month each. so, don't worry if you have this running for a few hours.

all machines are running the latest version of debian (11.6 currently), got a root password and my ssh key for passwordless authentication. everything else is default and not configured. so, here is where ansible jumps in.

Ansible

ansible is already installed, so let's create a working directory for it

mkdir ansible; cd ansible

ansible.cfg

we need a basic config file. just copy/paste it like this.

cat << 'EOF' > ansible.cfg
[defaults]
host_key_checking       = False
inventory               = ./inventory
remote_user             = root

# gathering facts
cache_plugin            = jsonfile
fact_caching            = jsonfile
gathering               = smart
fact_caching_connection = ~/.ansible_facts
fact_caching_timeout    = 86400

# timeout in seconds
timeout                 = 3

# how many session parallel
forks                   = 10

# no annoying .retry files anymore
retry_files_enabled     = False

# compression
var_compression_level   = 5
module_compression      = 'ZIP_DEFLATED'

# Tuning
display_skipped_hosts   = False

[ssh_connection]
ssh_args                = -o ControlMaster=auto -o ControlPersist=5m
control_path_dir        = ~/.ansible/cp
control_path            = %(directory)s/%%r@%%h
pipelining              = True
EOF

inventory.py

let's build our inventory file based on the output of terraform. a little python snippet parses it and brings it into the right format.

cat << 'EOF' > inventory.py
#!/usr/bin/env python3

# Read the source data from a file
with open("../output", "r") as f:
    source = f.read()

# Parse the source data and create a dictionary of hostname to IP address mappings
ip_dict = {}
for line in source.splitlines():
    if line.strip().startswith('"'):
        parts = line.strip().split(" = ")
        hostname = parts[0].strip('"')
        ip_address = parts[1].strip('"')
        ip_dict[hostname] = ip_address

# Write the inventory file
with open("inventory", "w") as f:
    f.write('[debian_servers]\n')
    for hostname, ip_address in ip_dict.items():
        f.write(f"{hostname} ansible_host={ip_address}\n")
EOF
chmod u+x inventory.py
./inventory.py

Inventory

and you should get a inventory file like this

[debian_servers]
ewr ansible_host=66.135.x.y
fra ansible_host=136.244.x.y
lax ansible_host=45.76.x.y
sgp ansible_host=45.32.x.y
syd ansible_host=45.77.x.y
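if you want to sanity-check the parsing logic without a real terraform run, feed it a synthetic snippet in the same format (sample values made up):

```python
# synthetic terraform output in the same "key" = "value" format
sample = '''server_ips = {
  "ewr" = "66.135.0.2"
  "fra" = "136.244.0.1"
}'''

ip_dict = {}
for line in sample.splitlines():
    if line.strip().startswith('"'):
        parts = line.strip().split(" = ")
        ip_dict[parts[0].strip('"')] = parts[1].strip('"')

print(ip_dict)  # {'ewr': '66.135.0.2', 'fra': '136.244.0.1'}
```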

Ansible Ping

now, we wanna check if our machines are reachable. ansible has a "ping" module …

ansible debian_servers -m ping

… which simply returns a "pong" when the machines are fully reachable.

fra | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
ewr | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
lax | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
syd | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
sgp | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

Uptime ?

you can run any shell command on the remote servers like this:

ansible debian_servers -m shell -a "uptime"

you get the uptime of the vm’s

fra | CHANGED | rc=0 >>
 03:13:09 up 13 min,  0 users,  load average: 0.00, 0.02, 0.03
ewr | CHANGED | rc=0 >>
 03:13:09 up 13 min,  0 users,  load average: 0.00, 0.00, 0.00
lax | CHANGED | rc=0 >>
 03:13:10 up 13 min,  0 users,  load average: 0.00, 0.02, 0.00
syd | CHANGED | rc=0 >>
 03:13:11 up 13 min,  0 users,  load average: 0.00, 0.01, 0.02
sgp | CHANGED | rc=0 >>
 03:13:15 up 13 min,  0 users,  load average: 0.00, 0.01, 0.00

Patch Servers

so, the first thing i normally do is update all the packages

ansible debian_servers -m shell -a "apt-get -y update; apt-get -y upgrade"

Confirm Unicast

since we're able to run commands on all the vm's around the globe, we can already check whether an ip is anycast or not. let's ping a server which is physically in Zurich/Switzerland and known to be a plain unicast IP …

ansible debian_servers -m shell -a "ping -c 1 www.switch.ch |grep from"

the icmp request got answered on all 5 boxes, and the latency varies between 8ms and 374ms. this is as expected. further, we see that we sent the request via IPv6, so our machines are dual-stacked and IPv6 is the preferred protocol. fine ;)

fra | CHANGED | rc=0 >>
64 bytes from prod.www.switch.ch (2001:620:0:ff::5c): icmp_seq=1 ttl=51 time=8.59 ms
ewr | CHANGED | rc=0 >>
64 bytes from prod.www.switch.ch (2001:620:0:ff::5c): icmp_seq=1 ttl=53 time=94.5 ms
syd | CHANGED | rc=0 >>
64 bytes from prod.www.switch.ch (2001:620:0:ff::5c): icmp_seq=1 ttl=52 time=374 ms
lax | CHANGED | rc=0 >>
64 bytes from prod.www.switch.ch (2001:620:0:ff::5c): icmp_seq=1 ttl=54 time=150 ms
sgp | CHANGED | rc=0 >>
64 bytes from prod.www.switch.ch (2001:620:0:ff::5c): icmp_seq=1 ttl=42 time=308 ms

do you see the latency from the different locations to Switzerland ?

Confirm Anycast

and what about an anycast ip like the public resolver from cloudflare ?

ansible debian_servers -m shell -a "ping -c 1 1.1.1.1 |grep from"

completely different result … latency is below 7ms from all locations we tried …

fra | CHANGED | rc=0 >>
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=6.42 ms
ewr | CHANGED | rc=0 >>
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=2.79 ms
lax | CHANGED | rc=0 >>
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=0.931 ms
syd | CHANGED | rc=0 >>
64 bytes from 1.1.1.1: icmp_seq=1 ttl=55 time=1.46 ms
sgp | CHANGED | rc=0 >>
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=1.21 ms
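the whole "checker" boils down to one heuristic: if the worst RTT across far-apart probes is still tiny, no single physical host could be answering all of them. a minimal sketch (the 20ms threshold is my own assumption, tune as you like):

```python
def looks_anycast(rtts_ms, threshold_ms=20.0):
    """If every probe, no matter how far apart geographically,
    sees a low RTT, the target is almost certainly anycast."""
    return max(rtts_ms.values()) < threshold_ms

# RTTs (ms) from the two ping runs above
unicast = {"fra": 8.59, "ewr": 94.5, "syd": 374, "lax": 150, "sgp": 308}
anycast = {"fra": 6.42, "ewr": 2.79, "lax": 0.931, "syd": 1.46, "sgp": 1.21}

print(looks_anycast(unicast))  # False
print(looks_anycast(anycast))  # True
```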

Compare CF vs Google

let’s put a simple wrapper around the command so we just can run “checkanycast”

alias checkanycast='ansible debian_servers -m shell -a "ping -c 1 ${ip} |grep ttl"'
ip=8.8.8.8; checkanycast

same result for google ..

fra | CHANGED | rc=0 >>
64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=6.86 ms
ewr | CHANGED | rc=0 >>
64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=1.22 ms
lax | CHANGED | rc=0 >>
64 bytes from 8.8.8.8: icmp_seq=1 ttl=59 time=0.362 ms
syd | CHANGED | rc=0 >>
64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=0.337 ms
sgp | CHANGED | rc=0 >>
64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=1.13 ms

Checking Quad9

ip=9.9.9.9; checkanycast

Quad9 is also an Anycast IP …

fra | CHANGED | rc=0 >>
64 bytes from 9.9.9.9: icmp_seq=1 ttl=59 time=0.654 ms
ewr | CHANGED | rc=0 >>
64 bytes from 9.9.9.9: icmp_seq=1 ttl=59 time=1.29 ms
lax | CHANGED | rc=0 >>
64 bytes from 9.9.9.9: icmp_seq=1 ttl=57 time=0.562 ms
syd | CHANGED | rc=0 >>
64 bytes from 9.9.9.9: icmp_seq=1 ttl=59 time=0.329 ms
sgp | CHANGED | rc=0 >>
64 bytes from 9.9.9.9: icmp_seq=1 ttl=59 time=1.39 ms

IPv6 ?

we tried with legacy ip .. what about the current protocol ?

host dns.google

we get back four host addresses, two for IPv4 and two for IPv6

dns.google has address 8.8.8.8
dns.google has address 8.8.4.4
dns.google has IPv6 address 2001:4860:4860::8844
dns.google has IPv6 address 2001:4860:4860::8888
ip=dns.google
checkanycast

let’s check google …

fra | CHANGED | rc=0 >>
64 bytes from dns.google (2001:4860:4860::8888): icmp_seq=1 ttl=119 time=0.856 ms
ewr | CHANGED | rc=0 >>
64 bytes from dns.google (2001:4860:4860::8888): icmp_seq=1 ttl=118 time=1.30 ms
lax | CHANGED | rc=0 >>
64 bytes from dns.google (2001:4860:4860::8888): icmp_seq=1 ttl=59 time=0.386 ms
syd | CHANGED | rc=0 >>
64 bytes from dns.google (2001:4860:4860::8888): icmp_seq=1 ttl=118 time=0.785 ms
sgp | CHANGED | rc=0 >>
64 bytes from dns.google (2001:4860:4860::8888): icmp_seq=1 ttl=118 time=1.27 ms

and you can see the response time is below 2ms from each location. this is a strong indication that the ip address is an anycast ip!

what about my blog ?

i have a few mirrors of my blog, one of them is hosted via cloudflare … which is a cdn and therefore runs anycast. or does it ?

ip=stoege.com
checkanycast

yes, it’s also behind an anycast address …

fra | CHANGED | rc=0 >>
64 bytes from 2a06:98c1:3120::3 (2a06:98c1:3120::3): icmp_seq=1 ttl=58 time=0.863 ms
ewr | CHANGED | rc=0 >>
64 bytes from 2606:4700:3032::ac43:8cb6 (2606:4700:3032::ac43:8cb6): icmp_seq=1 ttl=58 time=1.69 ms
lax | CHANGED | rc=0 >>
64 bytes from 2606:4700:3032::ac43:8cb6 (2606:4700:3032::ac43:8cb6): icmp_seq=1 ttl=58 time=0.942 ms
syd | CHANGED | rc=0 >>
64 bytes from 2606:4700:3036::6815:2ea5 (2606:4700:3036::6815:2ea5): icmp_seq=1 ttl=55 time=1.32 ms
sgp | CHANGED | rc=0 >>
64 bytes from 2606:4700:3036::6815:2ea5 (2606:4700:3036::6815:2ea5): icmp_seq=1 ttl=58 time=1.65 ms

cool, this page itself runs on anycast with ultra-fast latency and response time :)

Ansible Playbook

let's proceed and run different playbooks against our infrastructure. keep in mind that we got 5 vm's with the default setup from vultr and we wanna tune/harden them a bit: ssh, firewall rules, name resolution, installing software, etc.

Tune SSHD

let's start with the sshd. we want to restrict root login to our ip range only.

cat << 'EOF' > 01_tune_sshd.sh
#!/usr/bin/env ansible-playbook
---
- name: Update sshd_config
  hosts: debian_servers
  become: yes

  tasks:
    - name: Update sshd_config
      blockinfile:
        path: /etc/ssh/sshd_config
        block: |
          Protocol 2
          PermitRootLogin without-password
          PasswordAuthentication no
          X11Forwarding no
          UseDNS no
          Match Address YOUR_PUBLIC_IPADDRESS
          PermitRootLogin yes
        backup: yes
        state: present
        create: yes
      notify: restart sshd

  handlers:
    - name: restart sshd
      systemd:
        name: sshd
        state: restarted
EOF

replace the ip address in the playbook with your public ip …

sed -i 's/YOUR_PUBLIC_IPADDRESS/1.2.3.4/' 01_tune_sshd.sh

Nörd Alert: real nerds resolve their public ip with a script like this (i3) and feed it in directly:

sed -i "s/YOUR_PUBLIC_IPADDRESS/$(i3 -4 -b)/" 01_tune_sshd.sh

make playbook executable

you may have noticed that i added a "shebang" line at the beginning of the playbook. so, we can just make the playbook executable and run it from the cli …

#!/usr/bin/env ansible-playbook

make executable and run it

chmod u+x 01_tune_sshd.sh
./01_tune_sshd.sh

and the output of the playbook …

./01_tune_sshd.sh

PLAY [Update sshd_config] *******************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************
ok: [fra]
ok: [ewr]
ok: [lax]
ok: [syd]
ok: [sgp]

TASK [Update sshd_config] *******************************************************************************************************
changed: [fra]
changed: [ewr]
changed: [lax]
changed: [syd]
changed: [sgp]

RUNNING HANDLER [restart sshd] **************************************************************************************************
changed: [fra]
changed: [ewr]
changed: [lax]
changed: [syd]
changed: [sgp]

PLAY RECAP **********************************************************************************************************************
ewr                        : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
fra                        : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
lax                        : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
sgp                        : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
syd                        : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Update Firewall

cat << 'EOF' > 02_firewall.yml
#!/usr/bin/env ansible-playbook
---
- name: Configure UFW firewall
  hosts: debian_servers
  become: true

  vars:
    ufw_rules:
      - { rule: "allow", protocol: "tcp", port: "80", comment: "HTTP" }
      - { rule: "allow", protocol: "tcp", port: "443", comment: "HTTPS" }
    #  - { rule: "deny", protocol: "tcp", port: "3306", comment: "MySQL" }
    allowed_ips:
      - YOUR_PUBLIC_IPADDRESS

  tasks:
    - name: Enable UFW
      community.general.ufw:
        state: enabled

    - name: Set logging
      community.general.ufw:
        logging: "on"

    - name: Configure UFW rules
      community.general.ufw:
        rule: "{{ item.rule }}"
        port: "{{ item.port }}"
        protocol: "{{ item.protocol | default('') }}"
        comment: "{{ item.comment | default('') }}"
      with_items: "{{ ufw_rules }}"

    - name: Allow SSH for Certain IP's
      community.general.ufw:
        rule: allow
        port: "22"
        proto: tcp
        from_ip: "{{ item }}"
      with_items: "{{ allowed_ips }}"

    - name: Remove SSH from 0.0.0.0
      community.general.ufw:
        rule: allow
        port: 22
        proto: tcp
        delete: true

    - name: Allow SSH from other servers in group
      community.general.ufw:
        rule: allow
        from_ip: "{{ hostvars[item]['ansible_host'] }}"
      loop: "{{ groups['debian_servers'] }}"
      when: item != inventory_hostname

    - name: Reload UFW
      community.general.ufw:
        state: reloaded
EOF

replace the public ip, make it executable and Run it

sed -i "s/YOUR_PUBLIC_IPADDRESS/$(i3 -4 -b)/" 02_firewall.yml
chmod u+x 02_firewall.yml
./02_firewall.yml

and the playbook run

./02_firewall.yml

PLAY [Configure UFW firewall] ***************************************************************************************************

TASK [Enable UFW] ***************************************************************************************************************
ok: [fra]
ok: [ewr]
ok: [lax]
ok: [syd]
ok: [sgp]

TASK [Set logging] **************************************************************************************************************
ok: [fra]
ok: [ewr]
ok: [lax]
ok: [syd]
ok: [sgp]

TASK [Configure UFW rules] ******************************************************************************************************
changed: [fra] => (item={'rule': 'allow', 'protocol': 'tcp', 'port': '80', 'comment': 'HTTP'})
changed: [ewr] => (item={'rule': 'allow', 'protocol': 'tcp', 'port': '80', 'comment': 'HTTP'})
changed: [fra] => (item={'rule': 'allow', 'protocol': 'tcp', 'port': '443', 'comment': 'HTTPS'})
changed: [ewr] => (item={'rule': 'allow', 'protocol': 'tcp', 'port': '443', 'comment': 'HTTPS'})
changed: [lax] => (item={'rule': 'allow', 'protocol': 'tcp', 'port': '80', 'comment': 'HTTP'})
changed: [syd] => (item={'rule': 'allow', 'protocol': 'tcp', 'port': '80', 'comment': 'HTTP'})
changed: [lax] => (item={'rule': 'allow', 'protocol': 'tcp', 'port': '443', 'comment': 'HTTPS'})
changed: [syd] => (item={'rule': 'allow', 'protocol': 'tcp', 'port': '443', 'comment': 'HTTPS'})
changed: [sgp] => (item={'rule': 'allow', 'protocol': 'tcp', 'port': '80', 'comment': 'HTTP'})
changed: [sgp] => (item={'rule': 'allow', 'protocol': 'tcp', 'port': '443', 'comment': 'HTTPS'})

TASK [Allow SSH for Certain IP's] ***********************************************************************************************
changed: [fra] => (item=xx.xx.xx.xx)
changed: [ewr] => (item=xx.xx.xx.xx)
changed: [lax] => (item=xx.xx.xx.xx)
changed: [syd] => (item=xx.xx.xx.xx)
changed: [sgp] => (item=xx.xx.xx.xx)

TASK [Remove SSH from 0.0.0.0] **************************************************************************************************
changed: [fra]
changed: [ewr]
changed: [lax]
changed: [syd]
changed: [sgp]

TASK [Allow SSH from other servers in group] ************************************************************************************
changed: [fra] => (item=ewr)
changed: [ewr] => (item=fra)
changed: [fra] => (item=lax)
...

TASK [Reload UFW] ***************************************************************************************************************
changed: [fra]
changed: [ewr]
changed: [lax]
changed: [syd]
changed: [sgp]

PLAY RECAP **********************************************************************************************************************
ewr                        : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
fra                        : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
lax                        : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
sgp                        : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
syd                        : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Install Packages

we all like packages, don't we ? let's install them via playbook

cat << 'EOF' > 03_packages.yml
#!/usr/bin/env ansible-playbook
---
- name: Install packages
  hosts: debian_servers
  become: true

  vars:
    package_list:
      - tcpdump
      - nmap
      - htop
      - tmux
      - tmate
      - vnstat
      - uptimed
      - sudo
      - doas
      - mtr-tiny
      - fping

  tasks:
    - name: Update package cache
      apt:
        update_cache: yes

    - name: Install packages
      apt:
        name: "{{ package_list }}"
        state: present
EOF

# run it
chmod u+x 03_packages.yml
./03_packages.yml

running package playbook

./03_packages.yml

PLAY [Install packages] *********************************************************************************************************

TASK [Update package cache] *****************************************************************************************************
ok: [fra]
ok: [ewr]
ok: [lax]
ok: [sgp]
ok: [syd]

TASK [Install packages] *********************************************************************************************************
changed: [fra]
changed: [sgp]
changed: [ewr]
changed: [lax]
changed: [syd]

PLAY RECAP **********************************************************************************************************************
ewr                        : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
fra                        : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
lax                        : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
sgp                        : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
syd                        : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

profile

i like to have some aliases on the boxes. you can easily skip this or adapt it to your own needs.

cat << 'EOF' > 04_profile.yml
#!/usr/bin/env ansible-playbook
---
- name: Add section to /etc/profile
  hosts: debian_servers
  become: true
  vars:
    section_name: "My Custom Section"
    section_content: |

      ### fix locale
      export LC_CTYPE=en_US.UTF-8
      export LC_ALL=en_US.UTF-8

      ### aliases
      alias d='doas su -'
      alias ..='cd ..'
      alias ...='cd ../..'
      alias ....='cd ../../..'
      alias ll='ls -l'
      alias lla='ls -la'
      alias ssr='ssh -A root@localhost'

      ### docker
      alias docker_kill_all='docker rm -f $(docker ps -a -q)'
      alias dobi='docker compose build'
      alias dodo='docker compose down'
      alias dops='docker ps'
      alias dopsa='docker ps -a'
      alias dore='dodo; doup'
      alias dobire='dodo; dobi; doup'
      alias doup='docker compose up -d'
      alias doupd='docker compose up'

  tasks:
    - name: Append section to /etc/profile
      blockinfile:
        path: /etc/profile
        block: |
          {{ section_content }}
        marker: "# {mark} {{ section_name }}"
EOF

# run it
chmod u+x 04_profile.yml
./04_profile.yml

adding User

til now, all the scripts and commands were executed as root. this may not be what we want, so let's add at least one user.

cat << 'EOF' > 05_users.yml
#!/usr/bin/env ansible-playbook
---
- name: Create new user with SSH key
  hosts: debian_servers
  become: true

  vars:
    username: YOURUSERNAME
    ssh_key: "YOURSSHKEY"

  tasks:
    - name: Create user account
      user:
        name: "{{ username }}"
        create_home: yes
        shell: /bin/bash

    - name: Add SSH key to authorized_keys
      authorized_key:
        user: "{{ username }}"
        key: "{{ ssh_key }}"
        state: present
EOF

replace values and run it

sed -i 's/YOURUSERNAME/superduperuser/' 05_users.yml
sed -i 's/YOURSSHKEY/ssh-ed25519 AAAAC3N......../' 05_users.yml
chmod u+x 05_users.yml
./05_users.yml
./05_users.yml

PLAY [Create new user with SSH key] *********************************************************************************************

TASK [Create user account] ******************************************************************************************************
changed: [fra]
changed: [ewr]
changed: [lax]
changed: [sgp]
changed: [syd]

TASK [Add SSH key to authorized_keys] *******************************************************************************************
changed: [fra]
changed: [ewr]
changed: [sgp]
changed: [syd]
changed: [lax]

PLAY RECAP **********************************************************************************************************************
ewr                        : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
fra                        : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
lax                        : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
sgp                        : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
syd                        : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Update Hosts File

we don't have any dns resolution at the moment. we could assign our public ip addresses fqdn names, but we can also do the poor man's dns and feed the hosts file with the appropriate ips. it's also a good example of how to do this, isn't it ?

cat << 'EOF' > 06_hosts.yml
#!/usr/bin/env ansible-playbook
- name: Add hosts to /etc/hosts
  hosts: debian_servers
  become: yes
  gather_facts: yes

  tasks:
    - name: Add hosts to /etc/hosts
      lineinfile:
        dest: /etc/hosts
        line: "{{ hostvars[item]['ansible_default_ipv4']['address'] }} {{ item }}"
        state: present
        regexp: "^{{ item }} "
      loop: "{{ groups['debian_servers'] }}"
EOF

# run it
chmod u+x 06_hosts.yml
./06_hosts.yml

and run it ..

./06_hosts.yml

PLAY [Add hosts to /etc/hosts] **************************************************************************************************

TASK [Add hosts to /etc/hosts] **************************************************************************************************
changed: [fra] => (item=ewr)
changed: [fra] => (item=fra)
changed: [ewr] => (item=ewr)
...

PLAY RECAP **********************************************************************************************************************
ewr                        : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
fra                        : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
lax                        : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
sgp                        : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
syd                        : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
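the lineinfile loop above just renders one "<ip> <name>" line per server; the rendering step on its own boils down to this (hypothetical inventory dict standing in for hostvars):

```python
# hypothetical hostname -> ip mapping, as ansible sees it via hostvars
inventory = {"ewr": "149.28.0.1", "fra": "45.77.0.2"}

hosts_lines = [f"{ip} {name}" for name, ip in inventory.items()]
print("\n".join(hosts_lines))
```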

adding docker

last playbook. let’s add Docker and start the Services

cat << 'EOF' > 07_docker.yml
#!/usr/bin/env ansible-playbook
---
- name: Install Docker on Debian
  hosts: debian_servers
  become: yes

  tasks:
    - name: Add Docker GPG key
      apt_key:
        url: https://download.docker.com/linux/debian/gpg
        state: present

    - name: Add Docker APT repository
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/debian {{ ansible_distribution_release }} stable
        state: present

    - name: Install Docker
      apt:
        name: docker-ce
        state: present

    - name: Start and enable Docker service
      systemd:
        name: docker
        state: started
        enabled: yes
EOF

# run it
chmod u+x 07_docker.yml
./07_docker.yml

this takes a few moments to download and install the software …

./07_docker.yml

PLAY [Install Docker on Debian] **************************************************************************************************

TASK [Add Docker GPG key] ********************************************************************************************************
changed: [ewr]
changed: [lax]
changed: [syd]
changed: [fra]
changed: [sgp]

TASK [Add Docker APT repository] *************************************************************************************************
changed: [fra]
changed: [ewr]
changed: [lax]
changed: [sgp]
changed: [syd]

TASK [Install Docker] ************************************************************************************************************
changed: [fra]
changed: [ewr]
changed: [syd]
changed: [lax]
changed: [sgp]

TASK [Start and enable Docker service] *******************************************************************************************
ok: [fra]
ok: [ewr]
ok: [lax]
ok: [syd]
ok: [sgp]

PLAY RECAP ***********************************************************************************************************************
ewr                        : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
fra                        : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
lax                        : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
sgp                        : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
syd                        : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Setup Finished

so, our setup is finished except for the docker / docker swarm part.

Update hosts

let's put the ip of fra into our local hosts file -> 45.77.xx.xx fra

check Setup

let's have a quick check if everything works as expected

ssh into debian-fra-1

$ ssh -A fra
Linux debian-fra-1 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Apr 15 12:58:13 2023 from 89.236.130.109

login as root

stoege@debian-fra-1:~$ ssr
Linux debian-fra-1 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Apr 15 12:58:15 2023 from ::1
root@debian-fra-1:~#

check hosts file

# cat /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

149.28.xx.xx  ewr
45.77.xx.xx   fra
45.32.xx.xx   lax
139.18.xx.xx  sgp
45.77.xx.xx   syd

check full connectivity

# fping -l ewr fra lax sgp syd
fra : [0], 64 bytes, 0.074 ms (0.074 avg, 0% loss)
ewr : [0], 64 bytes, 80.6 ms (80.6 avg, 0% loss)
lax : [0], 64 bytes, 153 ms (153 avg, 0% loss)
sgp : [0], 64 bytes, 152 ms (152 avg, 0% loss)
syd : [0], 64 bytes, 287 ms (287 avg, 0% loss)
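the same check can also be run from every node at once via ansible — a sketch, assuming fping is installed on all nodes and the hosts files are in sync:

```shell
# one ping round from each node to all the others (-c1 = send a single packet per target)
ansible debian_servers -m shell -a "fping -c1 ewr fra lax sgp syd"
```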

Docker Swarm

from your ssh host, we set up Docker Swarm and define “fra” as the manager node.

ansible fra -m shell -a "docker swarm init"

and we get the join command for the other nodes back

ansible fra -m shell -a "docker swarm init"
fra | CHANGED | rc=0 >>
Swarm initialized: current node (o68mppu1fvwqdn8wkw34brb1u) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1g86i52rp8wmxebx6isbrezpehuq9k67vr7p4261t5naxe5grg-4hdenlg8o36kx753uj2m93z12 45.77.xx.xx:2377

let’s run the join command on all other nodes ..

ansible debian_servers -m shell -a "docker swarm join --token SWMTKN-1-1g86i52rp8wmxebx6isbrezpehuq9k67vr7p4261t5naxe5grg-4hdenlg8o36kx753uj2m93z12 45.77.xx.xx:2377"

and we get the following output. keep in mind that node “fra” is already a manager, so we skip it.

ansible "debian_servers:!fra" -m shell -a "docker swarm join --token SWMTKN-1-1g86i52rp8wmxebx6isbrezpehuq9k67vr7p4261t5naxe5grg-4hdenlg8o36kx753uj2m93z12 45.77.xx.xx:2377"
ewr | CHANGED | rc=0 >>
This node joined a swarm as a worker.
lax | CHANGED | rc=0 >>
This node joined a swarm as a worker.
sgp | CHANGED | rc=0 >>
This node joined a swarm as a worker.
syd | CHANGED | rc=0 >>
This node joined a swarm as a worker.
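instead of copy-pasting the token, you can also fetch it programmatically — a sketch, assuming passwordless root ssh to fra as set up earlier, with 45.77.xx.xx again standing for the redacted manager ip:

```shell
# fetch the current worker join token from the manager ...
TOKEN=$(ssh root@fra "docker swarm join-token -q worker")
# ... and join all remaining nodes in one go
ansible "debian_servers:!fra" -m shell -a "docker swarm join --token ${TOKEN} 45.77.xx.xx:2377"
```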

check status

ansible fra -m shell -a "docker node ls"

all nodes are active, fra is the leader

$ ansible fra -m shell -a "docker node ls"
fra | CHANGED | rc=0 >>
ID                            HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
p6yfvr5x1swb56rpo4xfllg2x     debian-ewr-1    Ready   Active                        23.0.3
o68mppu1fvwqdn8wkw34brb1u *   debian-fra-1    Ready   Active        Leader          23.0.3
plbly3tcythecszkw6tlc8c9o     debian-lax-1    Ready   Active                        23.0.3
8pv4q770bg260rsmxd6qnt8pg     debian-sgp-1    Ready   Active                        23.0.3
iq647j4f5todx812eufjh8n5x     debian-syd-1    Ready   Active                        23.0.3

create service on master

ansible fra -m shell -a "docker service create --name demo alpine:latest ping 45.77.xx.xx"
$ ansible fra -m shell -a "docker service create --name demo alpine:latest ping 45.77.xx.xx"
fra | CHANGED | rc=0 >>
xd1wwoo3aivxhdd0hcv3mmbef
overall progress: 0 out of 1 tasks
1/1:
overall progress: 0 out of 1 tasks
overall progress: 0 out of 1 tasks
overall progress: 0 out of 1 tasks

...
verify: Waiting 1 seconds to verify that tasks are stable...
verify: Waiting 1 seconds to verify that tasks are stable...
verify: Waiting 1 seconds to verify that tasks are stable...
verify: Service converged

Check Node, Service and Service Details

ansible fra -m shell -a "echo; docker node ls; echo; docker service ls; echo; docker service ps demo"
$ ansible fra -m shell -a "echo; docker node ls; echo; docker service ls; echo; docker service ps demo"
fra | CHANGED | rc=0 >>

ID                            HOSTNAME       STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
p6yfvr5x1swb56rpo4xfllg2x     debian-ewr-1   Ready     Active                          23.0.3
o68mppu1fvwqdn8wkw34brb1u *   debian-fra-1   Ready     Active         Leader           23.0.3
plbly3tcythecszkw6tlc8c9o     debian-lax-1   Ready     Active                          23.0.3
8pv4q770bg260rsmxd6qnt8pg     debian-sgp-1   Ready     Active                          23.0.3
iq647j4f5todx812eufjh8n5x     debian-syd-1   Ready     Active                          23.0.3

ID             NAME      MODE         REPLICAS   IMAGE           PORTS
xd1wwoo3aivx   demo      replicated   1/1        alpine:latest

ID             NAME      IMAGE           NODE           DESIRED STATE   CURRENT STATE                ERROR     PORTS
imy3m48ozwyn   demo.1    alpine:latest   debian-fra-1   Running         Running about a minute ago

so, we have a service called “demo”, replicated 1/1 and running on “fra” itself. let’s check its output

as we’re doing a “tail -f”, which follows the log file, we have to run this command via ssh and not via ansible. ansible expects the command to terminate and return a result, while -f follows the log and never stops responding.

$ ssh root@fra "docker service logs -f demo"
demo.1.imy3m48ozwyn@debian-fra-1    | PING 45.77.xx.xx (45.77.xx.xx): 56 data bytes
demo.1.imy3m48ozwyn@debian-fra-1    | 64 bytes from 45.77.xx.xx: seq=0 ttl=64 time=0.184 ms
demo.1.imy3m48ozwyn@debian-fra-1    | 64 bytes from 45.77.xx.xx: seq=1 ttl=64 time=0.094 ms
demo.1.imy3m48ozwyn@debian-fra-1    | 64 bytes from 45.77.xx.xx: seq=2 ttl=64 time=0.125 ms
demo.1.imy3m48ozwyn@debian-fra-1    | 64 bytes from 45.77.xx.xx: seq=3 ttl=64 time=0.135 ms
demo.1.imy3m48ozwyn@debian-fra-1    | 64 bytes from 45.77.xx.xx: seq=4 ttl=64 time=0.108 ms
demo.1.imy3m48ozwyn@debian-fra-1    | 64 bytes from 45.77.xx.xx: seq=5 ttl=64 time=0.453 ms
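if you’d rather stay in ansible, you can drop the -f and read a bounded slice of the log instead — a sketch using the --tail flag, which terminates and therefore plays nicely with ansible:

```shell
# fetch only the last 10 log lines per task, then exit
ansible fra -m shell -a "docker service logs --tail 10 demo"
```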

Scale Up

with docker swarm, we have the possibility to run containers on all machines around the globe. we simply scale our service to multiple replicas, and the swarm scheduler distributes them across the nodes.

ansible fra -m shell -a "docker service scale demo=5"

and check services again …

ansible fra -m shell -a "echo; docker node ls; echo; docker service ls; echo; docker service ps demo"

we now have distributed containers around the globe.

$ ansible fra -m shell -a "echo; docker node ls; echo; docker service ls; echo; docker service ps demo"
fra | CHANGED | rc=0 >>

ID                            HOSTNAME       STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
p6yfvr5x1swb56rpo4xfllg2x     debian-ewr-1   Ready     Active                          23.0.3
o68mppu1fvwqdn8wkw34brb1u *   debian-fra-1   Ready     Active         Leader           23.0.3
plbly3tcythecszkw6tlc8c9o     debian-lax-1   Ready     Active                          23.0.3
8pv4q770bg260rsmxd6qnt8pg     debian-sgp-1   Ready     Active                          23.0.3
iq647j4f5todx812eufjh8n5x     debian-syd-1   Ready     Active                          23.0.3

ID             NAME      MODE         REPLICAS   IMAGE           PORTS
xd1wwoo3aivx   demo      replicated   5/5        alpine:latest

ID             NAME      IMAGE           NODE           DESIRED STATE   CURRENT STATE            ERROR     PORTS
imy3m48ozwyn   demo.1    alpine:latest   debian-fra-1   Running         Running 9 minutes ago
pe87z8jk5vda   demo.2    alpine:latest   debian-sgp-1   Running         Running 13 seconds ago
c5s6z5hzpfqo   demo.3    alpine:latest   debian-ewr-1   Running         Running 16 seconds ago
gbuzxln92b92   demo.4    alpine:latest   debian-lax-1   Running         Running 15 seconds ago
uyj7to38ha8t   demo.5    alpine:latest   debian-syd-1   Running         Running 13 seconds ago
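note that the even spread seen above is up to the swarm scheduler. if you want to guarantee at most one replica per node, docker offers a flag for that — a sketch, assuming docker engine 19.03 or newer:

```shell
# cap the service at one replica per node, so 5 replicas land on 5 different nodes
ansible fra -m shell -a "docker service update --replicas-max-per-node 1 demo"
```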

check the logs again

ssh root@fra "docker service logs -f demo"
$ ssh root@fra "docker service logs -f demo"

demo.5.uyj7to38ha8t@debian-syd-1    | 64 bytes from 45.77.xx.xx: seq=70 ttl=46 time=286.935 ms
demo.2.pe87z8jk5vda@debian-sgp-1    | 64 bytes from 45.77.xx.xx: seq=70 ttl=46 time=151.872 ms
demo.4.gbuzxln92b92@debian-lax-1    | 64 bytes from 45.77.xx.xx: seq=73 ttl=49 time=152.757 ms
demo.3.c5s6z5hzpfqo@debian-ewr-1    | 64 bytes from 45.77.xx.xx: seq=74 ttl=48 time=80.677 ms
demo.1.imy3m48ozwyn@debian-fra-1    | 64 bytes from 45.77.xx.xx: seq=646 ttl=64 time=0.119 ms

and we see 5 containers pinging the ip address of node “fra”.

finally check google / quad9

let’s check google or any other ip address

# destroy service
ansible fra -m shell -a "docker service rm demo"

# create service
ansible fra -m shell -a "docker service create --name demo alpine:latest ping 8.8.8.8"

# scale service
ansible fra -m shell -a "docker service scale demo=5"

# show service
ansible fra -m shell -a "echo; docker service ps demo"

# show logs
ssh root@fra "docker service logs -f demo"
demo.1.y59fzpgne9wx@debian-fra-1    | 64 bytes from 8.8.8.8: seq=38 ttl=118 time=0.663 ms
demo.3.irn016egntkl@debian-syd-1    | 64 bytes from 8.8.8.8: seq=28 ttl=117 time=0.638 ms
demo.4.x3a7dsig3gcv@debian-ewr-1    | 64 bytes from 8.8.8.8: seq=29 ttl=117 time=1.169 ms
demo.2.ms0otryfay8e@debian-sgp-1    | 64 bytes from 8.8.8.8: seq=29 ttl=117 time=1.180 ms
demo.5.uac1kyykdmyz@debian-lax-1    | 64 bytes from 8.8.8.8: seq=29 ttl=58 time=0.414 ms

Cleanup

once you’ve played around enough, just switch back to your terraform folder (cd ../) and run the following:

terraform destroy
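you can double-check that no instances survived (and keep generating costs) with the vultr cli — assuming your VULTR_API_KEY is still exported:

```shell
# list remaining instances; after a successful destroy this should come back empty
vultr-cli instance list
```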

Happy Hacking! btw, the cost was $0.02 per vm -> 10 cents in total :)

