Nomad Workshop 2 - Scaling Allocations

It's time to scale our greeter allocations. Thankfully Nomad's dynamic port allocation and Consul's templating are going to make this operation pretty painless.

Feeling a little lost?
This workshop is part of a series. You can always start at the beginning.

It's best if you follow the documentation here to update your job specification at 1_HELLO_WORLD/job.go and your vars file at 1_HELLO_WORLD/vars.go, but if you get lost you can see the final product under 2_HELLO_SCALING/job.go and 2_HELLO_SCALING/vars.go.

Increment the greeter count in our job specification

Edit job >> group "greeter" >> count in our job specification from:

count = 1

to:

count = 2
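
For context, the surrounding group stanza should now read something like this (a sketch; the network, service, and task stanzas from the workshop job are elided):

group "greeter" {
  # Ask the scheduler to place two identical greeter allocations.
  count = 2

  # network, service, and task stanzas unchanged ...
}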

Check the plan output for the hello-world job

$ nomad job plan -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go
+/- Job: "hello-world"
+/- Task Group: "greeter" (1 create, 1 in-place update)
  +/- Count: "1" => "2" (forces create)
      Task: "server"

Scheduler dry-run:
- WARNING: Failed to place all allocations.
  Task Group "greeter" (failed to place 1 allocation):
    * Resources exhausted on 1 nodes
    * Dimension "network: reserved port collision http=1234" exhausted on 1 nodes

It looks like the static port of 1234 is the problem: only one allocation per node can reserve it, so the second placement fails with a port collision. Not to worry, though. We can update our job specification to let the Nomad Scheduler pick a port for each of our greeter allocations to listen on.

Update the job to make port selection dynamic

Under job >> group "greeter" >> network >> port we can remove our static port assignment of 1234 and leave empty curly braces {}. This instructs the Nomad Scheduler to dynamically assign a port for each allocation.

Our existing lines:

port "http" {
  static = 1234
}

Our new line:

port "http" {}

Update our greet config file template to use a dynamic port

By replacing 1234 in our greet config template with the NOMAD_ALLOC_PORT_http environment variable, we let Nomad keep our config file up to date with whatever port each allocation is assigned.

We expect the environment variable to be NOMAD_ALLOC_PORT_http because the network port we declare at job >> group "greeter" >> network >> port is called http. If we had called it my-special-port, we would use NOMAD_ALLOC_PORT_my-special-port.

Our existing line:

port: 1234

Our new line:

port: {{ env "NOMAD_ALLOC_PORT_http" }}

For more info on Nomad Runtime Environment Variables, see these docs.
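
For reference, the whole template stanza should now look roughly like this (a sketch reconstructed from the plan diff below; depending on your Nomad version you may need to write $${NOMAD_ALLOC_DIR} to keep HCL2 from interpolating it at parse time):

template {
  # Rendered into the allocation directory; change_mode defaults to
  # "restart", so the task restarts whenever the rendered file changes.
  destination = "${NOMAD_ALLOC_DIR}/config.yml"
  data        = <<-EOT
    ---
    name: "Samantha"
    port: {{ env "NOMAD_ALLOC_PORT_http" }}
  EOT
}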

Check the plan output of the updated hello-world job

$ nomad job plan -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go
+/- Job: "hello-world"
+/- Task Group: "greeter" (1 create, 1 ignore)
  +/- Count: "1" => "2" (forces create)
  +   Network {
        Hostname: ""
      + MBits:    "0"
        Mode:     ""
      + Dynamic Port {
        + HostNetwork: "default"
        + Label:       "http"
        + To:          "0"
        }
      }
  -   Network {
        Hostname: ""
      - MBits:    "0"
        Mode:     ""
      - Static Port {
        - HostNetwork: "default"
        - Label:       "http"
        - To:          "0"
        - Value:       "1234"
        }
      }
  +/- Task: "greet" (forces create/destroy update)
    +/- Template {
          ChangeMode:   "restart"
          ChangeSignal: ""
          DestPath:     "${NOMAD_ALLOC_DIR}/config.yml"
      +/- EmbeddedTmpl: "---\nname: \"Samantha\"\nport: 1234\n\n" => "---\nname: \"Samantha\"\nport: {{ env \"NOMAD_ALLOC_PORT_http\" }}\n\n"
          Envvars:      "false"
          LeftDelim:    "{{"
          Perms:        "0644"
          RightDelim:   "}}"
          SourcePath:   ""
          Splay:        "5000000000"
          VaultGrace:   "0"
        }

Scheduler dry-run:
- All tasks successfully allocated.

Okay, this looks as though it will work!

Run the updated hello-world job

$ nomad job run -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go

Fetch the ports of our 2 new greeter allocations

There are a few ways that we can fetch the ports that Nomad assigned to our greeter allocations.

via the Nomad GUI

  1. Open: http://localhost:4646/ui/jobs/hello-world
  2. Scroll down to the Allocations table
  3. Open each of the Allocations where Status is running
  4. Scroll down to the Ports table, and note the value for http in the Host Address column

via the Nomad CLI

  1. Run nomad job status hello-world and note the ID for each allocation with running in the Status column:
    $ nomad job status hello-world
    ID            = hello-world
    Name          = hello-world
    Submit Date   = 2022-01-06T16:57:57-08:00
    Type          = service
    Priority      = 50
    Datacenters   = dev-general
    Namespace     = default
    Status        = running
    Periodic      = false
    Parameterized = false
    
    Summary
    Task Group  Queued  Starting  Running  Failed  Complete  Lost
    greeter     0       0         2        0       1         0
    
    Latest Deployment
    ID          = a11c023a
    Status      = successful
    Description = Deployment completed successfully
    
    Deployed
    Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
    greeter     2        2       2        0          2022-01-06T17:08:24-08:00
    
    Allocations
    ID        Node ID   Task Group  Version  Desired  Status    Created    Modified
    4ed1c285  e6e7b140  greeter     1        run      running   17s ago    4s ago
    ef0ef9b3  e6e7b140  greeter     1        run      running   31s ago    18s ago
    aa3a7834  e6e7b140  greeter     0        stop     complete  14m9s ago  16s ago
    
  2. Run nomad alloc status <allocation-id> for each alloc ID:
    $ nomad alloc status 4ed1c285
    ID                  = 4ed1c285-e923-d627-7cc2-d392147eca2f
    Eval ID             = e6b817a5
    Name                = hello-world.greeter[0]
    Node ID             = e6e7b140
    Node Name           = treepie.local
    Job ID              = hello-world
    Job Version         = 1
    Client Status       = running
    Client Description  = Tasks are running
    Desired Status      = run
    Desired Description = <none>
    Created             = 58s ago
    Modified            = 45s ago
    Deployment ID       = a11c023a
    Deployment Health   = healthy
    
    Allocation Addresses
    Label  Dynamic  Address
    *http  yes      127.0.0.1:31623
    
    Task "greet" is "running"
    Task Resources
    CPU        Memory          Disk     Addresses
    0/100 MHz  49 MiB/300 MiB  300 MiB
    
    Task Events:
    Started At     = 2022-01-07T00:58:12Z
    Finished At    = N/A
    Total Restarts = 0
    Last Restart   = N/A
    
    Recent Events:
    Time                       Type        Description
    2022-01-06T16:58:12-08:00  Started     Task started by client
    2022-01-06T16:58:12-08:00  Task Setup  Building Task Directory
    2022-01-06T16:58:12-08:00  Received    Task received by client
    
  3. Under Allocation Addresses we can see 127.0.0.1:31623 is the address for this allocation
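
If you'd rather script this step, nomad alloc status can also emit the allocation as JSON, where the dynamic ports live under the shared network resources. The exact JSON path can vary between Nomad versions, so treat this jq query as a sketch:

$ nomad alloc status -json 4ed1c285 | jq '.AllocatedResources.Shared.Networks[0].DynamicPorts'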

via the Consul web UI

In our job specification you'll see that we also registered our greeter allocations with the Consul Catalog as a Service called hello-world-greeter. This means that we can also grab these addresses and ports via the Consul web UI:

  1. Open http://localhost:8500/ui/dev-general/services/hello-world-greeter/instances
  2. On the right-hand side of each entry you can find the complete IP address and port for each of our hello-world-greeter allocations.
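
If you prefer the command line, the same information is available from Consul's standard HTTP catalog API (the jq filter here is a sketch):

$ curl -s http://localhost:8500/v1/catalog/service/hello-world-greeter | jq '.[] | {address: .ServiceAddress, port: .ServicePort}'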

via the Consul DNS endpoint

But how would another service locate these hello-world-greeter allocations? Sure, you could integrate a Consul client into every service that wants to connect to a hello-world-greeter, but there's a simpler first approach: use the DNS endpoint that Consul exposes by default to fetch these addresses and ports in the form of an SRV record.

$ dig @127.0.0.1 -p 8600 hello-world-greeter.service.dev-general.consul. SRV

; <<>> DiG 9.10.6 <<>> @127.0.0.1 -p 8600 hello-world-greeter.service.dev-general.consul. SRV
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11398
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 5
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;hello-world-greeter.service.dev-general.consul.	IN SRV

;; ANSWER SECTION:
hello-world-greeter.service.dev-general.consul.	0 IN SRV 1 1 28098 7f000001.addr.dev-general.consul.
hello-world-greeter.service.dev-general.consul.	0 IN SRV 1 1 31623 7f000001.addr.dev-general.consul.

;; ADDITIONAL SECTION:
7f000001.addr.dev-general.consul. 0 IN	A	127.0.0.1
treepie.local.node.dev-general.consul. 0 IN TXT	"consul-network-segment="
7f000001.addr.dev-general.consul. 0 IN	A	127.0.0.1
treepie.local.node.dev-general.consul. 0 IN TXT	"consul-network-segment="

;; Query time: 0 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Thu Jan 06 17:01:41 PST 2022
;; MSG SIZE  rcvd: 302

Given the output of this SRV record you should be able to browse to http://localhost:28098 or http://localhost:31623 and be greeted.
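
If you only want the SRV records themselves, dig's +short flag trims the output down to priority, weight, port, and target (your dynamic ports will differ):

$ dig @127.0.0.1 -p 8600 +short hello-world-greeter.service.dev-general.consul. SRV
1 1 28098 7f000001.addr.dev-general.consul.
1 1 31623 7f000001.addr.dev-general.consul.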

If you follow these docs you should also be able to add Consul to your list of resolvers.
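
On macOS, for example, this boils down to a resolver file that sends *.consul queries to the local Consul agent (a sketch based on Consul's DNS forwarding docs):

# /etc/resolver/consul
nameserver 127.0.0.1
port 8600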

Assuming Consul is set as one of my resolvers, I should also be able to browse to either of the following:

  - http://hello-world-greeter.service.dev-general.consul:28098
  - http://hello-world-greeter.service.dev-general.consul:31623

Ready to learn more about Consul?

Continue on to Nomad Workshop 3 - Storing Configuration in Consul.
