It's time to scale our hello-world job again, but this time we're going to do so with the help of a load balancer called Traefik. Traefik is an open-source edge router written in Go with first-party Consul integration. Please ensure that you've got version 2.5.x installed and in your path.
This workshop is part of a series. You can always start at the beginning.
Install Traefik
I was able to fetch the binary via Homebrew on macOS, but you can always fetch the latest binary for your platform from their releases page.
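On macOS that's a one-liner, with `traefik version` as a quick sanity check (your output will differ by platform and release):

$ brew install traefik
$ traefik version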
Add a Traefik config template to our vars file
Our Traefik config is a minimal TOML file with some consul-template syntax.
traefik-config-template = <<-EOF
[entryPoints.http]
address = ":{{ env "NOMAD_ALLOC_PORT_http" }}"
[entryPoints.traefik]
address = ":{{ env "NOMAD_ALLOC_PORT_dashboard" }}"
[api]
dashboard = true
insecure = true
[providers.consulCatalog]
prefix = "hello-world-lb"
exposedByDefault = false
[providers.consulCatalog.endpoint]
address = "{{ env "CONSUL_HTTP_ADDR" }}"
scheme = "http"
EOF
- Our first two declarations are HTTP `entryPoints`, which are roughly analogous to `http { server {` blocks in NGINX parlance. The only attribute we need to template is the `<hostname>:<port>`. The first entry point is for our greeter load-balancer and the second is for the Traefik dashboard (not required). For both of these we can rely on the Nomad environment variables for two new ports we're going to add to our job specification; these will be called `http` and `dashboard`. Again, we prefix these with `NOMAD_ALLOC_PORT_` and Nomad will do the rest for us.
- The next declaration is `api`. Here we're just going to enable the `dashboard` and disable `tls`.
- The final declarations enable and configure the `consulCatalog` provider. There are two attributes in the first declaration. `prefix` configures the provider to exclusively consider Consul catalog services whose tags begin with `hello-world-lb` (rather than the default `traefik`). `exposedByDefault = false` configures the provider to route only to Consul services explicitly tagged with `hello-world-lb.enable=true`. The last declaration instructs the provider on how to connect to Consul. Because Nomad and Consul are already tightly integrated, we can template `address` with the `CONSUL_HTTP_ADDR` env var. As for `scheme`, since we're running Consul in `dev` mode this is `http` (see the rendered example after this list for how these templates resolve).
- Ensure that you add a newline at the end of this file, otherwise Nomad will be unable to parse it.
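To make the templating concrete, here's roughly what the rendered traefik.toml should look like at runtime. This is an illustrative sketch rather than captured output: the ports come from the static ports we declare below (8080 and 8081), and the Consul address assumes a local dev-mode agent at 127.0.0.1:8500.

# rendered traefik.toml (illustrative values)
[entryPoints.http]
address = ":8080"

[entryPoints.traefik]
address = ":8081"

[api]
dashboard = true
insecure = true

[providers.consulCatalog]
prefix = "hello-world-lb"
exposedByDefault = false

[providers.consulCatalog.endpoint]
address = "127.0.0.1:8500"  # assumed dev-mode Consul address
scheme = "http"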
Declare a variable for our Traefik config template in our job specification
Near the top, just below our existing `config-yml-template` variable declaration, add the following:
variable "traefik-config-template" {
type = string
}
Add a new group for our load-balancer above greeter
Our Traefik load-balancer will route requests on port `8080` to any healthy greeter allocation. Traefik will also expose a dashboard on port `8081`. We've added static ports for both the load-balancer (`http`) and the dashboard (`dashboard`) under the `network` stanza. We've also added some TCP and HTTP readiness checks that reference these ports in our new `hello-world-lb` Consul service.
group "load-balancer" {
count = 1
network {
port "http" {
static = 8080
}
port "dashboard" {
static = 8081
}
}
service {
name = "hello-world-lb"
port = "http"
check {
name = "ready-tcp"
type = "tcp"
port = "http"
interval = "3s"
timeout = "2s"
}
check {
name = "ready-http"
type = "http"
port = "http"
path = "/"
interval = "3s"
timeout = "2s"
}
check {
name = "ready-tcp"
type = "tcp"
port = "dashboard"
interval = "3s"
timeout = "2s"
}
check {
name = "ready-http"
type = "http"
port = "dashboard"
path = "/"
interval = "3s"
timeout = "2s"
}
}
task "traefik" {
driver = "raw_exec"
config {
command = "traefik"
args = [
"--configFile=${NOMAD_ALLOC_DIR}/traefik.toml",
]
}
template {
data = var.traefik-config-template
destination = "${NOMAD_ALLOC_DIR}/traefik.toml"
change_mode = "restart"
}
}
}
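One caveat before you plan this: the `raw_exec` driver is disabled in Nomad by default. If you've been following the series your agent is likely already set up for it, but if not, a client plugin stanza along these lines (a sketch to merge into your existing agent configuration) enables it:

plugin "raw_exec" {
  config {
    # raw_exec runs tasks as plain, unisolated host processes: convenient
    # for a local workshop, not something to enable casually in production.
    enabled = true
  }
}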
Lastly, add some tags to the hello-world-greeter service
Under the greeter group you should see the `service` stanza. Adjust yours to include the tags from the Traefik config.
service {
  name = "hello-world-greeter"
  port = "http"
  tags = [
    "hello-world-lb.enable=true",
    "hello-world-lb.http.routers.http.rule=Path(`/`)",
  ]
}
Check the plan output for our updated hello-world job
$ nomad job plan -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go
+/- Job: "hello-world"
+/- Task Group: "greeter" (2 in-place update)
+/- Service {
AddressMode: "auto"
EnableTagOverride: "false"
Name: "hello-world-greeter"
Namespace: "default"
OnUpdate: "require_healthy"
PortLabel: "http"
TaskName: ""
+ Tags {
+ Tags: "hello-world-lb.enable=true"
+ Tags: "hello-world-lb.http.routers.http.rule=Path(`/`)"
}
}
Task: "greet"
+ Task Group: "load-balancer" (1 create)
+ Count: "1" (forces create)
+ RestartPolicy {
+ Attempts: "2"
+ Delay: "15000000000"
+ Interval: "1800000000000"
+ Mode: "fail"
}
+ ReschedulePolicy {
+ Attempts: "0"
+ Delay: "30000000000"
+ DelayFunction: "exponential"
+ Interval: "0"
+ MaxDelay: "3600000000000"
+ Unlimited: "true"
}
+ EphemeralDisk {
+ Migrate: "false"
+ SizeMB: "300"
+ Sticky: "false"
}
+ Update {
+ AutoPromote: "false"
+ AutoRevert: "false"
+ Canary: "0"
+ HealthCheck: "checks"
+ HealthyDeadline: "300000000000"
+ MaxParallel: "1"
+ MinHealthyTime: "10000000000"
+ ProgressDeadline: "600000000000"
}
+ Network {
Hostname: ""
+ MBits: "0"
Mode: ""
+ Static Port {
+ HostNetwork: "default"
+ Label: "dashboard"
+ To: "0"
+ Value: "8081"
}
+ Static Port {
+ HostNetwork: "default"
+ Label: "http"
+ To: "0"
+ Value: "8080"
}
}
+ Service {
+ AddressMode: "auto"
+ EnableTagOverride: "false"
+ Name: "hello-world-lb"
+ Namespace: "default"
+ OnUpdate: "require_healthy"
+ PortLabel: "http"
TaskName: ""
+ Check {
AddressMode: ""
Body: ""
Command: ""
+ Expose: "false"
+ FailuresBeforeCritical: "0"
GRPCService: ""
+ GRPCUseTLS: "false"
InitialStatus: ""
+ Interval: "3000000000"
Method: ""
+ Name: "ready-http"
+ OnUpdate: "require_healthy"
+ Path: "/"
+ PortLabel: "dashboard"
Protocol: ""
+ SuccessBeforePassing: "0"
+ TLSSkipVerify: "false"
TaskName: ""
+ Timeout: "2000000000"
+ Type: "http"
}
+ Check {
AddressMode: ""
Body: ""
Command: ""
+ Expose: "false"
+ FailuresBeforeCritical: "0"
GRPCService: ""
+ GRPCUseTLS: "false"
InitialStatus: ""
+ Interval: "3000000000"
Method: ""
+ Name: "ready-tcp"
+ OnUpdate: "require_healthy"
Path: ""
+ PortLabel: "dashboard"
Protocol: ""
+ SuccessBeforePassing: "0"
+ TLSSkipVerify: "false"
TaskName: ""
+ Timeout: "2000000000"
+ Type: "tcp"
}
}
+ Task: "traefik" (forces create)
+ Driver: "raw_exec"
+ KillTimeout: "5000000000"
+ Leader: "false"
+ ShutdownDelay: "0"
+ Config {
+ args[0]: "--configFile=${NOMAD_ALLOC_DIR}/traefik.toml"
+ command: "traefik"
}
+ Resources {
+ CPU: "100"
+ Cores: "0"
+ DiskMB: "0"
+ IOPS: "0"
+ MemoryMB: "300"
+ MemoryMaxMB: "0"
}
+ LogConfig {
+ MaxFileSizeMB: "10"
+ MaxFiles: "10"
}
+ Template {
+ ChangeMode: "restart"
ChangeSignal: ""
+ DestPath: "${NOMAD_ALLOC_DIR}/traefik.toml"
+ EmbeddedTmpl: "[entryPoints.http]\naddress = \":{{ env \"NOMAD_ALLOC_PORT_http\" }}\"\n \n[entryPoints.traefik]\naddress = \":{{ env \"NOMAD_ALLOC_PORT_dashboard\" }}\"\n \n[api]\ndashboard = true\ninsecure = true\n \n[providers.consulCatalog]\nprefix = \"hello-world-lb\"\nexposedByDefault = false\n \n[providers.consulCatalog.endpoint]\naddress = \"{{ env \"CONSUL_HTTP_ADDR\" }}\"\nscheme = \"http\"\n"
+ Envvars: "false"
+ LeftDelim: "{{"
+ Perms: "0644"
+ RightDelim: "}}"
SourcePath: ""
+ Splay: "5000000000"
+ VaultGrace: "0"
}
Scheduler dry-run:
- All tasks successfully allocated.
Alright, this looks like it should work.
Run our updated hello-world job
$ nomad job run -verbose -var-file=./1_HELLO_WORLD/vars.go ./1_HELLO_WORLD/job.go
Browse to our new Traefik load-balancer
- Open http://localhost:8080 and ensure that you're being greeted
- Open http://localhost:8081 and ensure that the dashboard loads successfully
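If you prefer the terminal, the same two checks can be done with curl. The `/dashboard/` path is Traefik's default when the API is served in `insecure` mode; here we just confirm it returns a 200:

$ curl -s http://localhost:8080/
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8081/dashboard/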
Inspect the Consul-provided backend configuration via the Traefik dashboard
- Open: http://localhost:8081/dashboard/#/http/services/hello-world-greeter@consulcatalog
- You should find your 2 existing greeter allocations listed by their full `<hostname>:<port>` address (see the quick CLI check after this list)
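As a rough CLI alternative to the dashboard, a handful of repeated requests through the load-balancer should be spread across both allocations. What each response body looks like depends on your greeter, so treat this as a sketch:

$ for i in $(seq 1 4); do curl -s http://localhost:8080/; echo; done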
Perform some scaling of our greeter allocations
It's time to scale our greeter allocations again, except this time we have a load-balancer that will reconfigure itself whenever the count changes.
- You can scale allocations via the job specification, but you can also temporarily scale a given job's group via the Nomad CLI:
$ nomad job scale "hello-world" "greeter" 3
- Refresh: http://localhost:8081/dashboard/#/http/services/hello-world-greeter@consulcatalog
- You should see 3 greeter allocations
- You can also temporarily scale a given job's group back down via the Nomad CLI:
$ nomad job scale "hello-world" "greeter" 2
- Refresh: http://localhost:8081/dashboard/#/http/services/hello-world-greeter@consulcatalog
- You should see 2 greeter allocations like before (the snippet after this list shows one more way to confirm via Consul)
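If you'd like to confirm what Traefik's Consul catalog provider is seeing at each step, you can also query Consul's health API directly for passing greeter instances. This assumes a dev-mode agent on Consul's default HTTP port 8500; jq is optional here and simply counts the instances:

$ curl -s "http://localhost:8500/v1/health/service/hello-world-greeter?passing" | jq length

After scaling up this should print 3, and 2 again after scaling back down.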