libcluster v2.2.0 Cluster.Strategy.Kubernetes

This clustering strategy works by loading all pods in the current Kubernetes namespace that match the configured label selector. It fetches the addresses of all matching pods and attempts to connect to them. It continually monitors and updates its connections every 5s by default (configurable via `polling_interval`).
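For instance, with the `kubernetes_selector: "app=myapp"` used in the example configuration below, the pods that should join the cluster would carry a matching label. A minimal sketch (the label key and value are illustrative):

```yaml
# deployment.yaml (sketch)
# The pod template carries a label matching the selector "app=myapp",
# so the strategy's API query returns these pods.
spec:
  template:
    metadata:
      labels:
        app: myapp
```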

It assumes that all nodes share a base name, are using longnames, and are unique based on their FQDN rather than the base hostname. In other words, in the longname `<basename>@<domain>`, `basename` would be the value configured in `kubernetes_node_basename`.

`domain` would be the value configured in `mode` and can be either of type `:ip` (the pod's IP, which can be obtained by setting an env variable to `status.podIP`) or `:dns`, which is the pod's internal A record. This A record has the format `<ip-with-dashes>.<namespace>.pod.cluster.local`, e.g. `1-2-3-4.default.pod.cluster.local`.

Getting `:dns` to work requires a bit of fiddling in the container's CMD (in an app running as a Distillery release), for example:

```yaml
# deployment.yaml
command: ["sh", "-c"]
# Converts the pod IP (e.g. 1.2.3.4) into the dashed A-record form (1-2-3-4)
# before starting the release.
args: ["export POD_A_RECORD=$(echo $POD_IP | sed 's/\\./-/g') && /app/bin/app foreground"]
```

```
# vm.args
-name app@<%= "${POD_A_RECORD}.${NAMESPACE}.pod.cluster.local" %>
```

The benefit of using `:dns` over `:ip` is that you can establish a remote shell (as well as run observer) by using `kubectl port-forward` in combination with some entries in `/etc/hosts`. `mode` defaults to `:ip`.

An example configuration is below:

```elixir
config :libcluster,
  topologies: [
    k8s_example: [
      strategy: Elixir.Cluster.Strategy.Kubernetes,
      config: [
        mode: :ip,
        kubernetes_node_basename: "myapp",
        kubernetes_selector: "app=myapp",
        polling_interval: 10_000]]]
```
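The `POD_IP` and `NAMESPACE` environment variables referenced in the snippets above are not set automatically; one common way to provide them is the Kubernetes downward API. A minimal sketch (the variable names simply mirror those used above):

```yaml
# deployment.yaml (sketch)
# Expose the pod IP and namespace to the container so the CMD above can
# derive POD_A_RECORD and vm.args can reference NAMESPACE.
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```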

Summary

Functions

start_link(opts)

Callback implementation for Cluster.Strategy.start_link/1.