Akd Walkthrough
Prelude
The development experience in Elixir is amazing, but deploying to production can still be difficult. Akd was developed to improve this process, and once configured, it makes deployment as easy as mix akd.deploy.
In this walkthrough we will cover the basics of Elixir deployments and how we can make them easier with Annkissam's deployment package: akd.
Akd, in its purest form, is a way of executing a list of operations on a remote (or local) machine. Akd provides a DSL that allows developers to define a pipeline consisting of a set of these operations. It also allows you to provide corresponding remedial operations in the event that one or more of the pipeline operations fails. Akd was inspired by the Ruby gem capistrano, and should feel somewhat familiar to users experienced with that gem.
Akd's primary goal is twofold:
- to provide developers with the ability to easily compose a series of deployment operations using the Elixir programming language, and
- to standardize the way in which Elixir application deployments (using tools like distillery or docker) are performed.
We will talk more about Akd as we proceed through this walkthrough.
How does Akd work?
Hooks
Hooks form the core functionality of Akd. A Hook is an abstraction around a set of sequential operations. These operations include the commands to be executed within the contexts of both (i) a pseudo-terminal and (ii) the desired host.
Based on its purpose, each of these operations falls into one of the following contexts: main, ensure, or rollback.
The main context includes the set of operations that denote the intended functionality of a hook. For example, members of the main set might include operations that fetch source code, run migrations, etc.
Next, the ensure set includes those operations which are to be run after the main set of operations succeeds. These ensure operations might include tasks like tidying up data, deleting temporary files, etc.
Finally, the rollback set includes those tasks that should be executed in the event the deployment process fails. The rollback set of operations is meant to negate the effects of the main operations and should include tasks like rolling back a migration, reverting source code, etc.
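To make this concrete, here is a minimal sketch of what the three contexts look like when written with Akd's form_hook DSL (the same DSL used by the custom hook we generate later in this walkthrough). The commands here are purely illustrative, and opts and destination are assumed to be resolved by the enclosing hook module, as in the generated hooks shown later:

form_hook opts do
  # main: the operations the hook exists to perform
  main "mix ecto.migrate", destination
  # ensure: cleanup that runs once the main operations succeed
  ensure "rm -rf /tmp/build_artifacts", destination
  # rollback: negates the main operations if the deployment fails
  rollback "mix ecto.rollback", destination
end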
Deployments
A Deployment is a structure consisting of a collection of Hooks, the mix_env specification, and other information about where to build and publish an app (a sketch of these parameters appears after the phase list below). By default, in order to help better organize deployment logic, a Deployment definition is broken out into multiple phases (pipelines of hooks). Akd works via the sequential execution of these phases, along with corresponding transformations of the Deployment struct itself. The default phases generated by Akd are:
- fetch: fetch the source code which corresponds to a release (deployed app). This can be done using git, svn, or just scp.
- init: initialize and configure the libraries required for the rest of the deployment process. For an Elixir app, this can mean configuring distillery or docker.
- build: produce a deployable entity. It can be a binary produced by distillery, the source code itself, or even a docker image.
- publish: publish/deploy the app to the desired destination. This can be done with scp, cp, etc.
- stop: stop a previously running instance of the app.
- start: start a newly deployed instance of the app.
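These phase names are just labels for pipelines of hooks; what actually travels through them is the Deployment itself. Roughly speaking, the parameters that get turned into a Deployment struct look like the map below. This is a hedged sketch based on the parameterize/1 function in the task we generate later; the values are placeholders:

%{
  name: "my_app",                          # name of the built/deployed release
  mix_env: "prod",                         # environment to build the release with
  build_at: "user@host:~/path/to/build",   # where the release gets built
  publish_to: "user@host:~/path/to/app",   # where the release gets published
  vsn: "0.1.0",                            # version being deployed
  hooks: []                                # hooks accumulated by the pipelines
}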
These phases are generated by Akd by default, but they are completely optional. Their sole purpose is to help address common deployment use cases; Akd isn't intended to force structure where it isn't needed, and it was designed to work well for applications that require more (or less) robust deployment protocols. As such, it is very easy to define your own phases using Akd's pipeline DSL (which will be covered in more detail in future walkthroughs). Even Akd's own base hook definitions are built upon this DSL, which allows developers to read, modify, and duplicate them in whatever way best suits their needs.
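As a taste of that DSL, here is a hedged sketch of how a custom phase could be defined and slotted into the deploy pipeline. The :backup phase and the Akd.Custom.BackupDb hook are hypothetical names used purely for illustration; the pipeline, hook, and pipe_through macros are the same ones used by the generated task shown later in this walkthrough:

pipeline :backup do
  # A hypothetical hook; any module built with Akd.Hook and form_hook would work here
  hook Akd.Custom.BackupDb
end

pipeline :deploy do
  pipe_through :fetch
  pipe_through :backup   # custom phase slotted in among the defaults
  pipe_through :init
  pipe_through :build
  pipe_through :publish
end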
Taking Akd for a Spin: an example Phoenix App
For this walkthrough, we will use a simple Phoenix application, akd_example. The app runs on Elixir 1.6.4 and Erlang 20.3.4. It uses Phoenix 1.3.2, with Ecto and Postgrex. Additionally, like any ordinary Phoenix app, it is configured to use brunch and npm by default.
This Phoenix app consists of one table/schema: products. We will be deploying this app to a server with the IP 192.168.xx.xx.
The simplest way to run a Phoenix app is to get all the dependencies using mix deps.get and run it using mix phx.server. This is great for development, but the recommended way to run in production is through a built release. At Annkissam, our preferred way to release an Elixir application is by using distillery. Distillery allows us to build and release an Elixir application as an executable binary, so the source code of the application doesn't need to exist on the server being deployed to. However, distillery leaves it up to the developer to get that binary into the hosting environment.
The following deployment example is centered around setting up akd and distillery to work together to deploy a Phoenix app.
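For reference, Distillery keeps its release configuration in rel/config.exs, normally produced by running mix release.init; in this walkthrough, the :init phase's Akd.Init.Distillery hook handles Distillery initialization for us. A minimal sketch of that file, assuming a Distillery 1.x setup for our example app, might look like:

# rel/config.exs -- a hedged sketch of a minimal Distillery 1.x configuration
use Mix.Releases.Config,
  default_release: :default,
  default_environment: :prod

environment :prod do
  set include_erts: true    # bundle the Erlang runtime so the host doesn't need it
  set include_src: false
end

release :akd_example do
  set version: current_version(:akd_example)
end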
Setting up the project with Akd and Distillery
One thing to point out here is that akd generates a mix task which allows us to deploy an application, but that mix task has to be part of an application. The simplest thing to do is to add akd to the app which you are deploying, but then you would need akd as a dependency in all environments, since the mix task it creates must be compiled in every environment.
I recommend creating a new app in the root folder named <app_name>_deployer, whose sole responsibility is to deploy the app, <app_name>. This way, we don't affect the dependencies of the original app, and we can also use <app_name>_deployer to deploy applications other than <app_name>.
So, let's create a new mix project in the root folder of the app akd_example by using the command $ mix new akd_example_deployer, and add akd as a dependency in its mix.exs:
# akd_example_deployer/mix.exs
defp deps do
  [{:akd, "~> 0.2.1", only: :dev, runtime: false}]
end
Once we run $ mix deps.get from akd_example_deployer's root directory, we will have access to akd's mix tasks.
Running $ mix akd.gen.task on its own prints out the usage of the task. To keep things simple, let's use all the default hooks and run the task as $ mix akd.gen.task Deploy --with-phx. The --with-phx flag instructs Akd to generate hooks for use with a Phoenix app. This will generate a mix task by creating the file akd_example_deployer/lib/mix/tasks/akd/deploy.ex containing:
# akd_example_deployer/lib/mix/tasks/akd/deploy.ex
defmodule Mix.Tasks.Akd.Deploy do
  @moduledoc """
  This task was generated by Akd
  TODO: Add more documentation
  """

  use Akd.Mix.Task

  # This task comes with the following switches, but add more if needed
  # For example: :client (for your apps)
  @switches [name: :string, build_at: :string, env: :string,
             publish_to: :string, vsn: :string]

  @aliases [n: :name, b: :build_at, e: :env, p: :publish_to, v: :vsn]

  # Change default values for all the switches
  @defaults [name: "node", build_at: {:local, "."}, env: "prod",
             publish_to: "user@host:~/path/to/dir",
             vsn: Mix.Project.config[:version]]

  pipeline :fetch do
    hook Akd.Fetch.Git
  end

  pipeline :init do
    hook Akd.Init.Distillery
  end

  pipeline :build do
    hook Akd.Build.Distillery

    hook Akd.Build.Phoenix.Npm,
      package: "path/to/assets_folder", # web_app/assets
      cmd_envs: [] # Add build time system variables

    hook Akd.Build.Phoenix.Brunch,
      config: "path/to/assets_folder", # web_app/assets
      brunch: "./node_modules/brunch/bin/brunch", # Path to brunch binary from assets folder
      cmd_envs: [] # Add build time system variables
  end

  pipeline :publish do
    hook Akd.Stop.Distillery, ignore_failure: true
    hook Akd.Publish.Distillery
    hook Akd.Start.Distillery
  end

  pipeline :deploy do
    pipe_through :fetch
    pipe_through :init
    pipe_through :build
    pipe_through :publish
  end

  def run(argv) do
    {parsed, _, _} =
      OptionParser.parse(argv, switches: @switches, aliases: @aliases)

    execute :deploy, with: parameterize(parsed)
  end

  # This function translates a list of options into parameters that can be
  # converted to an Akd.Deployment struct
  def parameterize(opts) do
    opts = uniq_merge(opts, @defaults)

    %{mix_env: opts[:env], build_at: opts[:build_at], hooks: [],
      publish_to: opts[:publish_to], name: opts[:name], vsn: opts[:vsn]}
  end

  # This function takes two keyword lists and merges them keeping the keys
  # unique. If there are multiple values for a key, it takes the value from
  # the first value of keyword1 corresponding to that key.
  defp uniq_merge(keyword1, keyword2) do
    keyword2
    |> Keyword.merge(keyword1)
    |> Keyword.new()
  end
end
The file will have a lot of autogenerated code, defaults, switches and aliases.
The task has switches for:
- name: the name of the built/deployed binary (this is usually the same as the app name).
- build_at: the destination where the app/node will be built/released. This is usually given in the format user@ip:path/to/app.
- publish_to: the destination where the app/node will be published. This is usually given in the format user@ip:path/to/app.
- env: the mix_env/distillery_env of the app being released.
- vsn: the version of the app being deployed.
Here’s an example usage of the generated task:
$ mix akd.deploy -n app_name -b user@192.168.x.xxx -p user@192.168.x.xxx -e prod -v 0.1.0
NOTE: This command must be run from the root directory of the deployer app.
But that won't work just yet: we haven't finished setting up the task, and it's verbose to enter all the switches every time.
First, we will change the defaults. Replace the autogenerated @defaults
with:
@defaults [name: "akd_example", build_at: "user@192.168.xx.xx:~/path/to/app",
           env: "prod", publish_to: "user@192.168.xx.xx:~/path/to/app",
           vsn: "0.1.0"]
Next, we can give the Akd.Fetch.Git hook some parameters for cloning the application's source code. This tells Akd to fetch the source code using git, from a source (usually GitHub) and a branch:
pipeline :fetch do
  hook Akd.Fetch.Git, branch: "master",
    src: "git@github.com:annkissam/akd_example.git",
    run_ensure: false
end
The run_ensure flag determines whether a hook's ensure commands should be run. The ensure operations of the Git hook include deleting the fetched code once it has been deployed. In this case it makes sense to set the flag to false, so the build destination doesn't have to re-fetch the code every time it deploys.
Next, we can give the Akd.Build.Phoenix.Npm and Akd.Build.Phoenix.Brunch hooks their parameters, allowing akd to call commands like $ npm install and $ brunch build:
hook Akd.Build.Phoenix.Npm,
  package: "./assets"

hook Akd.Build.Phoenix.Brunch,
  config: "./assets",
  brunch: "./node_modules/brunch/bin/brunch"
Now, the app is ready to be deployed! Just run mix akd.deploy from the akd_example_deployer folder, and make sure that you have SSH credentials for the server (192.168.xx.xx).
Running Migrations
Distillery's documentation describes various ways in which a set of migrations can be run for a Phoenix app. For this walkthrough, we will create a migration module which ensures that all the Ecto apps are started and then runs the migrations.
In our example app, we can create a new file at lib/akd_example/release.ex and add the following code to it:
defmodule AkdExample.Release do
  @start_apps [
    :postgrex,
    :ecto
  ]

  def myapp, do: Application.get_application(__MODULE__)

  def repos, do: Application.get_env(myapp(), :ecto_repos, [])

  def migrate do
    me = myapp()
    IO.puts "Loading #{me}.."
    # Load the code for myapp, but don't start it
    :ok = Application.load(me)

    IO.puts "Starting dependencies.."
    # Start apps necessary for executing migrations
    Enum.each(@start_apps, &Application.ensure_all_started/1)

    # Start the Repo(s) for myapp
    IO.puts "Starting repos.."
    Enum.each(repos(), &(&1.start_link(pool_size: 1)))

    # Run migrations
    Enum.each(repos(), &run_migrations_for/1)

    # Signal shutdown
    IO.puts "Success!"
    :init.stop()
  end

  def priv_dir(app), do: "#{:code.priv_dir(app)}"

  defp run_migrations_for(repo) do
    app = Keyword.get(repo.config, :otp_app)
    IO.puts "Running migrations for #{app}"
    Ecto.Migrator.run(repo, migrations_path(repo), :up, all: true)
  end

  def migrations_path(repo), do: priv_path_for(repo, "migrations")

  def priv_path_for(repo, filename) do
    app = Keyword.get(repo.config, :otp_app)
    repo_underscore = repo |> Module.split |> List.last |> Macro.underscore
    Path.join([priv_dir(app), repo_underscore, filename])
  end
end
This will allow us to run migrations without having Mix available in production, by using the command $ bin/akd_example command Elixir.AkdExample.Release migrate on the publish server.
Now, using Akd's generators, we'll add that command to the list of hooks in the deploy pipeline.
Generating a Custom Hook
Part of Akd's extensibility comes from its transparency. Instead of defining functions and pipelines internally, akd generates them and leaves them open to modification through the use of its DSLs. In order to add a hook which runs migrations, we will cd into akd_example_deployer and run $ mix akd.gen.hook MyHooks.RunMigrations. This will create a file at akd_example_deployer/lib/my_hooks/run_migrations.ex with the following generated code:
defmodule MyHooks.RunMigrations do
  @moduledoc """
  A Custom Hook module generated by Akd.
  TODO: Update Documentation
  """

  use Akd.Hook

  @default_opts [run_ensure: true, ignore_failure: false]

  def get_hooks(deployment, opts \\ []) do
    opts = uniq_merge(opts, @default_opts)
    # Replace this with some destination
    destination = Akd.Destination.local(".")
    # For more information check out Akd.Dsl.FormHook
    [my_hook(destination)]
  end

  defp my_hook(destination, opts \\ []) do
    form_hook opts do
      main "main command", destination, cmd_envs: [{"SOME_ENV", "some_values"}]
      ensure "ensure command", destination
      rollback "rollback command", destination
    end
  end

  # This function takes two keyword lists and merges them keeping the keys
  # unique. If there are multiple values for a key, it takes the value from
  # the first value of keyword1 corresponding to that key.
  defp uniq_merge(keyword1, keyword2) do
    keyword2
    |> Keyword.merge(keyword1)
    |> Keyword.new()
  end
end
The command that runs migrations goes into a migrate_hook function (replacing the generated my_hook). We also change the server the command will execute on by resolving the deployment's publish destination:
defmodule MyHooks.RunMigrations do
  @moduledoc """
  A Custom Hook module generated by Akd.
  Runs Ecto migrations on the publish destination.
  """

  use Akd.Hook

  @default_opts [run_ensure: true, ignore_failure: false]

  def get_hooks(deployment, opts \\ []) do
    opts = uniq_merge(opts, @default_opts)
    destination = Akd.DestinationResolver.resolve(:publish, deployment)
    # For more information check out Akd.Dsl.FormHook
    [migrate_hook(deployment, destination)]
  end

  # The deployment is passed in so the hook can reference the release name
  defp migrate_hook(deployment, destination, opts \\ []) do
    form_hook opts do
      main "bin/#{deployment.name} command Elixir.AkdExample.Release migrate",
        destination
    end
  end

  # This function takes two keyword lists and merges them keeping the keys
  # unique. If there are multiple values for a key, it takes the value from
  # the first value of keyword1 corresponding to that key.
  defp uniq_merge(keyword1, keyword2) do
    keyword2
    |> Keyword.merge(keyword1)
    |> Keyword.new()
  end
end
To execute the hook, it must be added to the deployment pipeline. For this example, it has been added to the :publish pipeline, after the Akd.Publish.Distillery hook and just before Akd.Start.Distillery.
# akd_example_deployer/lib/mix/tasks/akd/deploy.ex
pipeline :publish do
  hook Akd.Stop.Distillery, ignore_failure: true
  hook Akd.Publish.Distillery
  hook MyHooks.RunMigrations
  hook Akd.Start.Distillery
end
So, here's our final deploy.ex file:
# akd_example_deployer/lib/mix/tasks/akd/deploy.ex
defmodule Mix.Tasks.Akd.Deploy do
  @moduledoc """
  This task was generated by Akd
  TODO: Add more documentation
  """

  use Akd.Mix.Task

  # This task comes with the following switches, but add more if needed
  # For example: :client (for your apps)
  @switches [name: :string, build_at: :string, env: :string,
             publish_to: :string, vsn: :string]

  @aliases [n: :name, b: :build_at, e: :env, p: :publish_to, v: :vsn]

  # Default values for all the switches
  @defaults [name: "akd_example", build_at: "user@192.168.xx.xx:~/path/to/app",
             env: "prod", publish_to: "user@192.168.xx.xx:~/path/to/app",
             vsn: "0.1.0"]

  pipeline :fetch do
    hook Akd.Fetch.Git, branch: "master",
      src: "git@github.com:annkissam/akd_example.git",
      run_ensure: false
  end

  pipeline :init do
    hook Akd.Init.Distillery
  end

  pipeline :build do
    hook Akd.Build.Distillery

    hook Akd.Build.Phoenix.Npm,
      package: "./assets"

    hook Akd.Build.Phoenix.Brunch,
      config: "./assets",
      brunch: "./node_modules/brunch/bin/brunch"
  end

  pipeline :publish do
    hook Akd.Stop.Distillery, ignore_failure: true
    hook Akd.Publish.Distillery
    hook MyHooks.RunMigrations
    hook Akd.Start.Distillery
  end

  pipeline :deploy do
    pipe_through :fetch
    pipe_through :init
    pipe_through :build
    pipe_through :publish
  end

  def run(argv) do
    {parsed, _, _} =
      OptionParser.parse(argv, switches: @switches, aliases: @aliases)

    execute :deploy, with: parameterize(parsed)
  end

  # This function translates a list of options into parameters that can be
  # converted to an Akd.Deployment struct
  def parameterize(opts) do
    opts = uniq_merge(opts, @defaults)

    %{mix_env: opts[:env], build_at: opts[:build_at], hooks: [],
      publish_to: opts[:publish_to], name: opts[:name], vsn: opts[:vsn]}
  end

  # This function takes two keyword lists and merges them keeping the keys
  # unique. If there are multiple values for a key, it takes the value from
  # the first value of keyword1 corresponding to that key.
  defp uniq_merge(keyword1, keyword2) do
    keyword2
    |> Keyword.merge(keyword1)
    |> Keyword.new()
  end
end
Now, every time we deploy, migrations will run right after the app is published (and just before it is started) on the publish server.
Conclusion
This example was intended to highlight the basic features of Akd as well as to demonstrate its extensibility. Using its generators and DSL, Akd has allowed Annkissam to more swiftly deploy our growing suite of Elixir applications.
The Future
Akd is not limited to deploying Distillery releases. The following are a few additional use cases that Akd is equipped to handle:
- Publish Mix projects
- Package and deploy Docker containers
- Deploy non-Elixir applications
- Run tasks on a remote server
We’re hoping to publish more about these use cases in the future. We’re also eager to hear how Akd has been extended to help with your workflows.