ExAws.S3 (ExAws.S3 v2.2.0)
Service module for https://github.com/ex-aws/ex_aws
Installation
The package can be installed by adding :ex_aws_s3 to your list of dependencies in mix.exs, along with :ex_aws, your preferred JSON codec / HTTP client, and optionally :sweet_xml to support operations like list_objects that require XML parsing.
def deps do
  [
    {:ex_aws, "~> 2.0"},
    {:ex_aws_s3, "~> 2.0"},
    {:poison, "~> 3.0"},
    {:hackney, "~> 1.9"},
    {:sweet_xml, "~> 0.6.6"} # optional dependency
  ]
end
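ExAws defaults to Poison as its JSON codec, matching the deps above; a different codec can be configured instead. A sketch assuming {:jason, "~> 1.0"} is in your deps in place of :poison:
# config.exs
config :ex_aws,
  json_codec: Jason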
Operations on AWS S3
Basic Operations
The vast majority of operations here represent a single operation on S3.
Examples
S3.list_objects |> ExAws.request! #=> %{body: [list, of, objects]}
S3.list_objects |> ExAws.stream! |> Enum.to_list #=> [list, of, objects]
S3.put_object("my-bucket", "path/to/object", contents) |> ExAws.request!
Higher Level Operations
There are also some operations which operate at a higher level to make it easier to download and upload very large files.
Multipart uploads
"path/to/big/file"
|> S3.Upload.stream_file
|> S3.upload("my-bucket", "path/on/s3")
|> ExAws.request #=> {:ok, :done}
See ExAws.S3.upload/4 for options.
Download large file to disk
S3.download_file("my-bucket", "path/on/s3", "path/to/dest/file")
|> ExAws.request #=> {:ok, :done}
More high level functionality
Task.async_stream makes some high level flows so easy you don't need explicit ExAws support.
For example, here is how to concurrently upload many files.
upload_file = fn {src_path, dest_path} ->
  S3.put_object("my-bucket", dest_path, File.read!(src_path))
  |> ExAws.request!
end
paths = %{"path/to/src0" => "path/to/dest0", "path/to/src1" => "path/to/dest1"}
paths
|> Task.async_stream(upload_file, max_concurrency: 10)
|> Stream.run
Configuration
The scheme, host, and port can be configured to hit alternate endpoints. For example, this is how to use a local MinIO instance:
# config.exs
config :ex_aws, :s3,
  scheme: "http://",
  host: "localhost",
  port: 9000
Summary
Functions
Abort a multipart upload
Complete a multipart upload
Delete all listed objects.
Delete a bucket
Delete a bucket cors
Delete a bucket lifecycle
Delete a bucket policy
Delete a bucket replication
Delete a bucket tagging
Delete a bucket website
Delete multiple objects within a bucket
Delete an object within a bucket
Remove the entire tag set from the specified object
Download an S3 object to a file.
Get bucket acl
Get bucket cors
Get bucket lifecycle
Get bucket location
Get bucket logging
Get bucket notification
Get bucket object versions
Get bucket policy
Get bucket replication
Get bucket payment configuration
Get bucket tagging
Get bucket versioning
Get bucket website
Get an object from a bucket
Get an object's access control policy
Get object tagging
Get a torrent for a bucket
Determine if a bucket exists
Determine if an object exists
Initiate a multipart upload
List buckets
List multipart uploads for a bucket
List objects in bucket
List objects in bucket
List the parts of a multipart upload
Determine the CORS configuration for an object
Restore an object to a particular version
Generate a pre-signed URL for an object.
Creates a bucket in the specified region
Update or create a bucket access control policy
Update or create a bucket CORS policy
Update or create a bucket lifecycle configuration
Update or create a bucket logging configuration
Update or create a bucket notification configuration
Update or create a bucket policy configuration
Update or create a bucket replication configuration
Update or create a bucket requestPayment configuration
Update or create a bucket tagging configuration
Update or create a bucket versioning configuration
Update or create a bucket website configuration
Create an object within a bucket
Create or update an object's access control policy
Copy an object
Add a set of tags to an existing object
Multipart upload to S3.
Upload a part for a multipart upload
Upload a part for a multipart copy
Types
Specs
acl_opts() :: {:acl, canned_acl()} | grant()
Specs
canned_acl() :: :private | :public_read | :public_read_write | :authenticated_read | :bucket_owner_read | :bucket_owner_full_control
Specs
download_file_opts() :: [ max_concurrency: pos_integer(), chunk_size: pos_integer(), timeout: pos_integer() ]
Specs
encryption_opts() :: binary() | [{:aws_kms_key_id, binary()}] | customer_encryption_opts()
Specs
get_object_opts() :: [ {:response, get_object_response_opts()} | {:version_id, binary()} | head_object_opt() ]
Specs
head_object_opts() :: [head_object_opt()]
Specs
initiate_multipart_upload_opts() :: [
  {:cache_control, binary()}
  | {:content_disposition, binary()}
  | {:content_encoding, binary()}
  | {:content_type, binary()}
  | {:expires, binary()}
  | {:storage_class, :standard | :reduced_redundancy}
  | {:website_redirect_location, binary()}
  | {:encryption, encryption_opts()}
  | acl_opts()
]
Specs
put_object_copy_opts() :: [
  {:metadata_directive, :COPY | :REPLACE}
  | {:copy_source_if_modified_since, binary()}
  | {:copy_source_if_unmodified_since, binary()}
  | {:copy_source_if_match, binary()}
  | {:copy_source_if_none_match, binary()}
  | {:website_redirect_location, binary()}
  | {:destination_encryption, encryption_opts()}
  | {:source_encryption, customer_encryption_opts()}
  | {:cache_control, binary()}
  | {:content_disposition, binary()}
  | {:content_encoding, binary()}
  | {:content_length, binary()}
  | {:content_type, binary()}
  | {:expect, binary()}
  | {:expires, binary()}
  | {:storage_class, :standard | :reduced_redundancy}
  | {:meta, amz_meta_opts()}
  | acl_opts()
]
Specs
put_object_opts() :: [
  {:cache_control, binary()}
  | {:content_disposition, binary()}
  | {:content_encoding, binary()}
  | {:content_length, binary()}
  | {:content_type, binary()}
  | {:expect, binary()}
  | {:expires, binary()}
  | {:storage_class, :standard | :reduced_redundancy}
  | {:website_redirect_location, binary()}
  | {:encryption, encryption_opts()}
  | {:meta, amz_meta_opts()}
  | acl_opts()
]
Specs
upload_opts() :: [ {:max_concurrency, pos_integer()} | initiate_multipart_upload_opts() ]
Specs
upload_part_copy_opts() :: [
  copy_source_range: Range.t(),
  copy_source_if_modified_since: binary(),
  copy_source_if_unmodified_since: binary(),
  copy_source_if_match: binary(),
  copy_source_if_none_match: binary(),
  destination_encryption: encryption_opts(),
  source_encryption: customer_encryption_opts()
]
Functions
Specs
abort_multipart_upload( bucket :: binary(), object :: binary(), upload_id :: binary() ) :: ExAws.Operation.S3.t()
Abort a multipart upload
Specs
complete_multipart_upload( bucket :: binary(), object :: binary(), upload_id :: binary(), parts :: [{binary() | pos_integer(), binary()}, ...] ) :: ExAws.Operation.S3.t()
Complete a multipart upload
Specs
delete_all_objects( bucket :: binary(), objects :: [binary() | {binary(), binary()}, ...] | Enumerable.t(), opts :: [{:quiet, true}] ) :: ExAws.Operation.S3DeleteAllObjects.t()
Delete all listed objects.
When performed, this function will continue making delete_multiple_objects requests, deleting 1000 objects at a time, until all are deleted.
Can be streamed.
Example
stream = ExAws.S3.list_objects(bucket(), prefix: "some/prefix") |> ExAws.stream!() |> Stream.map(& &1.key)
ExAws.S3.delete_all_objects(bucket(), stream) |> ExAws.request()
Specs
delete_bucket(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket
Specs
delete_bucket_cors(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket cors
Specs
delete_bucket_lifecycle(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket lifecycle
Specs
delete_bucket_policy(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket policy
Specs
delete_bucket_replication(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket replication
Specs
delete_bucket_tagging(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket tagging
Specs
delete_bucket_website(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket website
Specs
delete_multiple_objects( bucket :: binary(), objects :: [binary() | {binary(), binary()}, ...], opts :: [{:quiet, true}] ) :: ExAws.Operation.S3.t()
Delete multiple objects within a bucket
Limited to 1000 objects.
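Objects can be given as plain keys or as {key, version_id} tuples, per the spec above; a minimal sketch (bucket, keys, and the version id are placeholders):
S3.delete_multiple_objects(
  "my-bucket",
  ["logs/old1.txt", {"logs/old2.txt", "some-version-id"}],
  quiet: true
)
|> ExAws.request!()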
Specs
delete_object(bucket :: binary(), object :: binary(), opts :: Keyword.t()) :: ExAws.Operation.S3.t()
Delete an object within a bucket
Specs
delete_object_tagging( bucket :: binary(), object :: binary(), opts :: Keyword.t() ) :: ExAws.Operation.S3.t()
Remove the entire tag set from the specified object
Specs
download_file( bucket :: binary(), path :: binary(), dest :: :memory | binary(), opts :: download_file_opts() ) :: ExAws.S3.Download.t()
Download an S3 object to a file.
This operation downloads multiple parts of an S3 object concurrently, allowing you to maximize throughput.
Defaults to a concurrency of 8, chunk size of 1MB, and a timeout of 1 minute.
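The defaults can be overridden through download_file_opts; for example (paths are placeholders):
S3.download_file("my-bucket", "path/on/s3", "/tmp/local-copy",
  max_concurrency: 4,
  chunk_size: 10 * 1024 * 1024,
  timeout: 120_000
)
|> ExAws.request!()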
Streaming to memory
In order to use ExAws.stream!/2, the third dest parameter must be set to :memory. For example:
ExAws.S3.download_file("example-bucket", "path/to/file.txt", :memory)
|> ExAws.stream!()
Note that this won't start fetching anything immediately, since it returns an Elixir Stream.
Streaming by line
Streaming by line can be done with Stream.chunk_while/4. Here is an example:
# Returns a stream which grabs chunks of data from S3 as specified in `opts`
# but processes the stream line by line. For example, the default chunk
# size of 1MB means requests for bytes from S3 will ask for 1MB sizes (to be downloaded)
# however each element of the stream will be a single line.
def generate_stream(bucket, file, opts \\ []) do
  bucket
  |> ExAws.S3.download_file(file, :memory, opts)
  |> ExAws.stream!()
  # Uncomment if you need to gunzip (and add dependency :stream_gzip)
  # |> StreamGzip.gunzip()
  |> Stream.chunk_while("", &chunk_fun/2, &after_fun/1)
  # Each emitted element is a list of complete lines; flatten them
  |> Stream.flat_map(& &1)
end

# Split everything buffered so far on newlines, emit the complete lines,
# and keep the trailing partial line as the accumulator.
def chunk_fun(chunk, acc) do
  case String.split(acc <> chunk, "\n") do
    [partial] ->
      {:cont, partial}

    parts ->
      {lines, [partial]} = Enum.split(parts, -1)
      {:cont, lines, partial}
  end
end

# Emit whatever is still buffered when the S3 stream ends
def after_fun(""), do: {:cont, ""}
def after_fun(acc), do: {:cont, [acc], ""}
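A hypothetical use of generate_stream/3 above, printing each line of an object (bucket and key are placeholders):
"my-bucket"
|> generate_stream("logs/app.log")
|> Stream.each(&IO.puts/1)
|> Stream.run()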
Specs
get_bucket_acl(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket acl
Specs
get_bucket_cors(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket cors
Specs
get_bucket_lifecycle(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket lifecycle
Specs
get_bucket_location(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket location
Specs
get_bucket_logging(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket logging
Specs
get_bucket_notification(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket notification
Specs
get_bucket_object_versions(bucket :: binary(), opts :: Keyword.t()) :: ExAws.Operation.S3.t()
Get bucket object versions
Specs
get_bucket_policy(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket policy
Specs
get_bucket_replication(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket replication
Specs
get_bucket_request_payment(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket payment configuration
Specs
get_bucket_tagging(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket tagging
Specs
get_bucket_versioning(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket versioning
Specs
get_bucket_website(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket website
Specs
get_object(bucket :: binary(), object :: binary(), opts :: get_object_opts()) :: ExAws.Operation.S3.t()
Get an object from a bucket
Examples
S3.get_object("my-bucket", "image.png")
S3.get_object("my-bucket", "image.png", version_id: "ae57ekgXPpdiVZLkYVWoTAGRhGJ5swt9")
Specs
get_object_acl(bucket :: binary(), object :: binary(), opts :: Keyword.t()) :: ExAws.Operation.S3.t()
Get an object's access control policy
Specs
get_object_tagging(bucket :: binary(), object :: binary(), opts :: Keyword.t()) :: ExAws.Operation.S3.t()
Get object tagging
Specs
get_object_torrent(bucket :: binary(), object :: binary()) :: ExAws.Operation.S3.t()
Get a torrent for a bucket
Specs
head_bucket(bucket :: binary()) :: ExAws.Operation.S3.t()
Determine if a bucket exists
Specs
head_object(bucket :: binary(), object :: binary(), opts :: head_object_opts()) :: ExAws.Operation.S3.t()
Determine if an object exists
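Since this returns a plain operation, an existence check pattern-matches on the request result; a sketch, assuming ExAws's usual {:http_error, status, response} error shape (names are placeholders):
case S3.head_object("my-bucket", "maybe-missing.txt") |> ExAws.request() do
  {:ok, %{status_code: 200}} -> :exists
  {:error, {:http_error, 404, _response}} -> :not_found
end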
Specs
initiate_multipart_upload( bucket :: binary(), object :: binary(), opts :: initiate_multipart_upload_opts() ) :: ExAws.Operation.S3.t()
Initiate a multipart upload
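For fully manual control (as opposed to S3.upload/4), a minimal sketch of the initiate / upload_part / complete sequence; bucket, key, and part_binary are placeholders, and the parsed body shape assumes :sweet_xml is available:
{:ok, %{body: %{upload_id: upload_id}}} =
  S3.initiate_multipart_upload("my-bucket", "big-file.bin") |> ExAws.request()

# Every part except the last must be at least 5 MB
{:ok, %{headers: headers}} =
  S3.upload_part("my-bucket", "big-file.bin", upload_id, 1, part_binary)
  |> ExAws.request()

# Header-name casing can vary by HTTP client
{_, etag} = Enum.find(headers, fn {k, _} -> String.downcase(k) == "etag" end)

S3.complete_multipart_upload("my-bucket", "big-file.bin", upload_id, [{1, etag}])
|> ExAws.request!()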
Specs
list_buckets(opts :: Keyword.t()) :: ExAws.Operation.S3.t()
List buckets
Specs
list_multipart_uploads(bucket :: binary(), opts :: Keyword.t()) :: ExAws.Operation.S3.t()
List multipart uploads for a bucket
Specs
list_objects(bucket :: binary(), opts :: list_objects_opts()) :: ExAws.Operation.S3.t()
List objects in bucket
Can be streamed.
Examples
S3.list_objects("my-bucket") |> ExAws.request
S3.list_objects("my-bucket") |> ExAws.stream!
S3.list_objects("my-bucket", delimiter: "/", prefix: "backup") |> ExAws.stream!
S3.list_objects("my-bucket", prefix: "some/inner/location/path") |> ExAws.stream!
S3.list_objects("my-bucket", max_keys: 5, encoding_type: "url") |> ExAws.stream!
Specs
list_objects_v2(bucket :: binary(), opts :: list_objects_v2_opts()) :: ExAws.Operation.S3.t()
List objects in bucket
Can be streamed.
Examples
S3.list_objects_v2("my-bucket") |> ExAws.request
S3.list_objects_v2("my-bucket") |> ExAws.stream!
S3.list_objects_v2("my-bucket", delimiter: "/", prefix: "backup") |> ExAws.stream!
S3.list_objects_v2("my-bucket", prefix: "some/inner/location/path") |> ExAws.stream!
S3.list_objects_v2("my-bucket", max_keys: 5, encoding_type: "url") |> ExAws.stream!
Specs
list_parts( bucket :: binary(), object :: binary(), upload_id :: binary(), opts :: Keyword.t() ) :: ExAws.Operation.S3.t()
List the parts of a multipart upload
options_object(bucket, object, origin, request_method, request_headers \\ [])
Specs
options_object( bucket :: binary(), object :: binary(), origin :: binary(), request_method :: atom(), request_headers :: [binary()] ) :: ExAws.Operation.S3.t()
Determine the CORS configuration for an object
Specs
post_object_restore( bucket :: binary(), object :: binary(), number_of_days :: pos_integer(), opts :: [{:version_id, binary()}] ) :: ExAws.Operation.S3.t()
Restore an object to a particular version
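For example, requesting a restore of an object for five days (names are placeholders):
S3.post_object_restore("my-bucket", "archive/file.bin", 5) |> ExAws.request()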
Specs
presigned_url( config :: map(), http_method :: atom(), bucket :: binary(), object :: binary(), opts :: presigned_url_opts() ) :: {:ok, binary()} | {:error, binary()}
Generate a pre-signed URL for an object.
When option param :virtual_host is true, the bucket name will be used as the hostname. This will cause the returned URL to be 'http' and not 'https'.
When option param :s3_accelerate is true, the bucket name will be used as the hostname, along with the s3-accelerate.amazonaws.com host.
Additional (signed) query parameters can be added to the URL by setting option param :query_params to a list of {"key", "value"} pairs. This is useful if you are uploading parts of a multipart upload directly from the browser.
Signed headers can be added to the URL by setting option param :headers to a list of {"key", "value"} pairs.
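For example, a five-minute GET URL built from the application's :ex_aws config; bucket, key, and the extra query param are placeholders:
{:ok, url} =
  :s3
  |> ExAws.Config.new()
  |> S3.presigned_url(:get, "my-bucket", "path/to/file.txt",
    expires_in: 300,
    query_params: [{"response-content-type", "text/plain"}]
  )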
Creates a bucket in the specified region
Specs
put_bucket_acl(bucket :: binary(), opts :: [acl_opts()]) :: ExAws.Operation.S3.t()
Update or create a bucket access control policy
Specs
put_bucket_cors(bucket :: binary(), cors_config :: [map()]) :: ExAws.Operation.S3.t()
Update or create a bucket CORS policy
Specs
put_bucket_lifecycle(bucket :: binary(), lifecycle_rules :: [map()]) :: ExAws.Operation.S3.t()
Update or create a bucket lifecycle configuration
Lifecycle Rule Format
%{
  # Unique id for the rule (max. 255 chars, max. 1000 rules allowed)
  id: "123",
  # Disabled rules are not executed
  enabled: true,
  # Filters
  # Can be based on prefix, object tag(s), both or none
  filter: %{
    prefix: "prefix/",
    tags: %{
      "key" => "value"
    }
  },
  # Actions
  # https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html#intro-lifecycle-rules-actions
  actions: %{
    transition: %{
      trigger: {:date, ~D[2020-03-26]}, # Date or days based
      storage: ""
    },
    expiration: %{
      trigger: {:days, 2}, # Date or days based
      expired_object_delete_marker: true
    },
    noncurrent_version_transition: %{
      trigger: {:days, 2}, # Only days based
      storage: ""
    },
    noncurrent_version_expiration: %{
      trigger: {:days, 2} # Only days based
    },
    abort_incomplete_multipart_upload: %{
      trigger: {:days, 2} # Only days based
    }
  }
}
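Assuming rule is bound to a map shaped like the example above, the rules are passed as a list:
S3.put_bucket_lifecycle("my-bucket", [rule]) |> ExAws.request!()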
Specs
Update or create a bucket logging configuration
Specs
Update or create a bucket notification configuration
Specs
put_bucket_policy(bucket :: binary(), policy :: String.t()) :: ExAws.Operation.S3.t()
Update or create a bucket policy configuration
Specs
Update or create a bucket replication configuration
Specs
put_bucket_request_payment( bucket :: binary(), payer :: :requester | :bucket_owner ) :: no_return()
Update or create a bucket requestPayment configuration
Specs
Update or create a bucket tagging configuration
Specs
Update or create a bucket versioning configuration
Specs
Update or create a bucket website configuration
Specs
put_object( bucket :: binary(), object :: binary(), body :: binary(), opts :: put_object_opts() ) :: ExAws.Operation.S3.t()
Create an object within a bucket
Specs
put_object_acl(bucket :: binary(), object :: binary(), acl :: [acl_opts()]) :: ExAws.Operation.S3.t()
Create or update an object's access control policy
put_object_copy(dest_bucket, dest_object, src_bucket, src_object, opts \\ [])
Specs
put_object_copy( dest_bucket :: binary(), dest_object :: binary(), src_bucket :: binary(), src_object :: binary(), opts :: put_object_copy_opts() ) :: ExAws.Operation.S3.t()
Copy an object
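For example, copying across buckets while replacing the metadata (all names are placeholders):
S3.put_object_copy("dest-bucket", "backup/file.txt", "src-bucket", "file.txt",
  metadata_directive: :REPLACE,
  content_type: "text/plain"
)
|> ExAws.request!()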
Specs
put_object_tagging( bucket :: binary(), object :: binary(), tags :: Access.t(), opts :: Keyword.t() ) :: ExAws.Operation.S3.t()
Add a set of tags to an existing object
Options
:version_id - The versionId of the object that the tag-set will be added to.
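For example, with tags given as string pairs (names are placeholders):
S3.put_object_tagging("my-bucket", "file.txt", [{"env", "prod"}, {"team", "data"}])
|> ExAws.request!()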
Specs
upload( source :: Enumerable.t(), bucket :: String.t(), path :: String.t(), opts :: upload_opts() ) :: ExAws.S3.Upload.t()
Multipart upload to S3.
Handles initialization, uploading parts concurrently, and multipart upload completion.
Uploading a stream
Streams that emit binaries may be uploaded directly to S3. Each binary will be uploaded as a part, so every part except the last must be at least 5 megabytes in size. The S3.Upload.stream_file helper takes care of reading the file in 5 megabyte chunks.
"path/to/big/file"
|> S3.Upload.stream_file
|> S3.upload("my-bucket", "path/on/s3")
|> ExAws.request! #=> :done
Options
These options are specific to this function. See Task.async_stream/5's :max_concurrency and :timeout options.
:max_concurrency - only applies when uploading a stream. Sets the maximum number of tasks to run at the same time. Defaults to 4.
:timeout - the maximum amount of time (in milliseconds) each task is allowed to execute for. Defaults to 30_000.
All other options (e.g. :content_type) are passed through to ExAws.S3.initiate_multipart_upload/3.
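For example, combining the stream helper with the options above (paths are placeholders):
"path/to/big/file"
|> S3.Upload.stream_file()
|> S3.upload("my-bucket", "path/on/s3",
  max_concurrency: 8,
  timeout: 60_000,
  content_type: "application/octet-stream"
)
|> ExAws.request!()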
upload_part(bucket, object, upload_id, part_number, body, opts \\ [])
Specs
upload_part( bucket :: binary(), object :: binary(), upload_id :: binary(), part_number :: pos_integer(), body :: binary(), opts :: [encryption_opts() | {:expect, binary()}] ) :: ExAws.Operation.S3.t()
Upload a part for a multipart upload
upload_part_copy(dest_bucket, dest_object, src_bucket, src_object, opts \\ [])
Specs
upload_part_copy( dest_bucket :: binary(), dest_object :: binary(), src_bucket :: binary(), src_object :: binary(), opts :: upload_part_copy_opts() ) :: ExAws.Operation.S3.t()
Upload a part for a multipart copy