ex_aliyun_ots v0.6.4 ExAliyunOts
The ExAliyunOts module provides a Tablestore-based API as a client for working with Alibaba TableStore product servers.
Here are links to the official documents in Chinese | English
Configuration
config :ex_aliyun_ots, :my_instance
name: "MyInstanceName",
endpoint: "MyInstanceEndpoint",
access_key_id: "MyAliyunRAMKeyID",
access_key_secret: "MyAliyunRAMKeySecret"
config :ex_aliyun_ots,
instances: [:my_instance],
debug: false,
enable_tunnel: false
debug, optional, specifies whether to enable the debug logger, by default it is false. Please DO NOT use debug mode in production.
enable_tunnel, optional, specifies whether to enable tunnel functions; when enabled, a tunnel-related Supervisor and Registry will be started, by default it is false.
Using ExAliyunOts
To use ExAliyunOts, a module that calls use ExAliyunOts has to be defined:
defmodule MyApp.TableStore do
use ExAliyunOts, instance: :my_instance
end
This automatically defines some macros and functions in the MyApp.TableStore module. Here are some examples:
import MyApp.TableStore
# Create table
create_table "table",
[{"pk1", :integer}, {"pk2", :string}]
# Put row
put_row "table",
[{"pk1", "id1"}],
[{"attr1", 10}, {"attr2", "attr2_value"}],
condition: condition(:expect_not_exist),
return_type: :pk
# Search index
search "table", "index_name",
search_query: [
query: match_query("age", 28),
sort: [
field_sort("age", order: :desc)
]
]
# Local transaction
start_local_transaction "table", {"partition_key", "partition_value"}
ExAliyunOts API
There are two ways to use ExAliyunOts:
- using macros and functions from your own ExAliyunOts module, like MyApp.TableStore;
- using macros and functions from the ExAliyunOts module.
All functions and macros defined in ExAliyunOts are available from your own ExAliyunOts module as well, except that the arity of some functions may differ, because the instance parameter of each request is NOT needed in your own ExAliyunOts module although the ExAliyunOts module defines it.
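For example, with the MyApp.TableStore module defined above, the same request can be issued either way (a minimal sketch, reusing list_table from this page):

import MyApp.TableStore

# Through your own module, no instance argument is needed:
list_table()

# Through the ExAliyunOts module, the instance atom is required:
ExAliyunOts.list_table(:my_instance)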
Summary
Row
Similar to condition/1 and also supports using a filter expression (please see filter/1); please refer to them for details.
Used in batch get operation, please see batch_get/2 for details.
A client SDK wrapper built on get_range/5 to fetch a large data set by iterating.
Used in batch write operation, please see batch_write/2 for details.
Used in batch write operation, please see batch_write/2 for details.
Used in batch write operation, please see batch_write/2 for details.
Search
The entry point for search index functions, please see the ExAliyunOts.Search module for details.
Table
Official document in Chinese | English
Example
create_table "table_name2",
[{"key1", :string}, {"key2", :auto_increment}]
create_table "table_name3",
[{"key1", :string}],
reserved_throughput_write: 1,
reserved_throughput_read: 1,
time_to_live: 100_000,
max_versions: 3,
deviation_cell_version_in_sec: 6_400,
stream_spec: [is_enabled: true, expiration_time: 2]
Options
:reserved_throughput_write, optional, the reserved throughput write of the table, by default it is 0.
:reserved_throughput_read, optional, the reserved throughput read of the table, by default it is 0.
:time_to_live, optional, the data storage time to live in seconds, the minimum settable value is 86_400 seconds (one day), by default it is -1 (permanent).
:max_versions, optional, the number of versions kept for columns, by default it is 1, meaning only one version is kept per column.
:deviation_cell_version_in_sec, optional, the maximum version deviation, by default it is 86_400 seconds (one day).
:stream_spec, optional, specifies whether to enable the stream feature, by default the stream feature is not enabled:
- :is_enabled, enable or disable the stream, use true or false;
- :expiration_time, the expiration time of the stream.
delete_table(instance, table)
delete_table(instance :: atom(), table :: String.t()) :: {:ok, map()} | {:error, ExAliyunOts.Error.t()}
Official document in Chinese | English
Example
import MyApp.TableStore
delete_table("table_name")
describe_table(instance, table)
describe_table(instance :: atom(), table :: String.t()) :: {:ok, map()} | {:error, ExAliyunOts.Error.t()}
Official document in Chinese | English
Example
import MyApp.TableStore
describe_table(table_name)
list_table(instance)
list_table(instance :: atom()) :: {:ok, map()} | {:error, ExAliyunOts.Error.t()}
Official document in Chinese | English
Example
import MyApp.TableStore
list_table()
update_table(instance, table, options \\ [])
update_table(instance :: atom(), table :: String.t(), options :: Keyword.t()) :: {:ok, map()} | {:error, ExAliyunOts.Error.t()}
Official document in Chinese | English
Example
import MyApp.TableStore
update_table "table_name",
reserved_throughput_write: 10,
time_to_live: 200_000,
stream_spec: [is_enabled: false]
Options
Please see options of create_table/4
.
Row
batch_get(instance, requests)
batch_get(instance :: atom(), requests :: list()) :: {:ok, map()} | {:error, ExAliyunOts.Error.t()}
Official document in Chinese | English
Example
import MyApp.TableStore
batch_get [
get(table_name1, [[{"key1", 1}, {"key2", "1"}]]),
get(
table_name2,
[{"key1", "key1"}],
columns_to_get: ["name", "age"],
filter: filter "age" >= 10
)
]
The batch get operation can be considered as a collection of multiple get/3 operations.
batch_write(instance, requests, options \\ [])
batch_write(instance :: atom(), requests :: list(), options :: Keyword.t()) :: {:ok, map()} | {:error, ExAliyunOts.Error.t()}
Official document in Chinese | English
Example
import MyApp.TableStore
batch_write [
{"table1", [
write_delete([{"key1", 5}, {"key2", "5"}],
return_type: :pk,
condition: condition(:expect_exist, "attr1" == 5)),
write_put([{"key1", 6}, {"key2", "6"}],
[{"new_put_val1", "val1"}, {"new_put_val2", "val2"}],
condition: condition(:expect_not_exist),
return_type: :pk)
]},
{"table2", [
write_update([{"key1", "new_tab3_id2"}],
put: [{"new_put1", "u1"}, {"new_put2", 2.5}],
condition: condition(:expect_not_exist)),
write_put([{"key1", "new_tab3_id3"}],
[{"new_put1", "put1"}, {"new_put2", 10}],
condition: condition(:expect_not_exist))
]}
]
The batch write operation can be considered as a collection of multiple write_put/3, write_update/2 and write_delete/2 operations.
condition(existence)
condition(existence :: :expect_exist | :expect_not_exist | :ignore) :: map()
Official document in Chinese | English
Example
import MyApp.TableStore
update_row "table", [{"pk", "pk1"}],
delete_all: ["attr1", "attr2"],
return_type: :pk,
condition: condition(:expect_exist)
The available existence options are :expect_exist | :expect_not_exist | :ignore. Here are some use cases for your reference:

Use condition(:expect_exist) to expect that the row of the given primary keys exists.
- for put_row/5, if the primary keys contain an auto-increment column and the target row exists, only condition(:expect_exist) can successfully overwrite the row;
- for update_row/4, if the primary keys contain an auto-increment column and the target row exists, only condition(:expect_exist) can successfully update the row;
- for delete_row/4, no matter what the primary key types are, condition(:expect_exist) can successfully delete the row.

Use condition(:expect_not_exist) to expect that the row of the given primary keys does not exist.
- for put_row/5, if the primary keys contain an auto-increment column:
  - while the target row exists, only condition(:expect_exist) can successfully put the row;
  - while the target row does not exist, only condition(:ignore) can successfully put the row.

Use condition(:ignore) to skip the row existence check.
- for put_row/5, if the primary keys contain an auto-increment column and the target row does not exist, only condition(:ignore) can successfully put the row;
- for update_row/4, if the primary keys contain an auto-increment column and the target row does not exist, only condition(:ignore) can successfully update the row;
- for delete_row/4, no matter what the primary key types are, condition(:ignore) can successfully delete the row if it exists.

The batch_write/3 operation is a collection of put_row / update_row / delete_row operations.
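As a quick reference for the use cases above, a minimal sketch that deletes a row while skipping the existence check (the table and key names are illustrative):

import MyApp.TableStore

# condition(:ignore) deletes the row if it exists, without checking first.
delete_row "table",
  [{"key1", "id1"}],
  condition: condition(:ignore)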
Similar to condition/1 and also supports using a filter expression (please see filter/1); please refer to them for details.
Example
import MyApp.TableStore
delete_row "table",
[{"key", "key1"}, {"key2", "key2"}],
condition: condition(:expect_exist, "attr_column" == "value2")
Official document in Chinese | English
Example
import MyApp.TableStore
delete_row "table1",
[{"key1", 3}, {"key2", "3"}],
condition: condition(:expect_exist, "attr2" == "value2")
delete_row "table1",
[{"key1", 3}, {"key2", "3"}],
condition: condition(:expect_exist, "attr2" == "value2"),
transaction_id: "transaction_id"
Options
:condition, required, please see condition/1 or condition/2 for details.
:transaction_id, optional, write operation within local transaction.
Official document in Chinese | English
Example
import MyApp.TableStore
get_row table_name1, [{"key", "key1"}],
columns_to_get: ["name", "level"],
filter: filter(("name[ignore_if_missing: true, latest_version_only: true]" == var_name and "age" > 1) or ("class" == "1"))
batch_get [
get(
table_name2,
[{"key", "key1"}],
filter: filter "age" >= 10
)
]
Options
ignore_if_missing, used when the attribute column does not exist.
- if an attribute column does not exist and [ignore_if_missing: true] is set in the filter expression, this row will be ignored in the returned result;
- if an attribute column exists, the returned result is not affected whether true or false is set.
latest_version_only, used when the attribute column has multiple versions.
- if [latest_version_only: true] is set, only the value of the latest version is checked for a match; by default it is set as [latest_version_only: true];
- if [latest_version_only: false] is set, the values of all versions are checked for a match.
Used in batch get operation, please see batch_get/2
for details.
Options
The available options are the same as get_row/4.
get_range(instance, table, inclusive_start_primary_keys, exclusive_end_primary_keys, options \\ [])
Official document in Chinese | English
Example
import MyApp.TableStore
get_range "table_name",
[{"key1", 1}, {"key2", :inf_min}],
[{"key1", 4}, {"key2", :inf_max}],
direction: :forward
get_range "table_name",
[{"key1", 1}, {"key2", :inf_min}],
[{"key1", 4}, {"key2", :inf_max}],
time_range: {1525922253224, 1525923253224},
direction: :forward
get_range "table_name",
[{"key1", 1}, {"key2", :inf_min}],
[{"key1", 4}, {"key2", :inf_max}],
time_range: 1525942123224,
direction: :forward
Options
:direction, required, the order in which to fetch data, available options are :forward | :backward, by default it is :forward.
- :forward, the query is performed in ascending order of the primary key; in this case, the input inclusive_start_primary_keys should be less than exclusive_end_primary_keys;
- :backward, the query is performed in descending order of the primary key; in this case, the input inclusive_start_primary_keys should be greater than exclusive_end_primary_keys.
:columns_to_get, optional, fetch the specified fields, by default it returns all fields; pass a field list to specify the expected return fields, e.g. ["field1", "field2"].
:start_column, optional, specifies the start column when used for wide-row read, the returned result contains this :start_column.
:end_column, optional, specifies the end column when used for wide-row read, the returned result does not contain this :end_column.
:filter, optional, filter the returned results on the server side, please see filter/1 for details.
:max_versions, optional, how many versions to return in the results, by default it is 1.
:time_range, optional, read data by timestamp range, supports two ways to use it:
- time_range: {start_timestamp, end_timestamp}, timestamps in the range (include start_timestamp but exclude end_timestamp) will be returned in the results;
- time_range: special_timestamp, only exact matches will be returned in the results.
:time_range and :max_versions are mutually exclusive, by default max_versions: 1 and time_range: nil are used.
:transaction_id, optional, read operation within local transaction.
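The following sketch combines :columns_to_get and :filter with get_range/5 (the table, key and field names are illustrative):

import MyApp.TableStore

# Fetch only "name" and "age", filtered on the server side.
get_range "table_name",
  [{"key1", 1}, {"key2", :inf_min}],
  [{"key1", 4}, {"key2", :inf_max}],
  columns_to_get: ["name", "age"],
  filter: filter("age" >= 10),
  direction: :forward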
Official document in Chinese | English
Example
import MyApp.TableStore
get_row "table1",
[{"key1", "id1"}, {"key2", "id2"}],
columns_to_get: ["name", "level"],
filter: filter(("name[ignore_if_missing: true, latest_version_only: true]" == var_name and "age" > 1) or ("class" == "1"))
get_row "table2",
[{"key", "1"}],
start_column: "room",
filter: pagination(offset: 0, limit: 3)
get_row "table3",
[{"key", "1"}],
transaction_id: "transaction_id"
Options
:columns_to_get, optional, fetch the specified fields, by default it returns all fields; pass a field list to specify the expected return fields, e.g. ["field1", "field2"].
:start_column, optional, specifies the start column when used for wide-row read, the returned result contains this :start_column.
:end_column, optional, specifies the end column when used for wide-row read, the returned result does not contain this :end_column.
:filter, optional, filter the returned results on the server side, please see filter/1 for details.
:max_versions, optional, how many versions to return in the results, by default it is 1.
:time_range, optional, read data by timestamp range, supports two ways to use it:
- time_range: {start_timestamp, end_timestamp}, timestamps in the range (include start_timestamp but exclude end_timestamp) will be returned in the results;
- time_range: special_timestamp, only exact matches will be returned in the results.
:time_range and :max_versions are mutually exclusive, by default max_versions: 1 and time_range: nil are used.
:transaction_id, optional, read operation within local transaction.
iterate_all_range(instance, table, inclusive_start_primary_keys, exclusive_end_primary_keys, options \\ [])
A client SDK wrapper built on get_range/5 to fetch a large data set by iterating.
Example
import MyApp.TableStore
iterate_all_range table_name1,
[{"key1", 1}, {"key2", :inf_min}],
[{"key1", 4}, {"key2", :inf_max}],
direction: :forward
Options
Please see options of get_range/5
for details.
Official document in Chinese | English
Example
import MyApp.TableStore
get_row table_name,
[{"key", "1"}],
start_column: "room",
filter: pagination(offset: 0, limit: 3)
Use pagination/1 for the :filter option when getting a row.
Official document in Chinese | English
Example
import MyApp.TableStore
put_row "table1",
[{"key1", "id1"}],
[{"name", "name1"}, {"age", 20}],
condition: condition(:expect_not_exist),
return_type: :pk
put_row "table2",
[{"key1", "id1"}],
[{"name", "name1"}, {"age", 20}],
condition: condition(:expect_not_exist),
transaction_id: "transaction_id",
return_type: :pk
Options
:condition, required, please see condition/1 or condition/2 for details.
:return_type, optional, whether to return the primary keys after putting the row, available options are :pk | :none, by default it is :none.
:transaction_id, optional, write operation within local transaction.
Official document in Chinese | English
Example
import MyApp.TableStore
value = "1"
update_row "table1",
[{"key1", 2}, {"key2", "2"}],
delete: [{"attr2", nil, 1524464460}],
delete_all: ["attr1"],
put: [{"attr3", "put_attr3"}],
return_type: :pk,
condition: condition(:expect_exist, "attr2" == value)
update_row "table2",
[{"key1", 1}],
put: [{"attr1", "put_attr1"}],
increment: [{"count", 1}],
return_type: :after_modify,
return_columns: ["count"],
condition: condition(:ignore)
update_row "table3",
[partition_key],
put: [{"new_attr1", "a1"}],
delete_all: ["level", "size"],
condition: condition(:ignore),
transaction_id: "transaction_id"
Options
:put, optional, requires a valid value, e.g. [{"field1", "value"}, {...}], inserts a new column if the field does not exist, or overwrites it if it does.
:delete, optional, deletes a specific version of a column or columns, please pass the column's version (timestamp) in the :delete option, e.g. [{"field1", nil, 1524464460}, ...].
:delete_all, optional, deletes all versions of a column or columns, e.g. ["field1", "field2", ...].
:increment, optional, attribute column(s) based on atomic counters for increment or decrement, requires the value of the column to be an integer.
- for increment, increment: [{"count", 1}];
- for decrement, increment: [{"count", -1}].
:return_type, optional, whether to return the primary keys after updating the row, available options are :pk | :none | :after_modify, by default it is :none.
- if using atomic counters, return_type: :after_modify must be set.
:condition, required, please see condition/1 or condition/2 for details.
:transaction_id, optional, write operation within local transaction.
Used in batch write operation, please see batch_write/2
for details.
Options
The available options are the same as delete_row/4.
Used in batch write operation, please see batch_write/2
for details.
Options
The available options are the same as put_row/5.
Used in batch write operation, please see batch_write/2
for details.
Options
The available options are the same as update_row/4.
Local Transaction
Official document in Chinese | English
Example
import MyApp.TableStore
abort_transaction("transaction_id")
commit_transaction(instance, transaction_id)
commit_transaction(instance :: atom(), transaction_id :: String.t()) :: {:ok, map()} | {:error, ExAliyunOts.Error.t()}
Official document in Chinese | English
Example
import MyApp.TableStore
commit_transaction("transaction_id")
start_local_transaction(instance, table, partition_key)
start_local_transaction( instance :: atom(), table :: String.t(), partition_key :: tuple() ) :: {:ok, map()} | {:error, ExAliyunOts.Error.t()}
Official document in Chinese | English
Example
import MyApp.TableStore
partition_key = {"key", "key1"}
start_local_transaction("table", partition_key)
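Putting these together, a minimal sketch of a transactional write; it assumes the transaction id can be read from the start_local_transaction response (the field access shown here is an assumption):

import MyApp.TableStore

# Start a local transaction on the partition key.
{:ok, response} = start_local_transaction("table", {"key", "key1"})
# Assumed: the response exposes the transaction id as `transaction_id`.
transaction_id = response.transaction_id

# Write within the transaction by passing :transaction_id.
put_row "table",
  [{"key", "key1"}],
  [{"attr1", "value1"}],
  condition: condition(:ignore),
  transaction_id: transaction_id

# Commit, or use abort_transaction/1 to roll back.
commit_transaction(transaction_id)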
Search
Official document in Chinese | English
Example
import MyApp.TableStore
create_search_index "table", "index_name",
field_schemas: [
field_schema_keyword("name"),
field_schema_integer("age")
]
create_search_index "table", "index_name",
field_schemas: [
field_schema_keyword("name"),
field_schema_geo_point("location"),
field_schema_integer("value")
]
create_search_index "table", "index_name",
field_schemas: [
field_schema_nested(
"content",
field_schemas: [
field_schema_keyword("header"),
field_schema_keyword("body")
]
)
]
Options
:field_schemas, required, a list of predefined search-index schema fields, please see the following helper functions:
delete_search_index(instance, table, index_name)
delete_search_index( instance :: atom(), table :: String.t(), index_name :: String.t() ) :: {:ok, map()} | {:error, ExAliyunOts.Error.t()}
Official document in Chinese | English
Example
import MyApp.TableStore
delete_search_index("table", "index_name")
describe_search_index(instance, table, index_name)
describe_search_index( instance :: atom(), table :: String.t(), index_name :: String.t() ) :: {:ok, map()} | {:error, ExAliyunOts.Error.t()}
Official document in Chinese | English
Example
import MyApp.TableStore
describe_search_index("table", "index_name")
list_search_index(instance, table)
list_search_index(instance :: atom(), table :: String.t()) :: {:ok, map()} | {:error, ExAliyunOts.Error.t()}
Official document in Chinese | English
Example
import MyApp.TableStore
list_search_index("table")
The entry point for search index functions, please see the ExAliyunOts.Search module for details.
Official document in Chinese | English
Options
:search_query, required, the main option for Query and Sort.
- :query, required, binds to the Query functions:
  ExAliyunOts.Search.bool_query/1
  ExAliyunOts.Search.exists_query/1
  ExAliyunOts.Search.geo_bounding_box_query/3
  ExAliyunOts.Search.geo_distance_query/3
  ExAliyunOts.Search.geo_polygon_query/2
  ExAliyunOts.Search.match_all_query/0
  ExAliyunOts.Search.match_phrase_query/2
  ExAliyunOts.Search.match_query/3
  ExAliyunOts.Search.nested_query/3
  ExAliyunOts.Search.prefix_query/2
  ExAliyunOts.Search.range_query/2
  ExAliyunOts.Search.term_query/2
  ExAliyunOts.Search.terms_query/2
  ExAliyunOts.Search.wildcard_query/2
- :sort, optional, by default it uses pk_sort/1, binds to the Sort functions.
- :aggs, optional, please see the official document in Chinese | English.
- :group_bys, optional, please see the official document in Chinese | English.
- :limit, optional, the limited size of the query.
- :offset, optional, the offset size of the query. When the total number of rows is less than or equal to 2000, :limit and :offset can be used together for pagination.
- :get_total_count, optional, return the total count of all matched rows, by default it is true.
- :token, optional, when a single request does not load all the matched rows, a next_token value is returned in that result; pass it as :token in the next search query with the same condition to continue loading.
- :collapse, optional, removes duplicates by the specified field, please see the official document in Chinese; please NOTICE that currently :collapse cannot be used together with :token.
:columns_to_get, optional, fetch the specified fields, by default it returns all fields. The available options are:
- :all, return all attribute column fields;
- :none, do not return any attribute column fields;
- ["field1", "field2"], specifies the expected return attribute column fields.
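For reference, a minimal sketch that reuses the match_query/3 and field_sort options shown at the top of this page, together with :limit, :offset and :columns_to_get (the table, index and field names are illustrative):

import MyApp.TableStore

# Query an index and page through results sorted by "age".
search "table", "index_name",
  columns_to_get: ["name", "age"],
  search_query: [
    query: match_query("age", 28),
    sort: [field_sort("age", order: :desc)],
    limit: 10,
    offset: 0
  ]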