EctoTablestore.Repo.batch_write
Specs
batch_write(writes, options()) :: {:ok, Keyword.t()} | {:error, term()}
when writes: [
       {operation :: :put,
        items :: [
          item ::
            {schema_entity :: Ecto.Schema.t(), options()}
            | {module :: Ecto.Schema.t(), ids :: list(), attrs :: list(), options()}
            | {changeset :: Ecto.Changeset.t(), operation :: Keyword.t()}
        ]}
       | {operation :: :update,
          items :: [
            changeset :: Ecto.Changeset.t() | {changeset :: Ecto.Changeset.t(), options()}
          ]}
       | {operation :: :delete,
          items :: [
            schema_entity :: Ecto.Schema.t()
            | {schema_entity :: Ecto.Schema.t(), options()}
            | {module :: Ecto.Schema.t(), ids :: list(), options()}
          ]}
     ]
Batch writes several rows of data to one or more tables. From the client's perspective, this batch request packs multiple put_row/delete_row/update_row operations into a single request.
On the server side, each operation is executed independently: results are returned independently, and capacity units are consumed independently.
If a batch write request includes a transaction ID, all rows in that request can only be written to the table that matches the transaction ID.
Options
transaction_id - use a local transaction.
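A hedged sketch of how :transaction_id might be passed (MyRepo, the Order schema, and the way transaction_id is obtained are illustrative assumptions, not part of this API's documentation; the transaction ID is assumed to come from a previously started local transaction, e.g. via ex_aliyun_ots):

```elixir
import EctoTablestore.Query, only: [condition: 1]

# Assumption: `transaction_id` was obtained from a local transaction
# started beforehand on the target table's partition key.
{:ok, results} =
  MyRepo.batch_write(
    [
      put: [
        {%Order{id: "1001", desc: "first"}, condition: condition(:ignore)}
      ]
    ],
    transaction_id: transaction_id
  )
```

Because of the rule above, every row in this batch must target the table that matches transaction_id.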
Example
The options of each :put, :delete, and :update operation are similar to those of ExAliyunOts.put_row/5, ExAliyunOts.delete_row/4, and ExAliyunOts.update_row/4, but the transaction_id option is used in the options of EctoTablestore.Repo.batch_write/2.
batch_write([
delete: [
schema_entity_1,
schema_entity_2
],
put: [
{%Schema2{}, condition: condition(:ignore)},
{%Schema1{}, condition: condition(:expect_not_exist)},
{changeset_schema_1, condition: condition(:ignore)}
],
update: [
{changeset_schema_1, return_type: :pk},
    changeset_schema_2
]
])
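On success the call returns {:ok, Keyword.t()}, keyed by operation. A minimal sketch of consuming that result (MyRepo and `writes` are placeholders, and the per-item shape of {:ok, entity} | {:error, reason} is an assumption here, not a documented guarantee):

```elixir
case MyRepo.batch_write(writes) do
  {:ok, results} ->
    # `results` is a keyword list keyed by operation,
    # e.g. results[:put], results[:update], results[:delete].
    for {:ok, entity} <- Keyword.get(results, :put, []) do
      entity
    end

  {:error, reason} ->
    raise "batch_write failed: #{inspect(reason)}"
end
```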