EctoTablestore.Repo.batch_write

batch_write(writes, options)

Specs

batch_write(writes, options()) :: {:ok, Keyword.t()} | {:error, term()}
when writes: [
       {operation :: :put,
        items :: [
          item ::
            {schema_entity :: Ecto.Schema.t(), options()}
            | {module :: Ecto.Schema.t(), ids :: list(), attrs :: list(),
               options()}
            | {changeset :: Ecto.Changeset.t(), operation :: Keyword.t()}
        ]}
       | {operation :: :update,
          items :: [
            changeset ::
              Ecto.Changeset.t() | {changeset :: Ecto.Changeset.t(), options()}
          ]}
       | {operation :: :delete,
          items :: [
            schema_entity ::
              Ecto.Schema.t()
              | {schema_entity :: Ecto.Schema.t(), options()}
              | {module :: Ecto.Schema.t(), ids :: list(), options()}
          ]}
     ]

Batch write several rows of data to one or more tables. From the client's perspective, this batch request puts multiple PutRow/DeleteRow/UpdateRow operations into a single request.

The server executes each operation and returns each result independently, and each operation consumes capacity units independently.

If a batch write request includes a transaction ID, all rows in that request can only be written to the table that matches the transaction ID.

Options

  • :transaction_id, write rows within a local transaction.

Example

The options of each :put, :delete, and :update operation are similar to those of ExAliyunOts.put_row/5, ExAliyunOts.delete_row/4, and ExAliyunOts.update_row/4 respectively, except that the :transaction_id option is given in the options of EctoTablestore.Repo.batch_write/2 itself.

By default, the :condition option must be set explicitly in each operation. The exception is a table defined with an auto increment primary key (aka a non-partitioned primary key), which is processed on the server side: the server logic MUST use condition: condition(:ignore), so this library internally forces condition: condition(:ignore), and the :condition option can be omitted in the PutRow operation of a batch write.
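
For instance, a minimal sketch (the Order and Log schemas and their fields are hypothetical, with Log defining a server-side auto increment primary key):

batch_write(
  put: [
    # a regular table requires an explicit :condition
    {%Order{id: "1001", desc: "foo"}, condition: condition(:expect_not_exist)},
    # a table with an auto increment primary key: :condition can be omitted,
    # condition(:ignore) is forced internally
    {%Log{content: "bar"}, return_type: :pk}
  ]
)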

If entity_full_match: true is set, all of the provided attribute-column field(s) of the schema entity are used in the column_condition of the condition filter, and row_existence: :EXPECT_EXIST is always used; by default, the entity_full_match option is false.
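
Conceptually, for a hypothetical schema entity %Schema{name: "foo", age: 28}, an operation with entity_full_match: true behaves roughly like the explicit condition below (a sketch, not the library's exact internal construction):

# with entity_full_match: true
{schema_entity, entity_full_match: true}
# behaves roughly like
{schema_entity, condition: condition(:expect_exist, "name" == "foo" and "age" == 28)}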

If a row is put with an auto increment primary key while entity_full_match: true is set, the entity_full_match: true option has no effect: this library internally forces condition: condition(:ignore).

batch_write([
  delete: [
    {schema_entity_1, condition: condition(:ignore)},
    {schema_entity_2, condition: condition(:expect_exist)}
  ],
  put: [
    {%Schema1{}, condition: condition(:expect_not_exist)},
    {%Schema2{}, condition: condition(:ignore)},
    {%Schema3WithAutoIncrementPK{}, return_type: :pk},
    {changeset_schema_1, condition: condition(:ignore)}
  ],
  update: [
    {changeset_schema_1, return_type: :pk},
    {changeset_schema_2, entity_full_match: true}
  ]
])
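
Per the spec above, the call returns {:ok, Keyword.t()} or {:error, term()}. A minimal sketch of handling the result (the per-item result shape inside each operation group is assumed):

case batch_write(put: [...], update: [...]) do
  {:ok, results} ->
    # results is a keyword list grouped by operation,
    # e.g. results[:put] and results[:update] (item shape assumed)
    results

  {:error, reason} ->
    {:error, reason}
end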

Use the :transaction_id option:

batch_write(
  [
    delete: [...],
    put: [...],
    update: [...]
  ],
  transaction_id: "..."
)
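
A minimal sketch of a full local-transaction flow, assuming the transaction ID is managed with ex_aliyun_ots (ExAliyunOts.start_local_transaction/3, ExAliyunOts.commit_transaction/2, and ExAliyunOts.abort_transaction/2; the :my_instance instance, table name, and partition key below are hypothetical):

{:ok, response} =
  ExAliyunOts.start_local_transaction(:my_instance, "table_name", {"partition_key", "key_value"})

transaction_id = response.transaction_id

result =
  batch_write(
    [
      delete: [...],
      put: [...],
      update: [...]
    ],
    transaction_id: transaction_id
  )

case result do
  {:ok, _results} -> ExAliyunOts.commit_transaction(:my_instance, transaction_id)
  {:error, _reason} -> ExAliyunOts.abort_transaction(:my_instance, transaction_id)
end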