baiji v0.6.5 Baiji.Glue
Defines service operations used by the GlueFrontendService
Summary
Functions
Returns a map containing the input/output shapes for this endpoint
Outputs values common to all actions
Creates one or more partitions in a batch operation
Deletes a list of connection definitions from the Data Catalog
Deletes one or more partitions in a batch operation
Deletes multiple tables at once
Retrieves partitions in a batch request
Creates a Classifier in the user’s account
Creates a connection definition in the Data Catalog
Creates a new Crawler with specified targets, role, configuration, and optional schedule. At least one crawl target must be specified, in either the s3Targets or the jdbcTargets field
Creates a new database in a Data Catalog
Creates a new DevEndpoint
Creates a new job
Creates a new partition
Transforms a directed acyclic graph (DAG) into a Python script
Creates a new table definition in the Data Catalog
Creates a new trigger
Creates a new function definition in the Data Catalog
Removes a Classifier from the metadata store
Deletes a connection from the Data Catalog
Removes a specified Crawler from the metadata store, unless the Crawler state is RUNNING
Removes a specified Database from a Data Catalog
Deletes a specified DevEndpoint
Deletes a specified job
Deletes a specified partition
Removes a table definition from the Data Catalog
Deletes a specified trigger
Deletes an existing function definition from the Data Catalog
Retrieves the status of a migration operation
Retrieves a Classifier by name
Lists all Classifier objects in the metadata store
Retrieves a connection definition from the Data Catalog
Retrieves a list of connection definitions from the Data Catalog
Retrieves metadata for a specified Crawler
Retrieves metrics about specified crawlers
Retrieves metadata for all Crawlers defined in the customer account
Retrieves the definition of a specified database
Retrieves all Databases defined in a given Data Catalog
Transforms a Python script into a directed acyclic graph (DAG)
Retrieves information about a specified DevEndpoint
Retrieves all the DevEndpoints in this AWS account
Retrieves an existing job definition
Retrieves the metadata for a given job run
Retrieves metadata for all runs of a given job
Retrieves all current jobs
Creates mappings
Retrieves information about a specified partition
Retrieves information about the partitions in a table
Gets a Python script to perform a specified mapping
Retrieves the Table definition in a Data Catalog for a specified table
Retrieves a list of strings that identify available versions of a specified table
Retrieves the definitions of some or all of the tables in a given Database
Retrieves the definition of a trigger
Gets all the triggers associated with a job
Retrieves a specified function definition from the Data Catalog
Retrieves multiple function definitions from the Data Catalog
Imports an existing Athena Data Catalog to AWS Glue
Resets a bookmark entry
Starts a crawl using the specified Crawler, regardless of what is scheduled. If the Crawler is already running, does nothing
Changes the schedule state of the specified crawler to SCHEDULED, unless the crawler is already running or the schedule state is already SCHEDULED
Runs a job
Starts an existing trigger
If the specified Crawler is running, stops the crawl
Sets the schedule state of the specified crawler to NOT_SCHEDULED, but does not stop the crawler if it is already running
Stops a specified trigger
Modifies an existing Classifier
Updates a connection definition in the Data Catalog
Updates a Crawler. If a Crawler is running, you must stop it using StopCrawler before updating it
Updates the schedule of a crawler using a Cron expression
Updates an existing database definition in a Data Catalog
Updates a specified DevEndpoint
Updates an existing job definition
Updates a partition
Updates a metadata table in the Data Catalog
Updates a trigger definition
Updates an existing function definition in the Data Catalog
Functions
Returns a map containing the input/output shapes for this endpoint
Outputs values common to all actions
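The operations listed below all follow the same build-and-execute pattern. As a minimal sketch of that pattern, assuming the Baiji convention of functions that take an input map and return an operation which is then run with Baiji.perform/1 (the function name, the execution helper, and the decoded response shape shown here are assumptions, not taken from this page):

    # Minimal usage sketch; names below are assumptions based on Baiji's
    # general conventions and the AWS Glue GetDatabases operation.
    operation = Baiji.Glue.get_databases(%{})   # build the operation from an input map
    {:ok, response} = Baiji.perform(operation)  # execution helper assumed; check the library README
    IO.inspect(response["DatabaseList"])        # AWS Glue returns a DatabaseList in this response

Credentials and region are assumed to be configured elsewhere in the application.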
Creates one or more partitions in a batch operation.
Deletes a list of connection definitions from the Data Catalog.
Deletes one or more partitions in a batch operation.
Deletes multiple tables at once.
Retrieves partitions in a batch request.
Creates a Classifier in the user’s account.
Creates a connection definition in the Data Catalog.
Creates a new Crawler with specified targets, role, configuration, and optional schedule. At least one crawl target must be specified, in either the s3Targets or the jdbcTargets field.
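A hedged sketch of what the input for this operation might look like. The field names (Name, Role, DatabaseName, Targets, Schedule) follow the AWS Glue CreateCrawler API; the Elixir function name and the Baiji.perform/1 call are assumptions:

    # Illustrative CreateCrawler input; all values are examples.
    input = %{
      "Name" => "sales-data-crawler",
      "Role" => "arn:aws:iam::123456789012:role/GlueCrawlerRole",
      "DatabaseName" => "sales",
      "Targets" => %{
        # At least one of S3Targets or JdbcTargets must be present.
        "S3Targets" => [%{"Path" => "s3://example-bucket/sales/"}]
      },
      "Schedule" => "cron(0 12 * * ? *)"   # optional cron schedule
    }

    Baiji.Glue.create_crawler(input) |> Baiji.perform()   # function/helper names assumed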
Creates a new database in a Data Catalog.
Creates a new DevEndpoint.
Creates a new job.
Creates a new partition.
Transforms a directed acyclic graph (DAG) into a Python script.
Creates a new table definition in the Data Catalog.
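The table definition is passed as a TableInput structure scoped to a database. A rough sketch follows; the field names track the AWS Glue CreateTable API, while the Elixir call itself is an assumption:

    input = %{
      "DatabaseName" => "sales",
      "TableInput" => %{
        "Name" => "orders",
        "StorageDescriptor" => %{
          "Columns" => [
            %{"Name" => "order_id", "Type" => "string"},
            %{"Name" => "amount", "Type" => "double"}
          ],
          "Location" => "s3://example-bucket/sales/orders/",
          "InputFormat" => "org.apache.hadoop.mapred.TextInputFormat",
          "OutputFormat" => "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
          "SerdeInfo" => %{
            "SerializationLibrary" => "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"
          }
        },
        "PartitionKeys" => [%{"Name" => "dt", "Type" => "string"}]
      }
    }

    Baiji.Glue.create_table(input) |> Baiji.perform()   # function/helper names assumed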
Creates a new trigger.
Creates a new function definition in the Data Catalog.
Removes a Classifier from the metadata store.
Deletes a connection from the Data Catalog.
Removes a specified Crawler from the metadata store, unless the Crawler state is RUNNING.
Removes a specified Database from a Data Catalog.
Deletes a specified DevEndpoint.
Deletes a specified job.
Deletes a specified partition.
Removes a table definition from the Data Catalog.
Deletes a specified trigger.
Deletes an existing function definition from the Data Catalog.
Retrieves the status of a migration operation.
Retrieves a Classifier by name.
Lists all Classifier objects in the metadata store.
Retrieves a connection definition from the Data Catalog.
Retrieves a list of connection definitions from the Data Catalog.
Retrieves metadata for a specified Crawler.
Retrieves metrics about specified crawlers.
Retrieves metadata for all Crawlers defined in the customer account.
Retrieves the definition of a specified database.
Retrieves all Databases defined in a given Data Catalog.
Transforms a Python script into a directed acyclic graph (DAG).
Retrieves information about a specified DevEndpoint.
Retrieves all the DevEndpoints in this AWS account.
Retrieves an existing job definition.
Retrieves the metadata for a given job run.
Retrieves metadata for all runs of a given job.
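As an illustrative sketch, fetching the runs of a job could look like the following. JobName and MaxResults follow the AWS Glue GetJobRuns API; the function name, execution helper, and decoded response shape are assumptions:

    input = %{"JobName" => "nightly-etl", "MaxResults" => 10}   # example values

    {:ok, %{"JobRuns" => runs}} =
      Baiji.Glue.get_job_runs(input) |> Baiji.perform()         # calling pattern assumed

    Enum.each(runs, fn run ->
      IO.puts("#{run["Id"]}: #{run["JobRunState"]}")            # AWS Glue job run fields
    end)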
Retrieves all current jobs.
Creates mappings.
Retrieves information about a specified partition.
Retrieves information about the partitions in a table.
Gets a Python script to perform a specified mapping.
Retrieves the Table definition in a Data Catalog for a specified table.
Retrieves a list of strings that identify available versions of a specified table.
Retrieves the definitions of some or all of the tables in a given Database.
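Table listing is scoped to a database and can be filtered and paginated. A minimal sketch, assuming the conventional function name and a response decoded into a map with string keys:

    input = %{
      "DatabaseName" => "sales",
      "Expression" => "orders*",   # optional table name pattern
      "MaxResults" => 50
    }

    {:ok, response} = Baiji.Glue.get_tables(input) |> Baiji.perform()   # names assumed
    table_names = Enum.map(response["TableList"], & &1["Name"])
    # When response["NextToken"] is present, pass it back in to fetch the next page.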
Retrieves the definition of a trigger.
Gets all the triggers associated with a job.
Retrieves a specified function definition from the Data Catalog.
Retrieves multiple function definitions from the Data Catalog.
Imports an existing Athena Data Catalog to AWS Glue.
Resets a bookmark entry.
Starts a crawl using the specified Crawler, regardless of what is scheduled. If the Crawler is already running, does nothing.
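Starting a crawl only needs the crawler name. A one-line sketch under the same naming assumptions as the examples above:

    # No-op if the crawler is already running, per the description above.
    Baiji.Glue.start_crawler(%{"Name" => "sales-data-crawler"}) |> Baiji.perform()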
Changes the schedule state of the specified crawler to SCHEDULED, unless the crawler is already running or the schedule state is already SCHEDULED.
Runs a job.
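The underlying AWS operation here is StartJobRun, so the snippet below assumes a function name derived from it. The argument keys and the JobRunId response field follow the AWS Glue API, but the Elixir call and result shape are assumptions:

    input = %{
      "JobName" => "nightly-etl",
      "Arguments" => %{"--source" => "s3://example-bucket/raw/"}   # example job arguments
    }

    {:ok, %{"JobRunId" => run_id}} =
      Baiji.Glue.start_job_run(input) |> Baiji.perform()           # names assumed
    IO.puts("started run #{run_id}")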
Starts an existing trigger.
If the specified Crawler is running, stops the crawl.
Sets the schedule state of the specified crawler to NOT_SCHEDULED, but does not stop the crawler if it is already running.
Stops a specified trigger.
Modifies an existing Classifier.
Updates a connection definition in the Data Catalog.
Updates a Crawler. If a Crawler is running, you must stop it using StopCrawler before updating it.
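Because a running Crawler has to be stopped before it can be updated, an update typically pairs the two calls. A rough sketch under the same naming assumptions as the earlier examples:

    # Stop the crawler if it is running, then apply the update.
    Baiji.Glue.stop_crawler(%{"Name" => "sales-data-crawler"}) |> Baiji.perform()

    Baiji.Glue.update_crawler(%{
      "Name" => "sales-data-crawler",
      "Schedule" => "cron(0 3 * * ? *)"   # e.g. move the schedule to 03:00 UTC
    })
    |> Baiji.perform()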
Updates the schedule of a crawler using a Cron expression.
Updates an existing database definition in a Data Catalog.
Updates a specified DevEndpoint.
Updates an existing job definition.
Updates a partition.
Updates a metadata table in the Data Catalog.
Updates a trigger definition.
Updates an existing function definition in the Data Catalog.