baiji v0.6.5 Baiji.Glue

Defines service operations used by the GlueFrontendService
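
Each operation takes an input map (defaulting to %{}) plus a keyword list of options and returns an operation value describing the corresponding AWS Glue API call. The sketch below shows the general calling pattern; it assumes the returned operation is executed with Baiji.perform/1, so swap in however your application actually runs Baiji operations.

    # List the databases in the account's Data Catalog.
    # Baiji.Glue.get_databases/2 only builds the operation; executing it
    # (here via the assumed Baiji.perform/1) sends the request to AWS.
    Baiji.Glue.get_databases()
    |> Baiji.perform()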

Link to this section Summary

Functions

Returns a map containing the input/output shapes for this endpoint

Outputs values common to all actions

Creates one or more partitions in a batch operation

Deletes a list of connection definitions from the Data Catalog

Deletes one or more partitions in a batch operation

Deletes multiple tables at once

Retrieves partitions in a batch request

Creates a Classifier in the user’s account

Creates a connection definition in the Data Catalog

Creates a new Crawler with specified targets, role, configuration, and optional schedule. At least one crawl target must be specified, in either the s3Targets or the jdbcTargets field

Creates a new database in a Data Catalog

Creates a new DevEndpoint

Creates a new job

Creates a new partition

Transforms a directed acyclic graph (DAG) into a Python script

Creates a new table definition in the Data Catalog

Creates a new trigger

Creates a new function definition in the Data Catalog

Removes a Classifier from the metadata store

Deletes a connection from the Data Catalog

Removes a specified Crawler from the metadata store, unless the Crawler state is RUNNING

Removes a specified Database from a Data Catalog

Deletes a specified DevEndpoint

Deletes a specified job

Deletes a specified partition

Removes a table definition from the Data Catalog

Deletes a specified trigger

Deletes an existing function definition from the Data Catalog

Retrieves the status of a migration operation

Retrieves a Classifier by name

Lists all Classifier objects in the metadata store

Retrieves a connection definition from the Data Catalog

Retrieves a list of connection definitions from the Data Catalog

Retrieves metadata for a specified Crawler

Retrieves metrics about specified crawlers

Retrieves metadata for all Crawlers defined in the customer account

Retrieves the definition of a specified database

Retrieves all Databases defined in a given Data Catalog

Transforms a Python script into a directed acyclic graph (DAG)

Retrieves information about a specified DevEndpoint

Retrieves all the DevEndpoints in this AWS account

Retrieves an existing job definition

Retrieves the metadata for a given job run

Retrieves metadata for all runs of a given job

Retrieves all current jobs

Creates mappings

Retrieves information about a specified partition

Retrieves information about the partitions in a table

Gets a Python script to perform a specified mapping

Retrieves the Table definition in a Data Catalog for a specified table

Retrieves a list of strings that identify available versions of a specified table

Retrieves the definitions of some or all of the tables in a given Database

Retrieves the definition of a trigger

Gets all the triggers associated with a job

Retrieves a specified function definition from the Data Catalog

Retrieves multiple function definitions from the Data Catalog

Imports an existing Athena Data Catalog to AWS Glue

Resets a bookmark entry

Starts a crawl using the specified Crawler, regardless of what is scheduled. If the Crawler is already running, does nothing

Changes the schedule state of the specified crawler to SCHEDULED, unless the crawler is already running or the schedule state is already SCHEDULED

Runs a job

Starts an existing trigger

If the specified Crawler is running, stops the crawl

Sets the schedule state of the specified crawler to NOT_SCHEDULED, but does not stop the crawler if it is already running

Stops a specified trigger

Modifies an existing Classifier

Updates a connection definition in the Data Catalog

Updates a Crawler. If a Crawler is running, you must stop it using StopCrawler before updating it

Updates the schedule of a crawler using a Cron expression

Updates an existing database definition in a Data Catalog

Updates a specified DevEndpoint

Updates an existing job definition

Updates a partition

Updates a metadata table in the Data Catalog

Updates a trigger definition

Updates an existing function definition in the Data Catalog

Link to this section Functions

Returns a map containing the input/output shapes for this endpoint

Outputs values common to all actions

Link to this function batch_create_partition(input \\ %{}, options \\ [])

Creates one or more partitions in a batch operation.
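
A minimal sketch of a batch partition creation, assuming the input map mirrors the AWS Glue BatchCreatePartition request shape (DatabaseName, TableName, and a PartitionInputList whose entries give the partition key Values); the catalog names are placeholders:

    Baiji.Glue.batch_create_partition(%{
      "DatabaseName" => "sales_db",        # hypothetical database name
      "TableName" => "orders",             # hypothetical table name
      "PartitionInputList" => [
        %{"Values" => ["2017", "11", "01"]},
        %{"Values" => ["2017", "11", "02"]}
      ]
    })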

Link to this function batch_delete_connection(input \\ %{}, options \\ [])

Deletes a list of connection definitions from the Data Catalog.

Link to this function batch_delete_partition(input \\ %{}, options \\ [])

Deletes one or more partitions in a batch operation.

Link to this function batch_delete_table(input \\ %{}, options \\ [])

Deletes multiple tables at once.

Link to this function batch_get_partition(input \\ %{}, options \\ [])

Retrieves partitions in a batch request.

Link to this function create_classifier(input \\ %{}, options \\ [])

Creates a Classifier in the user’s account.

Link to this function create_connection(input \\ %{}, options \\ [])

Creates a connection definition in the Data Catalog.

Link to this function create_crawler(input \\ %{}, options \\ [])

Creates a new Crawler with specified targets, role, configuration, and optional schedule. At least one crawl target must be specified, in either the s3Targets or the jdbcTargets field.
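
A minimal sketch of a crawler definition, assuming the input map uses the AWS Glue CreateCrawler field names (Name, Role, DatabaseName, Targets/S3Targets, Schedule); the role, bucket, and names below are placeholders:

    Baiji.Glue.create_crawler(%{
      "Name" => "sales-crawler",                     # hypothetical crawler name
      "Role" => "AWSGlueServiceRole-Sales",          # hypothetical IAM role
      "DatabaseName" => "sales_db",
      "Targets" => %{"S3Targets" => [%{"Path" => "s3://example-bucket/sales/"}]},
      "Schedule" => "cron(0 2 * * ? *)"              # optional: run daily at 02:00 UTC
    })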

Link to this function create_database(input \\ %{}, options \\ [])

Creates a new database in a Data Catalog.
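
A minimal input sketch, assuming the map mirrors the AWS Glue CreateDatabase request shape (a DatabaseInput structure with at least a Name); the names below are placeholders:

    Baiji.Glue.create_database(%{
      "DatabaseInput" => %{
        "Name" => "sales_db",                        # hypothetical database name
        "Description" => "Tables discovered by the sales crawler"
      }
    })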

Link to this function create_dev_endpoint(input \\ %{}, options \\ [])

Creates a new DevEndpoint.

Link to this function create_job(input \\ %{}, options \\ [])

Creates a new job.
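
A sketch of a minimal ETL job definition, assuming the AWS Glue CreateJob field names (Name, Role, Command) and placeholder role and script values:

    Baiji.Glue.create_job(%{
      "Name" => "sales-etl",                         # hypothetical job name
      "Role" => "AWSGlueServiceRole-Sales",          # hypothetical IAM role
      "Command" => %{
        "Name" => "glueetl",
        "ScriptLocation" => "s3://example-bucket/scripts/sales_etl.py"
      }
    })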

Link to this function create_partition(input \\ %{}, options \\ [])

Creates a new partition.

Link to this function create_script(input \\ %{}, options \\ [])

Transforms a directed acyclic graph (DAG) into a Python script.

Link to this function create_table(input \\ %{}, options \\ [])

Creates a new table definition in the Data Catalog.
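
A sketch of a table definition, assuming the AWS Glue CreateTable request shape (DatabaseName plus a TableInput with a StorageDescriptor); the column list and S3 location are placeholders:

    Baiji.Glue.create_table(%{
      "DatabaseName" => "sales_db",
      "TableInput" => %{
        "Name" => "orders",
        "StorageDescriptor" => %{
          "Columns" => [%{"Name" => "order_id", "Type" => "string"}],
          "Location" => "s3://example-bucket/sales/orders/"
        }
      }
    })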

Link to this function create_trigger(input \\ %{}, options \\ [])

Creates a new trigger.
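
A sketch of a scheduled trigger that starts a job, assuming the AWS Glue CreateTrigger fields (Name, Type, Schedule, Actions) and a placeholder job name:

    Baiji.Glue.create_trigger(%{
      "Name" => "nightly-sales-trigger",             # hypothetical trigger name
      "Type" => "SCHEDULED",
      "Schedule" => "cron(0 3 * * ? *)",             # 03:00 UTC daily
      "Actions" => [%{"JobName" => "sales-etl"}]
    })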

Link to this function create_user_defined_function(input \\ %{}, options \\ [])

Creates a new function definition in the Data Catalog.

Link to this function delete_classifier(input \\ %{}, options \\ [])

Removes a Classifier from the metadata store.

Link to this function delete_connection(input \\ %{}, options \\ [])

Deletes a connection from the Data Catalog.

Link to this function delete_crawler(input \\ %{}, options \\ [])

Removes a specified Crawler from the metadata store, unless the Crawler state is RUNNING.

Link to this function delete_database(input \\ %{}, options \\ [])

Removes a specified Database from a Data Catalog.

Link to this function delete_dev_endpoint(input \\ %{}, options \\ [])

Deletes a specified DevEndpoint.

Link to this function delete_job(input \\ %{}, options \\ [])

Deletes a specified job.

Link to this function delete_partition(input \\ %{}, options \\ [])

Deletes a specified partition.

Link to this function delete_table(input \\ %{}, options \\ [])

Removes a table definition from the Data Catalog.

Link to this function delete_trigger(input \\ %{}, options \\ [])

Deletes a specified trigger.

Link to this function delete_user_defined_function(input \\ %{}, options \\ [])

Deletes an existing function definition from the Data Catalog.

Link to this function get_catalog_import_status(input \\ %{}, options \\ [])

Retrieves the status of a migration operation.

Link to this function get_classifier(input \\ %{}, options \\ [])

Retrieves a Classifier by name.

Link to this function get_classifiers(input \\ %{}, options \\ [])

Lists all Classifier objects in the metadata store.

Link to this function get_connection(input \\ %{}, options \\ [])

Retrieves a connection definition from the Data Catalog.

Link to this function get_connections(input \\ %{}, options \\ [])

Retrieves a list of connection definitions from the Data Catalog.

Link to this function get_crawler(input \\ %{}, options \\ [])

Retrieves metadata for a specified Crawler.

Link to this function get_crawler_metrics(input \\ %{}, options \\ [])

Retrieves metrics about specified crawlers.

Link to this function get_crawlers(input \\ %{}, options \\ [])

Retrieves metadata for all Crawlers defined in the customer account.

Link to this function get_database(input \\ %{}, options \\ [])

Retrieves the definition of a specified database.

Link to this function get_databases(input \\ %{}, options \\ [])

Retrieves all Databases defined in a given Data Catalog.

Link to this function get_dataflow_graph(input \\ %{}, options \\ [])

Transforms a Python script into a directed acyclic graph (DAG).

Link to this function get_dev_endpoint(input \\ %{}, options \\ [])

Retrieves information about a specified DevEndpoint.

Link to this function get_dev_endpoints(input \\ %{}, options \\ [])

Retrieves all the DevEndpoints in this AWS account.

Link to this function get_job(input \\ %{}, options \\ [])

Retrieves an existing job definition.

Link to this function get_job_run(input \\ %{}, options \\ [])

Retrieves the metadata for a given job run.

Link to this function get_job_runs(input \\ %{}, options \\ [])

Retrieves metadata for all runs of a given job.

Link to this function get_jobs(input \\ %{}, options \\ [])

Retrieves all current jobs.

Link to this function get_mapping(input \\ %{}, options \\ [])

Creates mappings.

Link to this function get_partition(input \\ %{}, options \\ [])

Retrieves information about a specified partition.

Link to this function get_partitions(input \\ %{}, options \\ [])

Retrieves information about the partitions in a table.
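
A sketch of a filtered partition listing, assuming the AWS Glue GetPartitions fields (DatabaseName, TableName, and an optional Expression predicate over the partition keys); the names are placeholders:

    Baiji.Glue.get_partitions(%{
      "DatabaseName" => "sales_db",
      "TableName" => "orders",
      "Expression" => "year='2017' AND month='11'"   # optional partition filter
    })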

Link to this function get_plan(input \\ %{}, options \\ [])

Gets a Python script to perform a specified mapping.

Link to this function get_table(input \\ %{}, options \\ [])

Retrieves the Table definition in a Data Catalog for a specified table.

Link to this function get_table_versions(input \\ %{}, options \\ [])

Retrieves a list of strings that identify available versions of a specified table.

Link to this function get_tables(input \\ %{}, options \\ [])

Retrieves the definitions of some or all of the tables in a given Database.
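
A minimal sketch, assuming the AWS Glue GetTables fields, where Expression is an optional pattern used to filter table names:

    Baiji.Glue.get_tables(%{
      "DatabaseName" => "sales_db",                  # hypothetical database name
      "Expression" => "orders.*"                     # optional name filter
    })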

Link to this function get_trigger(input \\ %{}, options \\ [])

Retrieves the definition of a trigger.

Link to this function get_triggers(input \\ %{}, options \\ [])

Gets all the triggers associated with a job.

Link to this function get_user_defined_function(input \\ %{}, options \\ [])

Retrieves a specified function definition from the Data Catalog.

Link to this function get_user_defined_functions(input \\ %{}, options \\ [])

Retrieves multiple function definitions from the Data Catalog.

Link to this function import_catalog_to_glue(input \\ %{}, options \\ [])

Imports an existing Athena Data Catalog to AWS Glue.

Link to this function reset_job_bookmark(input \\ %{}, options \\ [])

Resets a bookmark entry.

Link to this function start_crawler(input \\ %{}, options \\ [])

Starts a crawl using the specified Crawler, regardless of what is scheduled. If the Crawler is already running, does nothing.
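
A minimal sketch, assuming StartCrawler takes the crawler's Name as in the AWS Glue API; the name is a placeholder:

    Baiji.Glue.start_crawler(%{"Name" => "sales-crawler"})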

Link to this function start_crawler_schedule(input \\ %{}, options \\ [])

Changes the schedule state of the specified crawler to SCHEDULED, unless the crawler is already running or the schedule state is already SCHEDULED.

Link to this function start_job_run(input \\ %{}, options \\ [])

Runs a job.
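
A sketch assuming the AWS Glue StartJobRun fields (JobName plus optional Arguments passed to the job script); the job name and argument are placeholders:

    Baiji.Glue.start_job_run(%{
      "JobName" => "sales-etl",
      "Arguments" => %{"--target_date" => "2017-11-01"}   # hypothetical job argument
    })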

Link to this function start_trigger(input \\ %{}, options \\ [])

Starts an existing trigger.

Link to this function stop_crawler(input \\ %{}, options \\ [])

If the specified Crawler is running, stops the crawl.

Link to this function stop_crawler_schedule(input \\ %{}, options \\ [])

Sets the schedule state of the specified crawler to NOT_SCHEDULED, but does not stop the crawler if it is already running.

Link to this function stop_trigger(input \\ %{}, options \\ [])

Stops a specified trigger.

Link to this function update_classifier(input \\ %{}, options \\ [])

Modifies an existing Classifier.

Link to this function update_connection(input \\ %{}, options \\ [])

Updates a connection definition in the Data Catalog.

Link to this function update_crawler(input \\ %{}, options \\ [])

Updates a Crawler. If a Crawler is running, you must stop it using StopCrawler before updating it.

Link to this function update_crawler_schedule(input \\ %{}, options \\ [])

Updates the schedule of a crawler using a Cron expression.
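
A minimal sketch, assuming the AWS Glue UpdateCrawlerSchedule fields, where Schedule is a cron expression in the AWS cron format; the crawler name is a placeholder:

    Baiji.Glue.update_crawler_schedule(%{
      "CrawlerName" => "sales-crawler",
      "Schedule" => "cron(30 1 * * ? *)"             # run at 01:30 UTC daily
    })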

Link to this function update_database(input \\ %{}, options \\ [])

Updates an existing database definition in a Data Catalog.

Link to this function update_dev_endpoint(input \\ %{}, options \\ [])

Updates a specified DevEndpoint.

Link to this function update_job(input \\ %{}, options \\ [])

Updates an existing job definition.

Link to this function update_partition(input \\ %{}, options \\ [])

Updates a partition.

Link to this function update_table(input \\ %{}, options \\ [])

Updates a metadata table in the Data Catalog.

Link to this function update_trigger(input \\ %{}, options \\ [])

Updates a trigger definition.

Link to this function update_user_defined_function(input \\ %{}, options \\ [])

Updates an existing function definition in the Data Catalog.