google_api_dataproc v0.16.0 API Reference
Modules
API calls for all endpoints tagged Projects.
Handle Tesla connections for GoogleApi.Dataproc.V1.
Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine.
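As a hedged illustration, an accelerator config in the underlying REST API's JSON representation might look like the following (the project and zone in the type URI are placeholders):

```json
{
  "acceleratorTypeUri": "projects/my-project/zones/us-central1-a/acceleratorTypes/nvidia-tesla-k80",
  "acceleratorCount": 2
}
```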
Associates members with a role.
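A binding pairs a single role with the list of members it is granted to. A sketch in the REST JSON representation, with hypothetical identities:

```json
{
  "role": "roles/dataproc.viewer",
  "members": [
    "user:alice@example.com",
    "serviceAccount:ci-runner@my-project.iam.gserviceaccount.com"
  ]
}
```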
A request to cancel a job.
Describes the identifying information, config, and status of a cluster of Compute Engine instances.
The cluster config.
Contains cluster daemon metrics, such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
The cluster operation triggered by a workflow.
Metadata describing the operation.
The status of the operation.
A selector that chooses the target cluster for jobs based on metadata.
The status of a cluster and its instances.
A request to collect cluster diagnostic information.
The location of diagnostic output.
Specifies the config of disk options for a group of VM instances.
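For example, a disk config selecting an SSD boot disk plus one local SSD might look like this in the REST JSON representation (values are illustrative):

```json
{
  "bootDiskType": "pd-ssd",
  "bootDiskSizeGb": 100,
  "numLocalSsds": 1
}
```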
A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is empty JSON object {}.
Encryption settings for the cluster.
Represents an expression text. Example: title: "User account presence"; description: "Determines whether the request has a user account"; expression: "size(request.user) > 0".
Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster.
Request message for GetIamPolicy method.
Encapsulates settings provided to GetIamPolicy.
A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
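A minimal Hadoop job body might look like the following sketch in the REST JSON representation (the bucket paths and jar name are hypothetical):

```json
{
  "mainJarFileUri": "gs://my-bucket/jars/wordcount.jar",
  "args": ["gs://my-bucket/input/", "gs://my-bucket/output/"],
  "properties": {"mapreduce.job.reduces": "4"}
}
```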
A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group.
A request to instantiate a workflow template.
A Cloud Dataproc job resource.
Cloud Dataproc job config.
Encapsulates the full scoping used to reference a job.
Job scheduling options.
Cloud Dataproc job status.
Specifies Kerberos related configuration.
Specifies the cluster auto-delete schedule configuration.
The list of all clusters in a project.
A list of jobs in a project.
The response message for Operations.ListOperations.
A response to a request to list workflow templates in a project.
The runtime logging config of the job.
Cluster that is managed by the workflow.
Specifies the resources used to actively manage an instance group.
Specifies an executable to run on a fully configured node and a timeout period for executable completion.
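A sketch of an initialization action in the REST JSON representation, assuming a hypothetical script path; the timeout is a duration string:

```json
{
  "executableFile": "gs://my-bucket/scripts/install-deps.sh",
  "executionTimeout": "600s"
}
```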
This resource represents a long-running operation that is the result of a network API call.
A job executed by the workflow.
Configuration for parameter validation.
A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
Defines an Identity and Access Management (IAM) policy. It is used to specify access control policies for Cloud Platform resources. A Policy consists of a list of bindings. A binding binds a list of members to a role, where the members can be user accounts, Google groups, Google domains, and service accounts. A role is a named list of permissions defined by IAM.
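An illustrative policy in JSON form, with a hypothetical group and a placeholder etag:

```json
{
  "bindings": [
    {
      "role": "roles/dataproc.editor",
      "members": ["group:data-team@example.com"]
    }
  ],
  "etag": "BwWWja0YfJA="
}
```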
A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
A list of queries to run on a cluster.
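A query list is simply an ordered array of query strings, run in sequence. For example (illustrative Hive-style queries):

```json
{
  "queries": [
    "CREATE TABLE IF NOT EXISTS logs (line STRING)",
    "SELECT COUNT(*) FROM logs"
  ]
}
```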
Validation based on regular expressions.
Security related configuration, including Kerberos.
Request message for SetIamPolicy method.
Specifies the selection and config of software inside the cluster.
A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC (https://github.com/grpc). Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide (https://cloud.google.com/apis/design/errors).
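A sketch of a Status payload in JSON: the code is a google.rpc.Code value (3 is INVALID_ARGUMENT), and the message text here is illustrative:

```json
{
  "code": 3,
  "message": "Invalid cluster name",
  "details": []
}
```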
A request to submit a job.
A configurable parameter that replaces one or more fields in the template. Parameterizable fields: - Labels - File uris - Job properties - Job arguments - Script variables - Main class (in HadoopJob and SparkJob) - Zone (in ClusterSelector)
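As a hedged sketch, a template parameter that substitutes the zone in a ClusterSelector and restricts it to an allowed-value list might look like this in the REST JSON representation (the field path and zone values are illustrative):

```json
{
  "name": "ZONE",
  "fields": ["placement.clusterSelector.zone"],
  "description": "The zone to select clusters in",
  "validation": {
    "values": {"values": ["us-central1-a", "us-central1-b"]}
  }
}
```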
Request message for TestIamPermissions method.
Response message for TestIamPermissions method.
Validation based on a list of allowed values.
The workflow graph.
A Cloud Dataproc workflow template resource.
The workflow node.
A Cloud Dataproc workflow template resource.
Specifies workflow execution target. Either managed_cluster or cluster_selector is required.
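For instance, a placement targeting an existing cluster via label selection could look like the following sketch (the zone and labels are illustrative); a managedCluster object would be supplied instead to have the workflow create and tear down its own cluster:

```json
{
  "clusterSelector": {
    "zone": "us-central1-a",
    "clusterLabels": {"env": "staging"}
  }
}
```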
A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto. Beta Feature: This report is available for testing purposes only. It may be changed before final release.