API Reference google_api_data_labeling v0.2.1

Modules

API client metadata for GoogleApi.DataLabeling.V1beta1.

API calls for all endpoints tagged Projects.

Handle Tesla connections for GoogleApi.DataLabeling.V1beta1.
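As a rough sketch of how these modules fit together: the Connection module builds the Tesla client that the Projects functions consume. The access-token source and the exact datalabeling_projects_* function name and arguments below are assumptions; check the Projects module for the generated signatures.

```elixir
# Build an authenticated Tesla connection from an OAuth2 access token
# (for example, one fetched with a library such as goth).
token = System.get_env("GCP_ACCESS_TOKEN")
conn = GoogleApi.DataLabeling.V1beta1.Connection.new(token)

# Hypothetical call into the Projects API module; the function name and
# parameter shape follow the generated naming pattern but may differ.
case GoogleApi.DataLabeling.V1beta1.Api.Projects.datalabeling_projects_datasets_list(
       conn,
       "my-project"
     ) do
  {:ok, response} -> IO.inspect(response, label: "datasets")
  {:error, env} -> IO.inspect(env, label: "request failed")
end
```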

Export destination of the data. Only a gcs path is allowed in output_uri.

Configuration for how a human labeling task should be done.

Metadata of a labeling operation, such as LabelImage or LabelVideo.

AnnotatedDataset is a set holding annotations for data in a Dataset. Each labeling task will generate an AnnotatedDataset under the Dataset that the task is requested for.

Annotation for Example. Each example may have one or more annotations. For example, in an image classification problem, each image might have one or more labels. We call the labels bound to this image an Annotation.

Additional information associated with the annotation.

Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as dog or cat must reference an AnnotationSpec for dog and an AnnotationSpec for cat.

An AnnotationSpecSet is a collection of label definitions. For example, in image classification tasks, you define a set of possible labels for images as an AnnotationSpecSet. An AnnotationSpecSet is immutable upon creation.

An annotation spec set, together with a setting that controls whether multiple labels are allowed.

The BigQuery location for input data. If used in an EvaluationJob, this is where the service saves the prediction input and output sampled from the model version.

Config for an image bounding poly (and bounding box) human labeling task.

Attributes

  • confidenceThreshold (type: number(), default: nil) - Threshold used for this entry. For classification tasks, this is a classification threshold: a predicted label is categorized as positive or negative (in the context of this point on the PR curve) based on whether the label's score meets this threshold. For image object detection (bounding box) tasks, this is the intersection-over-union (IOU) threshold for the context of this point on the PR curve.
  • f1Score (type: number(), default: nil) - Harmonic mean of recall and precision (see the sketch after this list).
  • f1ScoreAt1 (type: number(), default: nil) - The harmonic mean of recall_at1 and precision_at1.
  • f1ScoreAt5 (type: number(), default: nil) - The harmonic mean of recall_at5 and precision_at5.
  • precision (type: number(), default: nil) - Precision value.
  • precisionAt1 (type: number(), default: nil) - Precision value computed over each entry's highest-scoring label.
  • precisionAt5 (type: number(), default: nil) - Precision value computed over each entry's 5 highest-scoring labels.
  • recall (type: number(), default: nil) - Recall value.
  • recallAt1 (type: number(), default: nil) - Recall value computed over each entry's highest-scoring label.
  • recallAt5 (type: number(), default: nil) - Recall value computed over each entry's 5 highest-scoring labels.
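The f1Score fields above are harmonic means of the matching precision and recall values. A minimal sketch of that relationship (the module and function names here are ours, not part of the client):

```elixir
defmodule F1Sketch do
  # Harmonic mean of precision and recall, as used for f1Score,
  # f1ScoreAt1, and f1ScoreAt5. Returns 0.0 when both inputs are 0.
  def f1(precision, recall) when precision + recall > 0 do
    2 * precision * recall / (precision + recall)
  end

  def f1(_precision, _recall), do: 0.0
end

F1Sketch.f1(0.8, 0.6)
#=> 0.6857142857142857
```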

Confusion matrix of the model running the classification. Only applicable when the metrics entry aggregates multiple labels. Not applicable when the entry is for a single label.

Attributes

  • annotationSpec (type: GoogleApi.DataLabeling.V1beta1.Model.GoogleCloudDatalabelingV1beta1AnnotationSpec.t, default: nil) - The annotation spec of a predicted label.
  • itemCount (type: integer(), default: nil) - Number of items predicted to have this label. (The ground truth label for these items is the Row.annotationSpec of this entry's parent.) See the sketch after this list for aggregating these counts.
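As a small illustration, summing itemCount across the entries of one row gives the total number of items whose ground truth label is that row's annotation spec. A minimal sketch, assuming the entries are the generated ConfusionMatrixEntry structs (or any maps exposing itemCount):

```elixir
defmodule ConfusionMatrixSketch do
  # Total number of items in one confusion matrix row, i.e. items whose
  # ground truth label is the row's annotationSpec. `entries` is assumed
  # to be a list of structs or maps with an itemCount field.
  def row_total(entries) do
    entries
    |> Enum.map(&(&1.itemCount || 0))
    |> Enum.sum()
  end
end
```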

Deprecated: this instruction format is no longer supported. Instruction from a CSV file.

DataItem is a piece of data, without annotation. For example, an image.

Dataset is the resource that holds your data. You can request multiple labeling tasks for a dataset; each one generates an AnnotatedDataset.

Describes an evaluation between a machine learning model's predictions and ground truth labels. Created when an EvaluationJob runs successfully.

Configuration details used for calculating evaluation metrics and creating an Evaluation.

Defines an evaluation job that runs periodically to generate Evaluations. Creating an evaluation job is the starting point for using continuous evaluation.

Provides details for how an evaluation job sends email alerts based on the results of a run.

Configures specific details of how a continuous evaluation job works. Provide this configuration when you create an EvaluationJob.

Attributes

  • classificationMetrics (type: GoogleApi.DataLabeling.V1beta1.Model.GoogleCloudDatalabelingV1beta1ClassificationMetrics.t, default: nil) -
  • objectDetectionMetrics (type: GoogleApi.DataLabeling.V1beta1.Model.GoogleCloudDatalabelingV1beta1ObjectDetectionMetrics.t, default: nil) -

An Example is a piece of data and its annotation. For example, an image with label "house".

Example comparisons comparing ground truth output and predictions for a specific input.

A feedback thread of a certain labeling task on a certain annotated dataset.

Attributes

  • createTime (type: DateTime.t, default: nil) - When the thread was created.
  • lastUpdateTime (type: DateTime.t, default: nil) - When the thread was last updated.
  • status (type: String.t, default: nil) -
  • thumbnail (type: String.t, default: nil) - An image thumbnail of this thread.

Export destination of the data. Only a gcs path is allowed in output_uri.

Source of the Cloud Storage file to be imported.

Configuration for how a human labeling task should be done.

Image bounding poly annotation. It represents a polygon (including a bounding box) in the image.

The configuration of input data, including data type, location, etc.

Instruction of how to perform the labeling task for human operators. Currently only PDF instruction is supported.

Metadata of a labeling operation, such as LabelImage or LabelVideo.

A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
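A minimal sketch of mapping a normalized vertex back to pixel coordinates, given the original image size. The x/y field names are taken from the vertex models; the module and function are ours:

```elixir
defmodule VertexSketch do
  # Scale a normalized vertex (coordinates in [0, 1]) to pixel
  # coordinates for an image of the given width and height.
  def to_pixels(%{x: x, y: y}, width, height) do
    {round(x * width), round(y * height)}
  end
end

VertexSketch.to_pixels(%{x: 0.25, y: 0.5}, 1920, 1080)
#=> {480, 540}
```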

Config for video object detection human labeling task. Object detection will be conducted on the images extracted from the video, and those objects will be labeled with bounding boxes. Users need to specify the number of images to be extracted per second as the extraction frame rate.
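As a sketch of what such a configuration might look like, assuming the generated model exposes annotationSpecSet and extractionFrameRate fields mirroring the REST resource (verify these names against the model module):

```elixir
# Hypothetical video object detection config: label objects in frames
# sampled from the video at 1 frame per second.
config = %GoogleApi.DataLabeling.V1beta1.Model.GoogleCloudDatalabelingV1beta1ObjectDetectionConfig{
  annotationSpecSet: "projects/my-project/annotationSpecSets/my-spec-set",
  extractionFrameRate: 1.0
}
```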

Metrics calculated for an image object detection (bounding box) model.

Video frame level annotation for object detection and tracking.

General information useful for labels coming from contributors.

Attributes

  • annotationSpec (type: GoogleApi.DataLabeling.V1beta1.Model.GoogleCloudDatalabelingV1beta1AnnotationSpec.t, default: nil) - The annotation spec of the label for which the precision-recall curve is calculated. If this field is empty, the precision-recall curve is an aggregate curve for all labels.
  • areaUnderCurve (type: number(), default: nil) - Area under the precision-recall curve. Not to be confused with area under a receiver operating characteristic (ROC) curve.
  • confidenceMetricsEntries (type: list(GoogleApi.DataLabeling.V1beta1.Model.GoogleCloudDatalabelingV1beta1ConfidenceMetricsEntry.t), default: nil) - Entries that make up the precision-recall graph. Each entry is a "point" on the graph drawn for a different confidence_threshold (see the sketch after this list).
  • meanAveragePrecision (type: number(), default: nil) - Mean average precision of this curve.
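One way to picture areaUnderCurve is to treat each confidence metrics entry as a (recall, precision) point and integrate with the trapezoidal rule. This is only a rough sketch of the idea, not the service's computation; the recall and precision field names come from the ConfidenceMetricsEntry attributes above:

```elixir
defmodule PrCurveSketch do
  # Approximate area under a precision-recall curve from entries that
  # expose recall and precision fields, using the trapezoidal rule over
  # points sorted by recall.
  def area_under_curve(entries) do
    entries
    |> Enum.map(&{&1.recall, &1.precision})
    |> Enum.sort_by(fn {recall, _precision} -> recall end)
    |> Enum.chunk_every(2, 1, :discard)
    |> Enum.reduce(0.0, fn [{r1, p1}, {r2, p2}], acc ->
      acc + (r2 - r1) * (p1 + p2) / 2
    end)
  end
end
```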

Metadata describing the feedback from the labeling task requester.

A row in the confusion matrix. Each entry in this row has the same ground truth label.

Start and end position in a sequence (e.g. text segment).

A time period inside of an example that has a time dimension (e.g. video).

A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.

Config for video classification human labeling task. Currently two types of video classification are supported: 1. Assign labels on the entire video. 2. Split the video into multiple video clips based on camera shot, and assign labels on each video clip.

Export destination of the data. Only a gcs path is allowed in output_uri.

Metadata of a labeling operation, such as LabelImage or LabelVideo.

Export destination of the data. Only a gcs path is allowed in output_uri.

Metadata of a labeling operation, such as LabelImage or LabelVideo.

The response message for Operations.ListOperations.

This resource represents a long-running operation that is the result of a network API call.

A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is an empty JSON object {}.

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
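In this client the Status message surfaces as the GoogleRpcStatus model, for example in an Operation's error field. A minimal sketch of reading it; the struct name follows the generated model naming, but treat the exact shape as an assumption:

```elixir
defmodule StatusSketch do
  # code is a google.rpc.Code integer, message is a developer-facing
  # description, and details is a list of maps with extra information.
  def describe(%GoogleApi.DataLabeling.V1beta1.Model.GoogleRpcStatus{
        code: code,
        message: message,
        details: details
      }) do
    "rpc error #{code}: #{message} (#{length(details || [])} detail(s))"
  end
end
```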