google_api_tool_results v0.9.0 API Reference

Modules

API calls for all endpoints tagged Projects.

Handle Tesla connections for GoogleApi.ToolResults.V1beta3.
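
Those connections are what every generated API call takes as its first argument. A minimal sketch of building one (fetching the token via Goth and the cloud-platform scope are assumptions made for illustration; any bearer token string works with Connection.new/1):

    # Fetch an OAuth2 token (Goth shown here as one common option) and build
    # the Tesla connection used by the generated API functions.
    {:ok, %{token: token}} =
      Goth.Token.for_scope("https://www.googleapis.com/auth/cloud-platform")

    conn = GoogleApi.ToolResults.V1beta3.Connection.new(token)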

Helper functions for deserializing responses into models.

A test of an Android application that can control an Android component independently of its normal lifecycle. See the Android developer documentation for more information on types of Android tests.

A test of an Android application that explores the application on a virtual or physical Android device, finding culprits and crashes as it goes.

An Android mobile test specification.

`Any` contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. The Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type.

Example 1: Pack and unpack a message in C++.

    Foo foo = ...;
    Any any;
    any.PackFrom(foo);
    ...
    if (any.UnpackTo(&foo)) {
      ...
    }

Example 2: Pack and unpack a message in Java.

    Foo foo = ...;
    Any any = Any.pack(foo);
    ...
    if (any.is(Foo.class)) {
      foo = any.unpack(Foo.class);
    }

Example 3: Pack and unpack a message in Python.

    foo = Foo(...)
    any = Any()
    any.Pack(foo)
    ...
    if any.Is(Foo.DESCRIPTOR):
      any.Unpack(foo)
      ...

Example 4: Pack and unpack a message in Go.

    foo := &pb.Foo{...}
    any, err := ptypes.MarshalAny(foo)
    ...
    foo := &pb.Foo{}
    if err := ptypes.UnmarshalAny(any, foo); err != nil {
      ...
    }

The pack methods provided by the protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL, and the unpack methods only use the fully qualified type name after the last '/' in the type URL; for example, "foo.bar.com/x/y.z" will yield the type name "y.z".

JSON

The JSON representation of an `Any` value uses the regular representation of the deserialized, embedded message, with an additional field `@type` which contains the type URL. Example:

    package google.profile;
    message Person {
      string first_name = 1;
      string last_name = 2;
    }

    {
      "@type": "type.googleapis.com/google.profile.Person",
      "firstName": <string>,
      "lastName": <string>
    }

If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field `value` which holds the custom JSON in addition to the `@type` field. Example (for message google.protobuf.Duration):

    {
      "@type": "type.googleapis.com/google.protobuf.Duration",
      "value": "1.212s"
    }
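
In this Elixir client the same structure surfaces as the `Any` model; a minimal sketch of inspecting one (the `typeUrl`/`value` field names follow the generated-model convention and are assumptions here, as is the placeholder payload):

    alias GoogleApi.ToolResults.V1beta3.Model.Any

    # Placeholder Any value; `value` carries the serialized message as base64.
    any = %Any{
      typeUrl: "type.googleapis.com/google.protobuf.Duration",
      value: "CgQxLjJzCg=="
    }

    case any do
      %Any{typeUrl: "type.googleapis.com/" <> type_name, value: payload} ->
        IO.puts("embedded #{type_name}, #{byte_size(payload || "")} base64 bytes")

      _ ->
        IO.puts("no embedded message")
    end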

Attributes

  • fullyDrawnTime (Duration): Optional. The time from app start to reaching the developer-reported "fully drawn" time. This is only stored if the app includes a call to Activity.reportFullyDrawn(). See https://developer.android.com/topic/performance/launch-time.html#time-full Defaults to: null.
  • initialDisplayTime (Duration): The time from app start to the first displayed activity being drawn, as reported in Logcat. See https://developer.android.com/topic/performance/launch-time.html#time-initial Defaults to: null.

Encapsulates the metadata for basic sample series represented by a line chart

The request must provide at most 5000 samples to be created; a larger sample size will cause an INVALID_ARGUMENT error (a batching sketch follows the attribute list below).

Attributes

  • perfSamples ([PerfSample]): Defaults to: null.
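
To stay under that 5000-sample limit, larger collections have to be split across several requests. A minimal sketch; the `perfSamples` field is the one listed above, while the `BatchCreatePerfSamplesRequest` and `PerfSample` module names follow the usual generated-model naming and are assumptions here:

    alias GoogleApi.ToolResults.V1beta3.Model.{BatchCreatePerfSamplesRequest, PerfSample}

    # `samples` stands in for PerfSample structs collected elsewhere.
    samples = List.duplicate(%PerfSample{}, 12_000)

    # Split into requests of at most 5000 samples each; every chunk is then
    # sent with the corresponding batchCreate call from Api.Projects.
    requests =
      samples
      |> Enum.chunk_every(5000)
      |> Enum.map(&%BatchCreatePerfSamplesRequest{perfSamples: &1})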

Attributes

  • cpuProcessor (String.t): Description of the device processor, e.g. '1.8 GHz hexa core 64-bit ARMv8-A'. Defaults to: null.
  • cpuSpeedInGhz (float()): The CPU clock speed in GHz. Defaults to: null.
  • numberOfCores (integer()): The number of CPU cores. Defaults to: null.

A Duration represents a signed, fixed-length span of time represented as a count of seconds and fractions of seconds at nanosecond resolution. It is independent of any calendar and concepts like "day" or "month". It is related to Timestamp in that the difference between two Timestamp values is a Duration and it can be added or subtracted from a Timestamp. Range is approximately +-10,000 years.

Examples

Example 1: Compute Duration from two Timestamps in pseudo code.

    Timestamp start = ...;
    Timestamp end = ...;
    Duration duration = ...;

    duration.seconds = end.seconds - start.seconds;
    duration.nanos = end.nanos - start.nanos;

    if (duration.seconds < 0 && duration.nanos > 0) {
      duration.seconds += 1;
      duration.nanos -= 1000000000;
    } else if (duration.seconds > 0 && duration.nanos < 0) {
      duration.seconds -= 1;
      duration.nanos += 1000000000;
    }

Example 2: Compute Timestamp from Timestamp + Duration in pseudo code.

    Timestamp start = ...;
    Duration duration = ...;
    Timestamp end = ...;

    end.seconds = start.seconds + duration.seconds;
    end.nanos = start.nanos + duration.nanos;

    if (end.nanos < 0) {
      end.seconds -= 1;
      end.nanos += 1000000000;
    } else if (end.nanos >= 1000000000) {
      end.seconds += 1;
      end.nanos -= 1000000000;
    }

Example 3: Compute Duration from datetime.timedelta in Python.

    td = datetime.timedelta(days=3, minutes=10)
    duration = Duration()
    duration.FromTimedelta(td)

JSON Mapping

In JSON format, the Duration type is encoded as a string rather than an object, where the string ends in the suffix "s" (indicating seconds) and is preceded by the number of seconds, with nanoseconds expressed as fractional seconds. For example, 3 seconds with 0 nanoseconds should be encoded in JSON format as "3s", while 3 seconds and 1 nanosecond should be expressed in JSON format as "3.000000001s", and 3 seconds and 1 microsecond should be expressed in JSON format as "3.000001s".
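
Translating that into the Elixir model is mechanical; a minimal sketch converting a Duration struct to fractional seconds (the `seconds`/`nanos` field names mirror the proto, and the assumption here is that the int64 `seconds` field arrives as a string, as in other generated models):

    defmodule DurationHelper do
      alias GoogleApi.ToolResults.V1beta3.Model.Duration

      # Convert a Duration model to fractional seconds; handles `seconds`
      # arriving either as a string or as an integer.
      def to_seconds(%Duration{seconds: s, nanos: n}) do
        seconds = if is_binary(s), do: String.to_integer(s), else: s || 0
        seconds + (n || 0) / 1_000_000_000
      end
    end

    # 3 seconds and 1 microsecond -> 3.000001
    DurationHelper.to_seconds(%GoogleApi.ToolResults.V1beta3.Model.Duration{seconds: "3", nanos: 1_000})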

An Execution represents a collection of Steps. For instance, it could represent: a mobile test executed across a range of device configurations, or a Jenkins job with a build step followed by a test step. The maximum size of an execution message is 1 MiB. An Execution can be updated until its state is set to COMPLETE, at which point it becomes immutable.

Details for an outcome with a FAILURE outcome summary.

Graphics statistics for the App. The information is collected from 'adb shell dumpsys graphicsstats'. For more info see: https://developer.android.com/training/testing/performance.html Statistics will only be present for API 23+.

Attributes

  • frameCount (String.t): Number of frames in the bucket. Defaults to: null.
  • renderMillis (String.t): Lower bound of render time in milliseconds. Defaults to: null.

A History represents a sorted list of Executions ordered by the start_timestamp_millis field (descending). It can be used to group all the Executions of a continuous build. Note that the ordering only operates on one dimension: if a repository has multiple branches, multiple histories will be needed in order to order Executions per branch.

An image, with a link to the main image and a thumbnail.

Details for an outcome with an INCONCLUSIVE outcome summary.

Step Id and outcome of each individual step that was run as a group with other steps with the same configuration.

Attributes

  • executions ([Execution]): Executions. Always set. Defaults to: null.
  • nextPageToken (String.t): A continuation token to resume the query at the next item. Will only be set if there are more Executions to fetch. Defaults to: null.
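
A minimal sketch of following that token until every Execution has been fetched (the `toolresults_projects_histories_executions_list` function name, its `pageToken` option, and the `ListExecutionsResponse` model name follow the usual google_api_* generator conventions and are assumptions here):

    defmodule ExecutionPager do
      alias GoogleApi.ToolResults.V1beta3.Api.Projects
      alias GoogleApi.ToolResults.V1beta3.Model.ListExecutionsResponse

      # Collect every Execution in a history by following nextPageToken.
      def all(conn, project_id, history_id, page_token \\ nil, acc \\ []) do
        opts = if page_token, do: [pageToken: page_token], else: []

        {:ok, %ListExecutionsResponse{executions: execs, nextPageToken: next}} =
          Projects.toolresults_projects_histories_executions_list(
            conn, project_id, history_id, opts)

        acc = acc ++ (execs || [])
        if next, do: all(conn, project_id, history_id, next, acc), else: acc
      end
    end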

Response message for HistoryService.List.

Attributes

  • perfSampleSeries ([PerfSampleSeries]): The resulting PerfSampleSeries sorted by id Defaults to: null.

Attributes

  • nextPageToken (String.t): Optional; returned if the result size exceeds the page size specified in the request (or the default page size, 500, if unspecified). It indicates the last sample timestamp, to be used as page_token in the subsequent request. Defaults to: null.
  • perfSamples ([PerfSample]): Defaults to: null.

Attributes

  • clusters ([ScreenshotCluster]): The set of clusters associated with an execution. Always set. Defaults to: null.

A response containing the thumbnails in a step.

Response message for StepService.List.

Response message for StepService.ListTestCases.

Attributes

  • memoryCapInKibibyte (String.t): Maximum memory that can be allocated to the process in KiB Defaults to: null.
  • memoryTotalInKibibyte (String.t): Total memory available on the device in KiB Defaults to: null.

Details when multiple steps are run with the same configuration as a group.

Interprets a result so that humans and machines can act on it.

Encapsulates performance environment info

A summary of perf metrics collected and performance environment info

Resource representing a single performance measure or data point

Resource representing a collection of performance samples (or data points)

Stores rollup test status of multiple steps that were run as a group and outcome of each individual step.

Per-project settings for the Tool Results service.

Request message for StepService.PublishXunitXmlFiles.

Attributes

  • fileReference (String.t): File reference of the png file. Required. Defaults to: null.
  • locale (String.t): Locale of the device that the screenshot was taken on. Required. Defaults to: null.
  • model (String.t): Model of the device that the screenshot was taken on. Required. Defaults to: null.
  • version (String.t): OS version of the device that the screenshot was taken on. Required. Defaults to: null.

Attributes

  • activity (String.t): A string that describes the activity of every screen in the cluster. Defaults to: null.
  • clusterId (String.t): A unique identifier for the cluster. Defaults to: null.
  • keyScreen (Screen): A singular screen that represents the cluster as a whole. This screen will act as the "cover" of the entire cluster. When users look at the clusters, only the key screen from each cluster will be shown. Which screen is the key screen is determined by the ClusteringAlgorithm. Defaults to: null.
  • screens ([Screen]): Full list of screens. Defaults to: null.

Details for an outcome with a SKIPPED outcome summary.

The details about how to run the execution.

The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
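
A minimal sketch of summarizing such an error in this client (the `code`/`message`/`details` field names mirror google.rpc.Status and the generated model, but treat them as assumptions):

    alias GoogleApi.ToolResults.V1beta3.Model.Status

    # Summarize a Status error model; `details` is a list of Any values.
    describe = fn %Status{code: code, message: message, details: details} ->
      "rpc error #{code}: #{message} (#{length(details || [])} detail(s))"
    end

    # google.rpc.Code 3 is INVALID_ARGUMENT.
    describe.(%Status{code: 3, message: "INVALID_ARGUMENT: too many samples", details: []})
    #=> "rpc error 3: INVALID_ARGUMENT: too many samples (0 detail(s))"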

A Step represents a single operation performed as part of an Execution. A step can be used to represent the execution of a tool (for example, a test runner execution or an execution of a compiler). Steps can overlap (for instance, two steps might have the same start time if some operations are done in parallel). As an example, consider a continuous build that executes a test runner for each iteration. The workflow would look like:

  • the user creates an Execution with id 1
  • the user creates a TestExecutionStep with id 100 for Execution 1
  • the user updates the TestExecutionStep with id 100 to add a raw xml log; the service parses the xml logs and returns a TestExecutionStep with updated TestResult(s)
  • the user updates the status of the TestExecutionStep with id 100 to COMPLETE

A Step can be updated until its state is set to COMPLETE, at which point it becomes immutable.
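
A minimal Elixir sketch of that workflow, assuming the generated functions in GoogleApi.ToolResults.V1beta3.Api.Projects follow the usual google_api_* naming (the function names, the `executionId`/`stepId` field names, and the "complete" state literal are all assumptions; check the generated modules before relying on them):

    alias GoogleApi.ToolResults.V1beta3.Connection
    alias GoogleApi.ToolResults.V1beta3.Api.Projects
    alias GoogleApi.ToolResults.V1beta3.Model.{Execution, Step}

    # Placeholder identifiers for illustration.
    conn = Connection.new("oauth2-access-token")
    {project_id, history_id} = {"my-gcp-project", "existing-history-id"}

    # 1. Create an Execution for this build iteration.
    {:ok, execution} =
      Projects.toolresults_projects_histories_executions_create(
        conn, project_id, history_id, body: %Execution{})

    # 2. Create a Step to hold the test runner results.
    {:ok, step} =
      Projects.toolresults_projects_histories_executions_steps_create(
        conn, project_id, history_id, execution.executionId,
        body: %Step{name: "test runner"})

    # 3. ...add logs/results, then mark the Step COMPLETE; it is immutable afterwards.
    {:ok, _completed} =
      Projects.toolresults_projects_histories_executions_steps_patch(
        conn, project_id, history_id, execution.executionId, step.stepId,
        body: %Step{state: "complete"})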

Attributes

  • key (String.t): Defaults to: null.
  • value (String.t): Defaults to: null.

Attributes

  • key (String.t): Defaults to: null.
  • value (String.t): Defaults to: null.

Details for an outcome with a SUCCESS outcome summary.

Attributes

  • endTime (Timestamp): The end time of the test case. Optional. Defaults to: null.
  • skippedMessage (String.t): Why the test case was skipped. Present only for skipped test cases. Defaults to: null.
  • stackTraces ([StackTrace]): The stack trace details if the test case failed or encountered an error. The maximum size of the stack traces is 100 KiB, beyond which the stack traces will be truncated. Zero if the test case passed. Defaults to: null.
  • startTime (Timestamp): The start time of the test case. Optional. Defaults to: null.
  • status (String.t): The status of the test case. Required. Defaults to: null.

A reference to a test case. Test case references are canonically ordered lexicographically by three factors: first by test_suite_name, second by class_name, and third by name.
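
A minimal sketch of reproducing that canonical order client-side (the camelCase `testSuiteName`/`className`/`name` field names follow the generated-model convention and are assumptions):

    alias GoogleApi.ToolResults.V1beta3.Model.TestCaseReference

    refs = [
      %TestCaseReference{testSuiteName: "SmokeSuite", className: "LoginTest", name: "testLogout"},
      %TestCaseReference{testSuiteName: "SmokeSuite", className: "LoginTest", name: "testLogin"}
    ]

    # Sort by (test_suite_name, class_name, name), the canonical ordering.
    Enum.sort_by(refs, fn %TestCaseReference{testSuiteName: suite, className: class, name: name} ->
      {suite || "", class || "", name || ""}
    end)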

A step that represents running tests. It accepts ant-junit XML files, which the service parses into structured test results. XML file paths can be updated to append more files; however, existing files cannot be deleted. Users can also add test results manually by using the test_result field.

An issue detected during a test execution.

A summary of a test suite result either parsed from XML or uploaded directly by a user. Note: the API related comments are for StepService only. This message is also being used in ExecutionService in a read only mode for the corresponding step.

Test timing, broken down into known phases.

A single thumbnail, with its size and format.

An execution of an arbitrary tool. It could be a test runner or a tool copying artifacts or deploying code.

Generic tool step to be used for binaries we do not explicitly support. For example: running cp to copy artifacts from one location to another.

Exit code from a tool execution.

A reference to a ToolExecution output file.

Helper functions for building Tesla requests.