google_api_machine_learning v0.8.0
GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1TrainingInput
Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments, in a YAML configuration file referenced from the `--config` command-line argument, or both. For details, see the guide to submitting a training job (/ml-engine/docs/tensorflow/training-jobs).
Attributes
- args ([String.t]): Optional. Command-line arguments to pass to the program. Defaults to: `null`.
- hyperparameters (GoogleCloudMlV1HyperparameterSpec): Optional. The set of hyperparameters to tune. Defaults to: `null`.
- jobDir (String.t): Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the `--job-dir` command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training. Defaults to: `null`.
- masterConfig (GoogleCloudMlV1ReplicaConfig): Optional. The configuration for your master worker. Set `masterConfig.acceleratorConfig` only if `masterType` is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set `masterConfig.imageUri` only if you build a custom image. Only one of `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about configuring custom containers. Defaults to: `null`.
- masterType (String.t): Optional. Specifies the type of virtual machine to use for your training job's master worker. The following types are supported:
  - `standard`: A basic machine configuration suitable for training simple models with small to moderate datasets.
  - `large_model`: A machine with a lot of memory, specially suited for parameter servers when your model is large (having many hidden layers or layers with very large numbers of nodes).
  - `complex_model_s`: A machine suitable for the master and workers of the cluster when your model requires more computation than the standard machine can handle satisfactorily.
  - `complex_model_m`: A machine with roughly twice the number of cores and roughly double the memory of `complex_model_s`.
  - `complex_model_l`: A machine with roughly twice the number of cores and roughly double the memory of `complex_model_m`.
  - `standard_gpu`: A machine equivalent to `standard` that also includes a single NVIDIA Tesla K80 GPU. See more about using GPUs to train your model (/ml-engine/docs/tensorflow/using-gpus).
  - `complex_model_m_gpu`: A machine equivalent to `complex_model_m` that also includes four NVIDIA Tesla K80 GPUs.
  - `complex_model_l_gpu`: A machine equivalent to `complex_model_l` that also includes eight NVIDIA Tesla K80 GPUs.
  - `standard_p100`: A machine equivalent to `standard` that also includes a single NVIDIA Tesla P100 GPU.
  - `complex_model_m_p100`: A machine equivalent to `complex_model_m` that also includes four NVIDIA Tesla P100 GPUs.
  - `standard_v100`: A machine equivalent to `standard` that also includes a single NVIDIA Tesla V100 GPU.
  - `large_model_v100`: A machine equivalent to `large_model` that also includes a single NVIDIA Tesla V100 GPU.
  - `complex_model_m_v100`: A machine equivalent to `complex_model_m` that also includes four NVIDIA Tesla V100 GPUs.
  - `complex_model_l_v100`: A machine equivalent to `complex_model_l` that also includes eight NVIDIA Tesla V100 GPUs.
  - `cloud_tpu`: A TPU VM including one Cloud TPU. See more about using TPUs to train your model (/ml-engine/docs/tensorflow/using-tpus).

  You may also use certain Compute Engine machine types directly in this field. The following types are supported: `n1-standard-4`, `n1-standard-8`, `n1-standard-16`, `n1-standard-32`, `n1-standard-64`, `n1-standard-96`, `n1-highmem-2`, `n1-highmem-4`, `n1-highmem-8`, `n1-highmem-16`, `n1-highmem-32`, `n1-highmem-64`, `n1-highmem-96`, `n1-highcpu-16`, `n1-highcpu-32`, `n1-highcpu-64`, `n1-highcpu-96`. See more about using Compute Engine machine types. You must set this value when `scaleTier` is set to `CUSTOM`. Defaults to: `null`.
- maxRunningTime (String.t): Optional. The maximum job running time. The default is 7 days. Defaults to: `null`.
- packageUris ([String.t]): Required. The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100. Defaults to: `null`.
- parameterServerConfig (GoogleCloudMlV1ReplicaConfig): Optional. The configuration for parameter servers. Set `parameterServerConfig.acceleratorConfig` only if `parameterServerType` is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set `parameterServerConfig.imageUri` only if you build a custom image for your parameter server. If `parameterServerConfig.imageUri` has not been set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about configuring custom containers. Defaults to: `null`.
- parameterServerCount (String.t): Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in `parameter_server_type`. This value can only be used when `scale_tier` is set to `CUSTOM`. If you set this value, you must also set `parameter_server_type`. The default value is zero. Defaults to: `null`.
- parameterServerType (String.t): Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for `master_type`. This value must be consistent with the category of machine type that `masterType` uses: both must be AI Platform machine types or both must be Compute Engine machine types. This value must be present when `scaleTier` is set to `CUSTOM` and `parameter_server_count` is greater than zero. Defaults to: `null`.
- pythonModule (String.t): Required. The Python module name to run after installing the packages. Defaults to: `null`.
- pythonVersion (String.t): Optional. The version of Python used in training. If not set, the default version is '2.7'. Python '3.5' is available when `runtime_version` is set to '1.4' and above. Python '2.7' works with all supported runtime versions (/ml-engine/docs/runtime-version-list). Defaults to: `null`.
- region (String.t): Required. The Google Compute Engine region to run the training job in. See the available regions (/ml-engine/docs/tensorflow/regions) for AI Platform services. Defaults to: `null`.
- runtimeVersion (String.t): Optional. The AI Platform runtime version to use for training. If not set, AI Platform uses the default stable version, 1.0. For more information, see the runtime version list (/ml-engine/docs/runtime-version-list) and how to manage runtime versions (/ml-engine/docs/versioning). Defaults to: `null`.
- scaleTier (String.t): Required. Specifies the machine types and the number of replicas for workers and parameter servers. Enum - one of `BASIC`, `STANDARD_1`, `PREMIUM_1`, `BASIC_GPU`, `BASIC_TPU`, `CUSTOM`. Defaults to: `null`.
- workerConfig (GoogleCloudMlV1ReplicaConfig): Optional. The configuration for workers. Set `workerConfig.acceleratorConfig` only if `workerType` is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set `workerConfig.imageUri` only if you build a custom image for your worker. If `workerConfig.imageUri` has not been set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about configuring custom containers. Defaults to: `null`.
- workerCount (String.t): Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in `worker_type`. This value can only be used when `scale_tier` is set to `CUSTOM`. If you set this value, you must also set `worker_type`. The default value is zero. Defaults to: `null`.
- workerType (String.t): Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for `masterType`. This value must be consistent with the category of machine type that `masterType` uses: both must be AI Platform machine types or both must be Compute Engine machine types. If you use `cloud_tpu` for this value, see special instructions for configuring a custom TPU machine. This value must be present when `scaleTier` is set to `CUSTOM` and `workerCount` is greater than zero. Defaults to: `null`.
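To illustrate how the constraints above fit together, here is a hypothetical sketch of building this struct for a `CUSTOM` scale tier (the bucket paths, module name, and region are placeholders, not values from this documentation). Note that `CUSTOM` requires `masterType`, and a nonzero `workerCount` requires `workerType`:

```elixir
# Hypothetical example: CUSTOM tier requires masterType; since
# workerCount > 0, workerType must also be set. Paths are placeholders.
input = %GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1TrainingInput{
  scaleTier: "CUSTOM",
  masterType: "complex_model_m",
  workerType: "complex_model_m",
  workerCount: "4",
  region: "us-central1",
  pythonModule: "trainer.task",
  packageUris: ["gs://example-bucket/trainer-0.1.tar.gz"],
  jobDir: "gs://example-bucket/training-output",
  runtimeVersion: "1.4",
  pythonVersion: "3.5"
}
```

Counts such as `workerCount` are strings (`String.t`) rather than integers, mirroring the JSON representation used by the API.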
Summary

Functions
Unwrap a decoded JSON object into its complex fields.
Types

t()
t() :: %GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1TrainingInput{
args: [any()],
hyperparameters:
GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1HyperparameterSpec.t(),
jobDir: any(),
masterConfig:
GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1ReplicaConfig.t(),
masterType: any(),
maxRunningTime: any(),
packageUris: [any()],
parameterServerConfig:
GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1ReplicaConfig.t(),
parameterServerCount: any(),
parameterServerType: any(),
pythonModule: any(),
pythonVersion: any(),
region: any(),
runtimeVersion: any(),
scaleTier: any(),
workerConfig:
GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1ReplicaConfig.t(),
workerCount: any(),
workerType: any()
}
Functions

decode(value, options)
Unwrap a decoded JSON object into its complex fields.
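As a hedged sketch of how `decode/2` is typically exercised: the generated models in this library implement a decoder for the JSON codec the client is configured with (Poison, in the versions of this library the sketch assumes), so you usually decode a response body directly into the struct rather than calling `decode/2` yourself. The JSON string below is a made-up example, not output from the API:

```elixir
# Hypothetical sketch, assuming Poison as the JSON codec: decode a
# response body into the model struct; nested complex fields (e.g.
# hyperparameters, masterConfig) are unwrapped into their own structs.
json = ~s({"scaleTier": "BASIC", "region": "us-central1"})

input =
  Poison.decode!(json,
    as: %GoogleApi.MachineLearning.V1.Model.GoogleCloudMlV1TrainingInput{}
  )

input.region
```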