Baiji.CloudwatchLogs (baiji v0.6.7)

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon EC2 instances, AWS CloudTrail, or other sources. You can then retrieve the associated log data from CloudWatch Logs using the CloudWatch console, CloudWatch Logs commands in the AWS CLI, CloudWatch Logs API, or CloudWatch Logs SDK.

You can use CloudWatch Logs to:

  • **Monitor logs from EC2 instances in real time**: You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold that you specify. CloudWatch Logs uses your log data for monitoring, so no code changes are required. For example, you can monitor application logs for specific literal terms (such as "NullReferenceException") or count the number of occurrences of a literal term at a particular position in log data (such as "404" status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify.
  • **Monitor AWS CloudTrail logged events**: You can create alarms in CloudWatch and receive notifications of particular API activity as captured by CloudTrail and use the notification to perform troubleshooting.
  • **Archive log data**: You can use CloudWatch Logs to store your log data in highly durable storage. You can change the log retention setting so that any log events older than this setting are automatically deleted. The CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off a host and into the log service. You can then access the raw log data when you need it.
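
As a sketch of how these operations are invoked from this module, the example below creates a log group and a log stream and writes a single event. Only the function signatures are confirmed by this page; the string request keys follow the CloudWatch Logs API, and executing the built operation with `Baiji.perform/1` is an assumption about the library's top-level API, so verify both against your baiji version.

```elixir
# Hedged sketch: create a group and stream, then push one event.
# Assumes AWS credentials are configured for baiji, that request maps use
# CloudWatch Logs API key names, and that Baiji.perform/1 executes the
# operation each function returns (verify against your baiji version).
alias Baiji.CloudwatchLogs, as: Logs

Logs.create_log_group(%{"logGroupName" => "my-app"}) |> Baiji.perform()

Logs.create_log_stream(%{
  "logGroupName" => "my-app",
  "logStreamName" => "web-01"
}) |> Baiji.perform()

Logs.put_log_events(%{
  "logGroupName" => "my-app",
  "logStreamName" => "web-01",
  "logEvents" => [
    %{"timestamp" => System.system_time(:millisecond), "message" => "service started"}
  ]
}) |> Baiji.perform()
```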

Summary

Functions

  • Returns a map containing the input/output shapes for this endpoint
  • Outputs values common to all actions
  • cancel_export_task/2: Cancels the specified export task
  • create_export_task/2: Creates an export task, which allows you to efficiently export data from a log group to an Amazon S3 bucket
  • create_log_group/2: Creates a log group with the specified name
  • create_log_stream/2: Creates a log stream for the specified log group
  • delete_destination/2: Deletes the specified destination and eventually disables all the subscription filters that publish to it
  • delete_log_group/2: Deletes the specified log group and permanently deletes all the archived log events associated with it
  • delete_log_stream/2: Deletes the specified log stream and permanently deletes all the archived log events associated with it
  • delete_metric_filter/2: Deletes the specified metric filter
  • delete_resource_policy/2: Deletes a resource policy from this account
  • delete_retention_policy/2: Deletes the specified retention policy
  • delete_subscription_filter/2: Deletes the specified subscription filter
  • describe_destinations/2: Lists all your destinations
  • describe_export_tasks/2: Lists the specified export tasks
  • describe_log_groups/2: Lists the specified log groups
  • describe_log_streams/2: Lists the log streams for the specified log group
  • describe_metric_filters/2: Lists the specified metric filters
  • describe_resource_policies/2: Lists the resource policies in this account
  • describe_subscription_filters/2: Lists the subscription filters for the specified log group
  • filter_log_events/2: Lists log events from the specified log group
  • get_log_events/2: Lists log events from the specified log stream
  • list_tags_log_group/2: Lists the tags for the specified log group
  • put_destination/2: Creates or updates a destination
  • put_destination_policy/2: Creates or updates an access policy associated with an existing destination
  • put_log_events/2: Uploads a batch of log events to the specified log stream
  • put_metric_filter/2: Creates or updates a metric filter and associates it with the specified log group
  • put_resource_policy/2: Creates or updates a resource policy allowing other AWS services to put log events to this account
  • put_retention_policy/2: Sets the retention of the specified log group
  • put_subscription_filter/2: Creates or updates a subscription filter and associates it with the specified log group
  • tag_log_group/2: Adds or updates the specified tags for the specified log group
  • test_metric_filter/2: Tests the filter pattern of a metric filter against a sample of log event messages
  • untag_log_group/2: Removes the specified tags from the specified log group

Functions

Returns a map containing the input/output shapes for this endpoint

Outputs values common to all actions

cancel_export_task(input \\ %{}, options \\ [])

Cancels the specified export task.

The task must be in the PENDING or RUNNING state.

create_export_task(input \\ %{}, options \\ [])

Creates an export task, which allows you to efficiently export data from a log group to an Amazon S3 bucket.

This is an asynchronous call. If all the required information is provided, this operation initiates an export task and responds with the ID of the task. After the task has started, you can use DescribeExportTasks to get the status of the export task. Each account can only have one active (RUNNING or PENDING) export task at a time. To cancel an export task, use CancelExportTask.

You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate out log data for each export task, you can specify a prefix to be used as the Amazon S3 key prefix for all exported objects.
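
The asynchronous flow described above, starting a task and then polling it, might look like the sketch below. The request/response key names follow the CloudWatch Logs API; how baiji decodes responses and the use of `Baiji.perform/1` are assumptions, not confirmed by this page.

```elixir
# Hedged sketch of the export flow: start a task, then poll its status.
# Timestamps, bucket name, and prefix are placeholders.
{:ok, %{"taskId" => task_id}} =
  Baiji.CloudwatchLogs.create_export_task(%{
    "logGroupName" => "my-app",
    "from" => 1_500_000_000_000,
    "to" => 1_500_086_400_000,
    "destination" => "my-export-bucket",
    "destinationPrefix" => "my-app/2017-07-14"
  })
  |> Baiji.perform()

{:ok, %{"exportTasks" => [task]}} =
  Baiji.CloudwatchLogs.describe_export_tasks(%{"taskId" => task_id})
  |> Baiji.perform()

# The task status code moves through "PENDING"/"RUNNING" to "COMPLETED".
task["status"]["code"]
```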

create_log_group(input \\ %{}, options \\ [])

Creates a log group with the specified name.

You can create up to 5000 log groups per account.

When naming a log group, follow these guidelines:

  • Log group names must be unique within a region for an AWS account.
  • Log group names can be between 1 and 512 characters long.
  • Log group names consist of the following characters: a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), '/' (forward slash), and '.' (period).
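
The naming rules above can be checked locally before calling the API. The validator below is a convenience sketch, not part of the Baiji API:

```elixir
# Local pre-check of the log group naming rules listed above:
# 1-512 characters drawn from a-z, A-Z, 0-9, '_', '-', '/', and '.'.
valid_log_group_name? = fn name ->
  Regex.match?(~r|\A[a-zA-Z0-9_\-/.]{1,512}\z|, name)
end

valid_log_group_name?.("/ecs/web-app")  # => true
valid_log_group_name?.("bad name!")     # => false
```

Uniqueness within a region can only be checked server-side, so expect the API to reject duplicates even when the local check passes.
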

create_log_stream(input \\ %{}, options \\ [])

Creates a log stream for the specified log group.

There is no limit on the number of log streams that you can create for a log group.

When naming a log stream, follow these guidelines:

  • Log stream names must be unique within the log group.
  • Log stream names can be between 1 and 512 characters long.
  • The ':' (colon) and '*' (asterisk) characters are not allowed.

delete_destination(input \\ %{}, options \\ [])

Deletes the specified destination, and eventually disables all the subscription filters that publish to it. This operation does not delete the physical resource encapsulated by the destination.

delete_log_group(input \\ %{}, options \\ [])

Deletes the specified log group and permanently deletes all the archived log events associated with the log group.

delete_log_stream(input \\ %{}, options \\ [])

Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream.

delete_metric_filter(input \\ %{}, options \\ [])

Deletes the specified metric filter.

delete_resource_policy(input \\ %{}, options \\ [])

Deletes a resource policy from this account. This revokes the access of the identities in that policy to put log events to this account.

delete_retention_policy(input \\ %{}, options \\ [])

Deletes the specified retention policy.

Log events do not expire if they belong to log groups without a retention policy.

delete_subscription_filter(input \\ %{}, options \\ [])

Deletes the specified subscription filter.

describe_destinations(input \\ %{}, options \\ [])

Lists all your destinations. The results are ASCII-sorted by destination name.

describe_export_tasks(input \\ %{}, options \\ [])

Lists the specified export tasks. You can list all your export tasks or filter the results based on task ID or task status.

describe_log_groups(input \\ %{}, options \\ [])

Lists the specified log groups. You can list all your log groups or filter the results by prefix. The results are ASCII-sorted by log group name.

describe_log_streams(input \\ %{}, options \\ [])

Lists the log streams for the specified log group. You can list all the log streams or filter the results by prefix. You can also control how the results are ordered.

This operation has a limit of five transactions per second, after which transactions are throttled.

describe_metric_filters(input \\ %{}, options \\ [])

Lists the specified metric filters. You can list all the metric filters or filter the results by log name, prefix, metric name, and metric namespace. The results are ASCII-sorted by filter name.

describe_resource_policies(input \\ %{}, options \\ [])

Lists the resource policies in this account.

describe_subscription_filters(input \\ %{}, options \\ [])

Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name.

filter_log_events(input \\ %{}, options \\ [])

Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream.

By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 log events), or all the events found within the time range that you specify. If the results include a token, then there are more log events available, and you can get additional results by specifying the token in a subsequent call.
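
Draining the token-based pagination described above might look like the sketch below. The `"events"`/`"nextToken"` keys follow the CloudWatch Logs API; the `{:ok, map}` response shape and `Baiji.perform/1` are assumptions about baiji's decoding.

```elixir
# Hedged sketch: page through filtered events until no nextToken is returned.
defmodule LogPager do
  def all_events(request) do
    case Baiji.CloudwatchLogs.filter_log_events(request) |> Baiji.perform() do
      {:ok, %{"events" => events, "nextToken" => token}} when is_binary(token) ->
        # More results remain; fetch the next page with the returned token.
        events ++ all_events(Map.put(request, "nextToken", token))

      {:ok, %{"events" => events}} ->
        events
    end
  end
end

LogPager.all_events(%{
  "logGroupName" => "my-app",
  "filterPattern" => "ERROR",
  "startTime" => 1_500_000_000_000
})
```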

get_log_events(input \\ %{}, options \\ [])

Lists log events from the specified log stream. You can list all the log events or filter using a time range.

By default, this operation returns as many log events as can fit in a response size of 1 MB (up to 10,000 log events). You can get additional log events by specifying one of the tokens in a subsequent call.

list_tags_log_group(input \\ %{}, options \\ [])

Lists the tags for the specified log group.

To add tags, use TagLogGroup. To remove tags, use UntagLogGroup.

put_destination(input \\ %{}, options \\ [])

Creates or updates a destination. A destination encapsulates a physical resource (such as an Amazon Kinesis stream) and enables you to subscribe to a real-time stream of log events for a different account, ingested using PutLogEvents. Currently, the only supported physical resource is a Kinesis stream belonging to the same account as the destination.

Through an access policy, a destination controls what is written to its Kinesis stream. By default, PutDestination does not set any access policy with the destination, which means a cross-account user cannot call PutSubscriptionFilter against this destination. To enable this, the destination owner must call PutDestinationPolicy after PutDestination.
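
The two-step cross-account setup described above (PutDestination, then PutDestinationPolicy) might look like the sketch below. The ARNs, account IDs, and policy wording are placeholders, and `Baiji.perform/1` is an assumption.

```elixir
# Hedged sketch: create a destination backed by a Kinesis stream, then
# attach an access policy so another account can subscribe to it.
{:ok, _destination} =
  Baiji.CloudwatchLogs.put_destination(%{
    "destinationName" => "shared-logs",
    "targetArn" => "arn:aws:kinesis:us-east-1:111111111111:stream/log-stream",
    "roleArn" => "arn:aws:iam::111111111111:role/CWLtoKinesisRole"
  })
  |> Baiji.perform()

# IAM policy document allowing account 222222222222 to register a
# subscription filter against this destination.
access_policy = ~s({
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "222222222222"},
    "Action": "logs:PutSubscriptionFilter",
    "Resource": "arn:aws:logs:us-east-1:111111111111:destination:shared-logs"
  }]
})

Baiji.CloudwatchLogs.put_destination_policy(%{
  "destinationName" => "shared-logs",
  "accessPolicy" => access_policy
}) |> Baiji.perform()
```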

put_destination_policy(input \\ %{}, options \\ [])

Creates or updates an access policy associated with an existing destination. An access policy is an IAM policy document that is used to authorize claims to register a subscription filter against a given destination.

put_log_events(input \\ %{}, options \\ [])

Uploads a batch of log events to the specified log stream.

You must include the sequence token obtained from the response of the previous call. An upload in a newly created log stream does not require a sequence token. You can also get the sequence token using DescribeLogStreams. If you call PutLogEvents twice within a narrow time period using the same value for sequenceToken, both calls may be successful, or one may be rejected.

The batch of events must satisfy the following constraints:

  • The maximum batch size is 1,048,576 bytes, and this size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.
  • None of the log events in the batch can be more than 2 hours in the future.
  • None of the log events in the batch can be older than 14 days or the retention period of the log group.
  • The log events in the batch must be in chronological order by their time stamp (the time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC).
  • The maximum number of log events in a batch is 10,000.
  • A batch of log events in a single request cannot span more than 24 hours. Otherwise, the operation fails.
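
The batch-size rule above can be enforced client-side before calling `put_log_events/2`: the batch size is the sum of each message's UTF-8 byte length plus a fixed 26-byte overhead per event. A small sketch:

```elixir
# Compute the PutLogEvents batch size: UTF-8 bytes of every message plus
# 26 bytes of overhead per event, capped at 1,048,576 bytes / 10,000 events.
max_batch_bytes = 1_048_576
max_batch_events = 10_000

batch_size = fn events ->
  Enum.reduce(events, 0, fn %{"message" => msg}, acc ->
    acc + byte_size(msg) + 26
  end)
end

events = [
  %{"timestamp" => 1_500_000_000_000, "message" => "GET /index.html 200"},
  %{"timestamp" => 1_500_000_000_050, "message" => "GET /missing 404"}
]

batch_size.(events)  # => 87 (19 + 26 + 16 + 26)
batch_size.(events) <= max_batch_bytes and length(events) <= max_batch_events
# => true
```
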

put_metric_filter(input \\ %{}, options \\ [])

Creates or updates a metric filter and associates it with the specified log group. Metric filters allow you to configure rules to extract metric data from log events ingested through PutLogEvents.

The maximum number of metric filters that can be associated with a log group is 100.

put_resource_policy(input \\ %{}, options \\ [])

Creates or updates a resource policy allowing other AWS services to put log events to this account, such as Amazon Route 53. An account can have up to 50 resource policies per region.

put_retention_policy(input \\ %{}, options \\ [])

Sets the retention of the specified log group. A retention policy allows you to configure the number of days for which to retain log events in the specified log group.

put_subscription_filter(input \\ %{}, options \\ [])

Creates or updates a subscription filter and associates it with the specified log group. Subscription filters allow you to subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. Currently, the supported destinations are:

  • An Amazon Kinesis stream belonging to the same account as the subscription filter, for same-account delivery.
  • A logical destination that belongs to a different account, for cross-account delivery.
  • An Amazon Kinesis Firehose delivery stream that belongs to the same account as the subscription filter, for same-account delivery.
  • An AWS Lambda function that belongs to the same account as the subscription filter, for same-account delivery.
There can only be one subscription filter associated with a log group. If you are updating an existing filter, you must specify the correct name in `filterName`. Otherwise, the call fails because you cannot associate a second filter with a log group.
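
A same-account Lambda subscription, the last case above, might be set up like this; the ARNs and the filter pattern are placeholders, and `Baiji.perform/1` is an assumption about how operations are executed.

```elixir
# Hedged sketch: stream only ERROR events from a log group to a Lambda
# function in the same account. Remember: one filter per log group, so
# reusing "errors-to-lambda" here updates the existing filter.
Baiji.CloudwatchLogs.put_subscription_filter(%{
  "logGroupName" => "my-app",
  "filterName" => "errors-to-lambda",
  "filterPattern" => "ERROR",
  "destinationArn" => "arn:aws:lambda:us-east-1:111111111111:function:process-logs"
}) |> Baiji.perform()
```
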

tag_log_group(input \\ %{}, options \\ [])

Adds or updates the specified tags for the specified log group.

To list the tags for a log group, use ListTagsLogGroup. To remove tags, use UntagLogGroup.

For more information about tags, see Tag Log Groups in Amazon CloudWatch Logs in the Amazon CloudWatch Logs User Guide.

test_metric_filter(input \\ %{}, options \\ [])

Tests the filter pattern of a metric filter against a sample of log event messages. You can use this operation to validate the correctness of a metric filter pattern.
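
For example, a space-delimited Apache-style pattern could be dry-run against a sample line before being installed with `put_metric_filter/2`. The pattern and sample below are illustrative only, and `Baiji.perform/1` is an assumption.

```elixir
# Hedged sketch: test a six-field, space-delimited filter pattern that
# matches 404 responses against one sample access-log line.
Baiji.CloudwatchLogs.test_metric_filter(%{
  "filterPattern" => "[ip, user, timestamp, request, status=404, bytes]",
  "logEventMessages" => [
    ~s(127.0.0.1 frank 10/Oct/2017:13:55:36 "GET /a.gif HTTP/1.1" 404 2326)
  ]
}) |> Baiji.perform()
```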

untag_log_group(input \\ %{}, options \\ [])

Removes the specified tags from the specified log group.

To list the tags for a log group, use ListTagsLogGroup. To add tags, use TagLogGroup.