aws_s3

Sends message parts as objects to an Amazon S3 bucket. Each object is uploaded with the path specified by the path field.

Introduced in version 3.36.0.

# Common config fields, showing default values
output:
  aws_s3:
    bucket: ""
    path: ${!count("files")}-${!timestamp_unix_nano()}.txt
    content_type: application/octet-stream
    max_in_flight: 1
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
    region: eu-west-1

In order to have a different path for each object you should use function interpolations, which are calculated per message of a batch.
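For example, a minimal sketch that partitions uploads by a metadata field (customer_id is a hypothetical key used purely for illustration):

output:
  aws_s3:
    bucket: example-bucket
    # customer_id is a hypothetical metadata key
    path: ${!meta("customer_id")}/${!timestamp_unix_nano()}.json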

Metadata

Metadata fields on messages will be sent as headers. In order to mutate these values (or remove them) check out the metadata docs.
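As a sketch, a metadata field can be removed (or rewritten) with a bloblang processor before the output so that it is never sent as a header; kafka_topic is a hypothetical field name here:

pipeline:
  processors:
    - bloblang: |
        # Hypothetical field: delete it so it is not uploaded as a header
        meta kafka_topic = deleted()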

Credentials

By default Benthos will use a shared credentials file when connecting to AWS services. It's also possible to set them explicitly at the component level, allowing you to transfer data across accounts. You can find out more in this document.
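A minimal sketch of component-level credentials, assuming the key pair is supplied through environment variables:

output:
  aws_s3:
    bucket: example-bucket
    region: eu-west-1
    credentials:
      id: ${AWS_ACCESS_KEY_ID}
      secret: ${AWS_SECRET_ACCESS_KEY}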

Batching

It's common to want to upload messages to S3 as batched archives. The easiest way to do this is to batch your messages at the output level and join the batch of messages with an archive and/or compress processor.

For example, if we wished to upload messages as a .tar.gz archive of documents we could achieve that with the following config:

output:
  aws_s3:
    bucket: TODO
    path: ${!count("files")}-${!timestamp_unix_nano()}.tar.gz
    batching:
      count: 100
      period: 10s
      processors:
        - archive:
            format: tar
        - compress:
            algorithm: gzip

Alternatively, if we wished to upload JSON documents as a single large document containing an array of objects we can do that with:

output:
  aws_s3:
    bucket: TODO
    path: ${!count("files")}-${!timestamp_unix_nano()}.json
    batching:
      count: 100
      processors:
        - archive:
            format: json_array

Performance

This output benefits from sending multiple messages in flight in parallel for improved performance. You can tune the maximum number of in-flight messages with the field max_in_flight.
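For example, a sketch raising the cap to an arbitrary illustrative value of 64 (the right number depends on your workload):

output:
  aws_s3:
    bucket: example-bucket
    path: ${!count("files")}-${!timestamp_unix_nano()}.txt
    max_in_flight: 64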

Fields

bucket

The bucket to upload messages to.

Type: string
Default: ""

path

The path of each message to upload. This field supports interpolation functions.

Type: string
Default: "${!count(\"files\")}-${!timestamp_unix_nano()}.txt"

# Examples
path: ${!count("files")}-${!timestamp_unix_nano()}.txt
path: ${!meta("kafka_key")}.json
path: ${!json("doc.namespace")}/${!json("doc.id")}.json

content_type

The content type to set for each object. This field supports interpolation functions.

Type: string
Default: "application/octet-stream"

content_encoding

An optional content encoding to set for each object. This field supports interpolation functions.

Type: string
Default: ""

storage_class

The storage class to set for each object. This field supports interpolation functions.

Type: string
Default: "STANDARD"
Options: STANDARD, REDUCED_REDUNDANCY, GLACIER, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, DEEP_ARCHIVE.

kms_key_id

An optional server-side encryption key.

Type: string
Default: ""

force_path_style_urls

Forces the client API to use path-style URLs, which helps when connecting to custom endpoints.

Type: bool
Default: false
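For example, a sketch pointing the client at a self-hosted S3-compatible service (the localhost endpoint is illustrative):

output:
  aws_s3:
    bucket: example-bucket
    endpoint: http://localhost:9000
    force_path_style_urls: true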

max_in_flight

The maximum number of messages to have in flight at a given time. Increase this to improve throughput.

Type: number
Default: 1

timeout

The maximum period to wait on an upload before abandoning it and reattempting.

Type: string
Default: "5s"

batching

Allows you to configure a batching policy.

Type: object

# Examples
batching:
byte_size: 5000
count: 0
period: 1s
batching:
count: 10
period: 1s
batching:
check: this.contains("END BATCH")
count: 0
period: 1m

batching.count

A number of messages at which the batch should be flushed. If 0, count-based batching is disabled.

Type: number
Default: 0

batching.byte_size

An amount of bytes at which the batch should be flushed. If 0, size-based batching is disabled.

Type: number
Default: 0

batching.period

A period in which an incomplete batch should be flushed regardless of its size.

Type: string
Default: ""

# Examples
period: 1s
period: 1m
period: 500ms

batching.check

A Bloblang query that should return a boolean value indicating whether a message should end a batch.

Type: string
Default: ""

# Examples
check: this.type == "end_of_transaction"

batching.processors

A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.

Type: array
Default: []

# Examples
processors:
- archive:
format: lines
processors:
- archive:
format: json_array
processors:
- merge_json: {}

region

The AWS region to target.

Type: string
Default: "eu-west-1"

endpoint

Allows you to specify a custom endpoint for the AWS API.

Type: string
Default: ""

credentials

Optional manual configuration of AWS credentials to use. More information can be found in this document.

Type: object

credentials.profile

A profile from ~/.aws/credentials to use.

Type: string
Default: ""

credentials.id

The ID of credentials to use.

Type: string
Default: ""

credentials.secret

The secret for the credentials being used.

Type: string
Default: ""

credentials.token

The token for the credentials being used, required when using short term credentials.

Type: string
Default: ""

credentials.role

A role ARN to assume.

Type: string
Default: ""

credentials.role_external_id

An external ID to provide when assuming a role.

Type: string
Default: ""