
table_storage

DEPRECATED

This component is deprecated and will be removed in the next major version release. Please consider moving to alternative components.

This component has been renamed to azure_table_storage.

```yaml
# Common config fields, showing default values
output:
  label: ""
  table_storage:
    storage_account: ""
    storage_access_key: ""
    storage_connection_string: ""
    table_name: ""
    partition_key: ""
    row_key: ""
    properties: {}
    max_in_flight: 1
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
```

Performance#

This output benefits from sending multiple messages in flight in parallel for improved performance. You can tune the maximum number of in-flight messages with the field max_in_flight.

This output also benefits from sending messages as a batch for improved performance. Batches can be formed at both the input and output level. You can find out more in this doc.
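As a sketch, the two tuning knobs can be combined in one config; the table name, source fields, and batch sizes below are hypothetical:

```yaml
output:
  table_storage:
    storage_connection_string: ""   # assumed to be supplied via secrets
    table_name: events              # hypothetical table name
    partition_key: ${!json("date")} # hypothetical source field
    row_key: ${!uuid_v4()}
    max_in_flight: 10               # up to 10 parallel uploads
    batching:
      count: 50                     # flush after 50 messages
      period: 1s                    # or after one second, whichever comes first
```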

Fields#

storage_account#

The storage account to upload messages to. This field is ignored if storage_connection_string is set.

Type: string
Default: ""

storage_access_key#

The storage account access key. This field is ignored if storage_connection_string is set.

Type: string
Default: ""

storage_connection_string#

A storage account connection string. This field is required if storage_account and storage_access_key are not set.

Type: string
Default: ""
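As an illustration, connection strings follow the standard Azure storage format; the account name and key below are placeholders:

```yaml
storage_connection_string: "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=PLACEHOLDERKEY==;EndpointSuffix=core.windows.net"
```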

table_name#

The table to store messages into. This field supports interpolation functions.

Type: string
Default: ""

```yaml
# Examples

table_name: ${!meta("kafka_topic")}
```

partition_key#

The partition key. This field supports interpolation functions.

Type: string
Default: ""

```yaml
# Examples

partition_key: ${!json("date")}
```

row_key#

The row key. This field supports interpolation functions.

Type: string
Default: ""

```yaml
# Examples

row_key: ${!json("device")}-${!uuid_v4()}
```

properties#

A map of properties to store into the table. This field supports interpolation functions.

Type: object
Default: {}
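For example, each property value can be built with interpolation functions; the property names and source fields here are hypothetical:

```yaml
properties:
  device: ${!json("device_id")}
  content: ${!content()}
```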

insert_type#

The type of insert operation. This field supports interpolation functions.

Type: string
Default: "INSERT"
Options: INSERT, INSERT_MERGE, INSERT_REPLACE.
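Assuming standard Azure Table semantics, a plain insert fails when an entity with the same partition and row key already exists, whereas the merge and replace variants upsert. For example:

```yaml
# Replace any existing entity with the same keys rather than erroring.
insert_type: INSERT_REPLACE
```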

max_in_flight#

The maximum number of messages to have in flight at a given time. Increase this to improve throughput.

Type: number
Default: 1

timeout#

The maximum period to wait on an upload before abandoning it and reattempting.

Type: string
Default: "5s"

batching#

Allows you to configure a batching policy.

Type: object

```yaml
# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m
```

batching.count#

A number of messages at which the batch should be flushed. If 0, count-based batching is disabled.

Type: number
Default: 0

batching.byte_size#

An amount of bytes at which the batch should be flushed. If 0, size-based batching is disabled.

Type: number
Default: 0

batching.period#

A period in which an incomplete batch should be flushed regardless of its size.

Type: string
Default: ""

```yaml
# Examples

period: 1s

period: 1m

period: 500ms
```

batching.check#

A Bloblang query that should return a boolean value indicating whether a message should end a batch.

Type: string
Default: ""

```yaml
# Examples

check: this.type == "end_of_transaction"
```

batching.processors#

A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.

Type: array
Default: []

```yaml
# Examples

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

processors:
  - merge_json: {}
```