table_storage

BETA: This component is experimental and therefore subject to change outside of major version releases.

Stores message parts in an Azure Table Storage table.

# Common config fields, showing default values
output:
  table_storage:
    storage_account: ""
    storage_access_key: ""
    table_name: ""
    partition_key: ""
    row_key: ""
    properties: {}
    max_in_flight: 1
    batching:
      count: 1
      byte_size: 0
      period: ""

In order to set the table_name, partition_key and row_key you can use function interpolations, which are calculated per message of a batch.
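
For example, a minimal sketch that derives all three per message (the interpolation inputs are assumptions about the incoming messages, not defaults):

output:
  table_storage:
    table_name: '${! meta("kafka_topic") }'
    partition_key: '${! json("date") }'
    row_key: '${! json("device") }-${! uuid_v4() }'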

If properties are not set in the config, all the JSON fields of the message are marshaled and stored in the table, which will be created if it does not exist. Object and array fields are marshaled as strings, e.g.:

The JSON message:

{
  "foo": 55,
  "bar": {
    "baz": "a",
    "bez": "b"
  },
  "diz": ["a", "b"]
}

will be stored in the table with the following properties:

foo: '55'
bar: '{ "baz": "a", "bez": "b" }'
diz: '["a", "b"]'
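
This behaviour follows from leaving properties unset. A minimal sketch of a config that produces it (the account, key and table names are placeholders):

output:
  table_storage:
    storage_account: "myaccount"   # placeholder
    storage_access_key: "mykey"    # placeholder
    table_name: "events"           # placeholder
    partition_key: '${! json("foo") }'
    row_key: '${! uuid_v4() }'
    # properties omitted: every JSON field of the message is stored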

It's also possible to use function interpolations to get or transform the property values, e.g.:

properties:
  device: '${! json("device") }'
  timestamp: '${! json("timestamp") }'

Performance

This output benefits from sending multiple messages in flight in parallel for improved performance. You can tune the max number of in flight messages with the field max_in_flight.

This output benefits from sending messages as a batch for improved performance. Batches can be formed at both the input and output level. You can find out more in the batching documentation.
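
As a sketch, the two can be tuned together (the numbers below are illustrative assumptions, not recommendations):

output:
  table_storage:
    # ... connection fields as above ...
    max_in_flight: 64
    batching:
      count: 100
      period: 1s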

Fields

storage_account

The storage account to upload messages to.

Type: string
Default: ""

storage_access_key

The storage account access key.

Type: string
Default: ""

table_name

The table to store messages into. This field supports interpolation functions.

Type: string
Default: ""

# Examples
table_name: ${!meta("kafka_topic")}

partition_key

The partition key. This field supports interpolation functions.

Type: string
Default: ""

# Examples
partition_key: ${!json("date")}

row_key

The row key. This field supports interpolation functions.

Type: string
Default: ""

# Examples
row_key: ${!json("device")}-${!uuid_v4()}

properties

A map of properties to store into the table. This field supports interpolation functions.

Type: object
Default: {}
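
For example, assuming the incoming messages carry a device field (as in the earlier example); the static source value is illustrative:

properties:
  source: "benthos"               # static value, illustrative
  device: '${! json("device") }'  # resolved per message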

insert_type

Type of insert operation. This field supports interpolation functions.

Type: string
Default: "INSERT"
Options: INSERT, INSERT_MERGE, INSERT_REPLACE.
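
For example, the operation can be fixed or chosen per message (the op field used below is a hypothetical input field):

insert_type: INSERT_REPLACE
insert_type: '${! json("op") }'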

max_in_flight

The maximum number of messages to have in flight at a given time. Increase this to improve throughput.

Type: number
Default: 1

timeout

The maximum period to wait on an upload before abandoning it and reattempting.

Type: string
Default: "5s"

batching

Allows you to configure a batching policy.

Type: object
Default: {"byte_size":0,"condition":{"static":false,"type":"static"},"count":1,"period":"","processors":[]}

# Examples

batching:
  byte_size: 5000
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  condition:
    bloblang: this.contains("END BATCH")
  period: 1m

batching.count

The number of messages at which the batch should be flushed. Set to 0 to disable count-based batching.

Type: number
Default: 1

batching.byte_size

The number of bytes at which the batch should be flushed. Set to 0 to disable size-based batching.

Type: number
Default: 0

batching.period

A period after which an incomplete batch should be flushed regardless of its size.

Type: string
Default: ""

# Examples
period: 1s
period: 1m
period: 500ms

batching.condition

A condition to test against each message entering the batch. If the condition resolves to true then the batch is flushed.

Type: object
Default: {"static":false,"type":"static"}

batching.processors

A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.

Type: array
Default: []

# Examples

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

processors:
  - merge_json: {}