parquet_decode

EXPERIMENTAL

This component is experimental and therefore subject to change or removal outside of major version releases.

Decodes Parquet files into a batch of structured messages.

Introduced in version 4.4.0.

```yaml
# Config fields, showing default values
label: ""
parquet_decode: {}
```

This processor uses https://github.com/parquet-go/parquet-go, which is itself experimental. Therefore, changes could be made to how this processor functions outside of major version releases.
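
For context on what the underlying library call looks like, here is a minimal sketch of decoding a Parquet file into typed rows with parquet-go. The file name `example.parquet` and the `Row` struct are illustrative assumptions; the processor itself derives the schema from each file rather than from a Go type.

```go
package main

import (
	"fmt"
	"log"

	"github.com/parquet-go/parquet-go"
)

// Row mirrors the columns of a hypothetical example.parquet file;
// the processor infers the schema from the file itself instead.
type Row struct {
	ID   int64  `parquet:"id"`
	Name string `parquet:"name"`
}

func main() {
	// ReadFile loads every row of the file into memory in one call.
	rows, err := parquet.ReadFile[Row]("example.parquet")
	if err != nil {
		log.Fatal(err)
	}
	for _, row := range rows {
		fmt.Printf("%+v\n", row)
	}
}
```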

Examples

In this example we consume files from AWS S3 as they're written by listening to an SQS queue for upload events. We make sure to use the to_the_end scanner, which means each file is read into memory in full; this allows the parquet_decode processor to expand each file into a batch of messages. Finally, we write the data out to local files as newline-delimited JSON.

```yaml
input:
  aws_s3:
    bucket: TODO
    prefix: foos/
    scanner:
      to_the_end: {}
    sqs:
      url: TODO
  processors:
    - parquet_decode: {}

output:
  file:
    codec: lines
    path: './foos/${! meta("s3_key") }.jsonl'
```
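
As a rough sketch of the transformation each S3 object goes through, the following standalone Go program reads one Parquet file and writes it back out as newline-delimited JSON, mirroring the decode and line-oriented file output steps of the pipeline above. The `foo.parquet` and `foo.jsonl` paths and the `Row` type are placeholders, not part of the processor.

```go
package main

import (
	"encoding/json"
	"log"
	"os"

	"github.com/parquet-go/parquet-go"
)

// Row is a placeholder standing in for the columns of the real file.
type Row struct {
	ID   int64  `parquet:"id" json:"id"`
	Name string `parquet:"name" json:"name"`
}

func main() {
	// Decode the whole file into memory, as parquet_decode does per message.
	rows, err := parquet.ReadFile[Row]("foo.parquet")
	if err != nil {
		log.Fatal(err)
	}

	// Write each row as one JSON line, like the file output with codec: lines.
	out, err := os.Create("foo.jsonl")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	enc := json.NewEncoder(out)
	for _, row := range rows {
		if err := enc.Encode(row); err != nil {
			log.Fatal(err)
		}
	}
}
```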