Benthos

The stream processor for mundane tasks.

It's boringly easy to use

Stream pipelines are defined in a single config file, allowing you to declare connectors and a list of processing stages.

$ benthos -c ./yourconfig.yaml

# yourconfig.yaml
# Consume messages from a Kafka consumer group.
input:
  kafka_balanced:
    addresses: [ TODO ]
    topics: [ foo, bar ]
    consumer_group: foogroup

# Restructure each message with a JMESPath query.
pipeline:
  processors:
  - jmespath:
      query: '{ message: @, meta: { link_count: length(links) } }'

# Write each message to an S3 bucket, keyed by topic and message ID.
output:
  s3:
    bucket: TODO
    path: "${!metadata:kafka_topic}/${!json_field:message.id}.json"

Lots of Connectors

Benthos is able to glue a wide range of sources and sinks together, enabling you to seamlessly deploy it without changing your existing infrastructure.
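
For instance, a broker output can fan each message out to several sinks at once. A minimal sketch (the addresses and paths here are placeholders, not part of the original example):

input:
  http_server:
    path: /post

output:
  broker:
    pattern: fan_out
    outputs:
    - kafka:
        addresses: [ localhost:9092 ]
        topic: foo
    - file:
        path: ./backup.log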

Takes Care of the Dull Stuff

Many stream processing tasks are actually just boring translations from one format to another. Benthos specializes in these tasks, letting you focus on the more advanced parts of your architecture.
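
A sketch of one such translation, reading JSON documents from stdin and reshaping them with the same jmespath processor shown above (the query and field names are illustrative):

input:
  stdin: {}

pipeline:
  processors:
  - jmespath:
      query: '{ id: user.id, name: user.name }'

output:
  stdout: {}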

Even more advanced patterns, such as stream joins and enrichment workflows, can be solved by breaking them down into boring operations. For examples, check out the Benthos cookbooks.

Reliable and Scalable

Benthos runs fast, has a low memory footprint, and processes messages using a transaction model, allowing it to guarantee at-least-once delivery* even in the event of crashes or unexpected server faults.

It can scale vertically, and since it is stateless it can also scale horizontally without limit, depending on your choice of transport.

* When the connection protocols allow it

Extendable

Sometimes the components that come with Benthos aren't enough. Luckily, Benthos is designed to be easily extended with whatever components you need.

You can either write plugins directly in Go (recommended) or configure Benthos to run your plugin as a subprocess, as sketched below.
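
A sketch of the subprocess route, where a subprocess processor pipes each message through your program over stdin and stdout (the binary path is hypothetical, and exact field names can vary between Benthos versions):

pipeline:
  processors:
  - subprocess:
      name: ./my-plugin
      args: []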
