It's boringly easy to use
Written in Go, deployed as a static binary. Configured with a single YAML file, allowing you to declare connectors and a list of processing stages.
```sh
# Install
curl -Lsf https://sh.benthos.dev | bash

# Make a config
benthos create nats/protobuf/sqs > ./config.yaml

# Run
benthos -c ./config.yaml
```
```yaml
input:
  gcp_pubsub:
    project: foo
    subscription: bar

pipeline:
  processors:
    - bloblang: |
        root.message = this
        root.meta.link_count = this.links.length()
        root.user.age = this.user.age.number()

output:
  redis_streams:
    url: tcp://TODO:6379
    stream: baz
    max_in_flight: 20
```
Takes Care of the Dull Stuff
Most stream processing tasks are actually just boring transformations, gluing APIs together, and multiplexing. Benthos specializes in these tasks, letting you focus on the more exciting features of your architecture.
Benthos is able to glue a wide range of sources and sinks together and hook into a variety of databases, caches, HTTP APIs, lambdas and more, enabling you to seamlessly deploy it without changing your existing infrastructure.
Working with disparate APIs and services can be a daunting task, doubly so in a streaming data context. With Benthos it's possible to break these tasks down and automatically parallelize them as a streaming workflow.
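As a sketch of what such a workflow can look like, the config below fans a message out to two HTTP enrichment services in parallel and merges the results back in. The service names and endpoints are illustrative placeholders, not real services:

```yaml
pipeline:
  processors:
    - workflow:
        # Branches listed in the same inner array run in parallel.
        order: [ [ sentiment, entities ] ]
        branches:
          sentiment:
            # Map the fields each service needs from the message.
            request_map: 'root.text = this.article.body'
            processors:
              - http:
                  url: http://sentiment-svc:8080/analyze  # hypothetical service
                  verb: POST
            # Merge the service response back into the original message.
            result_map: 'root.enrichments.sentiment = this'
          entities:
            request_map: 'root.text = this.article.body'
            processors:
              - http:
                  url: http://entities-svc:8080/extract  # hypothetical service
                  verb: POST
            result_map: 'root.enrichments.entities = this'
```

Each branch extracts only what its service needs, calls it, and maps the response back, so independent enrichments run concurrently without any custom orchestration code.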
Reliable and Scalable
Benthos runs fast and processes messages using a transaction model, enabling it to guarantee at-least-once delivery even in the event of crashes or unexpected server faults.
At Meltwater it's enriching over 450 million documents per day with a network of more than 20 NLP services. It sounds very interesting but rest assured, it's totally drab.