It's boringly easy to use
Written in Go, deployed as a static binary, declarative configuration. Cloud native as utter heck.
```shell
# Install
curl -Lsf https://sh.benthos.dev | bash

# Make a config
benthos create nats/protobuf/sqs > ./config.yaml

# Run
benthos -c ./config.yaml
```
```yaml
input:
  gcp_pubsub:
    project: foo
    subscription: bar

pipeline:
  processors:
    - bloblang: |
        root.message = this
        root.meta.link_count = this.links.length()
        root.user.age = this.user.age.number()

output:
  redis_streams:
    url: tcp://TODO:6379
    stream: baz
    max_in_flight: 20
```
Takes Care of the Dull Stuff
Benthos solves common data engineering tasks such as transformations, integrations, and multiplexing with declarative configuration. This allows you to easily and incrementally adapt your data pipelines as requirements change, letting you focus on the more exciting stuff.
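As a rough sketch, multiplexing can be expressed declaratively with a switch output; the topic names and the check expression here are illustrative, not part of any real pipeline:

```yaml
# Route each message to a different sink based on its content.
output:
  switch:
    cases:
      # Messages of type "article" go to a Kafka topic (hypothetical broker/topic).
      - check: this.type == "article"
        output:
          kafka:
            addresses: [ localhost:9092 ]
            topic: articles
      # Everything else is discarded.
      - output:
          drop: {}
```

Changing the routing later means editing this block rather than rewriting application code.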
Benthos is able to glue a wide range of sources and sinks together and hook into a variety of databases, caches, HTTP APIs, lambdas and more, enabling you to seamlessly drop it into your existing infrastructure.
Working with disparate APIs and services can be a daunting task, doubly so in a streaming data context. With Benthos it's possible to break these tasks down and automatically parallelize them as a streaming workflow.
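A minimal sketch of such a workflow using the workflow processor, assuming two hypothetical enrichment services; the URLs, branch names, and mappings are placeholders:

```yaml
pipeline:
  processors:
    - workflow:
        meta_path: meta.workflow
        branches:
          # Each branch maps the payload, calls a service, and merges the result.
          # Independent branches are executed in parallel automatically.
          sentiment:
            request_map: root = this.text
            processors:
              - http:
                  url: http://sentiment.internal/analyze  # hypothetical service
                  verb: POST
            result_map: root.enrichments.sentiment = this
          entities:
            request_map: root = this.text
            processors:
              - http:
                  url: http://entities.internal/extract  # hypothetical service
                  verb: POST
            result_map: root.enrichments.entities = this
```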
Reliable and Scalable
Benthos runs fast and processes messages using a transaction model, guaranteeing at-least-once delivery even in the event of crashes or unexpected server faults.
At Meltwater it's enriching over 450 million documents per day with a network of more than 20 NLP services. It sounds very interesting but rest assured, it's totally drab.