AWS Firehose Eventbridge Subscriber

Write event data from EventBridge to S3.


Amazon Kinesis Data Firehose is a fully managed delivery stream that loads data into specific AWS services such as Redshift, OpenSearch, and S3. The advantage of using Firehose over standard Kinesis streams is the autoscaling capability built into the Firehose stream. This bundle delivers JSON payloads from EventBridge directly to an S3 bucket. Events can be filtered from the EventBridge bus via event rule patterns, and data writes can be partitioned using Firehose dynamic partitioning, which is powered by jq: a jq query extracts a value from within each event and uses it as a partitioning key. This solution is well suited to writing clickstream data from a web application to S3.

Features

EventBridge Event Filtering

EventBridge is an autoscaling event bus that lets consumers declare which events to consume via pattern-matching rules expressed as JSON. If you are sending all of your application’s events to the EventBridge bus but only care about user-related events, an event rule can filter the stream down to just those events before they reach Firehose.

Example

Let’s say we have a UserPhoto service that handles uploading and serving assets for user profile pictures. To decouple the image optimizer from the uploader, we may want to communicate via events. The PhotoOptimizer should only consume events that pertain to work it must complete. In this fictional use case, we want events from the UserPhoto service indicating that an uploaded asset is a JPEG. Below is an example event and a pattern that matches those events.

Event

{
  "version": "0",
  "id": "xxxxxxxxxxx",
  "detail-type": "uploaded",
  "source": "UserPhoto",
  "account": "ARN",
  "time": "timestamp",
  "region": "region",
  "resources": ["ARN"],
  "detail": {
    "FileName": "testimage.jpg
  }
}

Match

{
  "source": ["UserPhoto"],
  "detail-type": ["uploaded"],
  "detail": {
    "FileName": [{ "suffix": ".jpg" }]
  }
}
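
To sanity-check a pattern against a sample event, EventBridge exposes a test operation in the AWS CLI. A minimal sketch, assuming the event and pattern above are saved as event.json and pattern.json (with the placeholder fields filled in with valid values):

aws events test-event-pattern \
  --event-pattern file://pattern.json \
  --event file://event.json
# => { "Result": true }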

More information on event patterns can be found in the Amazon EventBridge documentation.

Partitioned Write

Some use cases may require writes to S3 that are grouped under a common key, for instance an organization ID. Partitioned writes allow a user to supply a jq query that selects a partition key from a value within the event being consumed. Using the UserPhoto example, we can record JPEG uploads to user-specific folders with the jq query shown after the event below.

{
  "version": "0",
  "id": "xxxxxxxxxxx",
  "detail-type": "uploaded",
  "source": "UserPhoto",
  "account": "ARN",
  "time": "timestamp",
  "region": "region",
  "resources": ["ARN"],
  "detail": {
    "FileName": "testimage.jpg,
    "userID": "1234"
  }
}

JQ query

.detail.userID
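
As a quick local check, the query can be run with the jq CLI. A minimal sketch, assuming the event above is saved as event.json:

jq -r '.detail.userID' event.json
# => 1234

Firehose uses the extracted value as the partition key, so objects for this user land under a user-specific S3 prefix (for example, via a prefix that references !{partitionKeyFromQuery:userID}).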

Alarms

Partition Count

Dynamic partitioning creates a partition per unique key returned by the jq query. Partitions exist until their data has been written to the destination. By default, the maximum number of active partitions allowed per stream is 500. This quota can be increased to 5,000 through a quota increase request with AWS. This alarm is set to trigger when 80% of the default capacity is in use, indicating that it is time to request a quota increase.
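
For illustration, a comparable alarm could be expressed with the AWS CLI (the bundle provisions its alarms for you); a sketch assuming a hypothetical delivery stream named my-stream and the AWS/Firehose PartitionCount metric:

aws cloudwatch put-metric-alarm \
  --alarm-name my-stream-partition-count \
  --namespace AWS/Firehose \
  --metric-name PartitionCount \
  --dimensions Name=DeliveryStreamName,Value=my-stream \
  --statistic Maximum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 400 \
  --comparison-operator GreaterThanThreshold

The 400 threshold corresponds to 80% of the default 500-partition quota.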

S3 Write Success

S3 write success tracks the number of successful writes divided by the total number of writes to S3. By default, this alarm is set to trigger when the success rate drops below 99%.
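
A comparable sketch for this alarm, using Firehose's DeliveryToS3.Success metric and the same hypothetical stream name:

aws cloudwatch put-metric-alarm \
  --alarm-name my-stream-s3-write-success \
  --namespace AWS/Firehose \
  --metric-name DeliveryToS3.Success \
  --dimensions Name=DeliveryStreamName,Value=my-stream \
  --statistic Average \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 0.99 \
  --comparison-operator LessThanThreshold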

Variables

Variable                               Type     Description
event_rule.event_filter                string   No description
firehose.buffer_interval_seconds       integer  No description
firehose.buffer_size_mb                integer  No description
firehose.dynamic_partitioning.enabled  boolean  Dynamic partitioning allows you to parse incoming events and pull out a value to use as a prefix in S3 directories. More information about dynamic partitioning can be found in the AWS Firehose documentation.
monitoring.mode                        string   Enable and customize Kinesis Delivery Stream metric alarms.