#203: add placeholder support to delivery_stream_name for kinesis_firehose. closes #203 #204

Merged 2 commits on Jul 15, 2020
24 changes: 24 additions & 0 deletions README.md
@@ -404,6 +404,30 @@ Here are `kinesis_firehose` specific configurations.
### delivery_stream_name
Name of the delivery stream to put data into.

Since Fluentd v1, built-in placeholders are supported, and you can use them in this parameter.

**NOTE:**
Built-in placeholders require the target keys to be set as chunk keys in your buffer section.

For example, when you specify the following `delivery_stream_name` configuration with a built-in placeholder:

```aconf
delivery_stream_name "${$.kubernetes.annotations.kinesis_firehose_streams}"
```

you must specify the corresponding chunk keys in the buffer section:

```aconf
# $.kubernetes.annotations.kinesis_firehose_streams needs to be set as a chunk key
<buffer $.kubernetes.annotations.kinesis_firehose_streams>
# ...
</buffer>
```

For more details, refer to the [Placeholders section in the official Fluentd documentation](https://docs.fluentd.org/configuration/buffer-section#placeholders).
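Tag and time placeholders work the same way; every placeholder used must have a matching chunk key in the buffer section. A hypothetical configuration (not part of this change, values are illustrative):

```aconf
# Hypothetical example: with tag "app.access" this resolves to
# something like "logs-app.access-20200715"
delivery_stream_name "logs-${tag}-%Y%m%d"

<buffer tag, time>
  @type memory
  timekey 3600  # time-format placeholders require a time chunk key
</buffer>
```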

### append_new_line
Boolean. Default `true`. If enabled, the plugin appends a newline character (`\n`) to each serialized record.
Before appending `\n`, the plugin chomps the record, removing any separator from its end, just as when [chomp_record](#chomp_record) is `true`. Therefore, you don't need to enable the [chomp_record](#chomp_record) option when you use the [kinesis_firehose](#kinesis_firehose) output with the default configuration ([append_new_line](#append_new_line) is `true`). If you set [append_new_line](#append_new_line) to `false`, you can choose [chomp_record](#chomp_record) `false` (the default) or `true` (a format compatible with plugin v2).
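A minimal sketch of this framing behavior (illustrative Ruby, not the plugin's actual implementation; `frame_record` is a hypothetical helper):

```ruby
# Illustrative sketch of the record framing described above:
# with append_new_line enabled, each record is chomped first and
# then a single "\n" is appended, so records never end in "\n\n".
def frame_record(record, append_new_line: true, chomp_record: false)
  data = record.dup
  data.chomp! if append_new_line || chomp_record
  data << "\n" if append_new_line
  data
end

frame_record("log line\n")                     # => "log line\n"
frame_record("log line")                       # => "log line\n"
frame_record("log line\n",
             append_new_line: false,
             chomp_record: true)               # => "log line"
```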
3 changes: 2 additions & 1 deletion lib/fluent/plugin/out_kinesis_firehose.rb
@@ -44,12 +44,13 @@ def format(tag, time, record)
end

def write(chunk)
delivery_stream_name = extract_placeholders(@delivery_stream_name, chunk)
write_records_batch(chunk) do |batch|
records = batch.map{|(data)|
{ data: data }
}
client.put_record_batch(
delivery_stream_name: @delivery_stream_name,
delivery_stream_name: delivery_stream_name,
records: records,
)
end
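The one-line change above resolves placeholders once per chunk before calling the Firehose API. Conceptually, placeholder resolution works like the following simplified sketch (NOT Fluentd's real `extract_placeholders` API; `resolve_placeholders` is a hypothetical stand-in):

```ruby
# Simplified sketch of how a template such as delivery_stream_name
# is resolved from a buffer chunk's metadata: ${tag} comes from the
# chunk's tag, and ${$.path.to.key} from its record-based chunk keys.
def resolve_placeholders(template, tag:, variables: {})
  template.gsub(/\$\{([^}]+)\}/) do
    key = Regexp.last_match(1)
    key == "tag" ? tag : variables.fetch(key)
  end
end

vars = { "$.kubernetes.annotations.kinesis_firehose_streams" => "stream-a" }
resolve_placeholders("prefix-${tag}", tag: "app.access")
# => "prefix-app.access"
resolve_placeholders("${$.kubernetes.annotations.kinesis_firehose_streams}",
                     tag: "app.access", variables: vars)
# => "stream-a"
```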
10 changes: 5 additions & 5 deletions test/dummy_server.rb
@@ -245,7 +245,7 @@ def describe_stream_boby(req)
def put_record_boby(req)
body = JSON.parse(req.body)
record = {'Data' => body['Data'], 'PartitionKey' => body['PartitionKey']}
@accepted_records << {:stream_name => body['StreamName'], :record => record} if recording?
@accepted_records << {:stream_name => body['StreamName'], :delivery_stream_name => body['DeliveryStreamName'], :record => record} if recording?
{
"SequenceNumber" => "21269319989653637946712965403778482177",
"ShardId" => "shardId-000000000001"
@@ -278,7 +278,7 @@ def put_records_boby(req)
"ErrorMessage" => "Rate exceeded for shard shardId-000000000001 in stream exampleStreamName under account 111111111111."
}
else
@accepted_records << {:stream_name => body['StreamName'], :record => record} if recording?
@accepted_records << {:stream_name => body['StreamName'], :delivery_stream_name => body['DeliveryStreamName'], :record => record} if recording?
{
"SequenceNumber" => "49543463076548007577105092703039560359975228518395019266",
"ShardId" => "shardId-000000000000"
Expand All @@ -303,7 +303,7 @@ def put_record_batch_boby(req)
"ErrorMessage" => "Some message"
}
else
@accepted_records << {:stream_name => body['StreamName'], :record => record} if recording?
@accepted_records << {:stream_name => body['StreamName'], :delivery_stream_name => body['DeliveryStreamName'], :record => record} if recording?
{
"RecordId" => "49543463076548007577105092703039560359975228518395019266",
}
@@ -323,13 +323,13 @@ def flatten_records(records, detailed: false)
if @aggregator.aggregated?(data)
agg_data = @aggregator.deaggregate(data)[0]
if detailed
{:stream_name => record[:stream_name], :data => agg_data, :partition_key => partition_key}
{:stream_name => record[:stream_name], :delivery_stream_name => record[:delivery_stream_name], :data => agg_data, :partition_key => partition_key}
else
agg_data
end
else
if detailed
{:stream_name => record[:stream_name], :data => data, :partition_key => partition_key}
{:stream_name => record[:stream_name], :delivery_stream_name => record[:delivery_stream_name], :data => data, :partition_key => partition_key}
else
data
end
55 changes: 55 additions & 0 deletions test/plugin/test_out_kinesis_firehose.rb
@@ -208,6 +208,61 @@ def test_record_count
assert @server.error_count > 0
end

class PlaceholdersTest < self
def test_tag_placeholder
d = create_driver(
Fluent::Config::Element.new('ROOT', '', {
"delivery_stream_name" => "stream-placeholder-${tag}",
"@log_level" => "error",
"retries_on_batch_request" => 10,
"endpoint" => "https://localhost:#{@server.port}",
"ssl_verify_peer" => false,
}, [Fluent::Config::Element.new('buffer', 'tag', {'@type' => 'memory', }, [])])
)
record = {"a" => "test"}
driver_run(d, [record])
assert_equal("stream-placeholder-test", @server.detailed_records.first[:delivery_stream_name])
assert_equal 0, d.instance.log.out.logs.size
assert_equal (record.to_json + "\n").b, @server.records.first
end

def test_time_placeholder
d = create_driver(
Fluent::Config::Element.new('ROOT', '', {
"delivery_stream_name" => "stream-placeholder-${tag}-%Y%m%d",
"@log_level" => "error",
"retries_on_batch_request" => 10,
"endpoint" => "https://localhost:#{@server.port}",
"ssl_verify_peer" => false,
}, [Fluent::Config::Element.new('buffer', 'tag, time', {'@type' => 'memory', 'timekey' => 3600 }, [])])
)
record = {"a" => "test"}
time = event_time
driver_run(d, [record], time: time)
assert_equal("stream-placeholder-test-#{Time.now.strftime("%Y%m%d")}",
@server.detailed_records.first[:delivery_stream_name])
assert_equal 0, d.instance.log.out.logs.size
assert_equal (record.to_json + "\n").b, @server.records.first
end

def test_custom_placeholder
d = create_driver(
Fluent::Config::Element.new('ROOT', '', {
"delivery_stream_name" => "stream-placeholder-${$.key.nested}",
"@log_level" => "error",
"retries_on_batch_request" => 10,
"endpoint" => "https://localhost:#{@server.port}",
"ssl_verify_peer" => false,
}, [Fluent::Config::Element.new('buffer', '$.key.nested', {'@type' => 'memory', }, [])])
)
record = {"key" => {"nested" => "nested-value"}}
driver_run(d, [record])
assert_equal("stream-placeholder-nested-value", @server.detailed_records.first[:delivery_stream_name])
assert_equal 0, d.instance.log.out.logs.size
assert_equal (record.to_json + "\n").b, @server.records.first
end
end

# Debug test case for the issue that it fails to flush the buffer
# https://github.com/awslabs/aws-fluent-plugin-kinesis/issues/133
#def test_chunk_limit_size_for_debug