Failed to flush the buffer with error_class=NoMethodError error="undefined method `compact' #133
The aws-fluent-plugin-kinesis plugin produces the following errors while running with type kinesis_streams_aggregated and fluentd version 0.14.21:

```
2017-10-11 17:07:45 +0000 [warn]: #0 failed to flush the buffer. retry_time=0 next_retry_seconds=2017-10-11 17:07:45 +0000 chunk="55b4874e145808169ea53006b65fdb54" error_class=NoMethodError error="undefined method `compact' for 92:Fixnum"
2017-10-11 17:09:15 +0000 [warn]: #0 failed to flush the buffer. retry_time=8 next_retry_seconds=2017-10-11 17:09:21 +0000 chunk="55b487651c7eeff3d29c0c2caa6bc7c2" error_class=NoMethodError error="undefined method `compact' for 50:Fixnum"
```

Additionally, I am using fluent-plugin-s3 along with fluent-plugin-kinesis.

Comments
Additional logs:

```
2017-10-11 20:27:25 +0000 [info]: gem 'fluent-plugin-kinesis' version '2.0.1'
```
Hi @gurunathj, thank you for reporting the issue. Could you give me the whole fluent.conf related to this plugin? Also, how is it working with …? Thanks.
Hi @riywo, thanks for looking into this. I have further observations and details below.
Attached are the fluentd config and the message used for testing.
Thank you for providing the details and examples. I've reproduced this issue. Let me investigate it.
Hi @gurunathj, could you share why you configure chunk_limit_size to such a small value?
Hi @riywo, the chunk_limit_size of 1k in the example is just for reproducing the issue. In the actual environment, the issue occurs even with chunk_limit_size up to 1m. I am keeping chunk_limit_size under 1m because of the Kinesis shard PUT request limit. So, is chunk_limit_size actually related to the Kinesis shard PUT request limit? Does it make any difference when multiple instances and threads are configured for fluentd? Otherwise, I can configure chunk_limit_size much higher to handle heavy load.
When putting the events of one chunk to Kinesis, this plugin automatically splits the chunk to fit the API limits, so you can set chunk_limit_size larger than the per-request limit. You can also configure the slicing via batch_request_max_count and batch_request_max_size.
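Illustratively, the kind of configuration this implies might look as follows; this is a sketch, not a drop-in config: the match pattern, region, and stream name are placeholders, and batch_request_max_count is the plugin option named above.

```
<match my.**>
  @type kinesis_streams_aggregated
  region us-west-1
  stream_name my-stream
  # Cap the number of records per PutRecords call; the plugin slices
  # each buffer chunk into requests that stay under this limit.
  batch_request_max_count 500
  <buffer>
    # Can safely exceed the per-request API limits, since the plugin
    # splits one chunk into multiple API requests as needed.
    chunk_limit_size 16m
  </buffer>
</match>
```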
Setting a larger chunk_limit_size resolved the error. Thank you.
Hi @gurunathj, I'm happy to hear that a larger chunk_limit_size resolved the error. This error only happens with fluentd v0.14, since it automatically splits data into multiple chunks but this plugin doesn't take care of that so far. The solution could be using the standard format of the v0.14 plugin API, but that is a big change. So, I'd like to keep the current code and defer solving this issue to the next major version, v3. Along with that, I'll add this workaround to the README later.
Experiencing the same error with v2.0.1 of this plugin and fluentd 0.12.40. Prior to updating we were on v1.1.3 of this plugin and fluentd 0.12.36 and never experienced this problem. I don't have a cause yet, so I cannot say whether it's entirely bound to the plugin update.
We're hitting this issue running fluentd v1.1 (…). Error message: (…)
What's the recommended setting for chunk_limit_size?
I think chunk_limit_size needs to be configured according to message size and available memory. I used a higher value like 128m.
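For reference, the Kinesis limits in play here: a PutRecords request accepts up to 500 records and 5 MB per call, with at most 1 MB per record. Since the plugin slices each chunk into compliant requests, chunk_limit_size mainly bounds fluentd's buffer memory (roughly chunk size times the number of queued chunks and flush threads), not the API requests themselves.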
I increased my chunk_limit_size but I am still receiving this error. Curious if anyone has any tips for debugging this.
Same here. Please help! |
Here is a test script that seems to reliably reproduce the error:

```ruby
require 'aws-sdk-kinesis'
require 'ostruct'
require 'fluent/output'
require 'fluent/plugin/out_kinesis_streams_aggregated'
require 'fluent/test'
require 'fluent/test/driver/output'

driver = Fluent::Test::Driver::Output.new(Fluent::KinesisStreamsAggregatedOutput).configure <<~CONF
  log_level debug
  region us-west-1
  stream_name dummy
  aws_key_id abcdef123
  aws_sec_key abcdef123
  <buffer>
    chunk_limit_size "1m"
  </buffer>
CONF

# Stub out the AWS API so no real requests are made: every PutRecords
# call reports success with zero failed records.
Aws::Kinesis::Client.prepend(Module.new do
  def put_records(*args)
    OpenStruct.new(
      encryption_type: "KMS",
      failed_record_count: 0,
      records: [
        OpenStruct.new(
          sequence_number: "12345",
          shard_id: "12345"
        )
      ]
    )
  end
end)

# Feed enough ~256-byte events to overflow the 1m chunk_limit_size and
# force the buffer to split chunks.
driver.run(force_flush_retry: true) do
  10.times do
    time = Fluent::EventTime.now
    events = Array.new(Kernel.rand(3000..5000)).map { [time, { msg: "x" * 256 }] }
    driver.feed("my.tag", events)
  end
end

puts driver.logs
```
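If the script reproduces the bug, the final `puts driver.logs` should include retry warnings like the ones quoted in this issue, i.e. `error_class=NoMethodError error="undefined method 'compact' for ...:Integer"` (Fixnum on Rubies before 2.4).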
From further investigation, this error seems to be caused by the use of the old v0.12 plugin API compatibility libraries. Internally, the records get msgpack'd and then appended to a single string that is emitted to the buffer; when a chunk grows too large, that string is split as a plain string, without regard to the msgpack format. So when the plugin tries to parse the chunk as msgpack (via the msgpack_each helper), parsing goes wrong and yields Integers instead of event arrays. A test version of the same script using the Fluentd v1.0 plugin API appears not to exhibit this issue: https://gist.github.com/adammw/27b7a3f236cb8fbbea8e1b3a4907225e (as long as Fluent::SetTimeKeyMixin and Fluent::SetTagKeyMixin are not included).
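As a standalone sketch of that failure mode, using plain msgpack-ruby and independent of fluentd or this plugin: when a byte string of concatenated msgpack records is cut mid-record, the bytes after the cut decode as positive fixints, i.e. bare Integers.

```ruby
require 'msgpack'

# One [time, record] event packed the way the v0.12 compat path
# concatenates events into a single buffer string.
packed = [[1507747645, { "msg" => "x" * 32 }]].map(&:to_msgpack).join

# Simulate a chunk split that lands inside the raw-string payload.
tail = packed.byteslice(packed.bytesize / 2, packed.bytesize)

MessagePack::Unpacker.new.feed_each(tail) do |obj|
  # Each 0x78 ("x") byte decodes as the positive fixint 120, so obj is
  # a bare Integer here -- and Integer has no #compact method, which is
  # exactly the NoMethodError this issue reports.
  p obj
end
```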
We've released v3.0.0.rc.1.0: https://rubygems.org/gems/fluent-plugin-kinesis/versions/3.0.0.rc.1.0
If there is no negative feedback, we'll release v3.0.0 soon.
Is there anyone who can maintain the fluent/fluentd-kubernetes-daemonset repository? https://github.com/fluent/fluentd-kubernetes-daemonset/blob/afa071c2c1ceb0a596e96885621e0a046d4f5915/docker-image/v1.11/debian-kinesis/Gemfile#L16