
chunk bytes limit exceeds for an emitted event stream #122

Closed
CheyiLin opened this issue May 17, 2017 · 5 comments

Comments

@CheyiLin

Hi,

I got the following warning and errors with fluentd 0.14.15:

[warn]: #0 [aws_firehose] chunk bytes limit exceeds for an emitted event stream: 2288290bytes
[error]: #0 [aws_firehose] Invalid type of record:
[error]: #0 [aws_firehose] Invalid type of record:
...

The config is:

```
@type kinesis_firehose
formatter json
include_time_key true
include_tag_key true

flush_interval 1s
buffer_chunk_limit 2m
buffer_queue_limit 64
try_flush_interval 0.1
queued_chunk_flush_interval 0.01
num_threads 8
```

The max record size from the forward input is ~650 KB, so I don't know why it hit the buffer chunk limit.
Did I miss something, or is my configuration wrong? Thanks.
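As an aside (this parameter mapping is my own assumption, not something stated in this thread): fluentd 0.14 replaces the flat buffer parameters used above with a `<buffer>` section under renamed keys. A rough sketch of the equivalent:

```
<buffer>
  @type memory
  flush_interval 1s
  chunk_limit_size 2m     # formerly buffer_chunk_limit
  queue_limit_length 64   # formerly buffer_queue_limit
  flush_thread_count 8    # formerly num_threads
</buffer>
```

If I read the migration notes right, `try_flush_interval` and `queued_chunk_flush_interval` roughly correspond to `flush_thread_interval` and `flush_thread_burst_interval` in the new syntax.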


riywo commented Sep 20, 2017

Hi @CheyiLin ,

Sorry for my late reply. Does this problem still exist? If so, I need more information to reproduce this problem. Could you give me plugin versions and steps to reproduce it?


riywo commented Sep 22, 2017

Closing. If you are facing the same issue, please add comment with more information.

@riywo riywo closed this as completed Sep 22, 2017
@gurunathj

I got the same warning message with fluentd 0.14.21:
2017-10-11 17:09:17 +0000 [warn]: #0 chunk bytes limit exceeds for an emitted event stream: 3351422bytes

What could be the reason for this message with the configuration below?

```
@type kinesis_streams_aggregated
stream_name test-stream
region us-east-1

@type file
path /fluentd/log/kinesis.*.buffer
flush_thread_count 3
flush_interval 5s
flush_thread_burst_interval 0.01
retry_max_times 10
retry_max_interval 5s
chunk_limit_size 500k
queue_limit_length 256
chunk_limit_records 300
```


riywo commented Oct 11, 2017

Seems to be a duplicate of #133. Just for reference.

@CheyiLin (Author)

Sorry for the late reply.

I solved this by increasing the chunk and queue limits to fit my message throughput, with the following config:

```
<buffer>
    @type memory
    flush_mode interval
    flush_interval 1s
    flush_thread_count 5
    chunk_limit_size 12m
    queue_limit_length 96
</buffer>
```
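For anyone sizing this: a chunk has to hold everything emitted within one flush interval, so a back-of-envelope estimate is events_per_sec × avg_event_bytes × flush_interval, plus headroom for bursts. A sketch with illustrative numbers (the workload figures below are my assumptions, not values from this issue):

```
# Assumed workload: ~2000 events/s at ~3 KiB each, flushed every 1 s
#   -> 2000 * 3 KiB * 1 s ≈ 6 MiB per chunk at steady state.
# chunk_limit_size 12m then leaves roughly 2x headroom for bursts.
<buffer>
  @type memory
  flush_mode interval
  flush_interval 1s
  chunk_limit_size 12m
  queue_limit_length 96
</buffer>
```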
