Describe the question/issue
We migrated our existing containers from EC2 nodes to EKS Fargate by following the documentation at https://docs.aws.amazon.com/eks/latest/userguide/fargate.html. During performance testing we observed that some application logs never reach CloudWatch: a set of logs is missing, and it looks like Fluent Bit lost its connection and never resent the missed logs. The full logs are still present at the container level (`kubectl logs`).
On further investigation we found the similar issue #525, where the problem was fixed by upgrading to Fluent Bit 1.9.10. At the moment the Fargate log router pods are running Fluent Bit 1.9.8, as reported in the -fluent-bit-logs CloudWatch log group (a minimal sketch of how we checked this is below).
Can we expect the fixed version to ship with the next EKS 1.25 upgrade, or is there another way to resolve this issue?
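For reference, this is roughly how the running version can be confirmed from the log router's own log group (which exists because flb_log_cw is "true"). This is only a sketch: the log group name and region below are placeholders, not the actual values from our account.

```python
# Sketch only: search the Fargate log router's own log group for the
# Fluent Bit startup banner, which includes the running version.
# The log group name and region are placeholders/assumptions.
import boto3

logs = boto3.client("logs", region_name="us-east-1")

resp = logs.filter_log_events(
    logGroupName="my-cluster-fluent-bit-logs",  # placeholder log group name
    filterPattern='"Fluent Bit v"',             # startup banner line, e.g. "Fluent Bit v1.9.8"
)
for event in resp.get("events", []):
    print(event["message"])
```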
Configuration
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  flb_log_cw: "true" # Ships Fluent Bit process logs to CloudWatch
  filters.conf: |
    [FILTER]
        Name                parser
        Match               *
        Parser              crio
        Key_Name            log
        Preserve_Key        false
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Merge_Log           On
        Keep_Log            Off
        Labels              Off
        Annotations         Off
        Buffer_Size         0
        Kube_Meta_Cache_TTL 300s
    [FILTER]
        Name                record_modifier
        Match               *
        Remove_key          stream
        Remove_key          logtag
  output.conf: |
    [OUTPUT]
        Name                cloudwatch_logs
        Match
        region              {{Region}}
        log_group_name
        log_stream_prefix   fallback-stream-
        log_stream_template $kubernetes['pod_id'] - $kubernetes['pod_name']
        auto_create_group   false
    [OUTPUT]
        Name                cloudwatch_logs
        Match
        region              {{Region}}
        log_group_name
        log_stream_prefix   fallback-stream-
        log_stream_template $kubernetes['pod_id'] - $kubernetes['pod_name']
        auto_create_group   false
  parsers.conf: |
    [PARSER]
        Name        crio
        Format      Regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
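For reference, a quick illustration of what the crio parser above is expected to extract, using the same regex in Python. The sample log line is made up; it only shows which fields (time, stream, logtag, log) end up on each record before the kubernetes and record_modifier filters run.

```python
# Illustration only: apply the same CRI regex the crio parser uses.
# The sample log line below is invented for demonstration.
import re

CRIO_RE = re.compile(
    r"^(?P<time>[^ ]+) (?P<stream>stdout|stderr) (?P<logtag>P|F) (?P<log>.*)$"
)

sample = '2023-02-01T12:34:56.789012345Z stdout F {"level":"info","msg":"request served"}'
print(CRIO_RE.match(sample).groupdict())
# {'time': '2023-02-01T12:34:56.789012345Z', 'stream': 'stdout',
#  'logtag': 'F', 'log': '{"level":"info","msg":"request served"}'}
```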
Cluster Details
EKS cluster version: 1.24, platform version eks.4
@PettitWesley