Describe the bug
Various custom and Snowflake ingestion runs fail while trying to send data to the GMS server.
This appears to have started with v0.14.1, presumably because the default sink mode `ASYNC_BATCH` was activated in that release.
We have already set `client_max_body_size: "100m"` in our nginx config and `nginx.ingress.kubernetes.io/proxy-body-size: 200m` in the frontend ingress config.
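For reference, a minimal sketch of the ingress annotation we set (the resource name and host are illustrative; only the annotation and host are the relevant parts):

```yaml
# Illustrative Kubernetes Ingress for the DataHub frontend.
# The proxy-body-size annotation raises the upper request-size limit
# enforced by the ingress-nginx controller (413 is returned above it).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: datahub-frontend      # illustrative name
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "200m"
spec:
  rules:
    - host: datahub.example.com   # illustrative host
```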
Log message from ingestion (using `acryldata/datahub-ingestion:v0.14.1`):
`{'error': 'Unable to emit metadata to DataHub GMS', 'info': {'message': '413 Client Error: Payload Too Large for url: '...`
To Reproduce
Start our (previously working) Snowflake ingestion using the `datahub-rest` sink with version v0.14.1.
Expected behavior
Request sizes should not exceed our configured upper size limit, so ingestion should succeed.
Additional context
Setting the sink config to `mode: ASYNC` is a workaround.
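A sketch of the workaround in a DataHub ingestion recipe, assuming the documented `datahub-rest` sink options (the source section and GMS server URL are illustrative):

```yaml
# Illustrative recipe: Snowflake source with the datahub-rest sink.
# Overriding mode avoids the large batched payloads that the default
# ASYNC_BATCH mode sends, which exceeded our proxy's body-size limit.
source:
  type: snowflake
  config: {}  # source config elided
sink:
  type: datahub-rest
  config:
    server: http://datahub-gms:8080  # illustrative GMS endpoint
    mode: ASYNC                      # workaround; default in v0.14.1 is ASYNC_BATCH
```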