Hi Team,
This is regarding the Apache Superset scheduling functionality. I would like to understand more about the Celery/Redis configuration for the chart email schedules module. I have switched to the latest version, 0.36.0, and I can confidently say it is functioning better than the earlier version with respect to scheduling performance.
However, I have been getting the following exceptions for a long time. I have also raised tickets through the Superset GitHub. I just want to take this conversation one level further through this email. Please share your understanding/thoughts/suggestions.
Test case:
I have created 25+ scheduled jobs with a 30-minute interval (*/30 * * * *); the reports have a maximum size of 5k rows and 25 columns. I have monitored the scheduled jobs continuously for 10 to 24 hours. The following screenshot shows the scheduled delivery status at each interval for your reference.
Note: the entries in red are emails that were not delivered for that slot.
Screenshot:
The following are the major exceptions we are encountering continuously:
1. NoSuchColumnError("Could not locate column in row for column 'slice_email_schedules.id'")
2. ResourceClosedError('This result object does not return rows. It has been closed automatically.')
3. DatabaseError('(psycopg2.DatabaseError) error with status PGRES_TUPLES_OK and no message from the libpq')
Celery configuration: we are using Redis, with the broker URL "redis://localhost:6379/0" and the Celery results backend "redis://localhost:6379/1".

import os

class CeleryConfig:  # pylint: disable=too-few-public-methods
    # BROKER_URL = "sqla+sqlite:///celerydb.sqlite"
    if 'BROKER_URL' in os.environ:
        BROKER_URL = os.environ['BROKER_URL']

We are using the following celery worker and celery beat commands to initiate the schedules:

celery worker --app=superset.tasks.celery_app:app --loglevel=${LOG_LEVEL:-error} --soft-time-limit 400 --time-limit 500 --autoscale=20,6 --pool=prefork -Ofair -c 6
celery beat --app=superset.tasks.celery_app:app
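For context, here is a minimal sketch of how such a CeleryConfig could be pointed at the two Redis databases described above; the environment-variable fallbacks and the beat entry follow the default Superset 0.36 configuration, but please treat the exact names as assumptions rather than a copy of our production file:

import os
from celery.schedules import crontab

class CeleryConfig:  # pylint: disable=too-few-public-methods
    # Broker on Redis db 0 and results backend on Redis db 1, as described above.
    # The environment-variable fallbacks here are illustrative assumptions.
    BROKER_URL = os.environ.get("BROKER_URL", "redis://localhost:6379/0")
    CELERY_RESULT_BACKEND = os.environ.get(
        "CELERY_RESULT_BACKEND", "redis://localhost:6379/1"
    )
    CELERY_IMPORTS = ("superset.sql_lab", "superset.tasks")
    CELERYBEAT_SCHEDULE = {
        # Hourly email-report scheduler entry, following the default Superset config.
        "email_reports.schedule_hourly": {
            "task": "email_reports.schedule_hourly",
            "schedule": crontab(minute=1, hour="*"),
        }
    }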
PIP versions:
Python 3.7
celery==4.4.2
kombu==4.6.8
psycopg2==2.8.5
redis==3.5.0
PostgreSQL server version 9.2
I hope the above information helps you understand my schedule configuration; if any further details are required, please reply to this email. Please help me correct anything here, or advise on version upgrades.
Thank you for your valuable time.
Best Regards,
Srini T.
Expected results
All scheduled report emails should be delivered at each 30-minute slot.
Actual results
Some deliveries fail intermittently with the exceptions listed above (NoSuchColumnError, ResourceClosedError, psycopg2 DatabaseError).
Screenshots
See the delivery-status screenshot referenced in the description above.
How to reproduce the bug
Create 25+ chart email schedules with a 30-minute interval (*/30 * * * *) against reports of up to 5k rows and 25 columns, then monitor the deliveries over 10 to 24 hours; some slots fail with the errors listed above.
Environment
(please complete the following information):
PIP versions:
Python 3.7
celery==4.4.2
kombu==4.6.8
psycopg2==2.8.5
redis==3.5.0
PostgreSQL server version 9.2
Checklist
Make sure these boxes are checked before submitting your issue - thank you!
[Yes] I have checked the superset logs for python stacktraces and included it here as text if there are any.
[Yes] I have reproduced the issue with at least the latest released version of superset.
[Yes] I have checked the issue tracker for the same issue and I haven't found one similar.
Additional context
Add any other context about the problem here.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. For admin, please label this issue .pinned to prevent stale bot from closing the issue.