This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
All DAGs triggered through UI fail after upgrading from 2.6.1 to 2.8.1 when using LocalExecutor #37149
Comments
We are running 2.8.1 and all is fine in this area on our side.
Hello @jscheffl, thanks for looking into this.
Yep, I tested with a fresh conda environment and database.
Yes, I tested with both the python operator example DAG and the bash operator example DAG.
Here is the log from when I triggered the DAG to when the DAG run finished, using the example bash operator:
I tried to reproduce it with venv and could not. I guess the problem comes from mixing conda packages with pip-installed ones. You can do two things to verify this hypothesis.
You will find both values to replace by searching your log for 'airflow', 'tasks', 'run': airflow tasks run example_bash_operator this_will_skip PUT_RUN_ID_OF_A_TASK_YOU_RUN_HERE --local --subdir PATH_TO_YOUR_EXAMPLE_DAG. This one should run and print something like:
Can you please send us back the results of those experiments?
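For readability, the verification command above as a single invocation, with the two placeholders kept as-is (fill them from your scheduler log):

```shell
# Re-run the task outside the scheduler to see whether it fails on its own.
# Replace the two placeholders with the run ID and DAG path from your log.
airflow tasks run example_bash_operator this_will_skip PUT_RUN_ID_OF_A_TASK_YOU_RUN_HERE \
    --local --subdir PATH_TO_YOUR_EXAMPLE_DAG
```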
I ran the task manually as you suggested. I didn't see much in the log output besides: I created the venv with my system's Python (3.9).
One notable thing is that when using venv, google-re2 installed through pip without error. That was not the case with my conda environment, which is why I had installed it through conda instead of pip. However, now I just get a new error when I try to access the UI:
Memory doesn't seem like the problem, as I still have an ample amount when these errors are thrown.
Of course. This will also work in conda as soon as the conda maintainers stop giving compilers macOS 10.9 system libraries to build packages against (10.9 reached end of life seven years ago). This is basically why installing google-re2 fails on conda: it is forced to build against system libraries that last saw an update seven years ago. Pip does it properly, using the libraries of the current system you are on. And google-re2 was first released after 10.9 reached EOL. conda-forge/conda-forge.github.io#1844 If you want, you can even comment in the issue above. We always recommend using pip.
I also have no such problem. Having your workers killed indicates that something in your system is triggering these errors. You have not mentioned whether you are on ARM (M1/M2/M3), but I guess so. If so, use at least Python 3.10: a number of libraries have worse ARM support on Python 3.9, so you are much better off with a higher Python version, and it may be causing problems on your system. The problem is that when a task is killed with SIGKILL, whoever or whatever kills it gives it no chance to write anything to the log, so we will not find out what killed it. This might be related to specifics of your environment, for example the way you installed Python (was it conda?). Various ways of installing Python can cause problems, and I would not be surprised if conda is the culprit. For the webserver you can also try different gunicorn startup options (see the configuration); some of them can cause problems if system libraries were modified. Generally, make sure your whole environment is set up outside of conda and you should be good.
Oh, interesting. I will keep that in mind next time I have compilation issues in a conda environment.
Actually, my MacBook has an Intel chip.
The Python I used for the venv was not installed with conda; I think it came with a macOS update. Since you mentioned using at least Python 3.10, I tested airflow 2.8.1 again, this time creating a venv with Python 3.11 installed via brew. The error I get when running the example bash operator is the same as the original error:
I also tried other airflow versions and noticed the error starts at 2.7.0. Also, I should have checked this earlier, but I found crash reports in the Mac Console:
I am using Postgres for the Airflow DB. Does your setup still work using Postgres?
Yes, it's a known issue for Python (not for Airflow) to have broken threading behaviour, specifically when you are using proxies: #24463. You can look for similar issues; various workarounds worked for various people's setups and library problems. One such solution was forcing "no proxy" (look in the issue). You can also try setting this config: https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#execute-tasks-new-python-interpreter - it helped a number of people as well. But it is very random and only happens for some people, so we cannot do much about it. Similar issue here: https://bugs.python.org/issue24273 - but there might be other reasons why you get SIGSEGV; mostly it's because of something in your environment.
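The config option linked above can be set in airflow.cfg; a sketch, assuming the default section layout:

```
[core]
# Spawn a fresh Python interpreter for each task instead of forking
# the parent process, which sidesteps fork-safety crashes on macOS
# in some environments (at the cost of slower task startup).
execute_tasks_new_python_interpreter = True
```

The same setting can also be supplied through the environment variable `AIRFLOW__CORE__EXECUTE_TASKS_NEW_PYTHON_INTERPRETER=True`.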
Apache Airflow version
2.8.1
If "Other Airflow 2 version" selected, which one?
No response
What happened?
After upgrading from airflow version 2.6 to 2.8, all DAGs I trigger through the UI fail immediately with the error:
ERROR - Executor reports task instance <TaskInstance: ... [queued]> finished (failed) although the task says it's queued. (Info: None) Was the task killed externally?
I tested this with python version 3.9 and 3.11, and the issue persisted with both versions.
DAGs run successfully when executed directly with .test().
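For comparison, the working in-process path can also be exercised from the CLI; a sketch assuming the stock example DAG (the date is an arbitrary example):

```shell
# Runs the whole DAG in a single process, bypassing the
# scheduler/executor path that produces the failure above.
airflow dags test example_bash_operator 2024-02-01
```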
What you think should happen instead?
No response
How to reproduce
On MacOS, create a conda environment with python 3.11:
conda create -n airflow-2-8-1 python=3.11
Activate the environment and install google-re2 through conda:
conda activate airflow-2-8-1
conda install google-re2
Install airflow:
pip install "apache-airflow[async,celery,crypto,jdbc,ldap,password,mysql,postgres,redis,s3,sftp,ssh,slack]==2.8.1" --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.8.1/constraints-3.11.txt"
Set up airflow config and database.
Run airflow:
airflow standalone
Trigger an example DAG.
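The reproduction steps above, collected into a single shell session (config and database setup omitted, as in the original):

```shell
# Create and activate a conda environment with Python 3.11
conda create -n airflow-2-8-1 python=3.11
conda activate airflow-2-8-1

# Install google-re2 via conda (pip install fails in this environment)
conda install google-re2

# Install Airflow 2.8.1 pinned by the official constraints file
pip install "apache-airflow[async,celery,crypto,jdbc,ldap,password,mysql,postgres,redis,s3,sftp,ssh,slack]==2.8.1" \
    --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.8.1/constraints-3.11.txt"

# Set up the airflow config and database, start all components,
# then trigger an example DAG from the UI
airflow standalone
```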
Operating System
MacOS Ventura 13.6
Versions of Apache Airflow Providers
No response
Deployment
Other
Deployment details
airflow was installed in a stand-alone conda environment.
The command used to install airflow (when testing with python 3.11) is:
pip install "apache-airflow[async,celery,crypto,jdbc,ldap,password,mysql,postgres,redis,s3,sftp,ssh,slack]==2.8.1" --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.8.1/constraints-3.11.txt"
The package google-re2 was installed through conda prior to running this command because it fails to install through pip.
I created a new Airflow database for testing the deployment. I tried running the Airflow server with
airflow standalone
and by running each component separately. I have Airflow configured to use the LocalExecutor. I can successfully trigger tasks with the SequentialExecutor.
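The executor difference described above is a one-line change in airflow.cfg; a sketch with the failing value active:

```
[core]
# Fails after the 2.8.1 upgrade in this setup:
executor = LocalExecutor
# Tasks trigger successfully with:
# executor = SequentialExecutor
```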
Anything else?
Output from conda list:
Are you willing to submit PR?
Code of Conduct