Confusing Log for FailoverConnectionPlugin #1064
Comments
Hi @droni1234, this log message occurs when the wrapper throws a FailoverSuccessSQLException. You can enable driver logging to capture more detail. On the other hand, failover should only be triggered when there are connectivity issues to instances. Were the network issues expected?
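(As a rough sketch of enabling driver logging: the wrapper logs through java.util.logging, so raising the level of a logger matching the driver's package should surface the failover plugin's messages. The logger name "software.amazon.jdbc" and the chosen levels are assumptions based on the driver's package name, not confirmed configuration.)

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class WrapperLogging {
    // Hold a strong reference so the configured logger is not garbage collected.
    private static final Logger WRAPPER_LOGGER = Logger.getLogger("software.amazon.jdbc");

    public static void enableWrapperLogs() {
        // Route FINE/FINEST records to the console; by default the root
        // ConsoleHandler filters anything below INFO.
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.ALL);
        WRAPPER_LOGGER.setLevel(Level.FINEST);
        WRAPPER_LOGGER.addHandler(handler);
    }
}
```

If the application routes java.util.logging through log4j2 (as mentioned in the next comment), an equivalent logger entry in the log4j2 configuration would be needed instead.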
Hi, there is some additional config (log4j2) I need to change for the logs to appear. I will try to come back with logs in a month or two.
I am also running into this problem. The error is thrown when there is no failover event happening during my application's scale testing, and it always happens during a DELETE operation. Is there anything else that can cause the FailoverSuccessSQLException aside from a legitimate failover event?
Hi @DyllanSowersTR, that exception should only be thrown after a legitimate failover event. Are you able to provide me with more info? If possible, it would be great to have some sample code that reproduces the issue and logs from when this occurs. Additionally, any details you can provide about your configuration would also be helpful, thanks!
Hi @droni1234 and @DyllanSowersTR, just following up here: are there any updates? @DyllanSowersTR, have you seen my questions above? Please let me know, thanks!
Hey @aaron-congo,
Hi @droni1234, I'm sorry that the driver didn't work out for you in this scenario. We appreciate you taking the time to provide feedback. Without knowing too much about how the containers work in your environment, if the container restarts break the connection to your database, then failover is expected alongside the "failover success" message. You mentioned you also observed degradation; could you please clarify what you meant here? If others are experiencing something similar, we'd appreciate any relevant driver logs and info about your workflow to help determine what is happening. Thanks!
Hi Aaron, there was no degradation (at least I think so); what I meant was that I treat these warnings as a potential degradation. Components on my side have pretty strict alerting: if we get too many warnings over 5 minutes, there will be alerts.

So my question is: what do I need to tell the driver so it understands that the container got restarted, if that is the case? Does the pool need to properly close all connections? Can I configure anything? I was contemplating whether reducing the amount of logs is possible, but since the connection pool establishes all connections on boot, I think this is not really possible.

Kind regards
Hi @droni1234, it sounds like there are two things to address here:

1. You would like to avoid warning/severe-level logs on successful failover so that your alerting is not triggered.
2. You would like to know whether the connection pool needs any special handling (for example, closing its connections) when failover occurs.

Is this an accurate summary? Please let me know, thanks!
Yeah Aaron, that is a pretty good summary. Kind regards
Hi @droni1234, if you would like to avoid logs on successful failover, does it work for you to disable the failover plugin logs (see here for info on how to configure logging)? When failover succeeds, we are intentionally throwing an exception to get the user's attention, so we would like to keep the log level at SEVERE. If you disable failover plugin logs but still want logs for failover failures, you can catch FailoverFailedSQLException in your logic and log the event in your own code. I'm also curious whether you are seeing WARNING/SEVERE logs from tomcat-jdbc or the mariadb/mysql drivers when failover occurs, since failover should only occur when a network issue has been detected by the underlying mariadb/mysql drivers.

Regarding tomcat-jdbc, I did some testing and it seems you do not have to do anything special with the pool when failover succeeds. You will have to catch the FailoverSuccessSQLException and reconfigure session state if needed, but after that the connection can be used as normal.

I did notice that, when failover fails, the broken connection is not evicted unless testOnReturn is enabled and we have passed the validationInterval (default value 3 seconds). This means you may still get the broken connection back from the pool. However, this behavior is controlled by tomcat-jdbc, not our driver, and it will also occur if you are using the plain mysql driver instead of our driver. Perhaps there is a way to configure tomcat-jdbc to evict the broken connection immediately, but I could not find a way myself.
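A minimal sketch of the catch-and-reconfigure pattern described above, assuming the exception class lives at software.amazon.jdbc.plugin.failover.FailoverSuccessSQLException; the DELETE statement, the retry, and the restored session state are illustrative choices, not prescribed by the driver:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import software.amazon.jdbc.plugin.failover.FailoverSuccessSQLException;

public class FailoverAwareDelete {
    // Executes a DELETE; if failover succeeds mid-statement, reconfigures
    // session state and retries once on the same (now re-established) connection.
    static int deleteWithFailoverHandling(Connection conn, String sql) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            return stmt.executeUpdate(sql);
        } catch (FailoverSuccessSQLException e) {
            // Failover succeeded: the connection now points at the new writer,
            // but session state (autocommit, schema, variables) was reset.
            conn.setAutoCommit(true); // illustrative: restore whatever state you rely on
            try (Statement retry = conn.createStatement()) {
                return retry.executeUpdate(sql);
            }
        }
    }
}
```

Whether to retry automatically or surface the event to the caller is an application decision; the key point from the comment above is that the pooled connection itself remains usable after the exception is caught.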
Describe the bug
Receiving a random error and a potentially degraded connection
Expected Behavior
To work without a problem, or to give a more actionable error message.
I am in the dark :)
What plugins are used? What other connection properties were set?
Standard Configuration
Current Behavior
This is a filtered view of the log history over the last 24 hours.
I had trouble enabling trace logs; I will try to get some later.
Reproduction Steps
This is the configuration of my DataSource; I have resolved some constants and removed secrets.
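(The reporter's actual configuration snippet is not preserved here. As a purely hypothetical stand-in, a tomcat-jdbc DataSource backed by the wrapper typically looks something like the following; every value below is an assumption, not the original config.)

```java
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class DataSourceConfig {
    // Hypothetical example: tomcat-jdbc pool using the AWS Advanced JDBC Driver.
    static DataSource createDataSource() {
        PoolProperties props = new PoolProperties();
        // The jdbc:aws-wrapper: URL prefix delegates to the underlying mysql driver.
        props.setUrl("jdbc:aws-wrapper:mysql://my-cluster.cluster-xyz.eu-central-1.rds.amazonaws.com:3306/mydb");
        props.setDriverClassName("software.amazon.jdbc.Driver");
        props.setUsername("app_user");
        props.setPassword("<secret>");
        props.setTestOnBorrow(true);
        props.setValidationQuery("SELECT 1");
        return new DataSource(props);
    }
}
```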
Possible Solution
No response
Additional Information/Context
I use tomcat-jdbc for connection pooling and as a DataSource.
This issue appeared on the newest mariadb-connector-j version, as well as on the newest mysql-connector-j version I tried out.
I have recently upgraded to RDS 3 with MySQL 8.
The AWS Advanced JDBC Driver version used
2.3.7
JDK version used
Java 17
Operating System and version
Docker Image eclipse-temurin:17-jre-jammy