VARBINARY not supported and issue with BINARY #33
Comments
Hello @stefanofornari, Thank you for reporting this issue. You're right. If you can, alternatives exist to execute this prepared statement until a fix is available: s.setBytes(1, ID1); or s.setBlob(1, new SerialBlob(ID1)); or transforming the byte array passed to setObject(). I will take a closer look at this point to provide a solution as complete as possible (the one you suggested is a good basis), but I cannot guarantee a release date for this. As workarounds exist to get similar results, I'll probably include this fix in the next minor version (4.11.0), to be released at the end of this year or the beginning of 2024. NB: regarding the tests, you just need a running Docker; the image is provided through Testcontainers. But it's OK if you can't run Docker for some reason. Using this solution for testing is the most convenient way to run tests in an environment as close as possible to real conditions.
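A minimal sketch of these workarounds (the table and column names are hypothetical, and ID1 is assumed to be a byte[]):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.rowset.serial.SerialBlob;

class VarbinaryWorkaroundSketch {
    // Hypothetical table/column names; ID1 is assumed to be a byte[].
    static void insertWithWorkaround(final Connection connection, final byte[] ID1) throws SQLException {
        try (PreparedStatement s = connection.prepareStatement(
                "INSERT INTO my_table (id) VALUES (?)")) {
            // Either bind the byte array directly...
            s.setBytes(1, ID1);
            // ...or wrap it in a java.sql.Blob instead:
            // s.setBlob(1, new SerialBlob(ID1));
            s.execute();
        }
    }
}
```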
Hi @maximevw, thanks for taking care of this. Let me first disclaim that I have little/no control over the actual JDBC calls because I am using an ORM which then uses cassandra-jdbc-wrapper. The proposed solution unfortunately does not work because VARBINARY is currently not supported at all. It would be great if we could have just a hotfix with basic VARBINARY handling. Regarding Docker, I do not want to open a debate, but IMHO unit tests of the driver should not rely on the presence of a real server (for multiple reasons, not just the added complexity).
@stefanofornari, since you can't apply the workaround in your case, I'll try to provide a quick hotfix for VARBINARY handling in setObject(). Regarding the tests, I agree that pure unit testing, in general, shouldn't rely on servers/containers. However, for a JDBC implementation, we are closer to integration testing because we want to be sure the driver is able to connect to a database and to execute queries properly. Mocking the database is not easier, in my opinion, and is error-prone. If you look at how the JDBC drivers for MS SQL Server (https://github.com/Microsoft/mssql-jdbc/) or PostgreSQL (https://github.com/pgjdbc/pgjdbc) are tested, they are tested against a real database. These are just two examples (but not the least significant), and it seems to be a good way to test a JDBC driver.
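For illustration, a container-backed test typically looks roughly like this with Testcontainers and JUnit 5. This is a sketch under assumed versions, not the wrapper's actual test code; the image tag and the exact Testcontainers API may differ between releases:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.CassandraContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import static org.junit.jupiter.api.Assertions.assertFalse;

@Testcontainers
class CassandraContainerSketchTest {

    // Starts a disposable Cassandra instance in Docker for the duration of the tests.
    @Container
    private static final CassandraContainer<?> cassandra =
            new CassandraContainer<>("cassandra:4.1.3");

    @Test
    void canConnectAndQuery() {
        // Connect with the DataStax driver using the container's mapped contact point.
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(cassandra.getContactPoint())
                .withLocalDatacenter(cassandra.getLocalDatacenter())
                .build()) {
            assertFalse(session.execute("SELECT release_version FROM system.local")
                    .all().isEmpty());
        }
    }
}
```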
- handle VARBINARY and LONGVARBINARY types with either ByteArrayInputStream or byte[] in the methods CassandraPreparedStatement.setObject().
- fix configuration of the local datacenter, using the one from the configuration file when such a file is used.
Hello @stefanofornari, I just released the version 4.10.2 including a fix for this issue. The version will also be available in Maven Central in the next few hours. A review of other types potentially not handled by setObject() will follow.
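For illustration, with 4.10.2 a plain byte[] passed through setObject() with Types.VARBINARY should now be accepted. A small sketch (the table and column names are hypothetical):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

class VarbinarySetObjectExample {
    // Hypothetical table/column names; 'data' is the binary payload to store.
    static void insertVarbinary(final Connection connection, final byte[] data) throws SQLException {
        try (PreparedStatement s = connection.prepareStatement(
                "INSERT INTO my_table (id) VALUES (?)")) {
            s.setObject(1, data, Types.VARBINARY); // accepted since 4.10.2
            s.execute();
        }
    }
}
```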
Great stuff! I will give it a try tomorrow and let you know.
I confirm it works for me.
Hello,
the following case does not work with cassandra-jdbc-wrapper:
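A sketch of the kind of call that fails (the table and column names are hypothetical; ID1 is a byte[] bound as a binary value):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

class FailingCaseSketch {
    // Hypothetical table/column names; ID1 is a byte[] bound as a binary value.
    static void insertBinary(final Connection connection, final byte[] ID1) throws SQLException {
        try (PreparedStatement s = connection.prepareStatement(
                "INSERT INTO my_table (id) VALUES (?)")) {
            // Fails: VARBINARY is not handled at all, and the BINARY branch
            // does not accept a plain byte[].
            s.setObject(1, ID1, Types.VARBINARY);
            s.execute();
        }
    }
}
```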
The call throws an error.
I think the issue is in setObject()
cassandra-jdbc-wrapper/src/main/java/com/ing/data/cassandra/jdbc/CassandraPreparedStatement.java, line 543 (commit 5f9fda7)
In particular, in the block handling Types.BINARY:
cassandra-jdbc-wrapper/src/main/java/com/ing/data/cassandra/jdbc/CassandraPreparedStatement.java, lines 556 to 564 (commit 5f9fda7)
As per the JDBC documentation, the method should perform a conversion taking into account multiple cases for the input type. I agree a simpler approach is fine, at least to start with, but I see two main issues with the current implementation: VARBINARY is not supported at all, and the block handling Types.BINARY does not accept a plain byte[].
I would propose something like this:
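In essence, the idea is for the binary cases of setObject() to accept a plain byte[] in addition to ByteBuffer and stream inputs. A rough, self-contained sketch of that conversion (a hypothetical helper, not the actual driver code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;

class BinaryConversionSketch {
    // Hypothetical helper: normalize a "binary" value (byte[], ByteBuffer or
    // InputStream) into the ByteBuffer the underlying Cassandra driver expects.
    // The BINARY, VARBINARY and LONGVARBINARY cases of setObject() could all
    // route through something like this.
    static ByteBuffer toByteBuffer(final Object value) {
        if (value instanceof ByteBuffer) {
            return (ByteBuffer) value;
        }
        if (value instanceof byte[]) {
            return ByteBuffer.wrap((byte[]) value);
        }
        if (value instanceof InputStream) {
            try {
                return ByteBuffer.wrap(((InputStream) value).readAllBytes());
            } catch (final IOException e) {
                throw new UncheckedIOException(e);
            }
        }
        throw new IllegalArgumentException(
            "Unsupported binary value type: " + value.getClass().getName());
    }
}
```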
I won't be able to provide a PR due to the need for a dockerized Cassandra... :(