
Prepared statement SELECT * FROM pg_catalog.<any_non_empty_table> with setFetchSize gives ERROR: portal "c_2-4858445154544" does not exist #9

Closed
kornilova203 opened this issue Feb 1, 2021 · 11 comments

kornilova203 commented Feb 1, 2021

Driver version

2.0.0.2
Works fine with 1.2.41.1065

Redshift version

PostgreSQL 8.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3), Redshift 1.0.22575

Client Operating System

MacOS

JAVA/JVM version

1.8

Problem description

PreparedStatement statement = connection.prepareStatement("select * from pg_catalog.pg_class");
statement.setFetchSize(5);
statement.execute();
ResultSet resultSet = statement.getResultSet();
while (resultSet.next()) {
    System.out.println(resultSet.getObject(1));
}

Actual output:

pg_xactlock
pg_user_info
pg_database_info
pg_aggregate_fnoid_index
pg_am_name_index
Exception in thread "main" com.amazon.redshift.util.RedshiftException: ERROR: portal "c_2-5062961024059" does not exist
	at com.amazon.redshift.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2576)
	at com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread(QueryExecutorImpl.java:2250)
	at com.amazon.redshift.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1878)
	at com.amazon.redshift.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:2474)
	at com.amazon.redshift.jdbc.RedshiftResultSet.fetchMoreInQueueFromSuspendedPortal(RedshiftResultSet.java:2039)
	at com.amazon.redshift.jdbc.RedshiftResultSet.next(RedshiftResultSet.java:1954)
	at kornilova.TestPortalDoesNotExist.main(TestPortalDoesNotExist.java:22)

Expected output:

pg_xactlock
pg_user_info
pg_database_info
pg_aggregate_fnoid_index
pg_am_name_index
<... and all other names>
  • I cannot reproduce this problem with a table not from pg_catalog
  • regular Statement works fine
  • PreparedStatement without fetch size works fine

Why this is important:
This problem occurs in DataGrip when a user introspects pg_catalog (and possibly other relatively large schemas). The user may end up with an incorrect schema for pg_catalog.

iggarish commented Feb 2, 2021

Thanks for reporting the issue. Let me explain a minor difference between the 1.x and 2.x drivers. The 1.x driver doesn't send the fetch size to the server; the 2.x driver does, so it uses a server-side cursor as well as a client-side cursor. In 2.x the default client-side cursor is bounded by fetchRingBufferSize, i.e. actual memory usage rather than number of rows; the default is 1GB. When the 2.x driver sends the fetch size to the server, something goes wrong in query execution or in the server-side cursor for queries on PG tables, which is why we see the "portal does not exist" error. As a workaround I suggest setting the fetch size to 0 and controlling row memory with fetchRingBufferSize. We will update you when the issue is resolved on the server side.
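A minimal sketch of the suggested workaround, assuming a standard JDBC setup. The credentials and the "256M" value are placeholders; fetchRingBufferSize is the connection property named in this comment (default 1G), and setFetchSize(0) disables the server-side cursor:

```java
import java.util.Properties;

public class FetchWorkaround {
    // Builds connection properties for the workaround described above:
    // leave the fetch size at 0 (no server-side cursor) and cap client-side
    // memory with fetchRingBufferSize instead. "256M" is an arbitrary example.
    static Properties workaroundProps() {
        Properties props = new Properties();
        props.setProperty("user", "awsuser");    // placeholder credentials
        props.setProperty("password", "secret");
        props.setProperty("fetchRingBufferSize", "256M");
        return props;
    }

    public static void main(String[] args) {
        Properties props = workaroundProps();
        System.out.println(props.getProperty("fetchRingBufferSize"));
        // Against a live cluster one would then do (sketch, not runnable here):
        // Connection conn = DriverManager.getConnection(jdbcUrl, props);
        // PreparedStatement st = conn.prepareStatement("select * from pg_catalog.pg_class");
        // st.setFetchSize(0); // 0 = don't use a server-side cursor
    }
}
```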

iggarish commented Feb 2, 2021

For our internal reference id: RedshiftDP-28917

@kornilova203

@iggarish, thank you for the explanation!

@kornilova203

@iggarish, I have a question about fetchRingBufferSize
Let's say a user types select * from my_table in DataGrip and runs the query. DataGrip will create a Statement and set fetchSize to 100 (the number depends on the DataGrip setting "Limit page size to"). DataGrip then fetches 300 rows and shows them to the user.
Does that mean that under the hood the driver will fetch fetchRingBufferSize (default 1G) worth of data? Or will it fetch only 300 rows?

iggarish commented Feb 2, 2021

fetchRingBufferSize is based on the size of the rowset. If you set a fetch size together with this option, the driver passes the fetch size to the server, and the server sends that many rows to the client. The driver buffers up to fetchRingBufferSize, so whichever limit is smaller determines the batch the application gets. The next batch is fetched the same way when the application reads more rows. Basically, all of this exists to limit client-side memory usage.
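The "whichever is smaller" rule described above can be sketched as a simple model. This is an assumption based on this comment, not the driver's actual internals, and avgRowBytes is a hypothetical parameter standing in for however the driver measures row size:

```java
public class BatchSizing {
    // Simplified model of the batching rule described above: a batch ends
    // either at fetchSize rows or when the ring-buffer byte budget is
    // exhausted, whichever comes first.
    static int batchRows(int fetchSize, long ringBufferBytes, long avgRowBytes) {
        long rowsThatFit = ringBufferBytes / avgRowBytes;
        return (int) Math.min(fetchSize, rowsThatFit);
    }

    public static void main(String[] args) {
        // fetchSize=100, 1 GiB buffer, ~1 KiB rows: fetchSize is the limit.
        System.out.println(batchRows(100, 1L << 30, 1024)); // 100
        // fetchSize=100, 4 KiB buffer, ~1 KiB rows: the buffer is the limit.
        System.out.println(batchRows(100, 4096, 1024));     // 4
    }
}
```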

@kornilova203

@iggarish,
Thanks
I think I got it; correct me if I'm wrong. With fetchRingBufferSize=1G and fetchSize=100:

  1. if a client executes a query and calls resultSet.next() 50 times, then 100 rows will be fetched from the server
  2. if a client executes a query and calls resultSet.next() 300 times, then 300 rows will be fetched from the server

In both these cases fetchRingBufferSize is so large that it will most likely have no effect.
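Under that reading, the rows crossing the wire for a given number of next() calls can be modeled as follows. This is a sketch under the assumptions that the server always returns full fetchSize batches and that the ring buffer is large enough to never be the limit:

```java
public class WireRows {
    // Models the two scenarios above: the driver fetches a new batch of
    // fetchSize rows each time the application reads past the buffered rows,
    // so total rows fetched = ceil(nextCalls / fetchSize) * fetchSize.
    static int rowsFetched(int nextCalls, int fetchSize) {
        int batches = (nextCalls + fetchSize - 1) / fetchSize; // ceiling division
        return batches * fetchSize;
    }

    public static void main(String[] args) {
        System.out.println(rowsFetched(50, 100));  // 100 (case 1)
        System.out.println(rowsFetched(300, 100)); // 300 (case 2)
    }
}
```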

iggarish commented Feb 2, 2021

Yes. But because you hit a server-side issue when fetchSize != 0, it's better to set it to 0 for now as a workaround until the server gets the fix.

@kornilova203

Thank you. We don't use prepared statements for users' queries, so I think fetchSize should be fine in this case.

@iggarish

We fixed the issue, and the fix will be available in the next release.

@iggarish

We just released 2.0.0.3. Please try it and let us know.

@kornilova203

@iggarish, I confirm that the issue is fixed. Thanks!
