Hudi exception reading data. com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null #23374

Open · alberttwong opened this issue May 13, 2023 · 17 comments
Labels: type/bug (Something isn't working)

alberttwong (Contributor) commented May 13, 2023
Instructions
Follow the Hudi Docker Quickstart. https://hudi.apache.org/docs/docker_demo

Modify docker-compose_hadoop284_hive233_spark244_mac_aarch64.yml to include StarRocks in the Hudi docker compose stack. You also need to apply apache/hudi#8700 if it hasn't been merged yet.

  starrocks:
    image: registry.starrocks.io/starrocks/allin1-ubuntu
    hostname: starrocks-fe
    container_name: allin1-ubuntu
    ports:
      - 8030:8030
      - 8040:8040
      - 9030:9030
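
With the service added, bring the stack up (or recreate it). A typical invocation, assuming docker compose v2 and that you run it from the directory containing the modified file (the Hudi demo's own setup script may wrap this step differently):

  docker compose -f docker-compose_hadoop284_hive233_spark244_mac_aarch64.yml up -d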

Do all the steps in the Hudi docker compose quickstart. Once show tables succeeds in beeline, the tables are ready and StarRocks should be able to connect; a rough sketch of that verification step follows.
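
This is approximately what the verification looks like in the Hudi demo (the container name, JDBC URL, and hiveconf flags are the ones the demo docs use; adjust if your setup differs):

  docker exec -it adhoc-2 /bin/bash
  beeline -u jdbc:hive2://hiveserver:10000 \
    --hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
    --hiveconf hive.stats.autogather=false
  show tables;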

Log in to the StarRocks container within the Hudi docker compose stack and start the MySQL client:

mysql -P9030 -h127.0.0.1 -uroot --prompt="StarRocks > "
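
If you prefer to stay on the host, the same client can be invoked through docker exec (allin1-ubuntu is the container_name set in the compose snippet above):

  docker exec -it allin1-ubuntu mysql -P9030 -h127.0.0.1 -uroot --prompt="StarRocks > "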

Then execute the following SQL commands:

CREATE EXTERNAL CATALOG hudi_catalog_hms
PROPERTIES (
    "type" = "hudi",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2",
    "hive.metastore.uris" = "thrift://hivemetastore:9083"
);
set catalog hudi_catalog_hms;
use default;
select count(*) from stock_ticks_cow;
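
Both this count(*) and a follow-up select * from stock_ticks_cow fail during planning. The root cause is the same throughout the FE log below: HiveMetaClient requests one partition of default.stock_ticks_cow but gets zero back, so HiveMetastore.getPartitionsByNames ends up putting a null Partition value into a Guava ImmutableMap, and Guava rejects null entries outright. A minimal standalone sketch of that library behavior (not StarRocks code, just the ImmutableMap contract):

  import com.google.common.collect.ImmutableMap;

  public class NullEntryDemo {
      public static void main(String[] args) {
          // Guava forbids null keys and values in ImmutableMap; putting a null
          // value throws eagerly with the exact message seen in the FE log:
          //   java.lang.NullPointerException: null value in entry: date=2018-08-31=null
          ImmutableMap.<String, Object>builder().put("date=2018-08-31", null);
      }
  }

The FE log from the failing queries: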
2023-05-13 00:42:13,997 INFO (background-get-partitions-statistics-hudi_catalog_hms-default-stock_ticks_cow|174) [PlannerProfile.addCustomProperties():159] Background collect hive column statistics profile: [HMS.PARTITIONS.getPartitionsByNames.stock_ticks_cow:1 partitions]
2023-05-13 00:42:14,004 INFO (background-get-partitions-statistics-hudi_catalog_hms-default-stock_ticks_cow|174) [HiveMetaStoreThriftClient.open():450] Trying to connect to metastore with URI thrift://hivemetastore.hudi:9083
2023-05-13 00:42:14,005 INFO (background-get-partitions-statistics-hudi_catalog_hms-default-stock_ticks_cow|174) [HiveMetaStoreThriftClient.open():530] Opened a connection to metastore, current connections: 2
2023-05-13 00:42:14,005 INFO (background-get-partitions-statistics-hudi_catalog_hms-default-stock_ticks_cow|174) [HiveMetaStoreThriftClient.open():585] Connected to metastore.
2023-05-13 00:42:14,015 WARN (starrocks-mysql-nio-pool-2|161) [HiveMetaClient.getPartitionsByNames():233] Expect to fetch 1 partition on [default.stock_ticks_cow], but actually fetched 0 partition
2023-05-13 00:42:14,016 ERROR (starrocks-mysql-nio-pool-2|161) [CachingHiveMetastore.getAll():459] Error occurred when loading cache
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastoreOperations.getPartitionByPartitionKeys(HiveMetastoreOperations.java:73) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveStatisticsProvider.getEstimatedRowCount(HiveStatisticsProvider.java:148) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveStatisticsProvider.getTableStatistics(HiveStatisticsProvider.java:101) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hudi.HudiMetadata.getTableStatistics(HudiMetadata.java:150) ~[starrocks-fe.jar:?]
at com.starrocks.server.MetadataMgr.lambda$getTableStatistics$3(MetadataMgr.java:170) ~[starrocks-fe.jar:?]
at java.util.Optional.map(Optional.java:265) ~[?:?]
at com.starrocks.server.MetadataMgr.getTableStatistics(MetadataMgr.java:169) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.computeHMSTableScanNode(StatisticsCalculator.java:372) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.visitLogicalHudiScan(StatisticsCalculator.java:340) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.visitLogicalHudiScan(StatisticsCalculator.java:153) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.operator.logical.LogicalHudiScanOperator.accept(LogicalHudiScanOperator.java:86) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.estimatorStats(StatisticsCalculator.java:169) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.DeriveStatsTask.execute(DeriveStatsTask.java:57) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.SeriallyTaskScheduler.executeTasks(SeriallyTaskScheduler.java:68) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.memoOptimize(Optimizer.java:456) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimizeByCost(Optimizer.java:167) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimize(Optimizer.java:109) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.createQueryPlan(StatementPlanner.java:140) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.planQuery(StatementPlanner.java:115) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:90) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:55) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:396) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:32) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:171) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap$Builder.put(ImmutableMap.java:281) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionsByNames(HiveMetastore.java:126) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 42 more
2023-05-13 00:42:14,021 ERROR (starrocks-mysql-nio-pool-2|161) [CachingHiveMetastore.getAll():459] Error occurred when loading cache
com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastoreOperations.getPartitionByPartitionKeys(HiveMetastoreOperations.java:73) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveStatisticsProvider.getEstimatedRowCount(HiveStatisticsProvider.java:148) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveStatisticsProvider.getTableStatistics(HiveStatisticsProvider.java:101) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hudi.HudiMetadata.getTableStatistics(HudiMetadata.java:150) ~[starrocks-fe.jar:?]
at com.starrocks.server.MetadataMgr.lambda$getTableStatistics$3(MetadataMgr.java:170) ~[starrocks-fe.jar:?]
at java.util.Optional.map(Optional.java:265) ~[?:?]
at com.starrocks.server.MetadataMgr.getTableStatistics(MetadataMgr.java:169) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.computeHMSTableScanNode(StatisticsCalculator.java:372) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.visitLogicalHudiScan(StatisticsCalculator.java:340) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.visitLogicalHudiScan(StatisticsCalculator.java:153) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.operator.logical.LogicalHudiScanOperator.accept(LogicalHudiScanOperator.java:86) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.estimatorStats(StatisticsCalculator.java:169) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.DeriveStatsTask.execute(DeriveStatsTask.java:57) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.SeriallyTaskScheduler.executeTasks(SeriallyTaskScheduler.java:68) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.memoOptimize(Optimizer.java:456) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimizeByCost(Optimizer.java:167) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimize(Optimizer.java:109) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.createQueryPlan(StatementPlanner.java:140) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.planQuery(StatementPlanner.java:115) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:90) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:55) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:396) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 33 more
Caused by: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:32) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:171) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap$Builder.put(ImmutableMap.java:281) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionsByNames(HiveMetastore.java:126) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 33 more
2023-05-13 00:42:14,021 WARN (starrocks-mysql-nio-pool-2|161) [HudiMetadata.getTableStatistics():157] Failed to get table column statistics on [HudiTable{resourceName='hudi_catalog_hms', catalogName='hudi_catalog_hms', hiveDbName='default', hiveTableName='stock_ticks_cow', id=100000001, name='stock_ticks_cow', type=HUDI, createTime=1683938484}]. error : com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
2023-05-13 00:42:14,027 WARN (starrocks-mysql-nio-pool-2|161) [HiveMetaClient.getPartitionsByNames():233] Expect to fetch 1 partition on [default.stock_ticks_cow], but actually fetched 0 partition
2023-05-13 00:42:14,027 ERROR (starrocks-mysql-nio-pool-2|161) [CachingHiveMetastore.getAll():459] Error occurred when loading cache
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastoreOperations.getPartitionByPartitionKeys(HiveMetastoreOperations.java:73) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveStatisticsProvider.getEstimatedRowCount(HiveStatisticsProvider.java:148) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveStatisticsProvider.createUnknownStatistics(HiveStatisticsProvider.java:185) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hudi.HudiMetadata.getTableStatistics(HudiMetadata.java:163) ~[starrocks-fe.jar:?]
at com.starrocks.server.MetadataMgr.lambda$getTableStatistics$3(MetadataMgr.java:170) ~[starrocks-fe.jar:?]
at java.util.Optional.map(Optional.java:265) ~[?:?]
at com.starrocks.server.MetadataMgr.getTableStatistics(MetadataMgr.java:169) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.computeHMSTableScanNode(StatisticsCalculator.java:372) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.visitLogicalHudiScan(StatisticsCalculator.java:340) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.visitLogicalHudiScan(StatisticsCalculator.java:153) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.operator.logical.LogicalHudiScanOperator.accept(LogicalHudiScanOperator.java:86) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.estimatorStats(StatisticsCalculator.java:169) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.DeriveStatsTask.execute(DeriveStatsTask.java:57) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.SeriallyTaskScheduler.executeTasks(SeriallyTaskScheduler.java:68) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.memoOptimize(Optimizer.java:456) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimizeByCost(Optimizer.java:167) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimize(Optimizer.java:109) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.createQueryPlan(StatementPlanner.java:140) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.planQuery(StatementPlanner.java:115) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:90) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:55) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:396) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:32) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:171) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap$Builder.put(ImmutableMap.java:281) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionsByNames(HiveMetastore.java:126) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 42 more
2023-05-13 00:42:14,027 ERROR (starrocks-mysql-nio-pool-2|161) [CachingHiveMetastore.getAll():459] Error occurred when loading cache
com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastoreOperations.getPartitionByPartitionKeys(HiveMetastoreOperations.java:73) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveStatisticsProvider.getEstimatedRowCount(HiveStatisticsProvider.java:148) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveStatisticsProvider.createUnknownStatistics(HiveStatisticsProvider.java:185) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hudi.HudiMetadata.getTableStatistics(HudiMetadata.java:163) ~[starrocks-fe.jar:?]
at com.starrocks.server.MetadataMgr.lambda$getTableStatistics$3(MetadataMgr.java:170) ~[starrocks-fe.jar:?]
at java.util.Optional.map(Optional.java:265) ~[?:?]
at com.starrocks.server.MetadataMgr.getTableStatistics(MetadataMgr.java:169) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.computeHMSTableScanNode(StatisticsCalculator.java:372) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.visitLogicalHudiScan(StatisticsCalculator.java:340) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.visitLogicalHudiScan(StatisticsCalculator.java:153) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.operator.logical.LogicalHudiScanOperator.accept(LogicalHudiScanOperator.java:86) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.statistics.StatisticsCalculator.estimatorStats(StatisticsCalculator.java:169) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.DeriveStatsTask.execute(DeriveStatsTask.java:57) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.SeriallyTaskScheduler.executeTasks(SeriallyTaskScheduler.java:68) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.memoOptimize(Optimizer.java:456) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimizeByCost(Optimizer.java:167) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimize(Optimizer.java:109) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.createQueryPlan(StatementPlanner.java:140) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.planQuery(StatementPlanner.java:115) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:90) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:55) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:396) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 33 more
Caused by: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:32) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:171) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap$Builder.put(ImmutableMap.java:281) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionsByNames(HiveMetastore.java:126) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 33 more
2023-05-13 00:42:14,027 WARN (starrocks-mysql-nio-pool-2|161) [HiveStatisticsProvider.createUnknownStatistics():187] Failed to estimate row count on table [HudiTable{resourceName='hudi_catalog_hms', catalogName='hudi_catalog_hms', hiveDbName='default', hiveTableName='stock_ticks_cow', id=100000001, name='stock_ticks_cow', type=HUDI, createTime=1683938484}]
2023-05-13 00:42:14,036 WARN (starrocks-mysql-nio-pool-2|161) [HiveMetaClient.getPartitionsByNames():233] Expect to fetch 1 partition on [default.stock_ticks_cow], but actually fetched 0 partition
2023-05-13 00:42:14,036 ERROR (starrocks-mysql-nio-pool-2|161) [CachingHiveMetastore.getAll():459] Error occurred when loading cache
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastoreOperations.getPartitionByPartitionKeys(HiveMetastoreOperations.java:73) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hudi.HudiMetadata.getRemoteFileInfos(HudiMetadata.java:124) ~[starrocks-fe.jar:?]
at com.starrocks.server.MetadataMgr.getRemoteFileInfos(MetadataMgr.java:183) ~[starrocks-fe.jar:?]
at com.starrocks.server.MetadataMgr.getRemoteFileInfos(MetadataMgr.java:174) ~[starrocks-fe.jar:?]
at com.starrocks.connector.RemoteScanRangeLocations.setupScanRangeLocations(RemoteScanRangeLocations.java:174) ~[starrocks-fe.jar:?]
at com.starrocks.planner.HudiScanNode.setupScanRangeLocations(HudiScanNode.java:70) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan(PlanFragmentBuilder.java:863) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan(PlanFragmentBuilder.java:345) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.operator.physical.PhysicalHudiScanOperator.accept(PhysicalHudiScanOperator.java:62) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visit(PlanFragmentBuilder.java:362) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.translate(PlanFragmentBuilder.java:356) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder.createPhysicalPlan(PlanFragmentBuilder.java:204) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.createQueryPlan(StatementPlanner.java:154) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.planQuery(StatementPlanner.java:115) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:90) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:55) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:396) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:32) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:171) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap$Builder.put(ImmutableMap.java:281) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionsByNames(HiveMetastore.java:126) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 37 more
2023-05-13 00:42:14,037 ERROR (starrocks-mysql-nio-pool-2|161) [CachingHiveMetastore.getAll():459] Error occurred when loading cache
com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastoreOperations.getPartitionByPartitionKeys(HiveMetastoreOperations.java:73) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hudi.HudiMetadata.getRemoteFileInfos(HudiMetadata.java:124) ~[starrocks-fe.jar:?]
at com.starrocks.server.MetadataMgr.getRemoteFileInfos(MetadataMgr.java:183) ~[starrocks-fe.jar:?]
at com.starrocks.server.MetadataMgr.getRemoteFileInfos(MetadataMgr.java:174) ~[starrocks-fe.jar:?]
at com.starrocks.connector.RemoteScanRangeLocations.setupScanRangeLocations(RemoteScanRangeLocations.java:174) ~[starrocks-fe.jar:?]
at com.starrocks.planner.HudiScanNode.setupScanRangeLocations(HudiScanNode.java:70) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan(PlanFragmentBuilder.java:863) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan(PlanFragmentBuilder.java:345) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.operator.physical.PhysicalHudiScanOperator.accept(PhysicalHudiScanOperator.java:62) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visit(PlanFragmentBuilder.java:362) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.translate(PlanFragmentBuilder.java:356) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder.createPhysicalPlan(PlanFragmentBuilder.java:204) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.createQueryPlan(StatementPlanner.java:154) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.planQuery(StatementPlanner.java:115) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:90) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:55) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:396) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 28 more
Caused by: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:32) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:171) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap$Builder.put(ImmutableMap.java:281) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionsByNames(HiveMetastore.java:126) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 28 more
2023-05-13 00:42:14,037 ERROR (starrocks-mysql-nio-pool-2|161) [MetadataMgr.getRemoteFileInfos():185] Failed to list remote file's metadata on catalog [hudi_catalog_hms], table [HudiTable{resourceName='hudi_catalog_hms', catalogName='hudi_catalog_hms', hiveDbName='default', hiveTableName='stock_ticks_cow', id=100000001, name='stock_ticks_cow', type=HUDI, createTime=1683938484}]
com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastoreOperations.getPartitionByPartitionKeys(HiveMetastoreOperations.java:73) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hudi.HudiMetadata.getRemoteFileInfos(HudiMetadata.java:124) ~[starrocks-fe.jar:?]
at com.starrocks.server.MetadataMgr.getRemoteFileInfos(MetadataMgr.java:183) ~[starrocks-fe.jar:?]
at com.starrocks.server.MetadataMgr.getRemoteFileInfos(MetadataMgr.java:174) ~[starrocks-fe.jar:?]
at com.starrocks.connector.RemoteScanRangeLocations.setupScanRangeLocations(RemoteScanRangeLocations.java:174) ~[starrocks-fe.jar:?]
at com.starrocks.planner.HudiScanNode.setupScanRangeLocations(HudiScanNode.java:70) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan(PlanFragmentBuilder.java:863) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan(PlanFragmentBuilder.java:345) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.operator.physical.PhysicalHudiScanOperator.accept(PhysicalHudiScanOperator.java:62) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visit(PlanFragmentBuilder.java:362) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.translate(PlanFragmentBuilder.java:356) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder.createPhysicalPlan(PlanFragmentBuilder.java:204) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.createQueryPlan(StatementPlanner.java:154) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.planQuery(StatementPlanner.java:115) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:90) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:55) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:396) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 28 more
Caused by: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:32) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:171) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap$Builder.put(ImmutableMap.java:281) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionsByNames(HiveMetastore.java:126) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionsByNames(CachingHiveMetastore.java:249) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsByNames(CachingHiveMetastore.java:261) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$000(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:137) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 28 more
2023-05-13 00:42:14,037 WARN (starrocks-mysql-nio-pool-2|161) [PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan():869] Hudi scan node get scan range locations failed : com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
2023-05-13 00:42:14,037 WARN (starrocks-mysql-nio-pool-2|161) [PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan():870] com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
2023-05-13 00:42:14,038 WARN (starrocks-mysql-nio-pool-2|161) [StmtExecutor.execute():411] New planner error: select * from stock_ticks_cow
com.starrocks.sql.common.StarRocksPlannerException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan(PlanFragmentBuilder.java:871) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan(PlanFragmentBuilder.java:345) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.operator.physical.PhysicalHudiScanOperator.accept(PhysicalHudiScanOperator.java:62) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visit(PlanFragmentBuilder.java:362) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.translate(PlanFragmentBuilder.java:356) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder.createPhysicalPlan(PlanFragmentBuilder.java:204) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.createQueryPlan(StatementPlanner.java:154) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.planQuery(StatementPlanner.java:115) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:90) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:55) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:396) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
2023-05-13 00:42:14,038 WARN (starrocks-mysql-nio-pool-2|161) [StmtExecutor.execute():558] execute Exception, sql select * from stock_ticks_cow
com.starrocks.sql.common.StarRocksPlannerException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan(PlanFragmentBuilder.java:871) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visitPhysicalHudiScan(PlanFragmentBuilder.java:345) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.operator.physical.PhysicalHudiScanOperator.accept(PhysicalHudiScanOperator.java:62) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.visit(PlanFragmentBuilder.java:362) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder$PhysicalPlanTranslator.translate(PlanFragmentBuilder.java:356) ~[starrocks-fe.jar:?]
at com.starrocks.sql.plan.PlanFragmentBuilder.createPhysicalPlan(PlanFragmentBuilder.java:204) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.createQueryPlan(StatementPlanner.java:154) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.planQuery(StatementPlanner.java:115) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:90) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:55) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:396) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
2023-05-13 00:42:14,038 INFO (starrocks-mysql-nio-pool-2|161) [MetadataMgr.removeQueryMetadata():94] Succeed to deregister query level connector metadata on query id: 00e7e84d-f127-11ed-884e-0242ac1d0005
2023-05-13 00:42:14,051 WARN (background-get-partitions-statistics-hudi_catalog_hms-default-stock_ticks_cow|174) [HiveMetaClient.getPartitionsByNames():233] Expect to fetch 1 partition on [default.stock_ticks_cow], but actually fetched 0 partition
2023-05-13 00:42:14,051 INFO (background-get-partitions-statistics-hudi_catalog_hms-default-stock_ticks_cow|174) [PlannerProfile$ScopedTimer.printBackgroundLog():104] Get partitions or partition statistics cost time: 54
2023-05-13 00:42:14,051 ERROR (background-get-partitions-statistics-hudi_catalog_hms-default-stock_ticks_cow|174) [CachingHiveMetastore.getAll():459] Error occurred when loading cache
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionStatistics(CachingHiveMetastore.java:293) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsStatistics(CachingHiveMetastore.java:324) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$200(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$2.loadAll(CachingHiveMetastore.java:154) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionStatistics(CachingHiveMetastore.java:293) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastoreOperations.lambda$getPartitionStatistics$3(HiveMetastoreOperations.java:105) ~[starrocks-fe.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:32) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:171) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap$Builder.put(ImmutableMap.java:281) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionsByNames(HiveMetastore.java:126) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionStatistics(HiveMetastore.java:153) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsStatistics(CachingHiveMetastore.java:324) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$200(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$2.loadAll(CachingHiveMetastore.java:154) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 16 more
2023-05-13 00:42:14,052 ERROR (background-get-partitions-statistics-hudi_catalog_hms-default-stock_ticks_cow|174) [CachingHiveMetastore.getAll():459] Error occurred when loading cache
com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionStatistics(CachingHiveMetastore.java:293) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastoreOperations.lambda$getPartitionStatistics$3(HiveMetastoreOperations.java:105) ~[starrocks-fe.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionStatistics(CachingHiveMetastore.java:293) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsStatistics(CachingHiveMetastore.java:324) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$200(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$2.loadAll(CachingHiveMetastore.java:154) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 7 more
Caused by: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:32) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:171) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.collect.ImmutableMap$Builder.put(ImmutableMap.java:281) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionsByNames(HiveMetastore.java:126) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionStatistics(HiveMetastore.java:153) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsStatistics(CachingHiveMetastore.java:324) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$200(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$2.loadAll(CachingHiveMetastore.java:154) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getAll(LocalCache.java:4032) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4960) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getAll(CachingHiveMetastore.java:457) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionStatistics(CachingHiveMetastore.java:293) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionsStatistics(CachingHiveMetastore.java:324) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.access$200(CachingHiveMetastore.java:60) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore$2.loadAll(CachingHiveMetastore.java:154) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:211) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4069) ~[spark-dpp-1.0.0.jar:?]
... 7 more
2023-05-13 00:42:19,606 INFO (leaderCheckpointer|91) [BDBJEJournal.getFinalizedJournalId():275] database names: 1 
2023-05-13 00:42:19,607 INFO (leaderCheckpointer|91) [Checkpoint.runAfterCatalogReady():95] checkpoint imageVersion 0, checkPointVersion 0
2023-05-13 00:42:19,620 INFO (colocate group clone checker|99) [ColocateTableBalancer.matchGroup():856] finished to match colocate group. cost: 0 ms, in lock time: 0 ms
2023-05-13 00:42:19,636 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: true, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-13 00:42:19,636 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: false, unhealthy/total/added/in_sched/not_ready: 0/30/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-13 00:42:19,637 INFO (tablet checker|34) [TabletChecker.runAfterCatalogReady():200] TStat :
TStat num of tablet check round: 13 (+1)
TStat cost of tablet check(ms): 6 (+0)
TStat num of tablet checked in tablet checker: 270 (+30)
TStat num of unhealthy tablet checked in tablet checker: 0 (+0)
TStat num of tablet being added to tablet scheduler: 0 (+0)
TStat num of tablet schedule round: 240 (+20)
TStat cost of tablet schedule(ms): 17 (+1)
TStat num of tablet being scheduled: 0 (+0)
TStat num of tablet being scheduled succeeded: 0 (+0)
@alberttwong alberttwong added the type/bug Something isn't working label May 13, 2023
@alberttwong
Contributor Author

In Beeline you can see that I can run show tables and query data in the tables.

atwong@Alberts-MBP docker % docker exec -it adhoc-2 /bin/bash
root@adhoc-2:/opt# beeline -u jdbc:hive2://hiveserver:10000 \
>   --hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
>   --hiveconf hive.stats.autogather=false
Connecting to jdbc:hive2://hiveserver:10000
23/05/13 02:21:33 INFO jdbc.Utils: Supplied authorities: hiveserver:10000
23/05/13 02:21:33 INFO jdbc.Utils: Resolved authority: hiveserver:10000
23/05/13 02:21:33 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://hiveserver:10000
Connected to: Apache Hive (version 2.3.3)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1.spark2 by Apache Hive
0: jdbc:hive2://hiveserver:10000> show tables;
+---------------------+--+
|      tab_name       |
+---------------------+--+
| stock_ticks_cow     |
| stock_ticks_mor_ro  |
| stock_ticks_mor_rt  |
+---------------------+--+
3 rows selected (0.081 seconds)
0: jdbc:hive2://hiveserver:10000> select count(*) from stock_ticks_cow
0: jdbc:hive2://hiveserver:10000> ;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
+------+--+
| _c0  |
+------+--+
| 197  |
+------+--+
1 row selected (3.117 seconds)

@alberttwong
Contributor Author

Showing the output in StarRocks.

StarRocks > set catalog hudi_catalog_hms;
Query OK, 0 rows affected (0.01 sec)

StarRocks > use default;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
StarRocks > show tables;
+--------------------+
| Tables_in_default  |
+--------------------+
| stock_ticks_cow    |
| stock_ticks_mor_ro |
| stock_ticks_mor_rt |
+--------------------+
3 rows in set (0.02 sec)

StarRocks > select count(*) from stock_ticks_cow;
ERROR 1064 (HY000): com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null

@Youngwb
Contributor

Youngwb commented May 17, 2023

This issue is due to StarRocks relying on metadata from the Hive metastore for Hudi queries. We will fix this issue later. Currently, there is a workaround:
After Hudi syncs to HMS using run_sync_tool.sh, use Hive or Spark to run msck repair table stock_ticks_cow; then querying stock_ticks_cow in StarRocks will work. If it does not, please run refresh external table stock_ticks_cow in StarRocks, as in the sketch below.
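A minimal sketch of that workaround, using the table name from this thread (run the first statement in Hive or Spark SQL and the second in a StarRocks session):

-- In Hive or Spark SQL: re-register the table's partitions in the metastore
msck repair table stock_ticks_cow;

-- In StarRocks: drop the cached metadata so the next query reloads it from HMS
refresh external table stock_ticks_cow;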

@alberttwong
Contributor Author

Just to confirm: this is an issue with our Apache Hudi integration, and not something Hudi would fix.

@alberttwong
Contributor Author

Confirmed with StarRocks engineering that it is an issue on StarRocks' side.

@alberttwong
Contributor Author

alberttwong commented May 17, 2023

It worked when I applied the repair and refresh. I then tried it again and found that only the refresh command was needed to make it work.

atwong@Alberts-MacBook-Pro docker % docker exec -it adhoc-2 /bin/bash
root@adhoc-2:/opt# beeline -u jdbc:hive2://hiveserver:10000 \
>   --hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
>   --hiveconf hive.stats.autogather=false
Connecting to jdbc:hive2://hiveserver:10000
23/05/17 16:26:50 INFO jdbc.Utils: Supplied authorities: hiveserver:10000
23/05/17 16:26:50 INFO jdbc.Utils: Resolved authority: hiveserver:10000
23/05/17 16:26:50 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://hiveserver:10000
Connected to: Apache Hive (version 2.3.3)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1.spark2 by Apache Hive
0: jdbc:hive2://hiveserver:10000> msck repair table stock_ticks_cow;
No rows affected (0.129 seconds)
0: jdbc:hive2://hiveserver:10000>
StarRocks > select count(*) from stock_ticks_cow;
ERROR 1064 (HY000): com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null
StarRocks > refresh external table stock_ticks_cow;
Query OK, 0 rows affected (0.11 sec)

StarRocks > select count(*) from stock_ticks_cow;
+----------+
| count(*) |
+----------+
|      197 |
+----------+
1 row in set (1.23 sec)


We have marked this issue as stale because it has been inactive for 6 months. If this issue is still relevant, removing the stale label or adding a comment will keep it active. Otherwise, we'll close it in 10 days to keep the issue queue tidy. Thank you for your contribution to StarRocks!

@DanRoscigno
Contributor

I think this needs to be reopened; trying today with the 3.2.2 allin1 image and Hudi 0.14.1, I am seeing:

StarRocks > select count(*) from stock_ticks_cow;
ERROR 1064 (HY000): com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: date=2018-08-31=null

Issue a refresh and try the query:

StarRocks > refresh external table stock_ticks_cow;
Query OK, 0 rows affected (0.14 sec)

StarRocks > select count(*) from stock_ticks_cow;
+----------+
| count(*) |
+----------+
|      197 |
+----------+
1 row in set (1.46 sec)

@wangsimo0
Contributor

The main cause of this issue is that we use HMS to get metadata, and in Hudi the user needs to enable metadata sync to HMS when Spark or Flink modifies a Hudi table. That is, the user needs to set hoodie.datasource.hive_sync.enable to true; it is false by default (https://hudi.apache.org/docs/configurations/). @DanRoscigno Dan, could you please check it?
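For reference, a hedged sketch of setting this in a Spark SQL session against this demo's metastore; the key names come from the Hudi configuration page linked above, but how they are passed to the writer depends on your Spark/Hudi setup:

-- Assumed Hudi hive-sync settings; adjust the mode and metastore URI for your environment
set hoodie.datasource.hive_sync.enable=true;
set hoodie.datasource.hive_sync.mode=hms;
set hoodie.datasource.hive_sync.metastore.uris=thrift://hivemetastore:9083;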

@DanRoscigno
Contributor

The main cause of this issue is that we use HMS to get metadata, and in Hudi the user needs to enable metadata sync to HMS when Spark or Flink modifies a Hudi table. That is, the user needs to set hoodie.datasource.hive_sync.enable to true; it is false by default (https://hudi.apache.org/docs/configurations/). @DanRoscigno Dan, could you please check it?

I will check, thanks so much!

@DanRoscigno
Contributor

@wangsimo0 Can you show me how to configure this? I am using the Docker Demo at https://hudi.apache.org/docs/docker_demo/

I tried these env vars in the containers:

HIVE_SITE_CONF_hive_sync_enable=true
HIVE_SITE_CONF_hive_sync_db=default
HIVE_SITE_CONF_hive_sync_mode=hms

But no change; I still have to run refresh external table <tablename>;

@wangsimo0
Contributor

wangsimo0 commented Jan 24, 2024

@DanRoscigno By "still have to refresh", do you mean you get an error after the select? That is possible if you are querying new partitions. The core reason is the same as for selecting from Hive: we cache metadata in StarRocks, so if the Hudi table is being ingested into or updated, StarRocks cannot see the latest information, and since we do not yet support refreshing the metadata cache periodically, there is no way for us to know about the metadata update. So an error may occur when a user queries new partitions, because StarRocks does not have those partitions in its cache; and if an old partition is updated, StarRocks will return the old data because of the cache. This is absolutely not user-friendly. We are planning to use the Hudi SDK to get Hudi metadata and solve this problem completely; however, we do not have sufficient manpower and are looking for community developers who are interested in working on this with us. For now, unfortunately, we do have this limitation.

@DanRoscigno
Contributor

After a refresh everything seems fine. Note that with OneHouse Hudi @alberttwong did not have to refresh; everything worked on the first try. I will update the docs to include a refresh of each table. Please let me know when this changes and I will remove the refresh from the docs. Thanks for explaining this to me, and for your help @wangsimo0


We have marked this issue as stale because it has been inactive for 6 months. If this issue is still relevant, removing the stale label or adding a comment will keep it active. Otherwise, we'll close it in 10 days to keep the issue queue tidy. Thank you for your contribution to StarRocks!

@prm-xingcan

I hit the same exception when using StarRocks to read AWS Glue external tables.

org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [1064] [42000]: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: dt=2024-07-09/hour=11=null
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:133)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:615)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$2(SQLQueryJob.java:506)
	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:192)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:525)
	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:977)
	at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:4176)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:123)
	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:192)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:121)
	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:5160)
	at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:117)
	at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: java.sql.SQLSyntaxErrorException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException: null value in entry: dt=2024-07-09/hour=11=null
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:121)
	at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
	at com.mysql.cj.jdbc.StatementImpl.executeInternal(StatementImpl.java:770)
	at com.mysql.cj.jdbc.StatementImpl.execute(StatementImpl.java:653)
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:330)
	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:131)
	... 12 more

@prm-xingcan

Also, I could only refresh the external table by specifying partitions. When I removed the PARTITION clause and tried to refresh the whole table, I still got the exception above.
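For reference, a sketch of the partition-scoped refresh that did work; the table name here is hypothetical, and the partition value is the one from the error above:

-- Hypothetical table name; partition value taken from the exception message
refresh external table my_glue_table partition ('dt=2024-07-09/hour=11');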

@prm-xingcan

I debugged the issue locally. It seems to be related to max_hive_partitions_per_rpc. With the default value of 5000, partitions = client.hiveClient.getPartitionsByNames(dbName, tblName, partitionNames) (in HiveMetaClient.java) could not fetch all partitions on each call. I guess it's either because we have too many partitions or because the metadata for each partition is too large. After reducing the value to 100, everything worked well.
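A sketch of applying that change: on builds where the config is static, set max_hive_partitions_per_rpc = 100 in fe.conf and restart the FE; with the commit below, which made the config mutable, it can presumably be set at runtime:

-- Runtime change on builds where the config is mutable; not persisted across FE restarts
admin set frontend config ("max_hive_partitions_per_rpc" = "100");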

alvin-celerdata pushed a commit that referenced this issue Aug 7, 2024
Why I'm doing:
See #23374 (comment)

What I'm doing:
Made max_hive_partitions_per_rpc mutable.

Signed-off-by: Xingcan Cui <[email protected]>