
ARROW-18420: Add int32_with_null_pages.parquet for page index test #32

Merged
merged 1 commit on Dec 13, 2022

Conversation

wgtmac (Member) commented on Dec 13, 2022

This patch adds a Parquet file generated by parquet-mr with the following attributes:

  • a single optional INT32 column
  • the file is created with the page index enabled
  • 1000 values in total, randomly generated with some nulls
  • a null page is deliberately produced

Once this file has been committed, I can go ahead and finish the test cases required for page index support in apache/arrow#14803.

Below is the complete Java code used to generate the file:

package org.apache.parquet.cli.commands;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.ParquetProperties;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetWriter;

import org.apache.parquet.hadoop.example.GroupWriteSupport;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.PrimitiveType;
import org.apache.parquet.schema.Types;

import java.io.IOException;
import java.util.Random;

public class GenerateTestFile {

    public static void main(String[] args) {
        Path path = new Path("/tmp/int32_with_null_pages.parquet");
        Configuration conf = new Configuration();

        MessageType schema = Types.buildMessage()
                .optional(PrimitiveType.PrimitiveTypeName.INT32)
                .named("int32_field")
                .named("schema");
        SimpleGroupFactory fact = new SimpleGroupFactory(schema);
        GroupWriteSupport.setSchema(schema, conf);

        try (
            ParquetWriter<Group> writer = new ParquetWriter<>(
                path,
                new GroupWriteSupport(),
                CompressionCodecName.UNCOMPRESSED,
                /*blockSize=*/1024 * 1024,
                /*pageSize=*/64,
                /*dictionaryPageSize=*/64,
                /*enableDictionary=*/false,
                /*validating=*/false,
                ParquetProperties.WriterVersion.PARQUET_1_0,
                conf)) {

            Random rnd = new Random();
            for (int i = 0; i < 1_000; ++i) {
                // Indices 151..349 are always null (guaranteeing at least one
                // all-null page); 651..749 are never null; the rest are null
                // with probability 1/10.
                boolean enforceNull = 150 < i && i < 350;
                boolean enforceNotNull = 650 < i && i < 750;
                boolean randomNull = rnd.nextInt(10) == 5;
                if (enforceNull || (!enforceNotNull && randomNull)) {
                    writer.write(fact.newGroup());
                } else {
                    writer.write(fact.newGroup()
                            .append("int32_field", rnd.nextInt()));
                }
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

}
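The deterministic part of the null-placement logic in the loop above can be sanity-checked in isolation (the class and method names here are illustrative, not part of the patch): the two range conditions guarantee that 199 of the 1000 values are always null and 99 are never null, independent of the random draws.

```java
public class NullPlacementCheck {

    // Mirrors the null-placement conditions from the generator above.
    static boolean forcedNull(int i) {
        return 150 < i && i < 350;
    }

    static boolean forcedNotNull(int i) {
        return 650 < i && i < 750;
    }

    public static void main(String[] args) {
        int forcedNulls = 0;
        int forcedValues = 0;
        for (int i = 0; i < 1_000; ++i) {
            if (forcedNull(i)) forcedNulls++;
            if (forcedNotNull(i)) forcedValues++;
        }
        // Indices 151..349 give 199 forced nulls; 651..749 give 99 forced values.
        System.out.println(forcedNulls + " " + forcedValues);
    }
}
```

With a 64-byte page size, 199 consecutive forced nulls span several pages, which is what makes at least one all-null page certain.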

wgtmac (Member, Author) commented on Dec 13, 2022

@pitrou Can you take a look please?

pitrou (Member) left a comment:


Thanks a lot!
