docs(ingest): Rename csv / s3 / file source and sink #10675

Merged (2 commits) on Jun 11, 2024
3 changes: 3 additions & 0 deletions .gitignore
@@ -126,3 +126,6 @@ metadata-service/war/bin/
metadata-utils/bin/
test-models/bin/

datahub-executor/
datahub-integrations-service/
metadata-ingestion-modules/
Collaborator Author:
Think this is fine to include? Tired of seeing these directories when switching repos

Collaborator:
should be metadata-ingestion-modules/acryl-cloud

4 changes: 4 additions & 0 deletions docs-website/build.gradle
@@ -148,8 +148,12 @@ clean {
delete 'tmp'
delete 'build'
delete 'just'
delete 'sphinx/venv'
delete 'sphinx/_build'
delete 'versioned_docs'
delete fileTree(dir: 'genDocs', exclude: '.gitignore')
delete fileTree(dir: 'docs', exclude: '.gitignore')
delete fileTree(dir: 'genStatic', exclude: '.gitignore')
Comment on lines +151 to +156
Collaborator Author:
Deleting files that are not persisted by git

delete 'graphql/combined.graphql'
yarnClear
}
16 changes: 8 additions & 8 deletions docs/cli.md
@@ -655,8 +655,8 @@ We use a plugin architecture so that you can install only the dependencies you a
Please see our [Integrations page](https://datahubproject.io/integrations) if you want to filter on the features offered by each source.

| Plugin Name | Install Command | Provides |
| ---------------------------------------------------------------------------------------------- | ---------------------------------------------------------- | --------------------------------------- |
| [file](./generated/ingestion/sources/file.md) | _included by default_ | File source and sink |
|------------------------------------------------------------------------------------------------| ---------------------------------------------------------- | --------------------------------------- |
| [metadata-file](./generated/ingestion/sources/metadata-file.md) | _included by default_ | File source and sink |
| [athena](./generated/ingestion/sources/athena.md) | `pip install 'acryl-datahub[athena]'` | AWS Athena source |
| [bigquery](./generated/ingestion/sources/bigquery.md) | `pip install 'acryl-datahub[bigquery]'` | BigQuery source |
| [datahub-lineage-file](./generated/ingestion/sources/file-based-lineage.md) | _no additional dependencies_ | Lineage File source |
@@ -696,12 +696,12 @@ Please see our [Integrations page](https://datahubproject.io/integrations) if yo

### Sinks

| Plugin Name | Install Command | Provides |
| ----------------------------------------------------------- | -------------------------------------------- | -------------------------- |
| [file](../metadata-ingestion/sink_docs/file.md) | _included by default_ | File source and sink |
| [console](../metadata-ingestion/sink_docs/console.md) | _included by default_ | Console sink |
| [datahub-rest](../metadata-ingestion/sink_docs/datahub.md) | `pip install 'acryl-datahub[datahub-rest]'` | DataHub sink over REST API |
| [datahub-kafka](../metadata-ingestion/sink_docs/datahub.md) | `pip install 'acryl-datahub[datahub-kafka]'` | DataHub sink over Kafka |
| Plugin Name | Install Command | Provides |
|-------------------------------------------------------------------| -------------------------------------------- | -------------------------- |
| [metadata-file](../metadata-ingestion/sink_docs/metadata-file.md) | _included by default_ | File source and sink |
| [console](../metadata-ingestion/sink_docs/console.md) | _included by default_ | Console sink |
| [datahub-rest](../metadata-ingestion/sink_docs/datahub.md) | `pip install 'acryl-datahub[datahub-rest]'` | DataHub sink over REST API |
| [datahub-kafka](../metadata-ingestion/sink_docs/datahub.md) | `pip install 'acryl-datahub[datahub-kafka]'` | DataHub sink over Kafka |

These plugins can be mixed and matched as desired. For example:

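One possible pairing, as a sketch: a recipe that combines the `bigquery` source with the `datahub-rest` sink. The plugin names come from the tables above; the empty config and the server address are placeholders, and the matching install command would combine the extras (e.g. `pip install 'acryl-datahub[bigquery,datahub-rest]'`).

```yaml
# Sketch: mix one source plugin with one sink plugin in a single recipe.
source:
  type: bigquery
  config: {} # BigQuery connection settings go here; see the bigquery source docs

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080" # placeholder DataHub server address
```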
2 changes: 1 addition & 1 deletion docs/troubleshooting/quickstart.md
@@ -246,7 +246,7 @@ ALTER TABLE metadata_aspect_v2 CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_
## I've modified the default user.props file to include a custom username and password, but I don't see the new user(s) inside the Users & Groups tab. Why not?

Currently, `user.props` is a file used by the JAAS PropertyFileLoginModule solely for the purpose of **Authentication**. The file is not used as a source from which to
ingest additional metadata about the user. For that, you'll need to ingest some custom information about your new user using the Rest.li APIs or the [File-based ingestion source](../generated/ingestion/sources/file.md).
ingest additional metadata about the user. For that, you'll need to ingest some custom information about your new user using the Rest.li APIs or the [Metadata File ingestion source](../generated/ingestion/sources/metadata-file.md).

For an example of a file that ingests user information, check out [single_mce.json](https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/examples/mce_files/single_mce.json), which ingests a single user object into DataHub. Notice that the "urn" field provided
will need to align with the custom username you've provided in the user.props file. For example, if your user.props file contains:
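To sketch the metadata-file route described above: a recipe like the following could push a hand-written metadata file (for example, one modeled on single_mce.json, whose urn would need to be `urn:li:corpuser:<your-username>` to match the user.props entry) into DataHub. The file path and server address are placeholders.

```yaml
# Sketch: ingest a hand-written metadata file describing the new user.
source:
  type: file # the Metadata File source
  config:
    path: ./my_user_mce.json # placeholder; a file modeled on single_mce.json

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080" # placeholder DataHub server address
```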
2 changes: 1 addition & 1 deletion metadata-ingestion/cli-ingestion.md
@@ -58,7 +58,7 @@ Please refer to the following pages for advanced guides on CLI ingestion.
- [Reference for `datahub ingest` command](../docs/cli.md#ingest)
- [UI Ingestion Guide](../docs/ui-ingestion.md)

:::Tip Compatibility
:::tip Compatibility
DataHub server uses a 3-digit versioning scheme, while the CLI uses a 4-digit scheme. For example, if you're using DataHub server version 0.10.0, you should use CLI version 0.10.0.x, where x is a patch version.
We do this because CLI releases happen at a much higher frequency than server releases, usually every few days versus twice a month.

30 changes: 16 additions & 14 deletions metadata-ingestion/docs/sources/s3/README.md
@@ -1,19 +1,11 @@
This connector ingests S3 datasets into DataHub. It allows mapping an individual file or a folder of files to a dataset in DataHub.
This connector ingests AWS S3 datasets into DataHub. It allows mapping an individual file or a folder of files to a dataset in DataHub.
To specify the group of files that form a dataset, use the `path_specs` configuration in your ingestion recipe. Refer to the [Path Specs](https://datahubproject.io/docs/generated/ingestion/sources/s3/#path-specs) section for more details.

### Concept Mapping

This ingestion source maps the following Source System Concepts to DataHub Concepts:

| Source Concept | DataHub Concept | Notes |
| ---------------------------------------- |--------------------------------------------------------------------------------------------| ------------------- |
| `"s3"` | [Data Platform](https://datahubproject.io/docs/generated/metamodel/entities/dataplatform/) | |
| s3 object / Folder containing s3 objects | [Dataset](https://datahubproject.io/docs/generated/metamodel/entities/dataset/) | |
| s3 bucket | [Container](https://datahubproject.io/docs/generated/metamodel/entities/container/) | Subtype `S3 bucket` |
| s3 folder | [Container](https://datahubproject.io/docs/generated/metamodel/entities/container/) | Subtype `Folder` |
:::tip
This connector can also be used to ingest local files.
Just replace `s3://` in your path_specs with an absolute path to files on the machine running ingestion.
:::

This connector supports both local files as well as those stored on AWS S3 (which must be identified using the prefix `s3://`).
### Supported file types
Supported file types are as follows:

@@ -30,6 +22,16 @@ Schemas for schemaless formats (CSV, TSV, JSONL, JSON) are inferred. For CSV, TS
JSON file schemas are inferred on the basis of the entire file (given the difficulty in extracting only the first few objects of the file), which may impact performance.
We are working on using iterator-based JSON parsers to avoid reading in the entire JSON object.
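As an illustration of the folder-to-dataset mapping described at the top of this README, a sketch of a `path_specs` block follows; the bucket name and layout are made up, and the `{table}` placeholder usage follows the Path Specs documentation linked above.

```yaml
source:
  type: s3
  config:
    path_specs:
      # Every folder matched by {table} becomes one dataset built from all of its files.
      - include: "s3://my-bucket/data/{table}/*.parquet" # illustrative bucket and layout
      # A single object can also be mapped to its own dataset.
      - include: "s3://my-bucket/exports/users.csv"
```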

### Concept Mapping

This ingestion source maps the following Source System Concepts to DataHub Concepts:

| Source Concept | DataHub Concept | Notes |
| ---------------------------------------- |--------------------------------------------------------------------------------------------| ------------------- |
| `"s3"` | [Data Platform](https://datahubproject.io/docs/generated/metamodel/entities/dataplatform/) | |
| s3 object / Folder containing s3 objects | [Dataset](https://datahubproject.io/docs/generated/metamodel/entities/dataset/) | |
| s3 bucket | [Container](https://datahubproject.io/docs/generated/metamodel/entities/container/) | Subtype `S3 bucket` |
| s3 folder | [Container](https://datahubproject.io/docs/generated/metamodel/entities/container/) | Subtype `Folder` |

### Profiling

@@ -42,4 +44,4 @@ This plugin extracts:
- histograms or frequencies of unique values

Note that because the profiling is run with PySpark, we require Spark 3.0.3 with Hadoop 3.2 to be installed (see [compatibility](#compatibility) for more details). If profiling, make sure that permissions for **s3a://** access are set because Spark and Hadoop use the s3a:// protocol to interface with AWS (schema inference outside of profiling requires s3:// access).
Enabling profiling will slow down ingestion runs.
Enabling profiling will slow down ingestion runs.
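A minimal sketch of enabling profiling on this source (the bucket layout is a placeholder, and Spark/Hadoop with `s3a://` permissions must already be in place, as noted above):

```yaml
source:
  type: s3
  config:
    path_specs:
      - include: "s3://my-bucket/tables/{table}/*.parquet" # placeholder layout
    profiling:
      enabled: true # runs the PySpark-based profiler described above
```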
11 changes: 8 additions & 3 deletions metadata-ingestion/docs/sources/s3/s3_recipe.yml
@@ -1,9 +1,9 @@
# Ingest data from S3
source:
type: s3
config:
path_specs:
-
include: "s3://covid19-lake/covid_knowledge_graph/csv/nodes/*.*"
- include: "s3://covid19-lake/covid_knowledge_graph/csv/nodes/*.*"

aws_config:
aws_access_key_id: *****
@@ -13,4 +13,9 @@ source:
profiling:
enabled: false

# sink configs
# Ingest data from local filesystem
source:
type: s3
config:
path_specs:
- include: "/absolute/path/*.csv"
3 changes: 2 additions & 1 deletion metadata-ingestion/setup.py
@@ -259,7 +259,8 @@

delta_lake = {
*s3_base,
"deltalake>=0.6.3, != 0.6.4",
"deltalake>=0.6.3, != 0.6.4, < 0.18.0; platform_system == 'Darwin' and platform_machine == 'arm64'",
"deltalake>=0.6.3, != 0.6.4; platform_system != 'Darwin' or platform_machine != 'arm64'",
Comment on lines +263 to +264
Collaborator Author:
Seems like 0.18.0 is broken on ARM Macs. We could alternatively just forbid version 0.18.0, but we may run into the same issue on the next version.

Collaborator:
Can we also link to the issue delta-io/delta-rs#2577?

}

powerbi_report_server = {"requests", "requests_ntlm"}
@@ -1,4 +1,4 @@
# File
# Metadata File

For context on getting started with ingestion, check out our [metadata ingestion guide](../README.md).

@@ -10,7 +10,7 @@ Works with `acryl-datahub` out of the box.

Outputs metadata to a file. This can be used to decouple metadata sourcing from the
process of pushing it into DataHub, and is particularly useful for debugging purposes.
Note that the [file source](../../docs/generated/ingestion/sources/file.md) can read files generated by this sink.
Note that the [file source](../../docs/generated/ingestion/sources/metadata-file.md) can read files generated by this sink.

## Quickstart recipe

@@ -35,4 +35,3 @@ Note that a `.` is used to denote nested fields in the YAML recipe.
| Field | Required | Default | Description |
| -------- | -------- | ------- | ------------------------- |
| filename | ✅ | | Path to file to write to. |

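For reference, a minimal sink configuration would look roughly like the following, assuming the recipe type name remains `file` (only the docs page is being renamed here) and with a placeholder output path:

```yaml
# Sketch: write ingested metadata to a local JSON file instead of sending it to a server.
sink:
  type: file # assumed recipe type name for the Metadata File sink
  config:
    filename: ./metadata_events.json # placeholder output path
```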
2 changes: 1 addition & 1 deletion metadata-ingestion/sink_overview.md
@@ -25,7 +25,7 @@ When configuring ingestion for DataHub, you're likely to be sending the metadata

For debugging purposes or troubleshooting, the following sinks can be useful:

- [File](sink_docs/file.md)
- [Metadata File](sink_docs/metadata-file.md)
- [Console](sink_docs/console.md)

## Default Sink
@@ -93,13 +93,18 @@ class CSVEnricherReport(SourceReport):
num_domain_workunits_produced: int = 0


@platform_name("CSV")
@platform_name("CSV Enricher")
@config_class(CSVEnricherConfig)
@support_status(SupportStatus.INCUBATING)
class CSVEnricherSource(Source):
"""
:::tip Looking to ingest a CSV data file into DataHub, as an asset?
Use the [Local File](./s3.md) ingestion source.
The CSV enricher is used for enriching entities already ingested into DataHub.
:::

This plugin is used to bulk upload metadata to Datahub.
It will apply glossary terms, tags, decription, owners and domain at the entity level. It can also be used to apply tags,
It will apply glossary terms, tags, description, owners and domain at the entity level. It can also be used to apply tags,
glossary terms, and documentation at the column level. These values are read from a CSV file. You have the option to either overwrite
or append existing values.

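For context, a rough sketch of a recipe for this source follows; the `csv-enricher` type name and the config fields shown are assumptions for illustration and are not taken from this diff.

```yaml
# Sketch only: type name and config fields are assumptions.
source:
  type: csv-enricher # assumed registry name for CSVEnricherSource
  config:
    filename: ./enrichment.csv # CSV holding terms, tags, owners, descriptions per entity/column
    write_semantics: PATCH # assumed option for appending to, rather than overwriting, existing values

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080" # placeholder
```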
21 changes: 16 additions & 5 deletions metadata-ingestion/src/datahub/ingestion/source/file.py
@@ -56,11 +56,17 @@ class FileSourceConfig(ConfigModel):
message="filename is deprecated. Use path instead.",
)
path: str = Field(
description="File path to folder or file to ingest, or URL to a remote file. If pointed to a folder, all files with extension {file_extension} (default json) within that folder will be processed."
description=(
"File path to folder or file to ingest, or URL to a remote file. "
"If pointed to a folder, all files with extension {file_extension} (default json) within that folder will be processed."
)
)
file_extension: str = Field(
".json",
description="When providing a folder to use to read files, set this field to control file extensions that you want the source to process. * is a special value that means process every file regardless of extension",
description=(
"When providing a folder to use to read files, set this field to control file extensions that you want the source to process. "
"* is a special value that means process every file regardless of extension"
),
)
read_mode: FileReadMode = FileReadMode.AUTO
aspect: Optional[str] = Field(
@@ -69,7 +75,10 @@
)
count_all_before_starting: bool = Field(
default=True,
description="When enabled, counts total number of records in the file before starting. Used for accurate estimation of completion time. Turn it off if startup time is too high.",
description=(
"When enabled, counts total number of records in the file before starting. "
"Used for accurate estimation of completion time. Turn it off if startup time is too high."
),
)

_minsize_for_streaming_mode_in_bytes: int = (
@@ -163,12 +172,14 @@ def compute_stats(self) -> None:
self.percentage_completion = f"{percentage_completion:.2f}%"


@platform_name("File")
@platform_name("Metadata File")
@config_class(FileSourceConfig)
@support_status(SupportStatus.CERTIFIED)
class GenericFileSource(TestableSource):
"""
This plugin pulls metadata from a previously generated file. The [file sink](../../../../metadata-ingestion/sink_docs/file.md) can produce such files, and a number of samples are included in the [examples/mce_files](../../../../metadata-ingestion/examples/mce_files) directory.
This plugin pulls metadata from a previously generated file.
The [metadata file sink](../../../../metadata-ingestion/sink_docs/metadata-file.md) can produce such files, and a number of
samples are included in the [examples/mce_files](../../../../metadata-ingestion/examples/mce_files) directory.
"""

def __init__(self, ctx: PipelineContext, config: FileSourceConfig):
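Pulling the config fields above into a recipe, a sketch might look like this, assuming the recipe type name remains `file`; the folder path and server address are placeholders.

```yaml
source:
  type: file # assumed recipe type name for the Metadata File source
  config:
    path: ./metadata_dumps/ # folder: every file matching file_extension is processed
    file_extension: ".json" # default, per FileSourceConfig above
    count_all_before_starting: true # pre-count records for progress estimation

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080" # placeholder
```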
@@ -217,7 +217,7 @@ class TableData:
number_of_files: int


@platform_name("S3 Data Lake", id="s3")
@platform_name("S3 / Local Files", id="s3")
@config_class(DataLakeSourceConfig)
@support_status(SupportStatus.INCUBATING)
@capability(SourceCapability.DATA_PROFILING, "Optionally enabled via configuration")
4 changes: 1 addition & 3 deletions metadata-integration/java/as-a-library.md
@@ -169,7 +169,7 @@ If you're interested in looking at the Kafka emitter code, it is available [here

## File Emitter

The File emitter writes metadata change proposal events (MCPs) into a JSON file that can be later handed off to the Python [File source](docs/generated/ingestion/sources/file.md) for ingestion. This works analogous to the [File sink](../../metadata-ingestion/sink_docs/file.md) in Python. This mechanism can be used when the system producing metadata events doesn't have direct connection to DataHub's REST server or Kafka brokers. The generated JSON file can be transferred later and then ingested into DataHub using the [File source](docs/generated/ingestion/sources/file.md).
The File emitter writes metadata change proposal events (MCPs) into a JSON file that can later be handed off to the Python [Metadata File source](docs/generated/ingestion/sources/metadata-file.md) for ingestion. This works analogously to the [Metadata File sink](../../metadata-ingestion/sink_docs/metadata-file.md) in Python. This mechanism can be used when the system producing metadata events doesn't have a direct connection to DataHub's REST server or Kafka brokers. The generated JSON file can be transferred later and then ingested into DataHub using the [Metadata File source](docs/generated/ingestion/sources/metadata-file.md).

### Usage

@@ -223,5 +223,3 @@ The File emitter only supports writing to the local filesystem currently. If you

Emitter APIs are also supported for:
- [Python](../../metadata-ingestion/as-a-library.md)

