Update Flickr to use new time delineated ingester class #995
Conversation
This is excellent! What a great idea to re-use these components, I'm sure we'll have APIs in the future that will benefit from this. I have a few comments, namely on the interface for the class 🙂
```python
def test_ts_pairs_and_kwargs_are_available_in_get_next_query_params():
```
Great thing to test!
```python
# Assert that attempting to ingest records raises an exception when
# `should_raise_error` is enabled
with pytest.raises(AirflowException, match=expected_error_string):
    ingester.ingest_records()

# get_mock should have been called 4 times, twice for each batch (once in `get_batch`
# and a second time in `get_should_continue`). Each time, it returned 2 'new' records,
# even though the resultCount indicated there should only be 2 records total.
assert get_mock.call_count == 4
# process_mock should only have been called once. We raise an error when getting the
# second batch as soon as it is detected that we've processed too many records.
assert process_mock.call_count == 1
```
Thank you for these excellent comments!
```python
pairs_list.extend(minute_slices)

return pairs_list
```

```python
max_records = 10_000
```
The simplification of this file is awesome ✨
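For readers without the full diff, the pair-generation fragment quoted above might sit in logic roughly like the sketch below. This is a hedged illustration, not the PR's actual implementation: the function name, the threshold value, and the hour/minute subdivision strategy are assumptions, with only `pairs_list` and `minute_slices` taken from the fragment itself.

```python
from datetime import datetime, timedelta


def generate_timestamp_pairs(day_start: datetime, get_record_count) -> list[tuple[str, str]]:
    # Illustrative only: split the ingestion day into hour slices, and
    # subdivide any hour that exceeds the record threshold into minute slices.
    division_threshold = 4_000  # assumed; mirrors Flickr's ~4k unique-records cap
    pairs_list: list[tuple[str, str]] = []

    for hour in range(24):
        start = day_start + timedelta(hours=hour)
        end = start + timedelta(hours=1)
        if get_record_count(start, end) <= division_threshold:
            pairs_list.append((start.isoformat(), end.isoformat()))
            continue
        # Dense hour: fall back to minute-level slices.
        minute_slices = [
            (
                (start + timedelta(minutes=m)).isoformat(),
                (start + timedelta(minutes=m + 1)).isoformat(),
            )
            for m in range(60)
        ]
        pairs_list.extend(minute_slices)

    return pairs_list
```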
Tested locally and it worked like a charm for both DAGs!
Based on the high urgency of this PR, the following reviewers are being gently reminded to review this PR: @krysal. Excluding weekend days, this PR was updated 2 day(s) ago. PRs labelled with high urgency are expected to be reviewed within 2 weekday(s). @stacimc, if this PR is not ready for a review, please draft it to prevent reviewers from getting further unnecessary pings.
It's incredible how many details are involved in these time-delineated kinds of DAGs. Comments on each step and test are very appreciated!
Fixes
Fixes WordPress/openverse#1789 by @obulat
Description
This PR is largely a refactor and has very little new logic. The goal is to update the Flickr DAG to dynamically break its ingestion day down into time intervals depending on the distribution of data throughout the day. To do so, I've pulled existing logic out of the Finnish Museums DAG into a reusable class to be used by both.
Summary:
- Adds a new subclass of `ProviderDataIngester`, `TimeDelineatedProviderDataIngester`, which adds in all the logic for generating timestamp intervals, iterating over them, and detecting some common bugs.

To use the new class, all you have to do is define a few variables and implement an abstract method for getting the record count for a particular interval. When `ingest_records` is called, timestamp pairs will be dynamically generated according to the configured variables and iterated over automatically. The bounds of the current time interval, `start_ts` and `end_ts`, will be available as kwargs in `get_next_query_params`.

The behavior of Finnish Museums should be totally unchanged. Flickr now gets for free the dynamic timeslice generation that Finnish Museums was already using. It will attempt to generate intervals with fewer than 4,000 records per slice, as the Flickr documentation reports that at most 4,000 unique records are returned for any given query.
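As a rough illustration of that interface (a sketch, not the exact class from this PR — the provider name, endpoint, `get_record_count` hook, and the base class's `get_response_json` helper are assumptions; only `max_records`, the `start_ts`/`end_ts` kwargs, and `get_next_query_params` appear in the diff):

```python
from providers.provider_api_scripts.time_delineated_provider_data_ingester import (
    TimeDelineatedProviderDataIngester,
)


class MyApiDataIngester(TimeDelineatedProviderDataIngester):
    providers = {"image": "my_provider"}  # hypothetical provider config
    endpoint = "https://api.example.com/v1/search"  # hypothetical endpoint
    # The API's per-query cap; intervals are subdivided until each one is
    # expected to return fewer records than this.
    max_records = 4_000

    def get_record_count(self, start_ts, end_ts) -> int:
        # Hypothetical implementation of the abstract "record count for an
        # interval" method: ask the API how many records fall in the window.
        response = self.get_response_json({"from": start_ts, "to": end_ts})
        return response.get("resultCount", 0)

    def get_next_query_params(self, prev_query_params, **kwargs):
        # The bounds of the current interval arrive as the `start_ts` and
        # `end_ts` kwargs.
        if not prev_query_params:
            return {"from": kwargs["start_ts"], "to": kwargs["end_ts"], "page": 1}
        return {**prev_query_params, "page": prev_query_params["page"] + 1}
```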
Notes about how the timeslice intervals were chosen
When experimenting locally, I found that the benefit of decreasing the interval size rapidly diminishes for Flickr (i.e., after a certain point, you start getting far more duplicates as the intervals decrease in size). I suspect that the API isn't actually meant to be used for very small intervals, so the range is 'fuzzy' and causes a lot of overlap/duplicates when these small intervals are used.
For reference, I tried running ingestion for the date `2023-02-08` using (1) 1-minute time slices for the entire day, and (2) the dynamic interval generation used in this PR.

For the first, that resulted in 1,440 intervals and took almost 1.5 hours to pull data. We received 37,115 unique records and a whopping 709,199 duplicates.
For the latter, only 35 intervals were generated and it took just under 3 minutes to pull (1/30 of the time). We got 34,768 records but only 18,295 duplicates.
So, at the cost of a relatively small difference in the number of records ingested, we are able to run ingestion in a fraction of the time and deal with many, many fewer duplicates. It is also worth pointing out that neither approach guarantees that all unique results are collected, so we are making a concession either way. While we pursue other approaches to Flickr, I think we should go with the much faster approach, and perhaps tweak the reingestion schedule for Flickr to be more aggressive.
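To put those numbers side by side, the duplicate rates work out as follows (plain arithmetic on the counts reported above):

```python
# Duplicate rates implied by the two local runs on 2023-02-08.
runs = {
    "1-minute slices (1440 intervals, ~90 min)": (37_115, 709_199),
    "dynamic intervals (35 intervals, ~3 min)": (34_768, 18_295),
}
for name, (unique, duplicates) in runs.items():
    rate = duplicates / (unique + duplicates)
    print(f"{name}: {rate:.0%} of fetched records were duplicates")
# ~95% of fetched records were duplicates with minute slices, vs ~34% with
# dynamic intervals -- at the cost of roughly 6% fewer unique records.
```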
Testing Instructions
Run tests locally.
Manually test the Flickr and Finnish Museums DAGs. For Flickr, you should be able to simply enable it locally and let it run for a few days -- it should be pretty quick! For Finnish Museums, you may need to let it run for a while to make sure that it hits an ingestion day with a reasonable amount of data, as that DAG is quite sparse.
Checklist
- My pull request has a descriptive title (not a vague title like `Update index.md`).
- My pull request targets the default branch of the repository (`main`) or a parent feature branch.
- I tried running the project locally and verified that there are no visible errors.
Developer Certificate of Origin