The aim of the tests is to verify the possibility of using the Private Aggregation API to collect statistics of Fledge auctions and events related to it, specifically the registration of impressions and clicks on the banner. In most cases, our banners consist of a main banner (ad) and "product" ad components.
What mechanisms will we be using?
1. Registration of 2 bid histograms using contributeToHistogram (CTH).
2. Registration of 2 win report histograms using contributeToHistogram.
3. Registration of event histograms in the bidding function using contributeToHistogramOnEvent:
   - 2 reserved.win events,
   - 1 reserved.loss using signalValue (bid-reject-reason).
We register only 1% of the bid-related histograms (point 1 and the reserved.loss part of point 3).
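The registrations above can be sketched as calls inside `generateBid()`. The real `privateAggregation` object only exists inside Protected Audience worklets, so a minimal stand-in is defined here to make the sketch self-contained; the bucket numbers are illustrative, not the ones used in the tests.

```javascript
// Stand-in for the browser-provided privateAggregation object,
// which is only available inside Protected Audience worklets.
const contributions = [];
const onEventContributions = [];
const privateAggregation = {
  contributeToHistogram(c) { contributions.push(c); },
  contributeToHistogramOnEvent(event, c) { onEventContributions.push({ event, ...c }); },
};

// Sketch of points 1 and 3 inside a bidding function.
function generateBid(interestGroup) {
  // Point 1: two bid histograms (registered for ~1% of bids).
  privateAggregation.contributeToHistogram({ bucket: 1001n, value: 1 });
  privateAggregation.contributeToHistogram({ bucket: 1002n, value: 1 });

  // Point 3: event-level histograms, only sent if the event fires.
  privateAggregation.contributeToHistogramOnEvent('reserved.win',
      { bucket: 2001n, value: 1 });
  privateAggregation.contributeToHistogramOnEvent('reserved.win',
      { bucket: 2002n, value: 1 });
  // reserved.loss with signalValue: the browser substitutes the
  // bid rejection reason at reporting time.
  privateAggregation.contributeToHistogramOnEvent('reserved.loss',
      { bucket: 3001n, value: { baseValue: 'bid-reject-reason', scale: 1, offset: 0 } });

  return { bid: 1.0, render: interestGroup.ads[0].renderURL };
}

generateBid({ ads: [{ renderURL: 'https://example.test/ad' }] });
console.log(contributions.length, onEventContributions.length); // 2 3
```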
During the tested period:
- the stable version of Chrome was 113, with the Private Aggregation API enabled for 1% of Origin Trial traffic,
- Extended Private Aggregation Reporting in FLEDGE was available in Chrome 115 (dev/canary),
- 114 was the beta Chrome version.
The following results are based on reports received over 7 consecutive days, limited to 5 days by schedule date: we took all reports received within the 7-day window (2023-05-19 to 2023-05-26) and limited ourselves to those with a schedule date between 2023-05-19 and 2023-05-24.
Time
Time between registering a histogram and the schedule date value.
Let's compare the moment of receiving the report with the histogram registration time.
Comparing PAA reports with other sources
Impressions
Reported through contributeToHistogram
| chrome version | #impressions | #first report CTH | #second report CTH | #first report CTH / imp ratio |
|---|---|---|---|---|
| Chrome/113.0.0.0 | 9385457 | 1598632 | 1598456 | 17.03% |
| Chrome/114.0.0.0 | 39131 | 38979 | 38970 | 99.61% |
| Chrome/115.0.0.0 | 9972 | 9678 | 9679 | 97.05% |
During the test period, in the stable version (113), the Private Aggregation API was enabled for 1% of Origin Trial traffic, while Protected Audience ran at 6%. So the expected #first report CTH / imp ratio is ~1/6 (≈16.7%).
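As a quick sanity check, the expected and observed ratios for Chrome 113 can be compared directly from the table above:

```javascript
// Numbers for Chrome/113.0.0.0 taken from the table above.
const impressions = 9385457;
const firstReports = 1598632;

// PAA was enabled for 1% of Origin Trial traffic while Protected Audience
// ran at 6%, so roughly 1/6 of impressions should produce a first report.
const expectedRatio = 1 / 6;                    // ~16.67%
const observedRatio = firstReports / impressions;

console.log((observedRatio * 100).toFixed(2) + '%'); // prints "17.03%"
```

The observed 17.03% is within half a percentage point of the expected 1/6, consistent with the 1%-OT / 6%-PA split.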
Reported through contributeToHistogramOnEvent('reserved.win', …)

| chrome version | #impressions | #first report from event reserved.win | #second report from event reserved.win |
|---|---|---|---|
| Chrome/113.0.0.0 | 9385457 | 19 | 19 |
| Chrome/114.0.0.0 | 39131 | 310 | 310 |
| Chrome/115.0.0.0 | 9972 | 9757 | 10034 |
The extension to the Private Aggregation API was only available from version 115, which is why the number of reports and impressions is relatively small and susceptible to noise.
For the reserved.loss reports, the values were passed in separate buckets using signalValue with bid-reject-reason. The received values were consistent with the forDebuggingOnly reports from the bidding function.
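When the reserved.loss contribution places the rejection reason into the bucket (base bucket plus the bid-reject-reason code), the aggregated buckets can be mapped back to reason names. The base offset below is illustrative, and the reason-code enumeration is an assumption taken from the Protected Audience spec; double-check it against the current spec before relying on it.

```javascript
// Illustrative base bucket offset (assumption, not from the tests).
const BASE = 500n;

// bid-reject-reason codes as enumerated in the Protected Audience spec
// (assumption; verify against the current spec).
const REASONS = [
  'not-available', 'invalid-bid', 'bid-below-auction-floor',
  'pending-approval-by-exchange', 'disapproved-by-exchange',
  'blocked-by-publisher', 'language-exclusions', 'category-exclusions',
];

// Map an aggregated report bucket back to a rejection reason name.
function decodeLossBucket(bucket) {
  const code = Number(bucket - BASE);
  return REASONS[code] ?? 'unknown';
}

console.log(decodeLossBucket(502n)); // prints "bid-below-auction-floor"
```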
Comparing debug reports to normal reports.
Normal reports are understood as "non-debug" reports retrieved from the /.well-known/private-aggregation/report-protected-audience endpoint.
All reports
| reports type | auctions | auction_perc |
|---|---|---|
| ONLY normal received | 33039 | 0.69% |
| both debug and normal received | 4507755 | 94.62% |
| ONLY debug received | 223149 | 4.68% |
Reports sent for 1% of bids through contributeToHistogram
| reports type | auctions | auction_perc |
|---|---|---|
| ONLY normal received | 19187 | 1.17% |
| both debug and normal received | 1546760 | 94.64% |
| ONLY debug received | 68345 | 4.18% |
Reports sent for all impressions through contributeToHistogram
| reports type | auctions | auction_perc |
|---|---|---|
| ONLY normal received | 13655 | 0.44% |
| both debug and normal received | 2937018 | 94.60% |
| ONLY debug received | 153892 | 4.96% |
Summary
During our testing of the Private Aggregation API in Fledge auctions, we hit the browser-side limit of 1000 pending reports. That's why we decided to report only 1% of bids.
One notable observation was the difference between debug reports and normal reports, with a gap as high as 5%.
Currently, there is a waiting period of up to 12 hours to receive 95% of the reports. This delay in report delivery can impact the timeliness of data analysis and machine learning processes.
Q: Could you consider reducing the delays in report transmission, to decrease the waiting time and enable more real-time access to crucial insights?
Furthermore, we would like to utilise the Private Aggregation API for various purposes such as machine learning, reporting, and system monitoring. Each of these use cases has distinct characteristics, with some requiring data to be delivered as quickly as possible (e.g., monitoring), while others prioritise data accuracy and precision. However, it is important to note that reports originating from a specific hour (exact schedule date truncated to the hour) can only be processed once by the Attribution Service (AS).
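Because the Aggregation Service processes each batch only once, batching has to be keyed by the report's schedule date truncated to the hour. The sketch below illustrates that grouping; the `scheduled_report_time` field name follows the aggregatable report format (here assumed to be a millisecond timestamp), and the surrounding batching logic is our own assumption.

```javascript
// Truncate a schedule timestamp (ms since epoch) to the hour, in UTC.
function hourKey(scheduledReportTimeMs) {
  const d = new Date(scheduledReportTimeMs);
  d.setUTCMinutes(0, 0, 0); // zero out minutes, seconds, milliseconds
  return d.toISOString();
}

// Group reports into one batch per schedule hour, since each hour's
// batch can only be processed once by the Aggregation Service.
function batchByHour(reports) {
  const batches = new Map();
  for (const r of reports) {
    const key = hourKey(r.scheduled_report_time);
    if (!batches.has(key)) batches.set(key, []);
    batches.get(key).push(r);
  }
  return batches;
}

const batches = batchByHour([
  { scheduled_report_time: Date.UTC(2023, 4, 19, 10, 5) },
  { scheduled_report_time: Date.UTC(2023, 4, 19, 10, 55) },
  { scheduled_report_time: Date.UTC(2023, 4, 19, 11, 0) },
]);
console.log(batches.size); // prints 2 (two distinct schedule hours)
```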
Expanding the scale of our tests is necessary to achieve credible results and effectively evaluate the Extended Private Aggregation Reporting in FLEDGE. By conducting tests on a larger scale, we can gather more comprehensive data, gain more accurate insights.
Q: Would it be possible to extend support for Extended Private Aggregation Reporting in FLEDGE?
Q: We would like to plan the next steps regarding the replacement of forDebuggingOnly with Extended Private Aggregation Reporting in FLEDGE. Do we know when forDebuggingOnly will no longer be available? (See forDebuggingOnly availability, WICG/turtledove#632.)
Appreciate you posting this feedback. I'm going to close this as I believe we have addressed some of these through the private aggregation stable ramp up and the proposal to add labels, which you are already engaged on. Please feel free to split out any remaining feedback into separate issues.