
[MSTransferor] Create rule for each pileup location #11143

Merged: 3 commits, May 18, 2022

Conversation

amaltaro
Contributor

@amaltaro amaltaro commented May 12, 2022

Fixes #10975

Status

ready

Description

The initial objective with this PR was the following:

  • enforce the campaign pileup configuration, such that each location defined there gets a Rucio rule for the whole container (grouping=ALL).
  • secondary AAA can also trigger secondary input data placement, in case a location defined in the campaign does not have a rule locking the data.
  • for workflows with multiple pileup datasets, use the intersection of their locations defined in the campaign configuration for the primary/parent data placement.
  • as before, the pileup location defines the workflow sitelist (for primary and parent data placement). If secondary AAA is enabled, then primary/parent data can go to other locations defined in the sitelist.
  • special handling for RelVal workflows, which do not define secondaries in the campaign; in that case, simply use the workflow sitelist as the candidate location for the pileup dataset(s).
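The pileup placement policy above can be sketched as follows. This is an illustrative helper with a made-up name and data layout, not the actual MSTransferor code: it emits one grouping=ALL rule spec per campaign location of each pileup container, and the intersection of those locations as candidates for primary/parent placement.

```python
def pileupRuleSpecs(campaignSecondaries, pileupDatasets):
    """Build one rule spec (grouping=ALL) per campaign location of each
    pileup container, plus the intersection of locations to be used as
    candidates for primary/parent data placement."""
    rules = []
    common = None
    for dset in pileupDatasets:
        locations = set(campaignSecondaries.get(dset, []))
        for rse in sorted(locations):
            rules.append({"did": dset, "rse_expression": rse, "grouping": "ALL"})
        # intersect locations across all pileup datasets of the workflow
        common = locations if common is None else common & locations
    return rules, (common or set())
```

For example, with a campaign defining two locations for one pileup and one for another, each location gets its own container-wide rule, while primary/parent placement is restricted to the single common location.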

However, I decided to refactor many things that were in place to deal with the old PhEDEx-based data placement. A summary of those changes is:

  • no longer increment RSE usage within our own cache; instead, rely only on what Rucio provides
  • no longer create chunks of primary/parent input blocks; simply place all the blocks with grouping=DATASET against a logical OR of the final RSEs
  • DQMHarvest workflows keep getting their input blocks locked with grouping=ALL against a logical OR of the final RSEs (we no longer pick a single RSE for that)
  • workflows with OpenRunningTimeout get a rule locking the whole input dataset with grouping=DATASET against a logical OR of the final RSEs
  • workflows with primary and parent datasets get a rule locking all the input blocks together, thus grouping=ALL against a logical OR of the final RSEs
  • completely remove the logic to find the best RSE; let Rucio take care of that.
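The grouping decisions listed above can be condensed into a small sketch. These are hypothetical helpers (the real code takes these inputs from the workflow object and msConfig), shown only to illustrate the mapping:

```python
def decideGrouping(reqType, openRunningTimeout, threshold, hasParent):
    """Map the refactored placement rules to a Rucio grouping value."""
    if reqType == "DQMHarvest":
        return "ALL"      # input blocks locked together
    if openRunningTimeout > threshold:
        return "DATASET"  # one rule locking the whole input dataset
    if hasParent:
        return "ALL"      # primary and parent blocks kept together
    return "DATASET"      # default: blocks spread across the OR'ed RSEs

def rseOrExpression(rses):
    # Rucio accepts a logical OR of RSE names, e.g. "T1_US_FNAL_Disk|T2_CH_CERN"
    return "|".join(sorted(rses))
```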

Is it backward compatible (if not, which system it affects?)

YES

Related PRs

Given that it modifies a recent feature introduced with #11141, it needs to be properly tested again.

External dependencies / deployment changes

Service configuration disabling verbose logs:
https://gitlab.cern.ch/cmsweb-k8s/services_config/-/merge_requests/147
and
https://gitlab.cern.ch/cmsweb-k8s/services_config/-/merge_requests/148

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 7 tests added
    • 2 changes in unstable tests
  • Python3 Pylint check: succeeded
    • 5 warnings
    • 61 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 18 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13205/artifact/artifacts/PullRequestReport.html

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 7 tests added
    • 1 changes in unstable tests
  • Python3 Pylint check: failed
    • 3 warnings and errors that must be fixed
    • 5 warnings
    • 66 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 18 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13206/artifact/artifacts/PullRequestReport.html

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 7 tests added
    • 2 changes in unstable tests
  • Python3 Pylint check: failed
    • 3 warnings and errors that must be fixed
    • 5 warnings
    • 66 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 18 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13207/artifact/artifacts/PullRequestReport.html

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 7 tests added
  • Python3 Pylint check: failed
    • 3 warnings and errors that must be fixed
    • 5 warnings
    • 66 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 18 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13208/artifact/artifacts/PullRequestReport.html

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 7 tests added
    • 1 changes in unstable tests
  • Python3 Pylint check: failed
    • 3 warnings and errors that must be fixed
    • 5 warnings
    • 66 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 18 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13209/artifact/artifacts/PullRequestReport.html

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 7 tests added
    • 3 changes in unstable tests
  • Python3 Pylint check: failed
    • 3 warnings and errors that must be fixed
    • 5 warnings
    • 66 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 18 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13210/artifact/artifacts/PullRequestReport.html

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 7 tests added
    • 1 changes in unstable tests
  • Python3 Pylint check: failed
    • 3 warnings and errors that must be fixed
    • 5 warnings
    • 66 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 18 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13211/artifact/artifacts/PullRequestReport.html

@amaltaro
Contributor Author

MSTransferor-level tests are looking okay, but I am letting those workflows go through the system to see how they behave until the very end.

I think review can start, though. @haozturk, could you please have a look at this as well? I tried to highlight all the important changes in the initial description, but if you see any wrong/missing use case, please let me know.

@amaltaro amaltaro requested review from todor-ivanov and vkuznet May 13, 2022 02:04
Contributor

@vkuznet vkuznet left a comment

I need to admit that reviewing such a PR requires detailed knowledge of the data management system, which I don't have. Therefore, I can't comment on why so much stuff has been removed and then added to MSTransferor. Moreover, it seems that the entire structure of MSTransferor requires code refactoring based on the provided data types. Right now, the code handles everything via if/else structures, while I would expect the code itself to be very abstract, i.e. here is the data and here are the rules (methods) to call over the data, with concrete implementations for the different data types kept separate. If you have classes for "Neutrino", "DQMHarvesting", "primary", etc. (whatever data types data management needs to handle) and let these classes implement common methods, then the code would be much easier to maintain. In this design, MSTransferor only asks the input data for its type and then calls the appropriate class methods associated with that data type. This keeps the MSTransferor logic abstract and generic, while also allowing different rules/methods for different types of data. That said, I don't have concrete comments on the proposed changes as I, honestly, do not understand the logic of data transfers.

Here is how I envision the decideDataPlacement method should look:

def __init__(self):
    self.transferorObjects = {
        'Neutrino': NeutrinoTransferor(),
        'DQMHarvest': DQMHarvestTransferor(),
        # ...
    }

def decideDataPlacement(self, wflow, dataIn):
    dataType, grouping = self.getDataType(wflow)
    self.transferorObjects[dataType].decideDataPlacement(wflow, dataIn, grouping)

def getDataType(self, wflow):
    if wflow.getReqType() == "DQMHarvest":
        return "DQMHarvest", "ALL"
    if wflow.getOpenRunningTimeout() > self.msConfig["openRunning"]:
        return "OpenRunning", "DATASET"

And then, within this MS area (WMCore/MicroServices/MSTransferor), you'll add DQMHarvestTransferor.py, NeutrinoTransferor.py, etc. modules which will keep the implementation for the concrete use cases. This will make the code more maintainable and extensible, since when a new use case pops up you'll only need to add a new implementation. And if the logic for a specific use case requires changes, you'll only change that class' logic, while the MSTransferor logic stays very generic.

@amaltaro
Contributor Author

Valentin, thanks for these comments! I will have to think about whether this implementation is feasible and how to do it.

Regarding the large code removal, I mentioned this in the initial PR review. It was possible because much of the MSTransferor logic was still based on how we used to work with PhEDEx, while now we can delegate much of the data management logic to the actual data management system :)

Contributor

@todor-ivanov todor-ivanov left a comment

Thanks @amaltaro for those huge changes. In general, the implementation looks good. Besides the few minor comments I have left inline, I have the impression that:

  • With those changes, a big portion of the previously automated logic is now completely dropped, and the data placement relies completely on the campaign-level configuration
  • The input data placement becomes much more static than it was before
  • We loosen the constraints on the service side on what goes where much more than before

There are a lot of changes going in with the current PR which are not directly related to reorganizing the secondary location logic, but rather to code refactoring. I know the best moment to work on something like that is when you are actually reiterating through the code, but I think it could be good to have a separate issue mentioning that refactoring, to be resolved with the current PR as well. But if you think that would be too much effort, just skip this request of mine.

We would also benefit from a well-described policy of the current service behavior, so that we can update the relevant section in the documentation here. I believe we need to review that together with the P&R Team.

@@ -357,7 +357,7 @@ def getChunkBlocks(self, numChunks=1):
thisChunk.update(list(self.getParentBlocks()))
thisChunkSize += sum([blockInfo['blockSize'] for blockInfo in viewvalues(self.getParentBlocks())])
# keep same data structure as multiple chunks, so list of lists
return [thisChunk], [thisChunkSize]
return thisChunk, thisChunkSize
Contributor

If the types to be returned from here are supposed to be a list and an integer, as stated in the docstring above, this line sounds like it should be:

            return list(thisChunk), thisChunkSize

because thisChunk is defined as a set, I think. If what is returned here is the truth, then please correct the docstring.
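A tiny illustration of the difference being pointed out here (block names are made up):

```python
# thisChunk is built as a set of block names; the docstring promises a list
thisChunk = {"/A/B/RAW#1", "/A/B/RAW#2"}
wrapped = [thisChunk]        # a list containing one set (old multi-chunk structure)
flattened = list(thisChunk)  # a list of block names (what the docstring describes)
assert isinstance(wrapped[0], set)
assert sorted(flattened) == ["/A/B/RAW#1", "/A/B/RAW#2"]
```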

:return: a string with the final pileup destination PNN
"""
# FIXME: workflows should be marked as failed if there is no common
# site between SiteWhitelist and secondary location
Contributor

This, I believe, still holds, regardless of where the PU location comes from, whether from the campaign-level configuration or the workflow description.

Contributor Author

They are not marked as failed, but there is a Prometheus alert. This way people can react and get them through the system.

campSecLocations = campConfig['Secondaries'].get(dsetName, [])
campBasedLocation.append(set(campSecLocations))

if not campSecLocations:
Contributor

It seems we now rely completely on the campaign-level secondaries configuration. I'd say this may lead to human errors.

Contributor Author

This is how it was before, but now we have N replicas of the pileup, where N is the number of locations in the campaign. Before, we had a single replica enforced (but from discussions with Hasan, they were manually creating replicas for all the other locations). So, I'd say we are in the same state, just with more automation now :-D


if not campSecLocations:
msg = "Workflow has been incorrectly assigned: %s. The secondary dataset: %s, "
msg += "belongs to the campaign: %s, with does not define the secondary "
Contributor

typo: with -> which

@@ -352,221 +330,80 @@ def checkDataLocation(self, wflow):
self.logger.info("Request %s has %d final blocks from %s",
wflow.getName(), len(getattr(wflow, methodName)()), methodName)

def _checkPrimaryDataVolume(self, wflow, wflowPnns):
Contributor

I see this volume estimation method is no longer called, but I am not sure it was completely useless. The only issue that was making it somewhat constrained in its work was the single PNN returned. It could have been returning a weighted list of all the PNNs having any piece of the dataset, using a normalized value of the volume as a weight. But anyway... this was a side comment. Such an approach would completely change many other things.

Contributor Author

In short, we provide a list of RSEs to Rucio and let Rucio do the data management job.

# figure out to configure the rucio rule
dids, didsSize, grouping = self.decideDataPlacement(wflow, dataIn)
if not dids and dataIn["type"] == "primary":
# no valid files in any blocks, it will likely fail in global workqueue
Contributor

if you are changing blocks to dids in the code, please do the equivalent in the comment.

listBlockSets, listSetsSize = wflow.getChunkBlocks()
msg = f"Placing {len(listBlockSets)} blocks ({gigaBytes(listSetsSize)} GB), "
msg += f"with grouping: {grouping} for DQMHarvest workflow."
elif wflow.getOpenRunningTimeout() > self.msConfig["openRunning"]:
Contributor

I am still not comfortable with this openRunningTimeout naming. If we know all the places where it is used, we'd better change it to something that reflects its real purpose.

@amaltaro
Contributor Author

@vkuznet Valentin, could you please have a quick look at my 3rd commit? It's still very raw, but I just wanted to make it available to see whether it is more or less what you were suggesting above. If so, then I can proceed with this. With that code in mind, each class would behave slightly differently and also provide the final setup for Rucio rules. There are still a few cases where we have all sorts of data in the same workflow, and in that case it's hard to classify anything!

Thanks for your review, Todor. I apologize again for providing so many changes in something that was supposed to be small, but I thought the small changes would only make things more complex and ugly, which is why I made this substantial refactoring.
Yes, with these changes, things are no longer much in our control and we start depending more on the data management system to manage data (which is ideal, IMO). But we had some clarification on this over Zoom as well.
I will look into and work on your comments as soon as we decide how to proceed with these changes (another reorganization might be coming up).

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 7 tests added
    • 3 changes in unstable tests
  • Python3 Pylint check: failed
    • 7 warnings and errors that must be fixed
    • 14 warnings
    • 98 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 24 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13213/artifact/artifacts/PullRequestReport.html

Contributor

@vkuznet vkuznet left a comment

Yes, I think it is a proper architecture, where each Workflow has a set of common methods and the individual details are delegated to specific classes. I only found one issue with a return and what kind of return it should be.

@amaltaro
Contributor Author

Thanks for the confirmation, I just wanted to make sure I was going in the right direction. I will work on the remaining implementation by Monday or so.

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 7 tests added
    • 3 changes in unstable tests
  • Python3 Pylint check: failed
    • 6 warnings and errors that must be fixed
    • 7 warnings
    • 87 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 20 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13214/artifact/artifacts/PullRequestReport.html

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 13 tests deleted
    • 43 tests added
    • 2 changes in unstable tests
  • Python3 Pylint check: failed
    • 15 warnings and errors that must be fixed
    • 6 warnings
    • 91 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 20 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13218/artifact/artifacts/PullRequestReport.html

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 13 tests deleted
    • 43 tests added
    • 1 changes in unstable tests
  • Python3 Pylint check: failed
    • 12 warnings and errors that must be fixed
    • 6 warnings
    • 88 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 19 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13219/artifact/artifacts/PullRequestReport.html

@amaltaro
Contributor Author

@vkuznet I think I have addressed the service structure concern that you had. It made the MSTransferor code even smaller and hopefully more maintainable.

Todor, Valentin, I still have to run some real tests with this code in, but I think it's ready for another code review. Note that none of the logic has been (intentionally) changed, and all the features/changes mentioned in the PR description are still valid.

@amaltaro amaltaro requested a review from vkuznet May 16, 2022 18:40
@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: failed
    • 13 tests deleted
    • 41 tests added
    • 2 changes in unstable tests
  • Python3 Pylint check: failed
    • 23 warnings and errors that must be fixed
    • 20 warnings
    • 113 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 30 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13226/artifact/artifacts/PullRequestReport.html

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 13 tests deleted
    • 37 tests added
    • 1 changes in unstable tests
  • Python3 Pylint check: failed
    • 23 warnings and errors that must be fixed
    • 20 warnings
    • 113 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 30 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13228/artifact/artifacts/PullRequestReport.html

@amaltaro
Contributor Author

@todor-ivanov @vkuznet I think I addressed most of your comments, other than making the grouping setting configurable. I'd rather leave that one for the future (if it's really needed). The last 2 commits have those recent changes.

Contributor

@todor-ivanov todor-ivanov left a comment

Thanks @amaltaro
It looks good to me.

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 13 tests deleted
    • 37 tests added
    • 2 changes in unstable tests
  • Python3 Pylint check: failed
    • 23 warnings and errors that must be fixed
    • 20 warnings
    • 113 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 30 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13232/artifact/artifacts/PullRequestReport.html

@amaltaro
Contributor Author

I took the opportunity to change a few logging levels here and there, and finally disabled the verbose mode for MSTransferor in the configuration file. The initial PR description has been updated as well.

@amaltaro amaltaro requested a review from vkuznet May 18, 2022 14:41
fix data returned from getChunkBlocks method

do not make it a list of a set

special case for secondaries in RelVals

another bugfix for the secondary relval case

remove confusing log about location used
@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: failed
    • 18 new failures
    • 13 tests deleted
    • 30 tests added
    • 2 changes in unstable tests
  • Python3 Pylint check: failed
    • 23 warnings and errors that must be fixed
    • 20 warnings
    • 113 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 30 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13234/artifact/artifacts/PullRequestReport.html

@amaltaro
Contributor Author

unit tests are failing while contacting DBS, e.g.:

DBS Server error: [{'error': {'reason': 'dbs error', 'message': '', 'function': 'dbs.parameters.CheckQueryParameters', 'code': 108}, 'http': {'method': 'GET', 'code': 400, 'timestamp': '2022-05-18 16:19:10.348970569 +0000 UTC m=+1411.120187744', 'path': '/dbs/prod/global/DBSReader/datasets?dataset=/MinimumBias/ComissioningHI-v1/RAW&dataset_access_type=*&detail=False', 'user_agent': 'DBSClient/Unknown/', 'x_forwarded_host': 'dbs-prod.cern.ch', 'x_forwarded_for': '137.138.157.32', 'remote_addr': '137.138.63.204:44858'}, 'exception': 400, 'type': 'HTTPError', 'message': 'DBSError Code:108 Description:DBS file load error, e.g. fail to load DB template Function:dbs.parameters.CheckQueryParameters Message: Error: dbs error'}]

@vkuznet I know you are working on a new release; pinging you just so you are aware of this.

@amaltaro
Contributor Author

test this please

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: failed
    • 18 new failures
    • 13 tests deleted
    • 30 tests added
    • 1 changes in unstable tests
  • Python3 Pylint check: failed
    • 23 warnings and errors that must be fixed
    • 20 warnings
    • 113 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 30 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13236/artifact/artifacts/PullRequestReport.html

@vkuznet
Contributor

vkuznet commented May 18, 2022

sorry, I didn't update the configuration file (doing it now). Things should be back to normal now.

@vkuznet
Contributor

vkuznet commented May 18, 2022

test this please

@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 13 tests deleted
    • 30 tests added
    • 1 changes in unstable tests
  • Python3 Pylint check: failed
    • 23 warnings and errors that must be fixed
    • 20 warnings
    • 113 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 30 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13237/artifact/artifacts/PullRequestReport.html

amaltaro added 2 commits May 18, 2022 13:03
complete code reorganization

fix getInputSecData and GrowingWorkflow

remove dual space from RequestInfo

remove useless init method from sub-classes

remove duplicate Workflow instantiation in RequestInfo

apply Todor and Valentin suggestions

change a few log level line from debug to info

remove unused import
more unit tests

unit tests with the Workflow class relocation

bunch of new unit tests for the new templates

update unit tests with the getChunkBlocks removal

remove getChunkBlocks from unit tests as well
@cmsdmwmbot

Jenkins results:

  • Python3 Unit tests: succeeded
    • 13 tests deleted
    • 30 tests added
    • 2 changes in unstable tests
  • Python3 Pylint check: failed
    • 22 warnings and errors that must be fixed
    • 20 warnings
    • 113 comments to review
  • Pylint py3k check: succeeded
  • Pycodestyle check: succeeded
    • 30 comments to review

Details at https://cmssdt.cern.ch/dmwm-jenkins/view/All/job/DMWM-WMCore-PR-test/13238/artifact/artifacts/PullRequestReport.html

@amaltaro
Contributor Author

Thank you very much for the review and great suggestions, Valentin and Todor.


Successfully merging this pull request may close these issues.

Pileup input data placement: Create input rules for every secondary location defined in the campaign
4 participants