
Commit 3c07449

Merge branch 'main' into dependabot/maven/dockerfile-image-update/org.sonatype.plugins-nexus-staging-maven-plugin-1.6.13

jeetchoudhary authored Jun 15, 2023
2 parents e498fa0 + 054443d
Showing 10 changed files with 261 additions and 36 deletions.
2 changes: 1 addition & 1 deletion .ci.prepare-ssh-gpg.sh
@@ -6,7 +6,7 @@ openssl aes-256-cbc -K "${encrypted_96e73e3cb232_key}" -iv "${encrypted_96e73e3c
mkdir -p "${HOME}/.ssh"
mv -f id_rsa_dockerfile_image_update "${HOME}/.ssh/id_rsa"
chmod 600 "${HOME}/.ssh/id_rsa"
echo "github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> "${HOME}/.ssh/known_hosts"
echo "github.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk=" >> "${HOME}/.ssh/known_hosts"

# Import code signing keys
openssl aes-256-cbc -K "${encrypted_00fae8efff8c_key}" -iv "${encrypted_00fae8efff8c_iv}" -in codesigning.asc.enc -out codesigning.asc -d
22 changes: 11 additions & 11 deletions README.md
@@ -141,9 +141,9 @@ named arguments:
-s {true,false}, --skipprcreation {true,false}
Only update image tag store. Skip creating PRs
-x X comment snippet mentioned in line just before FROM instruction for ignoring a child image. Defaults to 'no-dfiu'
-r, --rate-limit-pr-creations
-r, --rate_limit_pr_creations
Enable rateLimiting for throttling the number of PRs DFIU will cut over a period of time.
The argument value should be in format "<positive_integer>-<ISO-8601_formatted_time>". For example "--rate-limit-pr-creations 60-PT1H" to create 60 PRs per hour.
The argument value should be in format "<positive_integer>-<ISO-8601_formatted_time>". For example "--rate_limit_pr_creations 60-PT1H" to create 60 PRs per hour.
Default is not set, this means no ratelimiting is imposed.
subcommands:
@@ -220,11 +220,11 @@ FROM imagename:imagetag # no-dfiu
### PR throttling

In case you want to throttle the number of PRs cut by DFIU over a period of time,
set --rate-limit-pr-creations with appropriate value.
set --rate_limit_pr_creations with appropriate value.

##### Default case:

By default, this feature is disabled. This will be enabled when argument ``--rate-limit-pr-creations`` will be passed
By default, this feature is disabled. This will be enabled when argument ``--rate_limit_pr_creations`` will be passed
with appropriate value.

```
@@ -234,9 +234,9 @@ example: dockerfile-image-update all image-tag-store-repo-falcon //throttling wi
##### Configuring the rate limit:

Below are some examples that will throttle the number of PRs cut based on values passed to the
argument ``--rate-limit-pr-creations``
argument ``--rate_limit_pr_creations``
The argument value should be in format ``<positive_integer>-<ISO-8601_formatted_time>``.
For example ``--rate-limit-pr-creations 60-PT1H`` would mean the tool will cut 60 PRs every hour and the rate of adding
For example ``--rate_limit_pr_creations 60-PT1H`` would mean the tool will cut 60 PRs every hour and the rate of adding
a new PR will be (PT1H/60) i.e. one minute.
This will distribute the load uniformly and avoid sudden spikes, The process will go in waiting state until next PR
could be sent.
@@ -245,11 +245,11 @@ Below are some more examples:

```
Usage:
dockerfile-image-update --rate-limit-pr-creations 60-PT1H all image-tag-store-repo-falcon //DFIU can send up to 60 PRs per hour.
dockerfile-image-update --rate-limit-pr-creations 500-PT1H all image-tag-store-repo-falcon //DFIU can send up to 500 PRs per hour.
dockerfile-image-update --rate-limit-pr-creations 86400-PT24H all image-tag-store-repo-falcon //DFIU can send up to 1 PRs per second.
dockerfile-image-update --rate-limit-pr-creations 1-PT1S all image-tag-store-repo-falcon //Same as above. DFIU can send up to 1 PRs per second.
dockerfile-image-update --rate-limit-pr-creations 5000 all image-tag-store-repo-falcon //rate limiting will be disabled because argument is not in correct format.
dockerfile-image-update --rate_limit_pr_creations 60-PT1H all image-tag-store-repo-falcon //DFIU can send up to 60 PRs per hour.
dockerfile-image-update --rate_limit_pr_creations 500-PT1H all image-tag-store-repo-falcon //DFIU can send up to 500 PRs per hour.
dockerfile-image-update --rate_limit_pr_creations 86400-PT24H all image-tag-store-repo-falcon //DFIU can send up to 1 PRs per second.
dockerfile-image-update --rate_limit_pr_creations 1-PT1S all image-tag-store-repo-falcon //Same as above. DFIU can send up to 1 PRs per second.
dockerfile-image-update --rate_limit_pr_creations 5000 all image-tag-store-repo-falcon //rate limiting will be disabled because argument is not in correct format.
```
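
As an illustration of the argument format described above (a sketch only, not code from this commit; class and variable names are hypothetical), a value such as `60-PT1H` splits into a PR budget and an ISO-8601 window, and the per-PR interval falls out as the window divided by the budget:

```java
import java.time.Duration;

public class RateLimitValueSketch {
    public static void main(String[] args) {
        // Hypothetical value for --rate_limit_pr_creations: <positive_integer>-<ISO-8601_formatted_time>
        String value = "60-PT1H";
        int separator = value.indexOf('-');
        long maxPrs = Long.parseLong(value.substring(0, separator));      // 60 PRs ...
        Duration window = Duration.parse(value.substring(separator + 1)); // ... per one hour (PT1H)

        // Rate of adding a new PR: window / budget = PT1H / 60 = one minute
        Duration perPrInterval = window.dividedBy(maxPrs);
        System.out.println(maxPrs + " PRs per " + window + " -> one PR every " + perPrInterval);
    }
}
```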

## Developer Guide
2 changes: 1 addition & 1 deletion dockerfile-image-update/pom.xml
@@ -120,7 +120,7 @@
<dependency>
<groupId>org.kohsuke</groupId>
<artifactId>github-api</artifactId>
<version>1.308</version>
<version>1.315</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
@@ -93,7 +93,6 @@ static ArgumentParser getArgumentParser() {
"(default: Dockerfile,docker-compose)");
parser.addArgument("-r", "--" + RATE_LIMIT_PR_CREATION)
.type(String.class)
.setDefault("")
.required(false)
.help("Use RateLimiting when sending PRs. RateLimiting is enabled only if this value is set it's disabled by default.");
return parser;
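
Dropping `.setDefault("")` means an absent flag now parses to null rather than an empty string. A small, hypothetical argparse4j sketch (not the project's actual wiring) showing how a caller can tell the two cases apart:

```java
import net.sourceforge.argparse4j.ArgumentParsers;
import net.sourceforge.argparse4j.inf.ArgumentParser;
import net.sourceforge.argparse4j.inf.ArgumentParserException;
import net.sourceforge.argparse4j.inf.Namespace;

public class RateLimitFlagSketch {
    public static void main(String[] args) throws ArgumentParserException {
        ArgumentParser parser = ArgumentParsers.newFor("dfiu-sketch").build();
        parser.addArgument("-r", "--rate_limit_pr_creations")
                .type(String.class)
                .required(false);

        Namespace ns = parser.parseArgs(args);
        String rateLimit = ns.get("rate_limit_pr_creations");
        if (rateLimit == null) {
            // Flag not supplied: with no default set, the parsed value is null and rate limiting stays disabled.
            System.out.println("rate limiting disabled");
        } else {
            System.out.println("rate limiting configured: " + rateLimit);
        }
    }
}
```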
@@ -2,6 +2,7 @@

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import com.amazonaws.services.s3.model.S3ObjectSummary;
@@ -26,6 +27,7 @@ public class S3BackedImageTagStore implements ImageTagStore {
private final AmazonS3 s3;
private final String store;


public S3BackedImageTagStore(AmazonS3 s3, @NonNull String store) {
this.s3 = s3;
this.store = store;
@@ -58,8 +60,7 @@ public void updateStore(String img, String tag) throws IOException {
public List<ImageTagStoreContent> getStoreContent(DockerfileGitHubUtil dockerfileGitHubUtil, String storeName) throws InterruptedException {
List<ImageTagStoreContent> imageNamesWithTag;
Map<String, Date> imageNameWithAccessTime = new HashMap<>();
ListObjectsV2Result result = getS3Objects();
List<S3ObjectSummary> objects = result.getObjectSummaries();
List<S3ObjectSummary> objects = getS3Objects();
for (S3ObjectSummary os : objects) {
Date lastModified = os.getLastModified();
String key = os.getKey();
@@ -108,8 +109,21 @@ private String convertS3ObjectKeyToImageString(String key) {
return key.replace(S3_FILE_KEY_PATH_DELIMITER, '/');
}

private ListObjectsV2Result getS3Objects() {
return s3.listObjectsV2(store);
private List<S3ObjectSummary> getS3Objects() {
ListObjectsV2Request request = new ListObjectsV2Request().withBucketName(store);
ListObjectsV2Result listObjectsV2Result;
List<S3ObjectSummary> objectSummaries = null;

do {
listObjectsV2Result = s3.listObjectsV2(request);
if (objectSummaries == null)
objectSummaries = listObjectsV2Result.getObjectSummaries();
else
objectSummaries.addAll(listObjectsV2Result.getObjectSummaries());
request.setContinuationToken(listObjectsV2Result.getNextContinuationToken());
} while(listObjectsV2Result.isTruncated());

return objectSummaries;
}

private S3Object getS3Object(String store, String key) {
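
Context for the `getS3Objects` change above: S3's ListObjectsV2 returns at most 1,000 keys per response, so a single call silently capped large tag stores. The new loop follows the continuation token until the result is no longer truncated. A stand-alone, commented sketch of the same pattern (illustrative only; the bucket name is a placeholder):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

import java.util.ArrayList;
import java.util.List;

public class ListAllObjectsSketch {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        ListObjectsV2Request request = new ListObjectsV2Request().withBucketName("my-image-tag-store"); // placeholder bucket
        List<S3ObjectSummary> all = new ArrayList<>();

        ListObjectsV2Result page;
        do {
            page = s3.listObjectsV2(request);                                // each response holds at most 1,000 keys
            all.addAll(page.getObjectSummaries());
            request.setContinuationToken(page.getNextContinuationToken());   // resume where the last page stopped
        } while (page.isTruncated());                                        // false once the final page has been returned

        System.out.println("total objects: " + all.size());
    }
}
```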
@@ -40,13 +40,13 @@ private Constants() {
public static final String SKIP_PR_CREATION = "skipprcreation";
public static final String IGNORE_IMAGE_STRING = "x";
public static final String FILE_NAMES_TO_SEARCH = "filenamestosearch";
public static final String RATE_LIMIT_PR_CREATION = "rate-limit-pr-creations";
public static final String RATE_LIMIT_PR_CREATION = "rate_limit_pr_creations";
//max number of PRs to be sent (or tokens to be added) per DEFAULT_RATE_LIMIT_DURATION(per hour in this case)
public static final long DEFAULT_RATE_LIMIT = 60;

public static final long DEFAULT_CONSUMING_TOKEN_RATE = 1;
public static final Duration DEFAULT_RATE_LIMIT_DURATION = Duration.ofMinutes(DEFAULT_RATE_LIMIT);
//token adding rate(here:a token added every 2 minutes in the bucket)
//token adding rate(here:a token added every 1 minutes in the bucket)
public static final Duration DEFAULT_TOKEN_ADDING_RATE = Duration.ofMinutes(DEFAULT_CONSUMING_TOKEN_RATE);
public static final String FILENAME_DOCKERFILE = "dockerfile";
public static final String FILENAME_DOCKER_COMPOSE = "docker-compose";
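
The constants above describe a token bucket: a budget of 60 PRs per hour, refilled at one token per minute. A minimal, self-contained sketch of that idea (illustrative only; the project's actual rate limiter may be implemented differently):

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative token bucket matching the constants: capacity 60, one token added every minute (PT1H / 60).
public class TokenBucketSketch {
    private final long capacity = 60;
    private final Duration tokenAddingRate = Duration.ofMinutes(1);
    private double tokens = capacity;          // start full so the first 60 PRs are not delayed
    private Instant lastRefill = Instant.now();

    public synchronized boolean tryConsume() {
        refill();
        if (tokens >= 1) {
            tokens -= 1;                       // one token per PR
            return true;
        }
        return false;                          // caller waits until the next token has been added
    }

    private void refill() {
        Instant now = Instant.now();
        double added = (double) Duration.between(lastRefill, now).toMillis() / tokenAddingRate.toMillis();
        tokens = Math.min(capacity, tokens + added);
        lastRefill = now;
    }

    public static void main(String[] args) {
        TokenBucketSketch bucket = new TokenBucketSketch();
        System.out.println("first PR allowed? " + bucket.tryConsume());   // true: the bucket starts full
    }
}
```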
@@ -280,14 +280,21 @@ public GHBlob tryRetrievingBlob(GHRepository repo, String path, String branch)
public void modifyOnGithub(GHContent content,
String branch, String img, String tag,
String customMessage, String ignoreImageString) throws IOException {
modifyContentOnGithub(content, branch, img, tag, customMessage, ignoreImageString);
}

protected boolean modifyContentOnGithub(GHContent content,
String branch, String img, String tag,
String customMessage, String ignoreImageString) throws IOException {
try (InputStream stream = content.read();
InputStreamReader streamR = new InputStreamReader(stream);
BufferedReader reader = new BufferedReader(streamR)) {
findImagesAndFix(content, branch, img, tag, customMessage, reader, ignoreImageString);
return findImagesAndFix(content, branch, img, tag, customMessage, reader,
ignoreImageString);
}
}

protected void findImagesAndFix(GHContent content, String branch, String img,
protected boolean findImagesAndFix(GHContent content, String branch, String img,
String tag, String customMessage, BufferedReader reader,
String ignoreImageString) throws IOException {
StringBuilder strB = new StringBuilder();
@@ -296,6 +303,7 @@ protected void findImagesAndFix(GHContent content, String branch, String img,
content.update(strB.toString(),
"Fix Docker base image in /" + content.getPath() + "\n\n" + customMessage, branch);
}
return modified;
}

protected boolean rewriteDockerfile(String img, String tag,
@@ -542,9 +550,10 @@ public void changeDockerfiles(Namespace ns,
if (content == null) {
log.info("No Dockerfile found at path: '{}'", pathToDockerfile);
} else {
modifyOnGithub(content, gitForkBranch.getBranchName(), gitForkBranch.getImageName(), gitForkBranch.getImageTag(),
ns.get(Constants.GIT_ADDITIONAL_COMMIT_MESSAGE), ns.get(Constants.IGNORE_IMAGE_STRING));
isContentModified = true;
isContentModified |= modifyContentOnGithub(content, gitForkBranch.getBranchName(),
gitForkBranch.getImageName(), gitForkBranch.getImageTag(),
ns.get(Constants.GIT_ADDITIONAL_COMMIT_MESSAGE),
ns.get(Constants.IGNORE_IMAGE_STRING));
isRepoSkipped = false;
}
}
@@ -567,6 +576,8 @@
forkedRepo,
pullRequestInfo,
rateLimiter);
} else {
log.info("No files changed in repo {}. Skipping PR creation attempt.", parentName);
}
}

@@ -105,6 +105,7 @@ public int createPullReq(GHRepository origRepo, String branch,
try {
GHPullRequest pullRequest = origRepo.createPullRequest(title, forkRepo.getOwnerName() + ":" + branch,
origRepo.getDefaultBranch(), body);
pullRequest.setLabels("dependencies");
// origRepo.createPullRequest("Update base image in Dockerfile", forkRepo.getOwnerName() + ":" + branch,
// origRepo.getDefaultBranch(), "Automatic Dockerfile Image Updater. Please merge.");
log.info("A pull request has been created at {}", pullRequest.getHtmlUrl());
@@ -1,12 +1,15 @@
package com.salesforce.dockerfileimageupdate.storage;


import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.salesforce.dockerfileimageupdate.utils.DockerfileGitHubUtil;
import org.testng.annotations.Test;
import static org.mockito.ArgumentMatchers.any;

import java.io.ByteArrayInputStream;
import java.io.IOException;
@@ -36,12 +39,60 @@ public void testUpdateStoreThrowsExceptionWhenBucketDoesNotExist() throws IOExce
verify(amazonS3, times(0)).putObject("store", "image", "tag");
}

@Test
public void testGetStoreContentReturnsStoreContentWithTruncatedResults() throws InterruptedException {
AmazonS3 amazonS3 = mock(AmazonS3.class);
S3BackedImageTagStore s3BackedImageTagStore = spy(new S3BackedImageTagStore(amazonS3, "store"));
DockerfileGitHubUtil dockerfileGitHubUtil = mock(DockerfileGitHubUtil.class);
ListObjectsV2Result listObjectsV2Result1 = mock(ListObjectsV2Result.class);
ListObjectsV2Result listObjectsV2Result2 = mock(ListObjectsV2Result.class);

S3ObjectSummary s3ObjectSummary = mock(S3ObjectSummary.class);
List<S3ObjectSummary> s3ObjectSummaryList = new ArrayList<>();
s3ObjectSummaryList.add(s3ObjectSummary);

Date date = mock(Date.class);
S3Object s3Object = mock(S3Object.class);
S3Object s3Object2 = mock(S3Object.class);
String tag = "tag";
String tag2 = "tag2";
byte tagBytes[] = tag.getBytes();
byte tagBytes2[] = tag2.getBytes();
S3ObjectInputStream objectContent = new S3ObjectInputStream(new ByteArrayInputStream(tagBytes), null);
S3ObjectInputStream objectContent2 = new S3ObjectInputStream(new ByteArrayInputStream(tagBytes2), null);
s3Object.setObjectContent(objectContent);
s3Object2.setObjectContent(objectContent2);

when(amazonS3.listObjectsV2(any(ListObjectsV2Request.class))).thenReturn(listObjectsV2Result1, listObjectsV2Result2);
when(listObjectsV2Result1.getObjectSummaries()).thenReturn(s3ObjectSummaryList);
when(listObjectsV2Result1.isTruncated()).thenReturn(true);
when(listObjectsV2Result2.getObjectSummaries()).thenReturn(s3ObjectSummaryList);
when(listObjectsV2Result2.isTruncated()).thenReturn(false);
when(s3ObjectSummary.getLastModified()).thenReturn(date , date);
when(s3ObjectSummary.getKey()).thenReturn("domain!namespace!image", "domain!namespace!image2");
when(amazonS3.getObject("store", "domain!namespace!image")).thenReturn(s3Object);
when(amazonS3.getObject("store", "domain!namespace!image2")).thenReturn(s3Object2);
when(s3Object.getObjectContent()).thenReturn(objectContent);
when(s3Object2.getObjectContent()).thenReturn(objectContent2);

List<ImageTagStoreContent> actualResult = s3BackedImageTagStore.getStoreContent(dockerfileGitHubUtil, "store");

verify(amazonS3).getObject("store", "domain!namespace!image");
verify(amazonS3).getObject("store", "domain!namespace!image2");
assertEquals(actualResult.size(), 2);
assertEquals(actualResult.get(0).getImageName(), "domain/namespace/image");
assertEquals(actualResult.get(0).getTag(), "tag");
assertEquals(actualResult.get(1).getImageName(), "domain/namespace/image2");
assertEquals(actualResult.get(1).getTag(), "tag2");
}

@Test
public void testGetStoreContentReturnsStoreContent() throws InterruptedException {
AmazonS3 amazonS3 = mock(AmazonS3.class);
S3BackedImageTagStore s3BackedImageTagStore = spy(new S3BackedImageTagStore(amazonS3, "store"));
DockerfileGitHubUtil dockerfileGitHubUtil = mock(DockerfileGitHubUtil.class);
ListObjectsV2Result listObjectsV2Result = mock(ListObjectsV2Result.class);

S3ObjectSummary s3ObjectSummary = mock(S3ObjectSummary.class);
List<S3ObjectSummary> s3ObjectSummaryListList = Collections.singletonList(s3ObjectSummary);
Date date = mock(Date.class);
@@ -51,8 +102,9 @@ public void testGetStoreContentReturnsStoreContent() throws InterruptedException
S3ObjectInputStream objectContent = new S3ObjectInputStream(new ByteArrayInputStream(tagBytes), null);
s3Object.setObjectContent(objectContent);

when(amazonS3.listObjectsV2("store")).thenReturn(listObjectsV2Result);
when(amazonS3.listObjectsV2(any(ListObjectsV2Request.class))).thenReturn(listObjectsV2Result);
when(listObjectsV2Result.getObjectSummaries()).thenReturn(s3ObjectSummaryListList);
when(listObjectsV2Result.isTruncated()).thenReturn(false);
when(s3ObjectSummary.getLastModified()).thenReturn(date);
when(s3ObjectSummary.getKey()).thenReturn("domain!namespace!image");
when(amazonS3.getObject("store", "domain!namespace!image")).thenReturn(s3Object);
@@ -97,8 +149,9 @@ public void testGetStoreContentReturnsStoreContentSorted() throws InterruptedExc
when(s3ObjectSummaryIterator.next()).thenReturn(s3ObjectSummary, s3ObjectSummary);
when(s3ObjectSummaryIterator.hasNext()).thenReturn(true, true, false);
when(s3ObjectSummaryList.iterator()).thenReturn(s3ObjectSummaryIterator);
when(amazonS3.listObjectsV2("store")).thenReturn(listObjectsV2Result);
when(amazonS3.listObjectsV2(any(ListObjectsV2Request.class))).thenReturn(listObjectsV2Result);
when(listObjectsV2Result.getObjectSummaries()).thenReturn(s3ObjectSummaryList);
when(listObjectsV2Result.isTruncated()).thenReturn(false);
when(s3ObjectSummary.getLastModified()).thenReturn(date1, date2);
when(s3ObjectSummary.getKey()).thenReturn(key1, key2);
when(amazonS3.getObject("store", key1)).thenReturn(s3Object1);
