Get simple PipelineRun implementation working #128
Conversation
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: bobcatfish The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
Lots of Code and Lots of Test!!! Yey!!!
@@ -91,6 +95,11 @@ func NewController(
		UpdateFunc: controller.PassNew(impl.Enqueue),
		DeleteFunc: impl.Enqueue,
	})

	r.tracker = tracker.New(impl.EnqueueKey, 30*time.Minute)
30 seconds?
	if err != nil {
-		return fmt.Errorf("error getting next TaskRun to create for PipelineRun %s: %s", pr.Name, err)
+		return fmt.Errorf("error getting Tasks for Pipeline %s, Pipeline may be invalid!: %s", p.Name, err)
s/error getting Tasks for Pipeline/error getting Tasks or TaskRuns for Pipeline/
good call!
type GetTask func(namespace, name string) (*v1alpha1.Task, error)

// GetTaskRun is a function that will retrieve the TaskRun name from namespace.
type GetTaskRun func(namespace, name string) (*v1alpha1.TaskRun, error)
Not your fault, but this is not rendered properly :) i wonder if its a formatting issue
yeah im not sure why the syntax highlighting is having a problem here 🤔
last time I saw this happen it was b/c @jonjohnsonjr decided to replace an e with an ℯ or something like that
I think the syntax highlighter that GitHub uses has a hard time with func if it isn't followed by { ... }.
ah that could be it, thanks @jonjohnsonjr :D
and seriously that 𝛾 was pretty sweet, gonna be talking about that one for years to come
well that was distracting
	pt := p.Spec.Tasks[i]
	t, err := getTask(p.Namespace, pt.TaskRef.Name)
	if err != nil {
		return nil, fmt.Errorf("failed to get tasks for Pipeline %q: Error getting task %q : %s",
Now, this is a case where we need to check whether the Task exists.
If it does not exist, then we should quit reconciling this pipeline, since it's invalid and nothing can be done.
For that to happen, we should return nil from pipelinerun.Reconciler.reconcile.
However, if there was some other error while fetching the Task, then we keep trying.
Does that make sense?
I would say we can create an InvalidPipelineError and return that from here when a Task is not found.
In the reconcile method, we should check whether err is something other than InvalidPipelineError, and return nil if it's an InvalidPipelineError.
yep that makes sense! okay ill update this so that if we can't find a task, we return nil from the reconcile loop and stop reconciling.
thanks for catching this and the detailed explanation!
test/pipelinerun_test.go
	logger.Infof("Making sure the expected TaskRuns were created")
	expectedTaskRuns := []string{
		hwPipelineName + hwPipelineTaskName1,
		hwPipelineName + hwPipelineTaskName2,
whoops somehow forgot to include a fix for this
Ready for another look @tejal29 !
@@ -91,6 +95,11 @@ func NewController(
		UpdateFunc: controller.PassNew(impl.Enqueue),
		DeleteFunc: impl.Enqueue,
	})

	r.tracker = tracker.New(impl.EnqueueKey, 30*time.Second)
I added a better way to get the tracker lease as a function of the resync period in #120 which I can update here once this is merged, no point in both of us making the same change
nice!! thanks @pivotal-nader-ziada :D if you merge first ill pickup your change, ill leave this at 30 min for now then
oh dear
oh i guess that volume error is about deletion, maybe a red herring 🤔
yyyyyyyyyyyyyyyyyyyyyyyy
/lgtm
oh man i just keep making these tests better and better XD
It turns out that we need to look at TaskRuns for a few reasons, including 1) figuring out what to run next and 2) determining the status of the PipelineRun, so I've refactored the logic that grabs these to collect a bunch of related state that can be reused. When the graph becomes more sophisticated, we will need to make this structure more than just a list.
Added logic to check statuses of other TaskRuns when deciding if a new one should be started for #61
PipelineRun status will be based on the condition of the TaskRuns which it has created, for #61. If any TaskRuns have failed, the PipelineRun has failed. If all are successful, it is successful. If any are in progress, it is in progress. This is assuming a linear Pipeline, we will have to tweak this a bit when we implement the graph (for #65)
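The linear-pipeline status rule above can be sketched in a few lines — the type and function names here are invented for illustration, not the actual tektoncd/pipeline API:

```go
package main

import "fmt"

type runStatus int

const (
	running runStatus = iota
	succeeded
	failed
)

// pipelineRunStatus folds the statuses of the created TaskRuns into one
// PipelineRun status, assuming a linear Pipeline: any failure fails the
// run, all successes succeed it, anything else means still in progress.
func pipelineRunStatus(taskRuns []runStatus) runStatus {
	allDone := true
	for _, s := range taskRuns {
		if s == failed {
			return failed // one failed TaskRun fails the whole run
		}
		if s != succeeded {
			allDone = false
		}
	}
	if allDone {
		return succeeded
	}
	return running
}

func main() {
	fmt.Println(pipelineRunStatus([]runStatus{succeeded, running}) == running)
	fmt.Println(pipelineRunStatus([]runStatus{succeeded, succeeded}) == succeeded)
	fmt.Println(pipelineRunStatus([]runStatus{failed, running}) == failed)
}
```

Once the Pipeline becomes a graph, this fold would need to consult dependencies rather than a flat list.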
Added the Task reference to the TaskRun so that when a PipelineRun creates a TaskRun, it actually executes! (For #61) While running the integration test, noticed that the PipelineRuns weren't getting reconciled quickly enough, but adding a tracker which will invoke reconcile when the created TaskRuns are updated fixed this - however it did still take > 1 minute to create 3 helloworld TaskRuns and wait for them to complete, so since 3 was arbitrary, reduced to 2. Also cleaned up the TaskRun controller a bit: using the Logger object on the controller/reconciler itself, made the log messages a bit more descriptive.
If a PipelineRun references a Pipeline that uses Tasks which don't exist, we should immediately stop trying to Reconcile it. To fix this, the user/trigger should create a new PipelineRun after creating the Tasks needed.
/lgtm
/lgtm
Something has gone wrong with one of the integration tests on my PR and I don't know what, so I'm trying to add more info. Added Builds to the dumped CRDs, and also moved the step that deploys the examples to after the integration tests, b/c it produces a lot of errors in the logs (hahaha...) and makes it harder to debug integration test failures.
I think it's reasonable for only one of our eventually many integration tests to verify the build output, especially when it involves adding a volume mount to the pile of things that could go wrong in the test. Refactored the test a bit, so we don't assert inside the test, and we output some logs before polling. Removed dumping of CRDs in the test script b/c each test runs in its own namespace and cleans up after itself, so there is never anything to dump (see tektoncd#145). Updated condition checking so that if the Run fails, we bail immediately instead of continuing to hope it will succeed.
😩
/lgtm WHAT ITS WORTH A TRY
@bobcatfish: you cannot LGTM your own PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/lgtm
Sorry this is a big one, with even more refactoring 😅 The net effect is that when we create a PipelineRun, it actually creates and executes the required TaskRuns! 🎉
I tried to break it up into separate commits, so might be easier to review that way:
Combine logic to grab Tasks and their Runs for a PipelineRun
It turns out that we need to look at TaskRuns for a few reasons, including 1) figuring out what to run next and 2) determining the status of the PipelineRun, so I've refactored the logic that grabs these to collect a bunch of related state that can be reused. When the graph becomes more sophisticated, we will need to make this structure more than just a list.
Check status of TaskRuns when finding TaskRun to start
Added logic to check statuses of other TaskRuns when deciding if a new one should be started for Implement simple PipelineRun #61
Add condition status to PipelineRun
PipelineRun status will be based on the condition of the TaskRuns which it has created, for Implement simple PipelineRun #61. If any TaskRuns have failed, the PipelineRun has failed. If all are successful, it is successful. If any are in progress, it is in progress. This is assuming a linear Pipeline; we will have to tweak this a bit when we implement the graph (for Pipeline uses passedConstraints to provide correct inputs and outputs #65).
Create TaskRun from PipelineRun that runs a Task
Added the Task reference to the TaskRun so that when a PipelineRun creates a TaskRun, it actually executes! (For Implement simple PipelineRun #61)
While running the integration test, noticed that the PipelineRuns weren't getting reconciled quickly enough, but adding a tracker which will invoke reconcile when the created TaskRuns are updated fixed this - however it did still take > 1 minute to create 3 helloworld TaskRuns and wait for them to complete, so since 3 was arbitrary, reduced to 2. Also cleaned up the TaskRun controller a bit: using the Logger object on the controller/reconciler itself, made the log messages a bit more descriptive.
Fixes #61