Separated Packaging and Deployment for CI/CD #3344
Conversation
Nice! Really need to test this thoroughly and look into ways this might affect other provider plugins such as Azure, OpenWhisk or Google...

Will the "deploy function" command also use the --package option if set? I do not know whether single-function deployment is CI/CD relevant at all, so the support there can probably be skipped entirely, but it should be mentioned in the docs.

@HyperBrain excellent question!!! And I ended up with exactly the same conclusion. I think

@eahefnawy if you'd like some help testing I would be very happy to try out this PR. If so, let me know when it's ready to test.
lib/plugins/aws/deploy/index.js (Outdated)

    'deploy:deploy': () => BbPromise.bind(this)
      .then(this.mergeCustomProviderResources)
      .then(this.createStack)
This could be combined with the entry point PR to have the deploy split up into more fine-grained, hookable lifecycle events.
module.exports = {
  uploadCloudFormationFile() {
    this.serverless.cli.log('Uploading CloudFormation file to S3...');

    const body = JSON.stringify(this.serverless.service.provider.compiledCloudFormationTemplate);
It would be important to keep the compiled template under the provider object. E.g. the alias plugin needs that information to create the alias stack information and modifies the compiled template. So it accesses this.serverless.service.provider.compiledCloudFormationTemplate to extract its information and modify the template in-memory. If the template is made local to a component, it will break all plugins that modify the template before it gets further processed by the AWSDeploy plugin.
The package command (including awsPackage) should spawn some inner lifecycles:

    -> package
      -> package:prepare
        -> awsPackage:prepare
      -> package:compile
        -> package:compileFunctions
        -> package:compileTemplate
          -> awsPackage:compileTemplate
            -> awsPackage:mergeIamTemplates
            -> awsPackage:mergeUserResources
      -> package:persistArtifacts

And keep the compiled template under provider until package:persistArtifacts. Then plugins can hook after or before package:compileTemplate. The awsPackage entrypoint would hook the package lifecycle events.
BTW: The lifecycle list above is just a quickly written down example - just to give an idea of what I mean.
I could create a further PR based on your PR/branch that would extend and arrange the lifecycles in a usable way utilizing the entrypoint PR's spawn function. Just let me know.
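As a sketch of what hooking such an inner lifecycle event could look like, here is a minimal plugin. The event name 'after:package:compileTemplate' and the plugin class are illustrative assumptions based on the example list above, not the final framework API:

```javascript
'use strict';

// Minimal sketch of a plugin hooking a proposed fine-grained package
// lifecycle event. The event name is taken from the example list above
// and is illustrative, not final.
class TemplateTweakPlugin {
  constructor(serverless, options) {
    this.serverless = serverless;
    this.options = options;

    this.hooks = {
      // Runs after template compilation, while the template is still
      // in memory under the provider object.
      'after:package:compileTemplate': () => this.tweakTemplate(),
    };
  }

  tweakTemplate() {
    const template =
      this.serverless.service.provider.compiledCloudFormationTemplate;
    // Example in-memory modification before the template is persisted.
    template.Description = 'Tweaked by TemplateTweakPlugin';
    return template;
  }
}

module.exports = TemplateTweakPlugin;
```

Because the template stays under this.serverless.service.provider until persistence, such a plugin can modify it in memory without touching the filesystem.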
Just gave it a first spin. Really like the separation of package and deploy, and the way it works together when I only run serverless deploy 👍 Super convenient!

It seems like the .serverless dir is removed when I run serverless package --package foo, which surprised me. As a user I would assume that the package command is non-destructive.

Furthermore I did a global search for options.noDeploy and noDeploy. Some old references are still around and should be removed before merging.

I'll review / test it more in depth this week!
@pmuens I will add a PR to the package-plugin branch as soon as the ep PR has been merged, to change/set the lifecycle model in a convenient way. Additionally we would change it a bit to maximize code and lifecycle reuse. Had some discussions already with @eahefnawy on Slack about that.
Can this be explained in a little more detail? For instance, consider the following plugin:

Will that still run correctly? Seeing as how such a plugin is really concerned with packaging, should it be refactored to hook into a packaging event? And if it is hooked to a packaging event, will it still run during a regular deploy?
@stevecaldwell77 The goal is to keep the old events available, so nothing would break for plugin authors. Instead, new sub-lifecycles would be spawned that allow even more fine-grained control for plugin writers in future plugin versions, as the current ones are really coarse. You can have a look at my recent blog post on how to optimize and expose lifecycles in a plugin to get a picture of how this also works for plugins (https://serverless.com/blog/advanced-plugin-development-extending-the-core-lifecycle/). I am pretty sure that the solution here will be non-breaking for existing plugins but will improve further versions a lot. BTW: I think the statement "lifecycle events will break for any plugin author who's hooking into the deploy lifecycle events, because lifecycle events are tied to commands." is outdated information already.
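The deprecation-based redirection described above can be pictured as a lookup from old event names to their replacements. The mapping below is an illustrative sketch; the concrete table in the framework may differ:

```javascript
'use strict';

// Illustrative sketch of redirecting deprecated lifecycle events to the
// new package events, so old plugin hooks keep firing. The concrete
// entries in the framework's deprecation table may differ.
const deprecatedEvents = {
  'deploy:cleanup': 'package:cleanup',
  'deploy:createDeploymentArtifacts': 'package:createDeploymentArtifacts',
  'deploy:compileFunctions': 'package:compileFunctions',
  'deploy:compileEvents': 'package:compileEvents',
};

function resolveEvent(eventName) {
  // Deprecated hooks are transparently mapped; new hooks pass through.
  return deprecatedEvents[eventName] || eventName;
}

module.exports = { resolveEvent };
```

With such a table, a plugin registered on a deprecated event is simply attached to the new event at registration time, which is why existing plugins keep working.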
@HyperBrain - great news, thanks for the detailed response. Nice blog post too.
Force-pushed from 19bdcbd to ac9ee65.
Any possibility of the file path supporting an s3 path?

@eahefnawy @pmuens
@horike37 nice! Thanks for jumping on this one!
Not quite sure how this might impact this PR. Any thoughts @eahefnawy ?

@horike37 not sure about the scope of your implementation, but this PR only changes packaging and deployment and their relevant lifecycle events. Anything that happens after the functions are deployed (invoking/logging...etc) should stay the same way.
@@ -49,6 +49,22 @@ module.exports = {
    return `${this.provider.serverless.service.service}-${this.provider.getStage()}`;
  },

  getServiceArtifactName() {
    return `${this.provider.serverless.service.service}.zip`;
Does this cover the case for Java projects, where the package artifact is a jar file? I think the current functionality allows users to specify the artifact name under the package:artifact section of serverless.yml. I for one need and use this functionality.
Thanks for pointing that out. I made sure package.artifact is not dropped in this change, to keep it backward compatible.
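That backward compatibility can be pictured as a simple fallback in the artifact-name lookup. This is an illustrative sketch with a hypothetical standalone helper, not the framework's actual implementation:

```javascript
'use strict';

// Sketch of artifact name resolution that keeps package.artifact
// backward compatible: a user-specified artifact (e.g. a .jar for a
// Java project) wins over the default `<service>.zip` name.
function getServiceArtifactName(service) {
  if (service.package && service.package.artifact) {
    return service.package.artifact;
  }
  return `${service.service}.zip`;
}

module.exports = { getServiceArtifactName };
```

So a Java project that sets package.artifact to a jar file keeps its custom name, while services without it still get the default zip.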
Force-pushed from ac9ee65 to 27fb0c8.
As noted, my PR's fix may well be too improper and costly to consider. More than anything, it surfaced the underlying issue and the history analysis. This is a good change, thanks for driving it!
@erikerikson thanks for the quick update on the PR. It's a good first step 👍 @HyperBrain already dug deep into it and knows more, since he was heavily involved in this change.
Just opened up an issue so that we can coordinate all the different tests a little bit --> #3467 (added the test cases from this thread as well). Please comment there if you face some issues with this PR so that we can derive a test case from it. Thanks 👍 🙌
@mike-suggitt sorry for the late reply on this. What you can do is provide stage/region params while packaging and deploying as usual. So you'd do the following:

Is that what you're trying to accomplish?
@eahefnawy Hi, thanks for getting back to me. I was ideally looking to package an environment-agnostic zip and then do environment-specific deployments of that single package. I'm hoping I can achieve it by simply ignoring the CloudFormation script created as part of the package step.
@mike-suggitt you can do that by using only the generated zip file under

Keep in mind that you're free to edit the generated CF templates to your liking and to match any env you wanna deploy to. Hope this helps!
@eahefnawy When you say "under "
@mike-suggitt yep you can refer to the artifact like this:

service: my-service
package:
  artifact: path/to/zip/file

or at the function level:

functions:
  hello:
    package:
      artifact: path/to/zip/file

Hope that makes sense! 😊
@ebahsini Thanks, the only issue I can see with that is the fact that if you specify
Hello 👋 . I'm the maintainer of the OpenWhisk provider plugin. @eahefnawy asked me to test out the changes with the plugin. I like the new approach to packaging and deployment, it'll be a great feature. I've read the overview of the changes, had a look through the code and I've been testing it out recently. The current changes break the OpenWhisk provider plugin (details below).
@jthomas As I mentioned in the docs, the hook redirection will happen implicitly - by using the event deprecation mechanism. It could be that some of the new "spawns" are one level too deep within the structure (namely in the aws plugin folder). I could try to move these event triggers up into the general level - imo that should fix any compatibility issues. What do you think? BTW: The changes must be non-breaking for the other provider plugins - that was and still is the intention 😄. I will try to apply the changes in a fresh PR and we'll see if it works.
@jthomas Can you provide me with a small test project setup that I can use for testing?
@HyperBrain just did and left a comment in your new PR
@HyperBrain I'll head over to the new PR and add my comments to see if this works locally. For testing, you can use the following.

If it works, you should see the following output:

At the moment, only the deployment hooks are being run, so we don't get anything from the packaging and compiling stages.
Has the ordering of hooks and/or the writing of CloudFormation scripts changed? My API requires a basepath mapping which I place in serverless.yml, and because there is the known issue around waiting for API deployment I use the plugin

Thanks
The ordering has not changed, as hooks will be redirected automatically. Of course existing plugins have to be checked to see whether they do their hooks correctly - I found some plugins that behave a bit hacky. The changes should be non-breaking for plugins that interact correctly with the framework, and I've tested it with 'serverless-webpack' and 'serverless-aws-alias' to cover plugins that do their work in the early phase, the late phase, and both. Nevertheless #3469 and #3496 are also needed to keep plugins working. Please try again after these have been merged. Side note: I nevertheless suggest adapting plugins after some time to use the new hooks, which provide much better integration regarding the exact place in the package/deploy sequence where the plugin has to run.
@HyperBrain Thanks for getting back to me. Yes, the issue appeared to be that the template creation is now done exclusively within the package plugin and without any hooks to allow in-memory changes to the template (which it was doing before). The object you mentioned is slightly different from the one previously being used, but once I overwrite that value in memory it does indeed work, so thanks for that. I would suggest, at some point in the future, providing a hook to allow in-memory changes of the CloudFormation resources BEFORE they are written to disk. I could maybe create a feature request for this if it would help?
@mike-suggitt The package solution provides the new hooks already. You can hook just before the templates are persisted to disk. See https://gist.github.com/HyperBrain/bba5c9698e92ac693bb461c99d6cfeec. The document might need some updates for the available new aws hooks, but in general it should give you a picture of how to optimize plugins in the future.
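A plugin that adjusts CloudFormation resources in memory before they are written to disk could hook the final package step. The event name 'before:package:finalize' and the BasePathMapping resource here are assumptions made for illustration, following the idea in the linked gist:

```javascript
'use strict';

// Sketch of a plugin that modifies the compiled template in memory just
// before it is persisted to disk. The hook name and the added resource
// are illustrative assumptions, not confirmed framework API.
class BasePathMappingPlugin {
  constructor(serverless, options) {
    this.serverless = serverless;
    this.options = options;

    this.hooks = {
      // Fires after compilation but before the template is written,
      // so in-memory changes end up in the persisted CloudFormation file.
      'before:package:finalize': () => this.addBasePathMapping(),
    };
  }

  addBasePathMapping() {
    const resources =
      this.serverless.service.provider.compiledCloudFormationTemplate.Resources;
    // Hypothetical resource added purely for illustration.
    resources.PathMapping = {
      Type: 'AWS::ApiGateway::BasePathMapping',
      Properties: { BasePath: 'api', Stage: this.options.stage || 'dev' },
    };
  }
}

module.exports = BasePathMappingPlugin;
```

The key point is the timing: because the hook runs before persistence, no post-hoc editing of the files in the package folder is needed.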
- Adds a small amount of logging when an unhandled pip error occurs, so people know where the crash happened in the future
- Changed hooks from before:deploy:createDeploymentArtifacts to before:package:createDeploymentArtifacts due to issue #3344
- link: serverless/serverless#3344
What did you implement:

Completely separated the packaging and deployment of services/functions into two separate commands/plugins for easier use and control inside CI/CD systems.

Closes #2660
Closes #2473

How did you implement it:

Created a new package aws plugin that handles compilation and packaging, and kept the deploy plugin for actually creating/updating and uploading the stack/artifacts. The implementation is backward compatible even after changing the deploy hooks and lifecycle events, so any plugin hooking into deprecated hooks would be redirected to the new hooks.

How can we verify it:
sls deploy => this would package into the default .serverless path and deploy directly afterwards

sls package --package <path to package folder> => this would package the service into the provided path

sls deploy --package <path to package folder> => this would deploy a package from the provided path

Note A: the package path is the path to the folder that contains the zip file along with the core and compiled CF stack templates. So it's not just the path to the artifact zip file, because deployment also includes deploying the stack, so the templates are required.
Note B: you can also set the package path in serverless.yml in the package.path property. This property will be used by the package and deploy commands as output/input paths respectively for packaging and deployment. This way you can customize that default .serverless packaging folder.

Note C: the --noDeploy option is now dropped, since the same functionality is now achieved with the package command.

Todos:
Is this ready for review?: YES
Is it a breaking change?: NO
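The package-path precedence described in Note B (the --package CLI option, then the package.path property from serverless.yml, then the default .serverless folder) can be sketched as follows; the helper name is hypothetical:

```javascript
'use strict';

// Sketch of how the package/deploy commands could resolve the package
// folder: the --package CLI option wins, then the package.path property
// from serverless.yml, then the default .serverless folder.
function resolvePackagePath(options, service) {
  if (options.package) {
    return options.package;
  }
  if (service.package && service.package.path) {
    return service.package.path;
  }
  return '.serverless';
}

module.exports = { resolvePackagePath };
```

Under this precedence, a CI/CD pipeline can override the configured path per run, while serverless.yml still provides a stable project-level default.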