
Separated Packaging and Deployment for CI/CD #3344

Merged
merged 55 commits on Apr 12, 2017

Conversation

eahefnawy
Member

@eahefnawy eahefnawy commented Mar 9, 2017

What did you implement:

Separated the packaging and deployment of services/functions into two distinct commands/plugins for easier use and control inside CI/CD systems.

Closes #2660
Closes #2473

How did you implement it:

Created a new AWS package plugin that handles compilation and packaging, and kept the deploy plugin for actually creating/updating and uploading the stack/artifacts. The implementation is backward compatible even after changing the deploy hooks and lifecycle events: any plugin hooking into the deprecated hooks is redirected to the new hooks.
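
For illustration, a minimal sketch of a third-party plugin that still hooks one of the old deploy events (the plugin name is made up; the redirection behaviour is the one described above):

'use strict';

// Hypothetical plugin, for illustration only. It hooks the pre-existing
// `before:deploy:createDeploymentArtifacts` event; with the deprecation
// redirection described above, this hook keeps firing even though the
// packaging work has moved to the new package plugin.
class LegacyHookPlugin {
  constructor(serverless, options) {
    this.serverless = serverless;
    this.options = options;

    this.hooks = {
      'before:deploy:createDeploymentArtifacts': () =>
        this.serverless.cli.log('legacy hook still runs after the split'),
    };
  }
}

module.exports = LegacyHookPlugin;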

How can we verify it:

  • sls deploy => packages into the default .serverless path and deploys directly afterwards
  • sls package --package <path to package folder> => packages the service into the provided path
  • sls deploy --package <path to package folder> => deploys a package from the provided path

Note A: the package path is the path to the folder that contains the zip file along with the core and compiled CF stack templates. It is not just the path to the artifact zip file, because deployment also includes deploying the stack, so the templates are required.

Note B: you can also set the package path in serverless.yml via the package.path property. This property is used by the package and deploy commands as the output and input path, respectively, so you can customize the default .serverless packaging folder.

Note C: the --noDeploy option has been dropped, since the same functionality is now achieved with the package command.

Todos:

  • Write tests
  • Write documentation
  • Fix linting errors
  • Make sure code coverage hasn't dropped
  • Provide verification config / commands / resources
  • Enable "Allow edits from maintainers" for this PR
  • Update the messages below

Is this ready for review?: YES
Is it a breaking change?: NO

@eahefnawy eahefnawy added this to the 1.10 milestone Mar 9, 2017
@eahefnawy eahefnawy self-assigned this Mar 9, 2017
@eahefnawy eahefnawy requested a review from pmuens March 9, 2017 15:11
@pmuens
Contributor

pmuens commented Mar 9, 2017

Nice! Really need to test this thoroughly and look into ways this might affect other provider plugins such as Azure, OpenWhisk or Google...

@HyperBrain
Member

Will the "deploy function" command also use the --package option if set? I do not know if single function deployment is CI/CD relevant at all, so the support there probably can be skipped at all but should be mentioned in the docs.
Any thoughts on that?

@eahefnawy
Member Author

eahefnawy commented Mar 9, 2017

@HyperBrain excellent question!!! And I ended up with exactly the same conclusion. I think deploy function is meant for quickly updating function code, not for CI/CD, so right now deploy function handles packaging the function as well.

@johncmckim
Member

@eahefnawy if you'd like some help testing I would be very happy to try out this PR. If so, let me know when it's ready to test.


'deploy:deploy': () => BbPromise.bind(this)
.then(this.mergeCustomProviderResources)
.then(this.createStack)

This could be combined with the entry point PR to have the deploy split up into more fine-grained hookable lifecycle events.


module.exports = {
uploadCloudFormationFile() {
this.serverless.cli.log('Uploading CloudFormation file to S3...');

const body = JSON.stringify(this.serverless.service.provider.compiledCloudFormationTemplate);

It would be important to keep the compiled template under the provider object. E.g. the alias plugin needs it to create the alias stack and modifies the compiled template: it accesses this.serverless.service.provider.compiledCloudFormationTemplate to extract its information and modify the template in-memory. If the template is made local to a component, it will break all plugins that modify the template before it gets further processed by the AWSDeploy plugin.
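
As a hedged sketch of that access pattern (the plugin name and the concrete modification are illustrative, not the actual alias plugin):

'use strict';

// Illustrative only: a plugin that reads and modifies the compiled template
// in memory via the provider object, the way the alias plugin is described
// to do above. If the template were moved off the provider object, hooks
// like this one would stop seeing (and being able to modify) it.
class TemplateModifierPlugin {
  constructor(serverless, options) {
    this.serverless = serverless;
    this.options = options;

    this.hooks = {
      'before:deploy:deploy': () => {
        const template =
          this.serverless.service.provider.compiledCloudFormationTemplate;
        template.Outputs = template.Outputs || {};
        // Example in-memory change that must survive until upload.
        template.Outputs.ModifiedBy = { Value: 'TemplateModifierPlugin' };
      },
    };
  }
}

module.exports = TemplateModifierPlugin;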

The package command (including awsPackage) should spawn some inner lifecycles:

-> package
  ->package:prepare
    -> awsPackage:prepare
  -> package:compile
    -> package:compileFunctions
    -> package:compileTemplate
      -> awsPackage:compileTemplate
      -> awsPackage:mergeIamTemplates
      -> awsPackage:mergeUserResources
  -> package:persistArtifacts

And keep the compiled template under provider until package:persistArtifacts. Then plugins can hook before or after package:compileTemplate. The awsPackage entrypoint would hook the package lifecycle events.

BTW: the lifecycle list above is just a quickly written-down example, just to give an idea of what I mean.

I could create a further PR based on your PR/branch that would extend and arrange the lifecycles in a usable way utilizing the entrypoint PR's spawn function. Just let me know.

Contributor

@pmuens pmuens left a comment

Just gave it a first spin.

Really like the separation of package and deploy and the way it works together when I only run serverless deploy 👍

Super convenient!

It seems like the .serverless dir is removed when I run serverless package --package foo, which surprised me. As a user, I would assume that the package command is not destructive.

Furthermore I did a global search for options.noDeploy and noDeploy. Some old references are still around and should be removed before merging.

I'll review / test it more in depth this week!

@HyperBrain
Member

HyperBrain commented Mar 14, 2017

@pmuens I will add a PR to the package-plugin branch as soon as the ep PR has been merged, to change/set the lifecycle model in a convenient way. Additionally, we would change it a bit to maximize code and lifecycle reuse. I already had some discussions with @eahefnawy on Slack about that.

@stevecaldwell77
Contributor

"lifecycle events will break for any plugin author who's hooking into the deploy lifecycle events, because lifecycle events are tied to commands."

Can this be explained with a little more detail? For instance, consider the following plugin:

  • hooks into 'before:deploy:deploy'
  • applies some transformations to the resources section

Will that still run correctly during sls deploy?

Seeing as how such a plugin is really concerned with packaging, should it be refactored to hook into a packaging event?

If it is hooked to a packaging event, will it still run during sls deploy? And not run during sls deploy --package foo?

@HyperBrain
Member

HyperBrain commented Mar 15, 2017

@stevecaldwell77 The goal is to keep the old events available, so nothing would break for plugin authors. Instead, new sub-lifecycles would be spawned that allow even more fine-grained control for plugin writers in future plugin versions, as the current ones are really coarse.

You can have a look at my recent blog post on how to optimize and expose lifecycles in a plugin to get a picture of how this also works for plugins (https://serverless.com/blog/advanced-plugin-development-extending-the-core-lifecycle/).

I am pretty sure that the solution here will be non-breaking for existing plugins but will improve future versions a lot.

BTW: I think the statement "lifecycle events will break for any plugin author who's hooking into the deploy lifecycle events, because lifecycle events are tied to commands" is already outdated information.

@stevecaldwell77
Contributor

@HyperBrain - great news, thanks for the detailed response. Nice blog post too.

@ryan-dyer-sp

Any possibility of the file path supporting an S3 path?

@horike37
Member

@eahefnawy @pmuens
I'm implementing the CloudWatch Logs event feature.
Is there anything I need to be careful of in relation to this issue?

@pmuens
Contributor

pmuens commented Mar 23, 2017

@horike37 nice! Thanks for jumping on this one!

"Is there anything I need to be careful of in relation to this issue?"

Not quite sure how this might impact this PR. Any thoughts, @eahefnawy?

@eahefnawy
Member Author

@horike37 not sure about the scope of your implementation, but this PR only changes packaging and deployment and their relevant lifecycle events. Anything that happens after the functions are deployed (invoking, logging, etc.) should stay the same.

@eahefnawy eahefnawy modified the milestones: 1.11, 1.10 Mar 29, 2017
@@ -49,6 +49,22 @@ module.exports = {
return `${this.provider.serverless.service.service}-${this.provider.getStage()}`;
},

getServiceArtifactName() {
return `${this.provider.serverless.service.service}.zip`;

Does this cover the case for Java projects, where the package artifact is a jar file? I think the current functionality allows users to specify the artifact name under the package:artifact section of serverless.yml. I for one need and use this functionality.

Member Author

Thanks for pointing that out. I made sure package.artifact is not dropped in this change to keep it backward compatible.

@eahefnawy eahefnawy changed the title BREAKING - Separated Packaging and Deployment for CI/CD Separated Packaging and Deployment for CI/CD Apr 6, 2017
@erikerikson
Member

As noted, the cure in my PR may be far too improper and costly to consider. More than anything, it was meant to surface the underlying issue and the history analysis. This is a good change, thanks for driving it!

@pmuens
Contributor

pmuens commented Apr 13, 2017

@erikerikson thanks for the quick update on the PR. It's a good first step 👍

@HyperBrain already dug deep into it and knows more, since he was heavily involved in this change.

@pmuens
Contributor

pmuens commented Apr 13, 2017

@HyperBrain @erikerikson

Just opened up an issue so that we can coordinate all the different tests a little bit --> #3467 (added the test cases from this thread as well).

Please comment there if you face some issues with this PR so that we can derive a test case from it. Thanks 👍 🙌

@eahefnawy
Member Author

@mike-suggitt sorry for the late reply on this. What you can do is provide stage/region params while packaging and deploying as usual. So you'd do the following:

sls package --path some/path --stage qa
sls deploy --path some/path --stage qa

Is that what you're trying to accomplish?

@mike-suggitt

@eahefnawy Hi, thanks for getting back to me. I was ideally looking to package an environment-agnostic zip and then do environment-specific deployments of that single package. I'm hoping I can achieve this by simply ignoring the CloudFormation script created as part of the package step.

@eahefnawy
Member Author

@mike-suggitt you can do that by using only the generated zip file under package.artifact and ignoring the generated CF files.

Keep in mind that you're free to edit the generated CF templates to your liking and to match any env you want to deploy to.

hope this helps!

@mike-suggitt

mike-suggitt commented Apr 19, 2017

@eahefnawy When you say "under package.artifact", is that in reference to settings within the serverless.yml as opposed to a param on the command line? The docs mention that, if it is specified, Serverless will no longer create the zip, so wouldn't that break the package command?

@eahefnawy
Member Author

@mike-suggitt yep you can refer to the artifact like this:

service: my-service
package:
  artifact: path/to/zip/file

or at the function level:

functions:
  hello:
    package:
      artifact: path/to/zip/file

Hope that makes sense! 😊

@mike-suggitt

mike-suggitt commented Apr 20, 2017

@eahefnawy Thanks, the only issue I can see with that is that if you specify package.artifact with an entirely "clean" project (no previous builds), it complains that it can't find the artifact. So in order to use this solution, we would have to maintain a known empty artifact and overwrite it?

@jthomas
Contributor

jthomas commented Apr 20, 2017

Hello 👋 . I'm the maintainer of the OpenWhisk provider plugin. @eahefnawy asked me to test out the changes with the plugin. I like the new approach to packaging and deployment, it'll be a great feature.

I've read the overview of the changes, had a look through the code and I've been testing it out recently.
https://gist.github.com/HyperBrain/bba5c9698e92ac693bb461c99d6cfeec

The current changes break the OpenWhisk provider plugin (details below).
I imagine other providers and plugins could have the same issue.

sls deploy fails

After updating to master at 987abae, all deployments fail because the deployment artefact is not produced during the deploy step.

Looking into the code, I see that the events (compileFunctions, createDeploymentArtifacts) have been moved to the package plugin. These events aren't run by the deploy command anymore. The aws plugin handles this by manually spawning the package plugin prior to deploy if no package is explicitly given.
https://github.com/serverless/serverless/blob/master/lib/plugins/aws/deploy/index.js#L71-L75
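
For context, a rough sketch (an assumption for illustration, not the linked source) of what such a conditional spawn looks like from a deploy hook:

// Rough illustration only; see the linked source for the real implementation.
// If no prebuilt package was supplied, run the package lifecycle first.
const BbPromise = require('bluebird');

class DeploySketch {
  constructor(serverless, options) {
    this.serverless = serverless;
    this.options = options;

    this.hooks = {
      'before:deploy:deploy': () => BbPromise.bind(this).then(() => {
        const packagePath = (this.serverless.service.package || {}).path;
        if (!this.options.package && !packagePath) {
          return this.serverless.pluginManager.spawn('package');
        }
        return BbPromise.resolve();
      }),
    };
  }
}

module.exports = DeploySketch;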

solution

Long-term I will migrate the provider plugin to mirror the aws plugin and use the new event sources. I need to review the code for other issues given the large number of changes to the codebase. I can't commit to having this ready and tested by next week's release date due to work commitments.

Unless I've missed something, this is a breaking API change. It will affect any plugins which hook into the compile and deploy events. If we are following semver, this warrants a major rather than a minor release.

  • Are we comfortable pushing out a breaking API change in a minor release?

If so, it might be worth searching GitHub repos for affected serverless plugins and sending PRs or notifications to the owners to give them fair warning?

  • Could we maintain backwards compatibility for a longer period to give people more time to mitigate?

What does everyone think? 🤔

@HyperBrain
Member

HyperBrain commented Apr 20, 2017

@jthomas As I mentioned in the docs, the hook redirection happens implicitly, via the event deprecation mechanism.
I added the deprecation at the top-level "deploy" command plugin/definition, so that old hooks are automatically redirected to the correct package events.

It could be that some of the new "spawns" are one level too deep within the structure (namely in the aws plugin folder). I could try to move these event triggers up to the general level; IMO that should fix any compatibility issues. What do you think?

BTW: The changes must be non-breaking for the other provider plugins - that was and still is the intention 😄 . I will try to apply the changes in a fresh PR and we'll see if it works.

@HyperBrain
Member

@jthomas Can you provide me some small test project setup that I can use to test?

@HyperBrain
Member

@jthomas @pmuens Can you do a quick test with OpenWhisk and Google and see if a simple "sls deploy" now works?
I did not even test the AWS plugin. As soon as I have some response here, I will finalize the fix and adapt the tests accordingly.

@eahefnawy
Member Author

@HyperBrain just did and left a comment in your new PR

@jthomas
Contributor

jthomas commented Apr 20, 2017

@HyperBrain I'll head over to the new PR and add my comments to see if this works locally.

For testing, you can use the following.

$ export OW_APIHOST=blah.com
$ export OW_AUTH=user:pass
$ serverless create --template openwhisk-nodejs --path testing
$ cd testing
$ serverless deploy

If it works, you should see the following output:

$ serverless deploy
Serverless: Packaging service...
Serverless: Compiling Functions...
Serverless: Compiling API Gateway definitions...
Serverless: Compiling Rules...
Serverless: Compiling Triggers & Feeds...
Serverless: Deploying Functions...
Serverless: Deployment successful!

At the moment, only the deployment hooks are being run, so we don't get anything from the packaging and compiling stages.

@mike-suggitt

mike-suggitt commented Apr 21, 2017

Has the ordering of hooks and/or the writing of CloudFormation scripts changed? My API requires a basepath mapping, which I place in serverless.yml, and because of the known issue around waiting for API deployment I use the plugin serverless-plugin-bind-deployment-id. This plugin replaces a sentinel, e.g. dependsOn: __deployment__, with the randomised ID of the API deployment. It hooks into the before:deploy:deploy hook and simply replaces the value inline, but it is now failing with this new work. Despite the correct find and replace, the CloudFormation template still contains the sentinel, and I'm just trying to understand where the changes might have occurred.

Thanks

@HyperBrain
Member

HyperBrain commented Apr 21, 2017

The ordering has not changed, as hooks are redirected automatically. Of course, existing plugins have to be checked to see whether they use their hooks correctly - I found some plugins that behave in a slightly hacky way. The changes should be non-breaking for plugins that interact correctly with the framework, and I've tested it with 'serverless-webpack' and 'serverless-aws-alias' to cover plugins that do their work in the early phase, the late phase, and both. Nevertheless, #3469 and #3496 are also needed to keep plugins working.

Please try again after these have been merged.

Side-Note:
In the before:deploy:deploy hook, the compiled CF template should be available in this.serverless.service.provider.compiledCloudFormationTemplate and the service properties in this.serverless.service...sentinel. So the plugin should be able to modify the CF template in-memory here, so that it gets uploaded correctly.
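
A hedged sketch of the kind of in-memory replacement described here (the sentinel and hook follow the comments above; everything else is illustrative, not the actual serverless-plugin-bind-deployment-id code):

'use strict';

// Illustration only: swap the `__deployment__` sentinel for the randomised
// ApiGateway deployment logical ID in the in-memory compiled template,
// before the template gets uploaded.
class BindDeploymentIdSketch {
  constructor(serverless, options) {
    this.serverless = serverless;

    this.hooks = {
      'before:deploy:deploy': () => {
        const provider = this.serverless.service.provider;
        const template = provider.compiledCloudFormationTemplate;
        const deploymentId = Object.keys(template.Resources).find(
          name => template.Resources[name].Type === 'AWS::ApiGateway::Deployment'
        );
        if (deploymentId) {
          provider.compiledCloudFormationTemplate = JSON.parse(
            JSON.stringify(template).replace(/__deployment__/g, deploymentId)
          );
        }
      },
    };
  }
}

module.exports = BindDeploymentIdSketch;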

Nevertheless, I suggest adapting plugins after some time to use the new hooks, which provide much better integration regarding the exact place in the package/deploy sequence where the plugin has to run.

@mike-suggitt

@HyperBrain Thanks for getting back to me. Yeah, the issue appeared to be that the template creation is now done exclusively within the package plugin, without any hooks to allow in-memory changes to the template (which it was doing before). The object you mentioned is slightly different from the one previously being used, but once I overwrite that value in memory it does indeed work, so thanks for that.

I would suggest, at some point in the future, providing a hook to allow in-memory changes to the CloudFormation resources BEFORE they are written to disk. I could create a feature request for this if it would help.

@HyperBrain
Member

@mike-suggitt The package solution already provides the new hooks. You can hook just before the templates are persisted to disk. See https://gist.github.com/HyperBrain/bba5c9698e92ac693bb461c99d6cfeec. The document might need some updates for the newly available aws hooks, but in general it should give you a picture of how to optimize plugins in the future.
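
As a hedged example of hooking before the templates are persisted (the hook name before:package:finalize is an assumption about the new event names; check the linked gist for the authoritative list):

'use strict';

// Assumption: a late packaging hook such as `before:package:finalize` exists
// (the exact new event names are documented in the linked gist). The idea is
// to make the in-memory change right before the template is written to disk.
class PrePersistTweakPlugin {
  constructor(serverless, options) {
    this.serverless = serverless;

    this.hooks = {
      'before:package:finalize': () => {
        const template =
          this.serverless.service.provider.compiledCloudFormationTemplate;
        template.Description = 'Tweaked before the package is persisted';
      },
    };
  }
}

module.exports = PrePersistTweakPlugin;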
