fetchs3: init simple S3 downloader
copumpkin committed Apr 26, 2017
1 parent 5aa936d commit 9e764af
Showing 2 changed files with 31 additions and 0 deletions.
29 changes: 29 additions & 0 deletions pkgs/build-support/fetchs3/default.nix
@@ -0,0 +1,29 @@
{ stdenv, runCommand, awscli }:

{ s3url
, sha256
, region ? "us-east-1"
, credentials ? null # Default to looking at local EC2 metadata service
, executable ? false
, recursiveHash ? false
, postFetch ? null
}:

let
credentialAttrs = stdenv.lib.optionalAttrs (credentials != null) {
AWS_ACCESS_KEY_ID = credentials.access_key_id;
AWS_SECRET_ACCESS_KEY = credentials.secret_access_key;
AWS_SESSION_TOKEN = credentials.session_token or null; # "or" supplies a default; "?" is the has-attribute test
};
in runCommand "foo" ({

cstrahan (Contributor) commented May 3, 2017:

@copumpkin I don't suppose we want this to remain "foo" 😉

copumpkin (Author, Member) replied via email, May 3, 2017.

buildInputs = [ awscli ];
outputHashAlgo = "sha256";
outputHash = sha256;
outputHashMode = if recursiveHash then "recursive" else "flat";
} // credentialAttrs) (if postFetch != null then ''

edolstra (Member) commented May 3, 2017:

This causes AWS credentials to be stored world-readable in the Nix store, which seems undesirable.

copumpkin (Author, Member) commented May 3, 2017:

True; I've been using it with the EC2 metadata service myself, and you're right. I only left static creds in here to satisfy folks with more traditional workflows. Is there a better way to handle this sort of thing?

One :trollface: approach is to require that AWS_SESSION_TOKEN be non-empty, thus forcing all credentials passed in this way to be temporary (<1 hour), making their presence in the store much less of a concern.

But I don't know how to do it beyond that.
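The "require a session token" idea above could be expressed as a guard in the fetcher. A hypothetical sketch, not part of this commit (attribute names follow the `credentials` shape used in `default.nix`):

```nix
# Hypothetical guard (not in this commit): only accept credential sets
# that carry a non-empty session token, so any credentials that end up
# in the Nix store are short-lived by construction.
let
  checkCredentials = credentials:
    assert credentials == null
      || (credentials ? session_token && credentials.session_token != "");
    credentials;
in
  checkCredentials {
    access_key_id = "AKIA...";        # placeholder
    secret_access_key = "...";        # placeholder
    session_token = "FwoG...";        # temporary STS token, placeholder
  }
```

With static credentials (no `session_token`), evaluation would abort at the `assert` instead of writing long-lived secrets into the store.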

edolstra (Member) commented May 4, 2017:

One way is to inherit the credentials from the environment via impureEnvVars. However, this is a bit inconvenient when using the daemon, because the variables then have to be in the daemon's environment.
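A rough sketch of that impureEnvVars variant (assumed shape, not from this commit). Fixed-output derivations are allowed to read whitelisted variables from the builder's environment, so the credentials never need to appear in the store:

```nix
{ runCommand, awscli }:
{ s3url, sha256 }:

# Sketch only: instead of baking credentials into derivation attributes,
# let this fixed-output derivation read them from the (daemon's) environment.
runCommand "fetch-s3" {
  buildInputs = [ awscli ];
  outputHashAlgo = "sha256";
  outputHash = sha256;
  outputHashMode = "flat";
  # Whitelisted impure environment variables, visible to the builder:
  impureEnvVars = [ "AWS_ACCESS_KEY_ID" "AWS_SECRET_ACCESS_KEY" "AWS_SESSION_TOKEN" ];
} ''
  aws s3 cp ${s3url} "$out"
''
```

The trade-off edolstra mentions still applies: when builds go through nix-daemon, these variables must be set in the daemon's environment, not the invoking user's shell.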

copumpkin (Author, Member) commented May 4, 2017:

@edolstra perhaps the daemon could get some notion of forwarded impure env vars, and then we could use those in private fetchers like this one? It might be a reasonably general solution for private retrievers.

downloadedFile="$(mktemp)"
aws s3 cp ${s3url} "$downloadedFile"
${postFetch}
'' else ''
aws s3 cp ${s3url} "$out"
'')
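For reference, a call to the new fetcher would look roughly like this (bucket, key, and hash are placeholders; `credentials` is omitted, so the builder falls back to the EC2 instance metadata service):

```nix
fetchs3 {
  s3url = "s3://example-bucket/path/to/source.tar.gz"; # placeholder bucket/key
  sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder hash
  region = "us-west-2";
  # credentials ? null: default to the EC2 metadata service
}
```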
2 changes: 2 additions & 0 deletions pkgs/top-level/all-packages.nix
@@ -153,6 +153,8 @@ with pkgs;

fetchpatch = callPackage ../build-support/fetchpatch { };

fetchs3 = callPackage ../build-support/fetchs3 { };

fetchsvn = callPackage ../build-support/fetchsvn {
sshSupport = true;
};
