📦 AWS S3 storage plugin for Verdaccio
This plugin is a fork of `verdaccio-s3-storage`, rewritten in TypeScript with additional features added over time. The two plugins may have diverged in behaviour since then; we recommend the plugin in this repository, since it is maintained by the Verdaccio community and updated regularly.
- Tested on Docker + LocalStack + Verdaccio 4 + S3 plugin example.
- Uses the `docker-compose` setup in this repo, based on `verdaccio-minio` developed by barolab.
- Feel free to propose new ways to run this plugin.
- An AWS account (in case you are using the cloud)
- A Verdaccio server (4.x) (for 3.x use `verdaccio-s3-storage` instead)
```sh
npm install -g verdaccio
npm install verdaccio-aws-s3-storage
```
This will pull AWS credentials from your environment.
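Credential discovery relies on the standard AWS SDK environment variables. The snippet below is an illustrative sanity check you can run before starting Verdaccio; it is not part of the plugin itself (only the variable names are standard AWS ones):

```typescript
// Sketch: the AWS SDK resolves credentials from these standard environment
// variables when none are set explicitly in the plugin config. This check is
// illustrative and not part of the plugin.
const required = ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"];
const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  console.warn(`Missing AWS credentials in environment: ${missing.join(", ")}`);
}
```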
In your Verdaccio config, configure the store:

```yaml
store:
  aws-s3-storage:
    bucket: your-s3-bucket
    keyPrefix: some-prefix # optional, nests all files in a subdirectory
    region: us-west-2 # optional, falls back to the AWS SDK's default behavior if not specified
    endpoint: https://{service}.{region}.amazonaws.com # optional, falls back to the AWS SDK's default behavior if not specified
    s3ForcePathStyle: false # optional, set to true to use path-style URLs for S3 objects
    tarballACL: private # optional, use public-read to work with a CDN like Amazon CloudFront
    accessKeyId: your-access-key-id # optional, AWS accessKeyId for a private S3 bucket
    secretAccessKey: your-secret-access-key # optional, AWS secretAccessKey for a private S3 bucket
    sessionToken: your-session-token # optional, AWS sessionToken for a private S3 bucket
```
For the following options, the configured value can be either the literal value or the name of an environment variable that contains the value:

- `bucket`
- `keyPrefix`
- `region`
- `endpoint`
- `accessKeyId`
- `secretAccessKey`
- `sessionToken`
```yaml
store:
  aws-s3-storage:
    bucket: S3_BUCKET # if an environment variable named S3_BUCKET is set, its value is used; otherwise the bucket is assumed to be named 'S3_BUCKET'
    keyPrefix: S3_KEY_PREFIX # if an environment variable named S3_KEY_PREFIX is set, its value is used; otherwise the key prefix is assumed to be 'S3_KEY_PREFIX'
    endpoint: S3_ENDPOINT # if an environment variable named S3_ENDPOINT is set, its value is used; otherwise the endpoint is assumed to be 'S3_ENDPOINT'
    ...
```
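The resolution rule can be sketched as follows (the helper name is hypothetical, not the plugin's actual internal API): if an environment variable with the configured name exists, its value wins; otherwise the configured string is used literally.

```typescript
// Sketch of the value-resolution rule described above. `resolveConfigValue`
// is a hypothetical name for illustration only.
function resolveConfigValue(configured: string): string {
  return process.env[configured] ?? configured;
}

// With S3_BUCKET unset, the literal string is used as the bucket name:
delete process.env.S3_BUCKET;
resolveConfigValue("S3_BUCKET"); // → "S3_BUCKET"

// With S3_BUCKET exported, the environment value wins:
process.env.S3_BUCKET = "my-registry-bucket";
resolveConfigValue("S3_BUCKET"); // → "my-registry-bucket"
```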
A `storage` property can also be defined per package section. The storage location corresponds to a folder in the S3 bucket.
```yaml
packages:
  '@scope/*':
    access: $all
    publish: $all
    storage: 'scoped'
  '**':
    access: $all
    publish: $all
    proxy: npmjs
    storage: 'public'
```
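The effect of this layout can be sketched as follows. Note this is illustrative only: the exact object-key format the plugin produces may differ, and `objectKey` is a hypothetical helper, not the plugin's code. It shows how a per-package `storage` folder nests under `keyPrefix` inside the bucket:

```typescript
// Illustrative sketch of how keyPrefix, the per-package `storage` folder,
// and the package name might compose into an S3 object key. Not the
// plugin's actual implementation.
function objectKey(keyPrefix: string, storage: string, pkgName: string, file: string): string {
  // Join the non-empty segments with "/" separators.
  return [keyPrefix, storage, pkgName, file].filter(Boolean).join("/");
}

objectKey("some-prefix", "scoped", "@scope/pkg", "pkg-1.0.0.tgz");
// → "some-prefix/scoped/@scope/pkg/pkg-1.0.0.tgz"
```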
You can specify the ACL of tarball files in S3 via the `tarballACL` configuration option, which is set to `private` by default. To enable an S3-integrated CDN service (Amazon CloudFront, for example), set `tarballACL` to `public-read` to grant tarball files anonymous read permission.
```yaml
store:
  aws-s3-storage:
    tarballACL: public-read
```
For local testing, this project is self-sufficient. The four main ingredients are:

- `config.yaml`, see the Verdaccio documentation
- The provided Dockerfile, which allows testing the plugin without the main Verdaccio application
- The provided `docker-compose` setup, which also orchestrates minio as a local substitute for the S3 backend
- A `registry.envs` file with the content below. This file does not exist in the repo and must be created manually after cloning the project.
```env
AWS_ACCESS_KEY_ID=foobar
AWS_SECRET_ACCESS_KEY=1234567e
AWS_DEFAULT_REGION=eu-central-1
AWS_S3_ENDPOINT=https://localhost:9000/
AWS_S3_PATH_STYLE=true
```
You need a recent version of Docker installed on your computer. Then run:

```sh
docker-compose up
```
No bucket is created by default; you may need to browse to `http://127.0.0.1:9000/minio/`, manually create the example bucket named `rise`, and then restart `docker-compose up`.
The default values should work out of the box. If you change anything, make sure the corresponding variables are updated in the other parts of the setup as well.