diff --git a/README.md b/README.md index dcfd70c0..a58b2f61 100644 --- a/README.md +++ b/README.md @@ -9,8 +9,8 @@ The [code](lega) is written in Python (3.6+). You can provision and deploy the different components: * locally, using [docker-compose](deployments/docker). -* on an OpenStack cluster, using [terraform](deployments/terraform). -* on a Kubernetes/OpenShift cluster, using [kubernetes](deployments/kube) +* on an OpenStack cluster, using [terraform](https://github.com/NBISweden/LocalEGA-deploy-terraform). +* on a Kubernetes/OpenShift cluster, using [kubernetes](https://github.com/NBISweden/LocalEGA-deploy-k8s) * on a Docker Swarm cluster, using [Gradle](deployments/swarm) # Architecture diff --git a/deployments/kube/.gitignore b/deployments/kube/.gitignore deleted file mode 100644 index 3a5dc54b..00000000 --- a/deployments/kube/.gitignore +++ /dev/null @@ -1 +0,0 @@ -auto/config/* diff --git a/deployments/kube/README.md b/deployments/kube/README.md deleted file mode 100644 index e5396707..00000000 --- a/deployments/kube/README.md +++ /dev/null @@ -1,131 +0,0 @@ -## Kubernetes Deployment - -#### Table of Contents - -- [Deployment the Somewhat Easy Way](#deployment-the-somewhat-easy-way) -- [Deployment The Difficult Way](#deployment-the-difficult-way) - - [Deploy Fake CEGA](#deploy-fake-cega) - - [Deploy LocalEGA](#deploy-localega) - - [Other useful information](#other-useful-information) -- [Deployment the OpenShift Way](#deployment-the-openshift-way) - - -### Deployment the Somewhat Easy Way - -We provide a Python script, based on https://github.com/kubernetes-client/python, that sets up all the necessary configuration (e.g. generated keys, certificates, configuration files, etc.) and pods, along with the necessary services and volumes. -The script is intended to work with either minikube or any other Kubernetes cluster, provided the user has an API key. - -**NOTES:** - - **Requires Python 3.6+.** - - **Work in Progress** - -The script is in the `auto` folder and can be run as: -``` -cd ~/LocalEGA/deployments/kube/auto -pip install -r requirements.txt -python deploy.py --fake-cega --config --deploy all -``` - -In `deploy.py`, the service/pod names and other parameters should be configured: -```json -_localega = { -"role": "LocalEGA", -"email": "test@csc.fi", -"services": {"keys": "keys", - "inbox": "inbox", - "ingest": "ingest", - "s3": "minio", - "broker": "mq", - "db": "db", - "verify": "verify"}, -"key": {"name": "Test PGP", - "comment": "Some comment", - "expire": "30/DEC/19 08:00:00", - "id": "key.1"}, -"ssl": {"country": "Finland", - "country_code": "FI", - "location": "Espoo", "org": "CSC"}, -"cega": {"user": "lega", - "endpoint": "http://cega-users.testing:8001/user/"} -} -``` - -Using the deploy script: -``` -╰─$ python deploy.py --help -Usage: deploy.py [OPTIONS] - - LocalEGA deployment script. - -Options: - --config Flag for generating configuration if does not exist, or - generating a new one. - --deploy TEXT Deploying the configuration secrets and pods. Options - available: "all" (default), "secrets" or "sc", "services" - or "svc", "configmap" or "cm" and "pods" or "pd". - --ns TEXT Deployment namespace, defaults to "testing". - --cega-ip TEXT CEGA MQ IP, for fake CEGA MQ it is set up with a default - for testing namespace. - --cega-pwd TEXT CEGA MQ Password, for fake CEGA MQ it is set up with a - default. - --key-pass TEXT CEGA Users RSA key password. - --fake-cega Fake CEGA-Users and CEGA MQ. - --help Show this message and exit.
-``` - -### Deployment The Difficult Way - -The YAML files (from the `yml` directory) represent a vanilla deployment setup for LocalEGA; they do not include the configuration/passwords needed for starting the services. Such configuration can be generated using the `make bootstrap` script in the `~/LocalEGA/deployments/docker` folder, or provided on a per-case basis. The YAML files only provide base `hostPath` volumes; for other volume types, check [Kubernetes Volumes](https://kubernetes.io/docs/concepts/storage/volumes/). - -Files that require configuration: -* `keys/cm.keyserver.yml` -* `keys/secret.keyserver.yml` -* `lega-config/cm.lega.yml` -* `lega-config/secret.lega.yml` -* `mq/cm.lega-mq.yml` -* `mq/sts.lega-mq.yml` - -The following instructions are for a Minikube deployment. -Once [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) are installed: - -``` -cd ~/LocalEGA/deployments/kube/yml -minikube start -kubectl create namespace localega -``` -#### Deploy Fake CEGA - -Only the CEGA Broker is available for now, and its address needs to be set in the LocalEGA broker as `amqp://<user>:<password>@<cega-ip>:5672/<vhost>`. -The `<cega-ip>` is the IP of the Kubernetes Pod for the CEGA Broker. -``` -kubectl create -f ./cega-mq --namespace=localega -``` -CEGA Users requires setting up a user `ega-box-999` with a public SSH RSA key, added to `yml/cega-users/cm.cega.yml` at line 153. -After that, it can be started using: - -``` -kubectl create -f ./cega-users --namespace=localega -``` - -#### Deploy LocalEGA -``` -kubectl create -f ./lega-config --namespace=localega -kubectl create -f ./mq -f ./postgres -f ./s3 --namespace=localega -kubectl create -f ./keys -f ./verify -f ./ingest -f ./inbox --namespace=localega -``` - -#### Other useful information - -* See minikube services: `minikube service list` -* Delete services: `kubectl delete -f ./keys` -* Working with [volumes in Minio](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/minio.html) - -### Deployment the OpenShift Way - -The files provided in the `yml` directory can be reused for deployment to OpenShift with some changes: -* Minio requires a `10Gi` volume to start properly in OpenShift, although in minikube it seems to do with just 0.5Gi. -* By default, OpenShift Origin runs containers using an arbitrarily assigned user ID as per the [OpenShift Guidelines](https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines), thus using the `gosu` command to change the user is not allowed. The command for the keyserver would look like `["ega-keyserver","--keys","/etc/ega/keys.ini"]` instead of `["gosu","lega","ega-keyserver","--keys","/etc/ega/keys.ini"]`. -* Postgres DB requires a different container; therefore we provide a different YAML configuration file for it in the [`oc/postgresql` directory](oc/postgresql). Also, the volume attached to the Postgres DB needs `ReadWriteMany` permissions. -* Keyserver requires a different configuration; therefore we provide a different YAML configuration file for it in the [`oc/keys` directory](oc/keys). -* Inbox requires a different configuration; therefore we provide a different YAML configuration file for it in the [`oc/inbox` directory](oc/inbox).
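For orientation, here is a minimal sketch (not part of the deleted files) of the kubernetes-client calls that the `auto` scripts further down in this diff build on: it creates the `localega` namespace and a `cega-connection` secret holding the broker address described above. The address value is illustrative, not a real credential.

```python
# Illustrative sketch of what auto/kube.py (below) wraps in its
# create_namespace()/config_secret() helpers; values are placeholders.
from base64 import b64encode

from kubernetes import client, config

config.load_kube_config()          # uses the local ~/.kube/config (e.g. minikube)
api = client.CoreV1Api()

ns = "localega"                    # same namespace as in the kubectl instructions above
api.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))

# CEGA broker address in the amqp://<user>:<password>@<cega-ip>:5672/<vhost> form
cega_address = "amqp://lega:secret@cega-mq.testing:5672/lega"   # illustrative only
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="cega-connection"),
    type="Opaque",
    data={"address": b64encode(cega_address.encode()).decode()},  # secrets are base64-encoded
)
api.create_namespaced_secret(namespace=ns, body=secret)
```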
diff --git a/deployments/kube/auto/configure.py b/deployments/kube/auto/configure.py deleted file mode 100644 index 6a02dcf8..00000000 --- a/deployments/kube/auto/configure.py +++ /dev/null @@ -1,269 +0,0 @@ -from cryptography.hazmat.primitives.asymmetric import rsa -from cryptography.hazmat.primitives import serialization -from cryptography.hazmat.backends import default_backend -from cryptography import x509 -from cryptography.x509.oid import NameOID -from cryptography.hazmat.primitives import hashes -import datetime -import os -import errno -import logging -import configparser -import secrets -import string -import hashlib -from base64 import b64encode - -from pgpy import PGPKey, PGPUID -from pgpy.constants import PubKeyAlgorithm, KeyFlags, HashAlgorithm, SymmetricKeyAlgorithm, CompressionAlgorithm - -from cryptography.hazmat.primitives import padding -from cryptography.hazmat.primitives.ciphers import ( - Cipher, - algorithms, - modes) - -# Logging -FORMAT = '[%(asctime)s][%(name)s][%(process)d %(processName)s][%(levelname)-8s] (L:%(lineno)s) %(funcName)s: %(message)s' -logging.basicConfig(format=FORMAT, datefmt='%Y-%m-%d %H:%M:%S') -LOG = logging.getLogger(__name__) - - -class ConfigGenerator: - """Configuration generator. - - For when one needs to do create configuration files. - """ - - def __init__(self, config_path, name, email, namespace, services,): - """Set things up.""" - self.name = name - self.email = email - self.namespace = namespace - self._key_service = services['keys'] - self._db_service = services['db'] - self._s3_service = services['s3'] - self._broker_service = services['broker'] - self._config_path = config_path - - if not os.path.exists(self._config_path): - try: - os.makedirs(self._config_path) - except OSError as exc: # Guard against race condition - if exc.errno != errno.EEXIST: - raise - - # Based on - # https://www.pythonsheets.com/notes/python-crypto.html#aes-cbc-mode-encrypt-via-password-using-cryptography - # Provided under MIT license: https://github.com/crazyguitar/pysheeet/blob/master/LICENSE - - def _EVP_ByteToKey(self, pwd, md, salt, key_len, iv_len): - """Derive key and IV. - - Based on https://www.openssl.org/docs/man1.0.2/crypto/EVP_BytesToKey.html - """ - buf = md(pwd + salt).digest() - d = buf - while len(buf) < (iv_len + key_len): - d = md(d + pwd + salt).digest() - buf += d - return buf[:key_len], buf[key_len:key_len + iv_len] - - def aes_encrypt(self, pwd, ptext, md): - """Encrypt AES.""" - key_len, iv_len = 32, 16 - - # generate salt - salt = os.urandom(8) - - # generate key, iv from password - key, iv = self._EVP_ByteToKey(pwd, md, salt, key_len, iv_len) - - # pad plaintext - pad = padding.PKCS7(128).padder() - ptext = pad.update(ptext) + pad.finalize() - - # create an encryptor - cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend()) - encryptor = cipher.encryptor() - - # encrypt plain text - ctext = encryptor.update(ptext) + encryptor.finalize() - ctext = b'Salted__' + salt + ctext - - # encode base64 - return ctext - - def _generate_pgp_pair(self, comment, passphrase, armor): - """Generate PGP key pair to be used by keyserver.""" - # We need to specify all of our preferences because PGPy doesn't have any built-in key preference defaults at this time. 
- # This example is similar to GnuPG 2.1.x defaults, with no expiration or preferred keyserver - key = PGPKey.new(PubKeyAlgorithm.RSAEncryptOrSign, 4096) - uid = PGPUID.new(self.name, email=self.email, comment=comment) - key.add_uid(uid, - usage={KeyFlags.Sign, KeyFlags.EncryptCommunications, KeyFlags.EncryptStorage}, - hashes=[HashAlgorithm.SHA256, HashAlgorithm.SHA384, HashAlgorithm.SHA512, HashAlgorithm.SHA224], - ciphers=[SymmetricKeyAlgorithm.AES256, SymmetricKeyAlgorithm.AES192, SymmetricKeyAlgorithm.AES128], - compression=[CompressionAlgorithm.ZLIB, CompressionAlgorithm.BZ2, CompressionAlgorithm.ZIP, CompressionAlgorithm.Uncompressed]) - - # Protecting the key - key.protect(passphrase, SymmetricKeyAlgorithm.AES256, HashAlgorithm.SHA256) - pub_data = str(key.pubkey) if armor else bytes(key.pubkey) # armored or not - sec_data = str(key) if armor else bytes(key) # armored or not - - return (pub_data, sec_data) - - def generate_ssl_certs(self, country, country_code, location, org, email, org_unit="SysDevs", common_name="LocalEGA"): - """Generate SSL self signed certificate.""" - # Following https://cryptography.io/en/latest/x509/tutorial/?highlight=certificate - key = rsa.generate_private_key(public_exponent=65537, key_size=2048, backend=default_backend()) - priv_key = key.private_bytes(encoding=serialization.Encoding.PEM, - format=serialization.PrivateFormat.TraditionalOpenSSL, - encryption_algorithm=serialization.NoEncryption(),) - - subject = issuer = x509.Name([x509.NameAttribute(NameOID.COUNTRY_NAME, country_code), - x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, country), - x509.NameAttribute(NameOID.LOCALITY_NAME, location), - x509.NameAttribute(NameOID.ORGANIZATION_NAME, org), - x509.NameAttribute(NameOID.ORGANIZATIONAL_UNIT_NAME, org_unit), - x509.NameAttribute(NameOID.COMMON_NAME, common_name), - x509.NameAttribute(NameOID.EMAIL_ADDRESS, email), ]) - cert = x509.CertificateBuilder().subject_name( - subject).issuer_name( - issuer).public_key( - key.public_key()).serial_number( - x509.random_serial_number()).not_valid_before( - datetime.datetime.utcnow()).not_valid_after( - datetime.datetime.utcnow() + datetime.timedelta(days=1000)).add_extension( - x509.SubjectAlternativeName([x509.DNSName(u"localhost")]), critical=False,).sign( - key, hashes.SHA256(), default_backend()) - return (cert.public_bytes(serialization.Encoding.PEM).decode('utf-8'), priv_key.decode('utf-8')) - - def _hash_pass(self, password): - """Hashing password according to RabbitMQ specs.""" - # 1.Generate a random 32 bit salt: - # This will generate 32 bits of random data: - salt = os.urandom(4) - - # 2.Concatenate that with the UTF-8 representation of the password (in this case "simon") - tmp0 = salt + password.encode('utf-8') - - # 3. Take the SHA256 hash and get the bytes back - tmp1 = hashlib.sha256(tmp0).digest() - - # 4. Concatenate the salt again: - salted_hash = salt + tmp1 - - # 5. 
convert to base64 encoding: - pass_hash = b64encode(salted_hash).decode("utf-8") - - return pass_hash - - def generate_user_auth(self, password): - """Generate user auth for CEGA Users.""" - key = rsa.generate_private_key(backend=default_backend(), public_exponent=65537, key_size=4096) - - # get public key in OpenSSH format - public_key = key.public_key().public_bytes(serialization.Encoding.OpenSSH, serialization.PublicFormat.OpenSSH) - - # get private key in PEM container format - pem = key.private_bytes(encoding=serialization.Encoding.PEM, format=serialization.PrivateFormat.TraditionalOpenSSL, - encryption_algorithm=serialization.BestAvailableEncryption(password.encode('utf-8'))) # yeah not really that secret - - # decode to printable strings - with open(self._config_path / 'user.key', "wb") as f: - f.write(pem) - return public_key.decode('utf-8') - - def generate_mq_auth(self): - """Generate CEGA MQ auth.""" - generated_secret = ''.join(secrets.choice(string.ascii_letters + string.digits) for i in range(32)) - cega_defs_mq = """{{"rabbit_version":"3.6.11",\r\n "users":[{{"name":"lega", - "password_hash":"{0}","hashing_algorithm":"rabbit_password_hashing_sha256","tags":"administrator"}}],\r\n "vhosts":[{{"name":"lega"}}],\r\n - "permissions":[{{"user":"lega", "vhost":"lega", "configure":".*", "write":".*", "read":".*"}}],\r\n "parameters":[],\r\n "global_parameters":[{{"name":"cluster_name", "value":"rabbit@localhost"}}],\r\n "policies":[],\r\n - "queues":[{{"name":"inbox", "vhost":"lega", "durable":true, "auto_delete":false, "arguments":{{}}}},\r\n - {{"name":"inbox.checksums", "vhost":"lega", "durable":true, "auto_delete":false, "arguments":{{}}}},\r\n - {{"name":"files", "vhost":"lega", "durable":true, "auto_delete":false, "arguments":{{}}}},\r\n {{"name":"completed", "vhost":"lega", "durable":true, "auto_delete":false, "arguments":{{}}}},\r\n - {{"name":"errors", "vhost":"lega", "durable":true, "auto_delete":false, "arguments":{{}}}}],\r\n - "exchanges":[{{"name":"localega.v1", "vhost":"lega", "type":"topic", "durable":true, "auto_delete":false, "internal":false, "arguments":{{}}}}],\r\n - "bindings":[{{"source":"localega.v1","vhost":"lega","destination_type":"queue","arguments":{{}},"destination":"inbox","routing_key":"files.inbox"}},\r\n \t {{"source":"localega.v1","vhost":"lega","destination_type":"queue", - "arguments":{{}},"destination":"inbox.checksums","routing_key":"files.inbox.checksums"}},\r\n {{"source":"localega.v1","vhost":"lega","destination_type":"queue","arguments":{{}},"destination":"files","routing_key":"files"}},\r\n - {{"source":"localega.v1","vhost":"lega","destination_type":"queue","arguments":{{}},"destination":"completed","routing_key":"files.completed"}},\r\n - {{"source":"localega.v1","vhost":"Flega","destination_type":"queue","arguments":{{}},"destination":"errors","routing_key":"files.error"}}]\r\n}}""".format(self._hash_pass(generated_secret)) - cega_config_mq = """%% -*- mode: erlang -*- - %% - [{rabbit,[{loopback_users, [ ] }, - {disk_free_limit, "1GB"}]}, - {rabbitmq_management, [ {load_definitions, "/etc/rabbitmq/defs.json"} ]} - ].""" - return (generated_secret, cega_config_mq, cega_defs_mq) - - def create_conf_shared(self, scheme=None): - """Create default configuration file, namely ```conf.ini`` file.""" - config = configparser.RawConfigParser() - file_flag = 'w' - scheme = scheme if scheme else '' - config.set('DEFAULT', 'log', 'console') - # keyserver - config.add_section('keyserver') - config.set('keyserver', 'port', '8443') - # quality control 
- config.add_section('quality_control') - config.set('quality_control', 'keyserver_endpoint', f'https://{self._key_service}.{self.namespace}{scheme}:8443/retrieve/%s/private') - # inbox - config.add_section('inbox') - config.set('inbox', 'location', '/ega/inbox/%s') - config.set('inbox', 'mode', '2750') - # vault - config.add_section('vault') - config.set('vault', 'driver', 'S3Storage') - config.set('vault', 'url', f'http://{self._s3_service}.{self.namespace}{scheme}:9000') - # outgestion - config.add_section('outgestion') - config.set('outgestion', 'keyserver_endpoint', f'https://{self._key_service}.{self.namespace}{scheme}:8443/retrieve/%s/private') - # broker - config.add_section('broker') - config.set('broker', 'host', f'{self._broker_service}.{self.namespace}{scheme}') - config.set('broker', 'connection_attempts', '30') - config.set('broker', 'retry_delay', '10') - # Postgres - config.add_section('postgres') - config.set('postgres', 'host', f'{self._db_service}.{self.namespace}{scheme}') - config.set('postgres', 'user', 'lega') - config.set('postgres', 'try', '30') - - with open(self._config_path / 'conf.ini', file_flag) as configfile: - config.write(configfile) - - def add_conf_key(self, expire, file_name, comment, passphrase, armor=True, active=False): - """Create default configuration for keyserver. - - .. note: Information for the key is provided as dictionary for ``key_data``, - and should be in the format ``{'comment': '','passphrase': None, 'armor': True}. - If a passphrase is not provided it will generated.`` - """ - _generate_secret = ''.join(secrets.choice(string.ascii_letters + string.digits) for i in range(32)) - _passphrase = passphrase if passphrase else _generate_secret - comment = comment if comment else "Generated for use in LocalEGA." - config = configparser.RawConfigParser() - file_flag = 'w' - if os.path.exists(self._config_path / 'keys.ini'): - config.read(self._config_path / 'keys.ini') - if active: - config.set('DEFAULT', 'active', file_name) - if not config.has_section(file_name): - pub, sec = self._generate_pgp_pair(comment, _passphrase, armor) - config.add_section(file_name) - config.set(file_name, 'path', '/etc/ega/pgp/%s' % file_name) - config.set(file_name, 'passphrase', _passphrase) - config.set(file_name, 'expire', expire) - with open(self._config_path / f'{file_name}.pub', 'w' if armor else 'bw') as f: - f.write(pub) - with open(self._config_path / f'{file_name}.sec', 'w' if armor else 'bw') as f: - f.write(sec) - with open(self._config_path / 'keys.ini', file_flag) as configfile: - config.write(configfile) - - -# if __name__ == '__main__': - # main() diff --git a/deployments/kube/auto/deploy.py b/deployments/kube/auto/deploy.py deleted file mode 100644 index 8c191acb..00000000 --- a/deployments/kube/auto/deploy.py +++ /dev/null @@ -1,49 +0,0 @@ -import logging -from kube import kubernetes_deployment -import click - -# Logging -FORMAT = '[%(asctime)s][%(name)s][%(process)d %(processName)s][%(levelname)-8s] (L:%(lineno)s) %(funcName)s: %(message)s' -logging.basicConfig(format=FORMAT, datefmt='%Y-%m-%d %H:%M:%S') -LOG = logging.getLogger(__name__) -LOG.setLevel(logging.INFO) - - -@click.command() -@click.option('--config', is_flag=True, - help='Flag for generating configuration if does not exist, or generating a new one.') -@click.option('--deploy', multiple=True, - help='Deploying the configuration secrets and pods. 
Options available: "all" (default), "secrets" or "sc", "services" or "svc", "configmap" or "cm" and "pods" or "pd".') -@click.option('--ns', default="testing", help='Deployment namespace, defaults to "testing".') -@click.option('--cega-ip', help='CEGA MQ IP, for fake CEGA MQ it is set up with a default for testing namespace.') -@click.option('--cega-pwd', help='CEGA MQ Password, for fake CEGA MQ it is set up with a default.') -@click.option('--key-pass', default='password', help='CEGA Users RSA key password.') -@click.option('--fake-cega', is_flag=True, - help='Fake CEGA-Users and CEGA MQ.') -def main(config, deploy, ns, fake_cega, cega_ip, cega_pwd, key_pass): - """Local EGA deployment script.""" - _localega = { - 'role': 'LocalEGA', - 'email': 'test@csc.fi', - 'services': {'keys': 'keys', - 'inbox': 'inbox', - 'ingest': 'ingest', - 's3': 'minio', - 'broker': 'mq', - 'db': 'db', - 'verify': 'verify'}, - # Only using one key - 'key': {'name': 'Test PGP', - 'comment': None, - 'expire': '30/DEC/19 08:00:00', - 'id': 'key.1'}, - 'ssl': {'country': 'Finland', 'country_code': 'FI', 'location': 'Espoo', 'org': 'CSC'}, - 'cega': {'user': 'lega', - 'endpoint': 'http://cega-users.testing:8001/user/'} - } - - kubernetes_deployment(_localega, config, deploy, ns, fake_cega, cega_ip, cega_pwd, key_pass) - - -if __name__ == '__main__': - main() diff --git a/deployments/kube/auto/kube.py b/deployments/kube/auto/kube.py deleted file mode 100644 index d4a5716f..00000000 --- a/deployments/kube/auto/kube.py +++ /dev/null @@ -1,498 +0,0 @@ -import string -import secrets -import logging -from kubernetes import client, config -from kubernetes.client.rest import ApiException -from pathlib import Path -from configure import ConfigGenerator -from hashlib import md5 -from base64 import b64encode, b64decode -import click - -# Logging -FORMAT = '[%(asctime)s][%(name)s][%(process)d %(processName)s][%(levelname)-8s] (L:%(lineno)s) %(funcName)s: %(message)s' -logging.basicConfig(format=FORMAT, datefmt='%Y-%m-%d %H:%M:%S') -LOG = logging.getLogger(__name__) -LOG.setLevel(logging.INFO) - -# Setup kubernete configuration -config.load_kube_config() -api_core = client.CoreV1Api() -api_app = client.AppsV1Api() -api_beta_app = client.AppsV1beta1Api() -api_extension = client.ExtensionsV1beta1Api() - - -class LocalEGADeploy: - """LocalEGA kubernetes deployment. - - Deployment configuration for LocalEGA to kubernetes. - """ - - def __init__(self, keys, namespace): - """Set things up.""" - self.keys = keys - self._namespace = namespace - self._role = keys["role"] - - def create_namespace(self): - """Create default namespace if not exists.""" - namespace_list = api_core.list_namespace(label_selector='role') - namespaces = [x for x in namespace_list.items if x.metadata.labels['role'] == self._role] - - if len(namespaces) == 0: - namespace = client.V1Namespace() - namespace.metadata = client.V1ObjectMeta(name=self._namespace, labels={'role': self._role}) - api_core.create_namespace(namespace) - LOG.info(f'Namespace: {self._namespace} created.') - else: - pass - LOG.info(f'Namespace: {self._namespace} exists.') - - def _generate_secret(self, value): - """Generate secret of specifig value. - - .. note: If the value is of type integer it will generate a random of that value, - else it will take that value. 
- """ - if isinstance(value, int): - secret = ''.join(secrets.choice(string.ascii_letters + string.digits) for i in range(value)).encode("utf-8") - return b64encode(secret).decode("utf-8") - else: - return b64encode(value.encode("utf-8")).decode("utf-8") - - # Default Secrets - def config_secret(self, name, data, patch=False): - """Create and upload secret, patch option also available.""" - sec_conf = client.V1Secret() - sec_conf.metadata = client.V1ObjectMeta(name=name) - sec_conf.type = "Opaque" - sec_conf.data = {key: self._generate_secret(value) for (key, value) in data.items()} - try: - api_core.create_namespaced_secret(namespace=self._namespace, body=sec_conf) - LOG.info(f'Secret: {name} created.') - except ApiException as e: - if e.status == 409 and patch: - api_core.patch_namespaced_secret(name=name, namespace=self._namespace, body=sec_conf) - LOG.info(f'Secret: {name} patched.') - else: - LOG.error(f'Exception message: {e}') - - def read_secret(self, name): - """Read secret.""" - api_response = '' - try: - api_response = api_core.read_namespaced_secret(name, self._namespace, exact=True, export=True) - LOG.info(f'Secret: {name} read.') - except ApiException as e: - LOG.error(f'Exception message: {e}') - else: - return api_response - - def config_map(self, name, data, binary=False, patch=False): - """Create and upload configMap, patch option also available.""" - conf_map = client.V1ConfigMap() - conf_map.metadata = client.V1ObjectMeta(name=name) - if not binary: - conf_map.data = data - else: - conf_map.binary_data = data - - try: - api_core.create_namespaced_config_map(namespace=self._namespace, body=conf_map) - LOG.info(f'ConfigMap: {name} created.') - except ApiException as e: - if e.status == 409 and patch: - api_core.patch_namespaced_config_map(name=name, namespace=self._namespace, body=conf_map) - LOG.info(f'ConfigMap: {name} patched.') - else: - LOG.error(f'Exception message: {e}') - - def deployment(self, name, image, command, env, vmounts, volumes, lifecycle=None, args=None, ports=None, replicas=1, patch=False): - """Create and upload deployment, patch option also available.""" - deploy = client.V1Deployment(kind="Deployment", api_version="apps/v1") - deploy.metadata = client.V1ObjectMeta(name=name) - container = client.V1Container(name=name, image=image, image_pull_policy="IfNotPresent", - volume_mounts=vmounts, command=command, env=env, args=args, lifecycle=lifecycle) - if ports: - container.ports = list(map(lambda x: client.V1ContainerPort(container_port=x), ports)) - template = client.V1PodTemplateSpec(metadata=client.V1ObjectMeta(labels={"app": name}), - spec=client.V1PodSpec(containers=[container], volumes=volumes, restart_policy="Always")) - spec = client.V1DeploymentSpec(replicas=replicas, template=template, selector=client.V1LabelSelector(match_labels={"app": name})) - deploy.spec = spec - try: - api_app.create_namespaced_deployment(namespace=self._namespace, body=deploy) - LOG.info(f'Deployment: {name} created.') - except ApiException as e: - if e.status == 409 and patch: - api_app.patch_namespaced_deployment(name=name, namespace=self._namespace, body=deploy) - LOG.info(f'Deployment: {name} patched.') - else: - LOG.error(f'Exception message: {e}') - - def service(self, name, ports, pod_name=None, type="ClusterIP", patch=False): - """Create and upload service, patch option also available.""" - svc_conf = client.V1Service(kind="Service", api_version="v1") - svc_conf.metadata = client.V1ObjectMeta(name=name) - spec = client.V1ServiceSpec(selector={"app": pod_name if 
pod_name else name}, ports=ports, type=type) - svc_conf.spec = spec - - try: - api_core.create_namespaced_service(namespace=self._namespace, body=svc_conf) - LOG.info(f'Service: {name} created.') - except ApiException as e: - if e.status == 409 and patch: - api_core.patch_namespaced_service(name=name, namespace=self._namespace, body=svc_conf) - LOG.info(f'Service: {name} patched.') - else: - LOG.error(f'Exception message: {e}') - - def stateful_set(self, name, image, command, env, vmounts, vol, vol_claims=None, sec=None, args=None, ports=None, replicas=1, patch=False): - """Create and upload StatefulSet, patch option also available.""" - sts_conf = client.V1StatefulSet() - sts_conf.metadata = client.V1ObjectMeta(name=name) - container = client.V1Container(name=name, image=image, image_pull_policy="IfNotPresent", - volume_mounts=vmounts, command=command, env=env, args=args, security_context=sec) - if ports: - container.ports = list(map(lambda x: client.V1ContainerPort(container_port=x), ports)) - template = client.V1PodTemplateSpec(metadata=client.V1ObjectMeta(labels={"app": name}), - spec=client.V1PodSpec(containers=[container], volumes=vol, restart_policy="Always")) - spec = client.V1StatefulSetSpec(replicas=replicas, template=template, selector=client.V1LabelSelector(match_labels={"app": name}), - service_name=name, volume_claim_templates=vol_claims) - sts_conf.spec = spec - try: - api_app.create_namespaced_stateful_set(namespace=self._namespace, body=sts_conf) - LOG.info(f'Service: {name} created.') - except ApiException as e: - if e.status == 409 and patch and not (vol_claims is None): - api_app.patch_namespaced_stateful_set(name=name, namespace=self._namespace, body=sts_conf) - LOG.info(f'Service: {name} patched.') - else: - LOG.error(f'Exception message: {e}') - - def persistent_volume_claim(self, name, volume_name, storage, accessModes=["ReadWriteOnce"]): - """Create a volume claim.""" - claim_vol = client.V1PersistentVolumeClaim(kind="PersistentVolumeClaim", api_version="v1") - claim_vol.metadata = client.V1ObjectMeta(name=name) - spec = client.V1PersistentVolumeClaimSpec(volume_name=volume_name, access_modes=accessModes, storage_class_name=volume_name) - spec.resources = client.V1ResourceRequirements(requests={"storage": storage}) - claim_vol.spec = spec - try: - api_core.create_namespaced_persistent_volume_claim(namespace=self._namespace, body=claim_vol) - LOG.info(f'Volume claim: {name} created.') - except ApiException as e: - LOG.error(f'Exception message: {e}') - - def persistent_volume(self, name, storage, accessModes=["ReadWriteOnce"], host_path=True, patch=False): - """Create persistent volume by default on host.""" - ps_vol = client.V1PersistentVolume(kind="PersistentVolume", api_version="v1") - ps_vol.metadata = client.V1ObjectMeta(name=name) - spec = client.V1PersistentVolumeSpec(capacity={"storage": storage}, access_modes=accessModes, storage_class_name=name) - if host_path: - spec.host_path = client.V1HostPathVolumeSource(path=f'/mnt/data/{name}') - ps_vol.spec = spec - try: - api_core.create_persistent_volume(body=ps_vol) - LOG.info(f'Persistent Volume: {name} created.') - except ApiException as e: - if e.status == 409 and patch: - api_core.patch_persistent_volume(name=name, body=ps_vol) - LOG.info(f'PeVolume: {name} patched.') - else: - LOG.error(f'Exception message: {e}') - - def horizontal_scale(self, name, pod_name, pod_kind, max, metric): - """Create horizontal pod scaller, based on metric.""" - api = client.AutoscalingV1Api() - pd_scale = 
client.V1HorizontalPodAutoscaler() - pd_scale.metadata = client.V1ObjectMeta(name=name) - target = client.V1CrossVersionObjectReference(name=pod_name, kind=pod_kind) - spec = client.V1HorizontalPodAutoscalerSpec(min_replicas=1, max_replicas=max, scale_target_ref=target) - status = client.V1HorizontalPodAutoscalerStatus(current_replicas=1, desired_replicas=2) - pd_scale.spec = spec - pd_scale.status = status - try: - api.create_namespaced_horizontal_pod_autoscaler(namespace=self._namespace, body=pd_scale) - LOG.info(f'Persistent Volume: {name} created.') - except ApiException as e: - LOG.error(f'Exception message: {e}') - - def destroy(self): - """No need for the namespace, delete everything.""" - namespace_list = api_core.list_namespace(label_selector='role') - namespaces = [x for x in namespace_list.items if x.metadata.labels['role'] == self._role] - - if len(namespaces) == 0: - namespace = client.V1Namespace() - namespace.metadata = client.V1ObjectMeta(name=self._namespace, labels={'role': self._role}) - api_core.delete_namespace(self._namespace) - LOG.info('Namespace: {self._namespace} deleted.') - else: - LOG.info('Namespace: {self._namespace} exists.') - - -def kubernetes_deployment(_localega, config, deploy, ns, fake_cega, cega_ip, cega_pwd, key_pass): - """Wrap all the kubernetes settings.""" - val = set(["secrets", "sc", "configmap", "cm", "pods", "pd", "services", "svc", "all"]) - set_sc = set(["secrets", "sc", "all"]) - set_cm = set(["configmap", "cm", "all"]) - set_pd = set(["pods", "pd", "all"]) - set_sv = set(["services", "svc", "all"]) - - _here = Path(__file__).parent - config_dir = _here / 'config' - - # Generate Configuration - conf = ConfigGenerator(config_dir, _localega['key']['name'], _localega['email'], ns, _localega['services']) - deploy_lega = LocalEGADeploy(_localega, ns) - if fake_cega: - cega_pass, cega_config_mq, cega_defs_mq = conf.generate_mq_auth() - cega_address = f"amqp://{_localega['cega']['user']}:{cega_pass}@cega-mq.{ns}:5672/{_localega['cega']['user']}" - else: - cega_address = f"amqp://{_localega['cega']['user']}:{cega_pwd}@{cega_ip}:5672/{_localega['cega']['user']}" - - if config: - conf.create_conf_shared() - conf.add_conf_key(_localega['key']['expire'], _localega['key']['id'], comment=_localega['key']['comment'], - passphrase=None, armor=True, active=True) - ssl_cert, ssl_key = conf.generate_ssl_certs(country=_localega['ssl']['country'], country_code=_localega['ssl']['country_code'], - location=_localega['ssl']['location'], org=_localega['ssl']['org'], email=_localega['email']) - - # Setting ENV variables and Volumes - env_cega_api = client.V1EnvVar(name="CEGA_ENDPOINT", value=f"{_localega['cega']['endpoint']}") - env_inbox_mq = client.V1EnvVar(name="BROKER_HOST", value=f"{_localega['services']['broker']}.{ns}") - env_inbox_port = client.V1EnvVar(name="INBOX_PORT", value="2222") - env_db_data = client.V1EnvVar(name="PGDATA", value="/var/lib/postgresql/data/pgdata") - env_cega_mq = client.V1EnvVar(name="CEGA_CONNECTION", - value_from=client.V1EnvVarSource(secret_key_ref=client.V1SecretKeySelector(name='cega-connection', - key="address"))) - env_cega_creds = client.V1EnvVar(name="CEGA_ENDPOINT_CREDS", - value_from=client.V1EnvVarSource(secret_key_ref=client.V1SecretKeySelector(name='cega-creds', - key="credentials"))) - env_acc_minio = client.V1EnvVar(name="MINIO_ACCESS_KEY", - value_from=client.V1EnvVarSource(secret_key_ref=client.V1SecretKeySelector(name='s3-keys', - key="access"))) - env_sec_minio = client.V1EnvVar(name="MINIO_SECRET_KEY", - 
value_from=client.V1EnvVarSource(secret_key_ref=client.V1SecretKeySelector(name='s3-keys', - key="secret"))) - env_acc_s3 = client.V1EnvVar(name="S3_ACCESS_KEY", - value_from=client.V1EnvVarSource(secret_key_ref=client.V1SecretKeySelector(name='s3-keys', - key="access"))) - env_sec_s3 = client.V1EnvVar(name="S3_SECRET_KEY", - value_from=client.V1EnvVarSource(secret_key_ref=client.V1SecretKeySelector(name='s3-keys', - key="secret"))) - env_db_pass = client.V1EnvVar(name="POSTGRES_PASSWORD", - value_from=client.V1EnvVarSource(secret_key_ref=client.V1SecretKeySelector(name='lega-db-secret', - key="postgres_password"))) - env_db_user = client.V1EnvVar(name="POSTGRES_USER", - value_from=client.V1EnvVarSource(config_map_key_ref=client.V1ConfigMapKeySelector(name='lega-db-config', - key="user"))) - env_db_name = client.V1EnvVar(name="POSTGRES_DB", - value_from=client.V1EnvVarSource(config_map_key_ref=client.V1ConfigMapKeySelector(name='lega-db-config', - key="dbname"))) - env_lega_pass = client.V1EnvVar(name="LEGA_PASSWORD", - value_from=client.V1EnvVarSource(secret_key_ref=client.V1SecretKeySelector(name='lega-password', - key="password"))) - env_keys_pass = client.V1EnvVar(name="KEYS_PASSWORD", - value_from=client.V1EnvVarSource(secret_key_ref=client.V1SecretKeySelector(name='keys-password', - key="password"))) - mount_config = client.V1VolumeMount(name="config", mount_path='/etc/ega') - mount_inbox = client.V1VolumeMount(name="inbox", mount_path='/ega/inbox') - mount_mq_temp = client.V1VolumeMount(name="mq-temp", mount_path='/temp') - mount_mq_rabbitmq = client.V1VolumeMount(name="rabbitmq", mount_path='/etc/rabbitmq') - mount_mq_script = client.V1VolumeMount(name="mq-entrypoint", mount_path='/script') - mount_db_data = client.V1VolumeMount(name="data", mount_path='/var/lib/postgresql/data', read_only=False) - mound_db_init = client.V1VolumeMount(name="initsql", mount_path='/docker-entrypoint-initdb.d') - mount_minio = client.V1VolumeMount(name="data", mount_path='/data') - - pmap_ini_conf = client.V1VolumeProjection(config_map=client.V1ConfigMapProjection(name="lega-config", - items=[client.V1KeyToPath(key="conf.ini", path="conf.ini", mode=0o744)])) - pmap_ini_keys = client.V1VolumeProjection(config_map=client.V1ConfigMapProjection(name="lega-keyserver-config", - items=[client.V1KeyToPath(key="keys.ini.enc", - path="keys.ini.enc", mode=0o744)])) - sec_keys = client.V1VolumeProjection(secret=client.V1SecretProjection(name="keyserver-secret", - items=[client.V1KeyToPath(key="key1.sec", path="pgp/key.1"), client.V1KeyToPath(key="ssl.cert", path="ssl.cert"), client.V1KeyToPath(key="ssl.key", path="ssl.key")])) - if set.intersection(set(deploy), val) or fake_cega: - deploy_lega.create_namespace() - deploy_lega.config_secret('cega-creds', {'credentials': 32}) - else: - click.echo("Option not recognised.") - if set.intersection(set(deploy), set_sc): - # Create Secrets - deploy_lega.config_secret('cega-connection', {'address': cega_address}) - deploy_lega.config_secret('lega-db-secret', {'postgres_password': 32}) - deploy_lega.config_secret('s3-keys', {'access': 16, 'secret': 32}) - deploy_lega.config_secret('lega-password', {'password': 32}) - deploy_lega.config_secret('keys-password', {'password': 32}) - with open(_here / 'config/key.1.sec') as key_file: - key1_data = key_file.read() - - deploy_lega.config_secret('keyserver-secret', {'key1.sec': key1_data, - 'ssl.cert': ssl_cert, 'ssl.key': ssl_key}) - if set.intersection(set(deploy), set_cm): - # Read conf from files - with 
open('../../../extras/db.sql') as sql_init: - init_sql = sql_init.read() - - with open(_here / 'scripts/mq.sh') as mq_init: - init_mq = mq_init.read() - - with open('../../docker/images/mq/defs.json') as mq_defs: - defs_mq = mq_defs.read() - - with open('../../docker/images/mq/rabbitmq.config') as mq_config: - config_mq = mq_config.read() - - with open(_here / 'config/conf.ini') as conf_file: - data_conf = conf_file.read() - - with open(_here / 'config/keys.ini') as keys_file: - data_keys = keys_file.read() - - secret = deploy_lega.read_secret('keys-password') - enc_keys = conf.aes_encrypt(b64decode(secret.to_dict()['data']['password'].encode('utf-8')), data_keys.encode('utf-8'), md5) - - with open(_here / 'config/keys.ini.enc', 'w') as enc_file: - enc_file.write(b64encode(enc_keys).decode('utf-8')) - - # Upload Configuration Maps - deploy_lega.config_map('initsql', {'db.sql': init_sql}) - deploy_lega.config_map('mq-config', {'defs.json': defs_mq, 'rabbitmq.config': config_mq}) - deploy_lega.config_map('mq-entrypoint', {'mq.sh': init_mq}) - deploy_lega.config_map('lega-config', {'conf.ini': data_conf}) - deploy_lega.config_map('lega-keyserver-config', {'keys.ini.enc': b64encode(enc_keys).decode('utf-8')}, binary=True) - deploy_lega.config_map('lega-db-config', {'user': 'lega', 'dbname': 'lega'}) - - if set.intersection(set(deploy), set_pd): - # Volumes - deploy_lega.persistent_volume("postgres", "0.5Gi", accessModes=["ReadWriteMany"]) - deploy_lega.persistent_volume("rabbitmq", "0.5Gi") - deploy_lega.persistent_volume("inbox", "0.5Gi", accessModes=["ReadWriteMany"]) - deploy_lega.persistent_volume_claim("db-storage", "postgres", "0.5Gi", accessModes=["ReadWriteMany"]) - deploy_lega.persistent_volume_claim("mq-storage", "rabbitmq", "0.5Gi") - deploy_lega.persistent_volume_claim("inbox", "inbox", "0.5Gi", accessModes=["ReadWriteMany"]) - volume_db = client.V1Volume(name="data", persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(claim_name="db-storage")) - volume_rabbitmq = client.V1Volume(name="rabbitmq", - persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(claim_name="mq-storage")) - volume_db_init = client.V1Volume(name="initsql", config_map=client.V1ConfigMapVolumeSource(name="initsql")) - volume_mq_temp = client.V1Volume(name="mq-temp", config_map=client.V1ConfigMapVolumeSource(name="mq-config")) - volume_mq_script = client.V1Volume(name="mq-entrypoint", config_map=client.V1ConfigMapVolumeSource(name="mq-entrypoint", - default_mode=0o744)) - volume_config = client.V1Volume(name="config", config_map=client.V1ConfigMapVolumeSource(name="lega-config")) - # volume_ingest = client.V1Volume(name="ingest-conf", config_map=client.V1ConfigMapVolumeSource(name="lega-config")) - volume_inbox = client.V1Volume(name="inbox", persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(claim_name="inbox")) - volume_keys = client.V1Volume(name="config", - projected=client.V1ProjectedVolumeSource(sources=[pmap_ini_conf, pmap_ini_keys, sec_keys])) - - pvc_minio = client.V1PersistentVolumeClaim(metadata=client.V1ObjectMeta(name="data"), - spec=client.V1PersistentVolumeClaimSpec(access_modes=["ReadWriteOnce"], - resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}))) - # Deploy LocalEGA Pods - deploy_lega.deployment('keys', 'nbisweden/ega-base:latest', - ["ega-keyserver", "--keys", "/etc/ega/keys.ini.enc"], - [env_lega_pass, env_keys_pass], [mount_config], [volume_keys], ports=[8443], patch=True) - deploy_lega.deployment('db', 'postgres:9.6', None, 
[env_db_pass, env_db_user, env_db_name, env_db_data], - [mount_db_data, mound_db_init], [volume_db, volume_db_init], ports=[5432]) - deploy_lega.deployment('ingest', 'nbisweden/ega-base:latest', ["ega-ingest"], - [env_lega_pass, env_acc_s3, env_sec_s3, env_db_pass], - [mount_config, mount_inbox], [volume_config, volume_inbox]) - - deploy_lega.stateful_set('minio', 'minio/minio:latest', None, [env_acc_minio, env_sec_minio], - [mount_minio], None, args=["server", "/data"], vol_claims=[pvc_minio], ports=[9000]) - - deploy_lega.stateful_set('verify', 'nbisweden/ega-base:latest', ["ega-verify"], - [env_acc_s3, env_sec_s3, env_lega_pass, env_db_pass], [mount_config], [volume_config]) - - deploy_lega.stateful_set('mq', 'rabbitmq:3.6.14-management', ["/script/mq.sh"], - [env_cega_mq], [mount_mq_temp, mount_mq_script, mount_mq_rabbitmq], - [volume_mq_temp, volume_mq_script, volume_rabbitmq], - ports=[15672, 5672, 4369, 25672]) - deploy_lega.stateful_set('inbox', 'nbisweden/ega-mina-inbox:latest', None, - [env_inbox_mq, env_cega_api, env_cega_creds, env_inbox_port], - [mount_inbox], [volume_inbox], ports=[2222]) - - # Ports - ports_db = [client.V1ServicePort(protocol="TCP", port=5432, target_port=5432)] - ports_inbox = [client.V1ServicePort(protocol="TCP", port=2222, target_port=2222)] - ports_s3 = [client.V1ServicePort(name="web", protocol="TCP", port=9000)] - ports_keys = [client.V1ServicePort(protocol="TCP", port=8443, target_port=8443)] - ports_mq_management = [client.V1ServicePort(name="http", protocol="TCP", port=15672, target_port=15672)] - ports_mq = [client.V1ServicePort(name="amqp", protocol="TCP", port=5672, target_port=5672), - client.V1ServicePort(name="epmd", protocol="TCP", port=4369, target_port=4369), - client.V1ServicePort(name="rabbitmq-dist", protocol="TCP", port=25672, target_port=25672)] - - if set.intersection(set(deploy), set_sv): - - # Deploy Services - deploy_lega.service('db', ports_db) - deploy_lega.service('mq-management', ports_mq_management, pod_name="mq", type="NodePort") - deploy_lega.service('mq', ports_mq) - deploy_lega.service('keys', ports_keys) - deploy_lega.service('inbox', ports_inbox, type="NodePort") - deploy_lega.service('minio', ports_s3) # Headless - deploy_lega.service('minio-service', ports_s3, pod_name="minio", type="LoadBalancer") - - if set.intersection(set(deploy), set(["scale"])): - metric_cpu = client.V2beta1MetricSpec(type="Resource", - resource=client.V2beta1ResourceMetricSource(name="cpu", target_average_utilization=50)) - deploy_lega.horizontal_scale("ingest", "ingest", "Deployment", 5, [metric_cpu]) - - if fake_cega: - deploy_fake_cega(deploy_lega, _here, conf, cega_config_mq, cega_defs_mq, ports_mq, ports_mq_management, key_pass) - - -def deploy_fake_cega(deploy_lega, _here, conf, cega_config_mq, cega_defs_mq, ports_mq, ports_mq_management, key_pass): - """Deploy the Fake CEGA.""" - user_pub = conf.generate_user_auth(key_pass) - with open(_here / 'scripts/server.py') as users_init: - init_users = users_init.read() - - with open('../../docker/images/cega/users.html') as user_list: - users = user_list.read() - - with open(_here / 'scripts/cega-mq.sh') as ceg_mq_init: - cega_init_mq = ceg_mq_init.read() - - deploy_lega.config_map('users-config', {'server.py': init_users, 'users.html': users, - 'ega-box-999.yml': f'---\npubkey: {user_pub}'}) - env_users_inst = client.V1EnvVar(name="LEGA_INSTANCES", value="lega") - env_users_creds = client.V1EnvVar(name="CEGA_REST_lega_PASSWORD", - 
value_from=client.V1EnvVarSource(secret_key_ref=client.V1SecretKeySelector(name='cega-creds', - key="credentials"))) - mount_users = client.V1VolumeMount(name="users-config", mount_path='/cega') - users_map = client.V1ConfigMapProjection(name="users-config", - items=[client.V1KeyToPath(key="server.py", path="server.py"), - client.V1KeyToPath(key="users.html", path="users.html"), - client.V1KeyToPath(key="ega-box-999.yml", path="users/ega-box-999.yml"), - client.V1KeyToPath(key="ega-box-999.yml", path="users/lega/ega-box-999.yml")]) - users_vol = client.V1VolumeProjection(config_map=users_map) - volume_users = client.V1Volume(name="users-config", - projected=client.V1ProjectedVolumeSource(sources=[users_vol])) - - deploy_lega.config_map('cega-mq-entrypoint', {'cega-mq.sh': cega_init_mq}) - deploy_lega.config_map('cega-mq-config', {'defs.json': cega_defs_mq, 'rabbitmq.config': cega_config_mq}) - deploy_lega.persistent_volume("cega-rabbitmq", "1Gi") - deploy_lega.persistent_volume_claim("cega-mq-storage", "cega-rabbitmq", "1Gi") - mount_cega_temp = client.V1VolumeMount(name="cega-mq-temp", mount_path='/temp') - mount_cega_rabbitmq = client.V1VolumeMount(name="cega-rabbitmq", mount_path='/etc/rabbitmq') - volume_cega_temp = client.V1Volume(name="cega-mq-temp", config_map=client.V1ConfigMapVolumeSource(name="cega-mq-config")) - volume_cega_rabbitmq = client.V1Volume(name="cega-rabbitmq", - persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(claim_name="cega-mq-storage")) - mount_mq_cega = client.V1VolumeMount(name="cega-mq-entrypoint", mount_path='/script') - volume_mq_cega = client.V1Volume(name="cega-mq-entrypoint", config_map=client.V1ConfigMapVolumeSource(name="cega-mq-entrypoint", - default_mode=0o744)) - - deploy_lega.stateful_set('cega-mq', 'rabbitmq:3.6.14-management', ["/script/cega-mq.sh"], None, - [mount_cega_temp, mount_mq_cega, mount_cega_rabbitmq], - [volume_cega_temp, volume_mq_cega, volume_cega_rabbitmq], - ports=[15672, 5672, 4369, 25672]) - - deploy_lega.deployment('cega-users', 'python:3.6-alpine3.7', ["/bin/sh", "-c"], - [env_users_inst, env_users_creds], - [mount_users], [volume_users], - args=["pip install PyYAML aiohttp aiohttp_jinja2; python /cega/server.py"], - ports=[8001]) - ports_users = [client.V1ServicePort(protocol="TCP", port=8001, target_port=8001)] - deploy_lega.service('cega-mq', ports_mq, type="NodePort") - deploy_lega.service('cega-mq-management', ports_mq_management, pod_name="cega-mq", type="NodePort") - deploy_lega.service('cega-users', ports_users, type="NodePort") diff --git a/deployments/kube/auto/requirements.txt b/deployments/kube/auto/requirements.txt deleted file mode 100644 index bd92cd8b..00000000 --- a/deployments/kube/auto/requirements.txt +++ /dev/null @@ -1,4 +0,0 @@ -kubernetes -cryptography -PGPy==0.4.3 -click==6.7 diff --git a/deployments/kube/auto/scripts/cega-mq.sh b/deployments/kube/auto/scripts/cega-mq.sh deleted file mode 100644 index 17832108..00000000 --- a/deployments/kube/auto/scripts/cega-mq.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/bash - -set -e - -# Initialization -rabbitmq-plugins enable --offline rabbitmq_federation -rabbitmq-plugins enable --offline rabbitmq_federation_management -rabbitmq-plugins enable --offline rabbitmq_shovel -rabbitmq-plugins enable --offline rabbitmq_shovel_management - -cp --remove-destination /temp/rabbitmq.config /etc/rabbitmq/rabbitmq.config -cp --remove-destination /temp/defs.json /etc/rabbitmq/defs.json -chmod 640 /etc/rabbitmq/rabbitmq.config -chmod 640 /etc/rabbitmq/defs.json - 
-exec rabbitmq-server diff --git a/deployments/kube/auto/scripts/mq.sh b/deployments/kube/auto/scripts/mq.sh deleted file mode 100644 index a8b81193..00000000 --- a/deployments/kube/auto/scripts/mq.sh +++ /dev/null @@ -1,93 +0,0 @@ -#!/bin/bash - -set -e -set -x - -[[ -z "${CEGA_CONNECTION}" ]] && echo 'Environment CEGA_CONNECTION is empty' 1>&2 && exit 1 - -# Initialization -cp --remove-destination /temp/rabbitmq.config /etc/rabbitmq/rabbitmq.config -cp --remove-destination /temp/defs.json /etc/rabbitmq/defs.json -rabbitmq-plugins enable --offline rabbitmq_federation -rabbitmq-plugins enable --offline rabbitmq_federation_management -rabbitmq-plugins enable --offline rabbitmq_shovel -rabbitmq-plugins enable --offline rabbitmq_shovel_management - -chmod 640 /etc/rabbitmq/rabbitmq.config -chmod 640 /etc/rabbitmq/defs.json - -# Problem of loading the plugins and definitions out-of-orders. -# Explanation: https://github.com/rabbitmq/rabbitmq-shovel/issues/13 -# Therefore: we run the server, with some default confs -# and then we upload the cega-definitions through the HTTP API - -# We cannot add those definitions to defs.json (loaded by the -# management plugin. See /etc/rabbitmq/rabbitmq.config) -# So we use curl afterwards, to upload the extras definitions -# See also https://pulse.mozilla.org/api/ - -# dest-exchange-key is not set for the shovel, so the key is re-used. - -# For the moment, still using guest:guest -cat > /etc/rabbitmq/defs-cega.json <&1 && exit 1 - - ROUND=30 - until rabbitmqadmin import /etc/rabbitmq/defs-cega.json || ((ROUND<0)) - do - sleep 1 - $((ROUND--)) - done - ((ROUND<0)) && echo "Central EGA connections *_not_* loaded" 2>&1 && exit 1 - echo "Central EGA connections loaded" -} & - -exec rabbitmq-server diff --git a/deployments/kube/auto/scripts/server.py b/deployments/kube/auto/scripts/server.py deleted file mode 100644 index dab56f26..00000000 --- a/deployments/kube/auto/scripts/server.py +++ /dev/null @@ -1,110 +0,0 @@ -#!/usr/bin/env python3.6 -# -*- coding: utf-8 -*- - -''' -Test server to act as CentralEGA endpoint for users - -:author: Frédéric Haziza -:copyright: (c) 2017, NBIS System Developers. 
-''' - -import sys -import os -import asyncio -import ssl -import yaml -from pathlib import Path -from functools import wraps -from base64 import b64decode - -import logging as LOG - -from aiohttp import web -import jinja2 -import aiohttp_jinja2 - -instances = {} -for instance in os.environ.get('LEGA_INSTANCES','').strip().split(','): - instances[instance] = (Path(f'/cega/users/{instance}'), os.environ[f'CEGA_REST_{instance}_PASSWORD']) -default_inst = os.environ.get('DEFAULT_INSTANCE','lega') - -def protected(func): - @wraps(func) - def wrapped(request): - auth_header = request.headers.get('AUTHORIZATION') - if not auth_header: - raise web.HTTPUnauthorized(text=f'Protected access\n') - _, token = auth_header.split(None, 1) # Skipping the Basic keyword - passwd = b64decode(token).decode() - info = instances.get(default_inst) - if info is not None and info[1] == passwd: - request.match_info['lega'] = default_inst - request.match_info['users_dir'] = info[0] - return func(request) - raise web.HTTPUnauthorized(text=f'Protected access\n') - return wrapped - - -@aiohttp_jinja2.template('users.html') -async def index(request): - users={} - for instance, (users_dir, _) in instances.items(): - users[instance]= {} - files = [f for f in users_dir.iterdir() if f.is_file()] - for f in files: - with open(f, 'r') as stream: - users[instance][f.stem] = yaml.load(stream) - return { "cega_users": users } - -@protected -async def user(request): - name = request.match_info['id'] - lega_instance = request.match_info['lega'] - users_dir = request.match_info['users_dir'] - try: - with open(f'{users_dir}/{name}.yml', 'r') as stream: - d = yaml.load(stream) - json_data = { 'password_hash': d.get("password_hash",None), 'pubkey': d.get("pubkey",None), 'expiration': d.get("expiration",None) } - return web.json_response(json_data) - except OSError: - raise web.HTTPBadRequest(text=f'No info for that user {name} in LocalEGA {lega_instance}... yet\n') - -# Unprotected access -async def pgp_public_key(request): - name = request.match_info['id'] - try: - with open(f'/cega/users/pgp/{name}.pub', 'r') as stream: # 'rb' - return web.Response(text=stream.read()) # .hex() - except OSError: - raise web.HTTPBadRequest(text=f'No info about {name} in CentralEGA... yet\n') - -def main(): - - host = sys.argv[1] if len(sys.argv) > 1 else "0.0.0.0" - - # ssl_certfile = Path(CONF.get('keyserver', 'ssl_certfile')).expanduser() - # ssl_keyfile = Path(CONF.get('keyserver', 'ssl_keyfile')).expanduser() - # LOG.debug(f'Certfile: {ssl_certfile}') - # LOG.debug(f'Keyfile: {ssl_keyfile}') - - # sslcontext = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) - # sslcontext.check_hostname = False - # sslcontext.load_cert_chain(ssl_certfile, ssl_keyfile) - sslcontext = None - - loop = asyncio.get_event_loop() - server = web.Application(loop=loop) - - template_loader = jinja2.FileSystemLoader("/cega") - aiohttp_jinja2.setup(server, loader=template_loader) - - # Registering the routes - server.router.add_get( '/' , index, name='root') - server.router.add_get( '/user/{id}', user , name='user') - server.router.add_get( '/pgp/{id}' , pgp_public_key, name='pgp') - - # And ...... cue music! 
- web.run_app(server, host=host, port=8001, shutdown_timeout=0, ssl_context=sslcontext) - -if __name__ == '__main__': - main() diff --git a/deployments/kube/oc/inbox/oc-inbox.yml b/deployments/kube/oc/inbox/oc-inbox.yml deleted file mode 100644 index a85dfa67..00000000 --- a/deployments/kube/oc/inbox/oc-inbox.yml +++ /dev/null @@ -1,60 +0,0 @@ -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: inbox - labels: - role: inbox - app: LocalEGA -spec: - replicas: 1 - serviceName: inbox - selector: - matchLabels: - app: inbox - template: - metadata: - labels: - app: inbox - role: inbox - spec: - containers: - - name: inbox - image: nbisweden/ega-mina-inbox:latest - imagePullPolicy: Always - env: - - name: BROKER_HOST - valueFrom: - configMapKeyRef: - name: lega-inbox - key: broker - - name: INBOX_PORT - value: "2222" - - name: CEGA_ENDPOINT - valueFrom: - configMapKeyRef: - name: lega-inbox - key: cega_endpoint - - name: CEGA_ENDPOINT_CREDS - valueFrom: - secretKeyRef: - name: cega-creds - key: credentials - ports: - - name: inbox - containerPort: 2222 - protocol: TCP - livenessProbe: - httpGet: - path: /healthcheck - port: 8080 - scheme: HTTP - initialDelaySeconds: 120 - periodSeconds: 20 - volumeMounts: - - name: lega-inbox - mountPath: /ega/inbox - # restartPolicy: Always - volumes: - - name: lega-inbox - persistentVolumeClaim: - claimName: inbox-storage diff --git a/deployments/kube/oc/keys/oc-keyserver.yml b/deployments/kube/oc/keys/oc-keyserver.yml deleted file mode 100644 index 92d28934..00000000 --- a/deployments/kube/oc/keys/oc-keyserver.yml +++ /dev/null @@ -1,83 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - labels: - app: LocalEGA - role: keyserver - name: keys - namespace: lega -spec: - replicas: 1 - selector: - matchLabels: - app: keys - strategy: - rollingUpdate: - maxSurge: 1 - maxUnavailable: 1 - type: RollingUpdate - template: - metadata: - creationTimestamp: null - labels: - app: keys - spec: - containers: - - command: - - ega-keyserver - - '--keys' - - /etc/ega/keys.ini - env: - - name: LEGA_PASSWORD - valueFrom: - secretKeyRef: - key: password - name: lega-password - - name: KEYS_PASSWORD - valueFrom: - secretKeyRef: - key: password - name: keys-password - image: 'nbisweden/ega-base:latest' - imagePullPolicy: Always - name: keys - ports: - - containerPort: 8443 - protocol: TCP - livenessProbe: - httpGet: - path: /health - port: 8443 - scheme: HTTPS - initialDelaySeconds: 120 - periodSeconds: 20 - volumeMounts: - - mountPath: /etc/ega - name: config - - volumes: - - name: config - projected: - defaultMode: 420 - sources: - - configMap: - items: - - key: conf.ini - mode: 484 - path: conf.ini - name: lega-config - - configMap: - items: - - key: keys.ini - mode: 484 - path: keys.ini - name: lega-keyserver-config - - secret: - items: - - key: key1.sec - path: pgp/key.1 - - key: ssl.cert - path: ssl.cert - - key: ssl.key - path: ssl.key - name: keyserver-secret diff --git a/deployments/kube/oc/postgresql/README.md b/deployments/kube/oc/postgresql/README.md deleted file mode 100644 index 72d3178a..00000000 --- a/deployments/kube/oc/postgresql/README.md +++ /dev/null @@ -1,46 +0,0 @@ -## Openshift PostgreSQL - -There are several ways of customising the PostgreSQL Pod, here we illustrate only some of them. - -### s2i build - -Creating an extension of the current image using: https://github.com/sclorg/postgresql-container/tree/generated/9.6#extending-image -instructions. 
-In order to achieve this, we will need to create a custom `postgresql-start/` script with the following contents: - -```bash -#!/bin/bash - -psql -U postgres -d $POSTGRESQL_DATABASE -c "CREATE EXTENSION IF NOT EXISTS pgcrypto;" -psql -U $POSTGRESQL_USER -d $POSTGRESQL_DATABASE -c "\i /scripts/db.sql" -``` - -The `/scripts` folder contains the `db.sql` as a configMap, e.g.: - -``` -... -- mountPath: /scripts - name: initdb - -... -- name: initdb - configMap: - name: initsql - items: - - key: db.sql - path: db.sql -``` - -After this, we can create our own Docker image following the instructions presented above: -``` -$ s2i build ~/image-configuration/ postgresql new-postgresql -``` - -### configMap - -> user-provided files are preferred over the default files in `/usr/share/container-scripts/`, so it is possible to overwrite them. - -Given this overwrite option, we can put the contents of https://github.com/sclorg/postgresql-container/tree/generated/9.6/root/usr/share/container-scripts/postgresql -into our own configMap, overwriting the `set_passwords.sh` script to contain the commands we need. - -Examples of the `Deployment` and `DeploymentConfig` YAML files are present in this directory. diff --git a/deployments/kube/oc/postgresql/db.sql b/deployments/kube/oc/postgresql/db.sql deleted file mode 100644 index 2f1b3c66..00000000 --- a/deployments/kube/oc/postgresql/db.sql +++ /dev/null @@ -1,63 +0,0 @@ -\connect lega - -SET TIME ZONE 'Europe/Stockholm'; - -CREATE TYPE status AS ENUM ('Received', 'In progress', 'Completed', 'Archived', 'Error'); --- CREATE TYPE hash_algo AS ENUM ('md5', 'sha256'); - --- ################################################## --- FILES --- ################################################## -CREATE TABLE IF NOT EXISTS files ( - id SERIAL, PRIMARY KEY(id), UNIQUE (id), - elixir_id TEXT NOT NULL, - inbox_path TEXT NOT NULL, - status status, - vault_path TEXT, - vault_filesize INTEGER, - stable_id TEXT, - header TEXT, -- crypt4gh - created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT clock_timestamp(), - last_modified TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT clock_timestamp() -); - -CREATE FUNCTION insert_file(inpath files.inbox_path%TYPE, - eid files.elixir_id%TYPE, - sid files.stable_id%TYPE, - status files.status%TYPE) - RETURNS files.id%TYPE AS $insert_file$ - #variable_conflict use_column - DECLARE - file_id files.id%TYPE; - BEGIN - INSERT INTO files (inbox_path,elixir_id,stable_id,status) - VALUES(inpath,eid,sid,status) RETURNING files.id - INTO file_id; - RETURN file_id; - END; -$insert_file$ LANGUAGE plpgsql; - --- ################################################## --- ERRORS --- ################################################## -CREATE TABLE IF NOT EXISTS errors ( - id SERIAL, PRIMARY KEY(id), UNIQUE (id), - file_id INTEGER REFERENCES files (id) ON DELETE CASCADE, - hostname TEXT, - error_type TEXT NOT NULL, - msg TEXT NOT NULL, - from_user BOOLEAN DEFAULT FALSE, - occured_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT clock_timestamp() -); - -CREATE FUNCTION insert_error(fid errors.file_id%TYPE, - h errors.hostname%TYPE, - etype errors.error_type%TYPE, - msg errors.msg%TYPE, - from_user errors.from_user%TYPE) - RETURNS void AS $set_error$ - BEGIN - INSERT INTO errors (file_id,hostname,error_type,msg,from_user) VALUES(fid,h,etype,msg,from_user); - UPDATE files SET status = 'Error' WHERE id = fid; - END; -$set_error$ LANGUAGE plpgsql; diff --git a/deployments/kube/oc/postgresql/oc-postgresql-dc.yml
b/deployments/kube/oc/postgresql/oc-postgresql-dc.yml deleted file mode 100644 index 0e52cf27..00000000 --- a/deployments/kube/oc/postgresql/oc-postgresql-dc.yml +++ /dev/null @@ -1,102 +0,0 @@ -apiVersion: v1 -kind: DeploymentConfig -metadata: - name: postgresql - labels: - role: database - app: LocalEGA -spec: - replicas: 1 - selector: - app: postgresql - template: - metadata: - labels: - app: postgresql - spec: - containers: - - name: postgresql - image: centos/postgresql-96-centos7 - imagePullPolicy: IfNotPresent - env: - - name: POSTGRESQL_USER - valueFrom: - configMapKeyRef: - name: lega-db-config - key: user - - name: POSTGRESQL_PASSWORD - valueFrom: - secretKeyRef: - name: lega-db-secret - key: postgres_password - - name: POSTGRESQL_DATABASE - valueFrom: - configMapKeyRef: - name: lega-db-config - key: dbname - ports: - - name: postgres - containerPort: 5432 - volumeMounts: - - name: data - mountPath: /var/lib/pgsql/data - - mountPath: /usr/share/container-scripts - name: initdb - livenessProbe: - exec: - command: - - /bin/sh - - '-i' - - '-c' - - pg_isready -h 127.0.0.1 -p 5432 - failureThreshold: 3 - initialDelaySeconds: 30 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 - readinessProbe: - exec: - command: - - /bin/sh - - '-i' - - '-c' - - >- - psql -h 127.0.0.1 -U $POSTGRESQL_USER -q -d - $POSTGRESQL_DATABASE -c 'SELECT 1' - failureThreshold: 3 - initialDelaySeconds: 5 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 - volumes: - - name: data - persistentVolumeClaim: - claimName: db-storage - - name: initdb - configMap: - name: initsql - items: - - key: set_passwords.sh - path: postgresql/start/set_passwords.sh - - key: db.sql - path: db.sql - - key: common.sh - path: postgresql/common.sh - - key: scl_enable - path: postgresql/scl_enable - - key: openshift-custom-recovery.conf.template - path: postgresql/openshift-custom-recovery.conf.template - - key: openshift-custom-postgresql.conf.template - path: postgresql/openshift-custom-postgresql.conf.template - - key: openshift-custom-postgresql-replication.conf.template - path: postgresql/openshift-custom-postgresql-replication.conf.template - triggers: - - type: ConfigChange - strategy: - type: Rolling - rollingParams: - intervalSeconds: 1 - maxSurge: 25% - maxUnavailable: 25% - timeoutSeconds: 600 - updatePeriodSeconds: 1 diff --git a/deployments/kube/oc/postgresql/oc-postgresql.yml b/deployments/kube/oc/postgresql/oc-postgresql.yml deleted file mode 100644 index bd2440a4..00000000 --- a/deployments/kube/oc/postgresql/oc-postgresql.yml +++ /dev/null @@ -1,94 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: postgresql - labels: - role: database - app: LocalEGA -spec: - replicas: 1 - selector: - matchLabels: - app: postgresql - template: - metadata: - labels: - app: postgresql - role: database - spec: - containers: - - name: postgresql - image: centos/postgresql-96-centos7 - imagePullPolicy: IfNotPresent - env: - - name: POSTGRESQL_USER - valueFrom: - configMapKeyRef: - name: lega-db-config - key: user - - name: POSTGRESQL_PASSWORD - valueFrom: - secretKeyRef: - name: lega-db-secret - key: postgres_password - - name: POSTGRESQL_DATABASE - valueFrom: - configMapKeyRef: - name: lega-db-config - key: dbname - ports: - - name: postgres - containerPort: 5432 - volumeMounts: - - name: data - mountPath: /var/lib/pgsql/data - - mountPath: /usr/share/container-scripts - name: initdb - livenessProbe: - exec: - command: - - /bin/sh - - '-i' - - '-c' - - pg_isready -h 127.0.0.1 -p 5432 - 
failureThreshold: 3 - initialDelaySeconds: 30 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 - readinessProbe: - exec: - command: - - /bin/sh - - '-i' - - '-c' - - >- - psql -h 127.0.0.1 -U $POSTGRESQL_USER -q -d - $POSTGRESQL_DATABASE -c 'SELECT 1' - failureThreshold: 3 - initialDelaySeconds: 5 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 - volumes: - - name: data - persistentVolumeClaim: - claimName: db-storage - - name: initdb - configMap: - name: initsql - items: - - key: set_passwords.sh - path: postgresql/start/set_passwords.sh - - key: db.sql - path: db.sql - - key: common.sh - path: postgresql/common.sh - - key: scl_enable - path: postgresql/scl_enable - - key: openshift-custom-recovery.conf.template - path: postgresql/openshift-custom-recovery.conf.template - - key: openshift-custom-postgresql.conf.template - path: postgresql/openshift-custom-postgresql.conf.template - - key: openshift-custom-postgresql-replication.conf.template - path: postgresql/openshift-custom-postgresql-replication.conf.template diff --git a/deployments/kube/oc/postgresql/set_passwords.sh b/deployments/kube/oc/postgresql/set_passwords.sh deleted file mode 100644 index 387a46ef..00000000 --- a/deployments/kube/oc/postgresql/set_passwords.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash - -if [[ ",$postinitdb_actions," = *,simple_db,* ]]; then -psql --command "ALTER USER \"${POSTGRESQL_USER}\" WITH ENCRYPTED PASSWORD '${POSTGRESQL_PASSWORD}';" -fi - -if [ -v POSTGRESQL_MASTER_USER ]; then -psql --command "ALTER USER \"${POSTGRESQL_MASTER_USER}\" WITH REPLICATION;" -psql --command "ALTER USER \"${POSTGRESQL_MASTER_USER}\" WITH ENCRYPTED PASSWORD '${POSTGRESQL_MASTER_PASSWORD}';" -fi - -if [ -v POSTGRESQL_ADMIN_PASSWORD ]; then -psql --command "ALTER USER \"postgres\" WITH ENCRYPTED PASSWORD '${POSTGRESQL_ADMIN_PASSWORD}';" -fi - -psql -U postgres -d $POSTGRESQL_DATABASE -c "CREATE EXTENSION IF NOT EXISTS pgcrypto;" -psql -U $POSTGRESQL_USER -d $POSTGRESQL_DATABASE -c "\i /usr/share/container-scripts/db.sql" diff --git a/deployments/kube/test/.gitignore b/deployments/kube/test/.gitignore deleted file mode 100644 index 5015af38..00000000 --- a/deployments/kube/test/.gitignore +++ /dev/null @@ -1,5 +0,0 @@ -*.bam -*.c4ga -*.c4ga.md5 -*.md5 -mq.env diff --git a/deployments/kube/test/Makefile b/deployments/kube/test/Makefile deleted file mode 100644 index bc599325..00000000 --- a/deployments/kube/test/Makefile +++ /dev/null @@ -1,50 +0,0 @@ -.PHONY: upload submit user - -# folder for the localegarepo -MAIN_REPO=~/LocalEGA -# the RSA keys -SSH_KEY_PRIV=$(MAIN_REPO)/deployments/kube/auto/config/user.key - -USER=ega-box-999 -# make sure you have this file or generate a new one -FILE=HG00458.unmapped.ILLUMINA.bwa.CHS.low_coverage.20130415.bam - -############################## - -DOCKER_PATH=$(MAIN_REPO)/deployments/kube -PGP_PUB=$(DOCKER_PATH)/auto/config/key.1.pub -# should be changed with remote ip, this is the default ip for kubernetes -DEPLOY_IP=$(shell minikube ip) -PGP_EMAIL=local-ega@ega.eu -CEGA_PORT=$(shell kubectl describe svc cega-mq --namespace=testing | grep "NodePort:" | awk '$$2=="amqp" {print substr($$3,1,5)}') -INBOX_PORT=$(shell kubectl describe svc inbox --namespace=testing | grep "NodePort:" | awk '{print substr($$3,1,5)}') -#needs the same password that the Lega-MQ conencted to CEGA-MQ -CEGA_PASWORD=$(shell kubectl get secrets cega-connection --namespace=testing -o 'go-template={{index .data "address"}}' | base64 -d | awk '{print substr($$1,13,32)}') 
-CEGA_MQ_CONNECTION=amqp://lega:$(CEGA_PASWORD)@$(DEPLOY_IP):$(CEGA_PORT)/lega - -############################## - -all: upload submit - -$(FILE).c4ga: $(FILE) - lega-cryptor encrypt --pk $(PGP_PUB) -i $< -o $@ - -# lega-cryptor encrypt -r Sweden -i $< -o $@ - -upload: $(FILE).c4ga - chmod 400 $(SSH_KEY_PRIV) - cd $( $@ - -$(FILE).md5: $(FILE) - printf '%s' $(shell md5sum $< | cut -d' ' -f1) > $@ - -submit: $(FILE).c4ga $(FILE).c4ga.md5 $(FILE).md5 - @echo publish.py --connection amqp://[redacted]@$(lastword $(subst @, ,$(CEGA_MQ_CONNECTION))) $(USER) dir/$(FILE).c4ga --enc ... - @python $(MAIN_REPO)/extras/publish.py --connection $(subst cega-mq,localhost,$(CEGA_MQ_CONNECTION)) $(USER) $(FILE).c4ga --enc $(shell cat $(FILE).c4ga.md5) --stableID EGAF$(shell cat $(FILE).md5) - -clean: - rm -rf $(FILE).c4ga $(FILE).c4ga.md5 $(FILE).md5 diff --git a/deployments/kube/test/README.md b/deployments/kube/test/README.md deleted file mode 100644 index 48464e71..00000000 --- a/deployments/kube/test/README.md +++ /dev/null @@ -1,48 +0,0 @@ -## Testing script - -### Version 1 - -The testing script replicates the upload and submission workflow of an end user. -If you are running the script after the "Somewhat Easy" deployment with the `deploy.py` script, -use the same password provided for the CEGA Users RSA key, or set your own in the `Makefile`. -Also, `MAIN_REPO=~/LocalEGA` should reflect the path to the LocalEGA project. - -The actual test: -``` -pip install -r requirements.txt -make upload -make submit -``` - -Another option: `make clean` removes the generated files. - -### Version 2 - -A Python version of the script, to be used in scenarios where `sftp` is not restricted. -It can be used with the `docker pull blankdots/docker-browsepy:ftp` image. - -``` -pip install -r requirements.txt -python sftp.py input.file -``` - -Other options available: -```console -╰─$ python sftp.py --help -usage: sftp.py [-h] [--u U] [--uk UK] [--pk PK] [--inbox INBOX] [--cm CM] - input - -Encrypting, uploading to inbox and sending message to CEGA. - -positional arguments: - input Input file to be encrypted. - -optional arguments: - -h, --help show this help message and exit - --u U Username to identify the elixir. - --uk UK User secret private RSA key. - --pk PK Public key file to encrypt file.
- --inbox INBOX Inbox address, or service name - --cm CM CEGA MQ broker address - -``` diff --git a/deployments/kube/test/requirements.txt b/deployments/kube/test/requirements.txt deleted file mode 100644 index ed5bb263..00000000 --- a/deployments/kube/test/requirements.txt +++ /dev/null @@ -1,3 +0,0 @@ -pika -paramiko -git+https://github.com/NBISweden/LocalEGA-cryptor.git diff --git a/deployments/kube/test/sftp.py b/deployments/kube/test/sftp.py deleted file mode 100644 index 5114bf41..00000000 --- a/deployments/kube/test/sftp.py +++ /dev/null @@ -1,134 +0,0 @@ -import paramiko -import os -import pika -import secrets -from hashlib import md5 -import json -import string -import uuid -import logging -from legacryptor.crypt4gh import encrypt -import pgpy -import argparse - - -FORMAT = '[%(asctime)s][%(name)s][%(process)d %(processName)s][%(levelname)-8s] (L:%(lineno)s) %(funcName)s: %(message)s' -logging.basicConfig(format=FORMAT, datefmt='%Y-%m-%d %H:%M:%S') -LOG = logging.getLogger(__name__) -LOG.setLevel(logging.INFO) - - -def open_ssh_connection(hostname, user, key_path, key_pass='password', port=2222): - """Open an ssh connection, test function.""" - try: - client = paramiko.SSHClient() - k = paramiko.RSAKey.from_private_key_file(key_path, password=key_pass) - client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) - client.connect(hostname, allow_agent=False, look_for_keys=False, port=port, timeout=0.3, username=user, pkey=k) - LOG.info(f'ssh connected to {hostname}:{port} with {user}') - except paramiko.BadHostKeyException as e: - LOG.error(f'Something went wrong {e}') - raise Exception('BadHostKeyException on ' + hostname) - except paramiko.AuthenticationException as e: - LOG.error(f'Something went wrong {e}') - raise Exception('AuthenticationException on ' + hostname) - except paramiko.SSHException as e: - LOG.error(f'Something went wrong {e}') - raise Exception('SSHException on ' + hostname) - - return client - - -def sftp_upload(hostname, user, file_path, key_path, key_pass='password', port=2222): - """SFTP Client file upload.""" - try: - k = paramiko.RSAKey.from_private_key_file(key_path, password=key_pass) - transport = paramiko.Transport((hostname, port)) - transport.connect(username=user, pkey=k) - LOG.info(f'sftp connected to {hostname}:{port} with {user}') - sftp = paramiko.SFTPClient.from_transport(transport) - filename, _ = os.path.splitext(file_path) - sftp.put(file_path, f'{filename}.c4ga') - LOG.info(f'file uploaded {filename}.c4ga') - except Exception as e: - LOG.error(f'Something went wrong {e}') - raise e - finally: - LOG.debug('sftp done') - transport.close() - - -def submit_cega(connection, user, file_path, c4ga_md5, file_md5=None): - """Submit message to CEGA along with.""" - stableID = ''.join(secrets.choice(string.digits) for i in range(16)) - message = {'user': user, 'filepath': file_path, 'stable_id': f'EGA_{stableID}'} - if c4ga_md5: - message['encrypted_integrity'] = {'checksum': c4ga_md5, 'algorithm': 'md5'} - if file_md5: - message['unencrypted_integrity'] = {'checksum': file_md5, 'algorithm': 'md5'} - - try: - parameters = pika.URLParameters(connection) - connection = pika.BlockingConnection(parameters) - channel = connection.channel() - channel.basic_publish(exchange='localega.v1', routing_key='files', - body=json.dumps(message), - properties=pika.BasicProperties(correlation_id=str(uuid.uuid4()), - content_type='application/json', - delivery_mode=2)) - - connection.close() - LOG.info('Message published to CentralEGA') - except Exception as 
e: - LOG.error(f'Something went wrong {e}') - raise e - - -def encrypt_file(file_path, pubkey): - """Encrypt file and extract its md5.""" - file_size = os.path.getsize(file_path) - filename, _ = os.path.splitext(file_path) - output_base = os.path.basename(filename) - c4ga_md5 = None - output_file = os.path.expanduser(f'{output_base}.c4ga') - - try: - encrypt(pubkey, open(file_path, 'rb'), file_size, open(f'{output_base}.c4ga', 'wb')) - with open(output_file, 'rb') as read_file: - c4ga_md5 = md5(read_file.read()).hexdigest() - LOG.info(f'File {output_base}.c4ga is the encrypted file with md5: {c4ga_md5}.') - except Exception as e: - LOG.error(f'Something went wrong {e}') - raise e - return (output_file, c4ga_md5) - - -def main(): - """Do the sparkles and fireworks.""" - parser = argparse.ArgumentParser(description="Encrypting, uploading to inbox and sending message to CEGA.") - - parser.add_argument('input', help='Input file to be encrypted.') - parser.add_argument('--u', help='Username to identify the elixir.', default='ega-box-999') - parser.add_argument('--uk', help='User secret private RSA key.', default='/files/user.key') - parser.add_argument('--pk', help='Public key file to encrypt file.', default='/files/key.1.pub') - parser.add_argument('--inbox', help='Inbox address, or service name', default='inbox.lega.svc') - parser.add_argument('--cm', help='CEGA MQ broker address') - - args = parser.parse_args() - - used_file = os.path.expanduser(args.input) - key_pk = os.path.expanduser(args.uk) - pub_key, _ = pgpy.PGPKey.from_file(os.path.expanduser(args.pk)) - - inbox_host = args.inbox - test_user = args.u - connection = args.cm if args.cm else os.environ.get('CEGA_MQ', None) - test_file, c4ga_md5 = encrypt_file(used_file, pub_key) - if c4ga_md5: - sftp_upload(inbox_host, test_user, test_file, key_pk) - submit_cega(connection, test_user, test_file, c4ga_md5) - LOG.info('Should be all!') - - -if __name__ == '__main__': - main() diff --git a/deployments/kube/yml/cega-mq/cm.cega-mq.yml b/deployments/kube/yml/cega-mq/cm.cega-mq.yml deleted file mode 100644 index ab799d4a..00000000 --- a/deployments/kube/yml/cega-mq/cm.cega-mq.yml +++ /dev/null @@ -1,57 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: cega-mq-config -data: - rabbitmq.config: |- - %% -*- mode: erlang -*- - %% - [{rabbit,[{loopback_users, [ ] }, - {disk_free_limit, "1GB"}]}, - {rabbitmq_management, [ {load_definitions, "/etc/rabbitmq/defs.json"} ]} - ]. 
- defs.json: |- - {"rabbit_version":"3.6.11", - "users":[{"name":"lega","password_hash":"bBclB1yTaQgScFULP47XSj8XiBq45/j3DJ6jx52zLikx20gG","hashing_algorithm":"rabbit_password_hashing_sha256","tags":"administrator"}], - "vhosts":[{"name":"lega"}], - "permissions":[{"user":"lega", "vhost":"lega", "configure":".*", "write":".*", "read":".*"}], - "parameters":[], - "global_parameters":[{"name":"cluster_name", "value":"rabbit@localhost"}], - "policies":[], - "queues":[{"name":"inbox", "vhost":"lega", "durable":true, "auto_delete":false, "arguments":{}}, - {"name":"inbox.checksums", "vhost":"lega", "durable":true, "auto_delete":false, "arguments":{}}, - {"name":"files", "vhost":"lega", "durable":true, "auto_delete":false, "arguments":{}}, - {"name":"completed", "vhost":"lega", "durable":true, "auto_delete":false, "arguments":{}}, - {"name":"errors", "vhost":"lega", "durable":true, "auto_delete":false, "arguments":{}}], - "exchanges":[{"name":"localega.v1", "vhost":"lega", "type":"topic", "durable":true, "auto_delete":false, "internal":false, "arguments":{}}], - "bindings":[{"source":"localega.v1","vhost":"lega","destination_type":"queue","arguments":{},"destination":"inbox","routing_key":"files.inbox"}, - {"source":"localega.v1","vhost":"lega","destination_type":"queue","arguments":{},"destination":"inbox.checksums","routing_key":"files.inbox.checksums"}, - {"source":"localega.v1","vhost":"lega","destination_type":"queue","arguments":{},"destination":"files","routing_key":"files"}, - {"source":"localega.v1","vhost":"lega","destination_type":"queue","arguments":{},"destination":"completed","routing_key":"files.completed"}, - {"source":"localega.v1","vhost":"lega","destination_type":"queue","arguments":{},"destination":"errors","routing_key":"files.error"}] - } ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: cega-script -data: - cega-mq.sh: |- - #!/bin/bash - - set -e - - # Initialization - rabbitmq-plugins enable --offline rabbitmq_federation - rabbitmq-plugins enable --offline rabbitmq_federation_management - rabbitmq-plugins enable --offline rabbitmq_shovel - rabbitmq-plugins enable --offline rabbitmq_shovel_management - - cp --remove-destination /temp/rabbitmq.config /etc/rabbitmq/rabbitmq.config - cp --remove-destination /temp/defs.json /etc/rabbitmq/defs.json - chown rabbitmq:rabbitmq /etc/rabbitmq/rabbitmq.config - chmod 640 /etc/rabbitmq/rabbitmq.config - chown rabbitmq:rabbitmq /etc/rabbitmq/defs.json - chmod 640 /etc/rabbitmq/defs.json - - exec rabbitmq-server diff --git a/deployments/kube/yml/cega-mq/sts.cega-mq.yml b/deployments/kube/yml/cega-mq/sts.cega-mq.yml deleted file mode 100644 index 6120b632..00000000 --- a/deployments/kube/yml/cega-mq/sts.cega-mq.yml +++ /dev/null @@ -1,61 +0,0 @@ -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: cega-mq - labels: - role: broker - app: cega-mq -spec: - replicas: 1 - serviceName: "cega-mq" - selector: - matchLabels: - app: cega-mq - template: - metadata: - labels: - app: cega-mq - spec: - containers: - - name: cega-mq - image: rabbitmq:3.6.14-management - imagePullPolicy: Always - command: ["/scripts/cega-mq.sh"] - ports: - - name: cega-mq - containerPort: 15672 - protocol: TCP - - containerPort: 5672 - name: amqp - volumeMounts: - - name: cega-mq-entrypoint - mountPath: /scripts - - name: temp - mountPath: /temp - - name: rabbitmq - mountPath: /etc/rabbitmq - volumes: - - name: rabbitmq - persistentVolumeClaim: - claimName: rabbitmq - - name: cega-mq-entrypoint - configMap: - name: cega-mq-entrypoint - defaultMode: 0744 - - 
name: temp - configMap: - name: cega-mq-config - items: - - key: defs.json - path: defs.json - - key: rabbitmq.config - path: rabbitmq.config - defaultMode: 0744 - volumeClaimTemplates: - - metadata: - name: rabbitmq - spec: - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 1Gi # make this bigger in production diff --git a/deployments/kube/yml/cega-mq/svc.cega-mq.yml b/deployments/kube/yml/cega-mq/svc.cega-mq.yml deleted file mode 100644 index d50370b0..00000000 --- a/deployments/kube/yml/cega-mq/svc.cega-mq.yml +++ /dev/null @@ -1,36 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: cega-mq-management - labels: - app: cega-mq -spec: - type: NodePort - ports: - - port: 15672 - targetPort: 15672 - protocol: TCP - name: http - selector: - app: cega-mq ---- -apiVersion: v1 -kind: Service -metadata: - # The required headless service for StatefulSets - name: cega-mq - labels: - app: cega-mq -spec: - type: NodePort - ports: - - port: 5672 - targetPort: 5672 - protocol: TCP - name: amqp - - port: 4369 - name: epmd - - port: 25672 - name: rabbitmq-dist - selector: - app: cega-mq diff --git a/deployments/kube/yml/cega-users/cm.cega.yml b/deployments/kube/yml/cega-users/cm.cega.yml deleted file mode 100644 index cf2aaaa2..00000000 --- a/deployments/kube/yml/cega-users/cm.cega.yml +++ /dev/null @@ -1,153 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: cega-users-config -data: - server.py: |- - #!/usr/bin/env python3.6 - # -*- coding: utf-8 -*- - - ''' - Test server to act as CentralEGA endpoint for users - - :author: Frédéric Haziza - :copyright: (c) 2017, NBIS System Developers. - ''' - - import sys - import os - import asyncio - import ssl - import yaml - from pathlib import Path - from functools import wraps - from base64 import b64decode - - import logging as LOG - - from aiohttp import web - import jinja2 - import aiohttp_jinja2 - - instances = {} - for instance in os.environ.get('LEGA_INSTANCES','').strip().split(','): - instances[instance] = (Path(f'/cega/users/{instance}'), os.environ[f'CEGA_REST_{instance}_PASSWORD']) - default_inst = os.environ.get('DEFAULT_INSTANCE','lega') - - def protected(func): - @wraps(func) - def wrapped(request): - auth_header = request.headers.get('AUTHORIZATION') - if not auth_header: - raise web.HTTPUnauthorized(text=f'Protected access\n') - _, token = auth_header.split(None, 1) # Skipping the Basic keyword - passwd = b64decode(token).decode() - info = instances.get(default_inst) - if info is not None and info[1] == passwd: - request.match_info['lega'] = default_inst - request.match_info['users_dir'] = info[0] - return func(request) - raise web.HTTPUnauthorized(text=f'Protected access\n') - return wrapped - - - @aiohttp_jinja2.template('users.html') - async def index(request): - users={} - for instance, (users_dir, _) in instances.items(): - users[instance]= {} - files = [f for f in users_dir.iterdir() if f.is_file()] - for f in files: - with open(f, 'r') as stream: - users[instance][f.stem] = yaml.load(stream) - return { "cega_users": users } - - @protected - async def user(request): - name = request.match_info['id'] - lega_instance = request.match_info['lega'] - users_dir = request.match_info['users_dir'] - try: - with open(f'{users_dir}/{name}.yml', 'r') as stream: - d = yaml.load(stream) - json_data = { 'password_hash': d.get("password_hash",None), 'pubkey': d.get("pubkey",None), 'expiration': d.get("expiration",None) } - return web.json_response(json_data) - except OSError: - raise web.HTTPBadRequest(text=f'No info 
for that user {name} in LocalEGA {lega_instance}... yet\n') - - # Unprotected access - async def pgp_public_key(request): - name = request.match_info['id'] - try: - with open(f'/cega/users/pgp/{name}.pub', 'r') as stream: # 'rb' - return web.Response(text=stream.read()) # .hex() - except OSError: - raise web.HTTPBadRequest(text=f'No info about {name} in CentralEGA... yet\n') - - def main(): - - host = sys.argv[1] if len(sys.argv) > 1 else "0.0.0.0" - - # ssl_certfile = Path(CONF.get('keyserver', 'ssl_certfile')).expanduser() - # ssl_keyfile = Path(CONF.get('keyserver', 'ssl_keyfile')).expanduser() - # LOG.debug(f'Certfile: {ssl_certfile}') - # LOG.debug(f'Keyfile: {ssl_keyfile}') - - # sslcontext = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) - # sslcontext.check_hostname = False - # sslcontext.load_cert_chain(ssl_certfile, ssl_keyfile) - sslcontext = None - - loop = asyncio.get_event_loop() - server = web.Application(loop=loop) - - template_loader = jinja2.FileSystemLoader("/cega") - aiohttp_jinja2.setup(server, loader=template_loader) - - # Registering the routes - server.router.add_get( '/' , index, name='root') - server.router.add_get( '/user/{id}', user , name='user') - server.router.add_get( '/pgp/{id}' , pgp_public_key, name='pgp') - - # And ...... cue music! - web.run_app(server, host=host, port=80, shutdown_timeout=0, ssl_context=sslcontext) - - if __name__ == '__main__': - main() - - users.html: |- - - - - - Central EGA - - - -

Central EGA Users

- - {% for instance, lega_users in cega_users.items() %} -

{{ instance }}

-
- {% for username, data in lega_users.items() %} -
{{ username }}
-
password_hash{{ data['password_hash'] }}
-
pubkey{{ data['pubkey'] }}
-
expiration{{ data['expiration'] }}
- {% endfor %} -
- {% endfor %} - - - - ega-box-999.yml: |- - --- - pubkey: diff --git a/deployments/kube/yml/cega-users/deploy.cega.yml b/deployments/kube/yml/cega-users/deploy.cega.yml deleted file mode 100644 index 623d3168..00000000 --- a/deployments/kube/yml/cega-users/deploy.cega.yml +++ /dev/null @@ -1,51 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: cega-users - labels: - role: fake-users -spec: - replicas: 1 - selector: - matchLabels: - app: cega-users - template: - metadata: - labels: - app: cega-users - role: fake-users - spec: - containers: - - name: cega-users - image: nbisweden/ega-base - imagePullPolicy: Always - command: ["python3.6", "/cega/server.py"] - env: - - name: LEGA_INSTANCES - value: lega - - name: CEGA_REST_lega_PASSWORD - valueFrom: - secretKeyRef: - name: cega-creds - key: credentials - ports: - - name: cega-users - containerPort: 80 - protocol: TCP - volumeMounts: - - name: cega-config - mountPath: /cega - volumes: - - name: cega-config - configMap: - name: cega-users-config - items: - - key: users.html - path: users.html - - key: server.py - path: server.py - - key: ega-box-999.yml - path: users/ega-box-999.yml - - key: ega-box-999.yml - path: users/lega/ega-box-999.yml - defaultMode: 0744 diff --git a/deployments/kube/yml/cega-users/svc.cega.yml b/deployments/kube/yml/cega-users/svc.cega.yml deleted file mode 100644 index cffab606..00000000 --- a/deployments/kube/yml/cega-users/svc.cega.yml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: cega-users - labels: - app: cega-users -spec: - type: NodePort - ports: - - port: 80 - targetPort: 80 - protocol: TCP - selector: - app: cega-users diff --git a/deployments/kube/yml/inbox/sts.inbox.yml b/deployments/kube/yml/inbox/sts.inbox.yml deleted file mode 100644 index 274a0a10..00000000 --- a/deployments/kube/yml/inbox/sts.inbox.yml +++ /dev/null @@ -1,47 +0,0 @@ -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: inbox - labels: - role: inbox - app: LocalEGA -spec: - replicas: 1 - serviceName: inbox - selector: - matchLabels: - app: inbox - template: - metadata: - labels: - app: inbox - role: inbox - spec: - containers: - - name: inbox - image: nbisweden/ega-mina-inbox:latest - imagePullPolicy: Always - env: - - name: BROKER_HOST - value: mq.localega.svc.cluster.local - - name: INBOX_PORT - value: "2222" - - name: CEGA_ENDPOINT - value: http://cega-users/user/ - - name: CEGA_ENDPOINT_CREDS - valueFrom: - secretKeyRef: - name: cega-creds - key: credentials - ports: - - name: inbox - containerPort: 2222 - protocol: TCP - volumeMounts: - - name: lega-inbox - mountPath: /ega/inbox - # restartPolicy: Always - volumes: - - name: lega-inbox - persistentVolumeClaim: - claimName: inbox-storage diff --git a/deployments/kube/yml/inbox/svc.inbox.yml b/deployments/kube/yml/inbox/svc.inbox.yml deleted file mode 100644 index 9edbd236..00000000 --- a/deployments/kube/yml/inbox/svc.inbox.yml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: inbox - labels: - app: inbox -spec: - type: NodePort - ports: - - port: 2222 - targetPort: 2222 - protocol: TCP - name: inbox - selector: - app: inbox diff --git a/deployments/kube/yml/ingest/deploy.ingest.yml b/deployments/kube/yml/ingest/deploy.ingest.yml deleted file mode 100644 index 004925c3..00000000 --- a/deployments/kube/yml/ingest/deploy.ingest.yml +++ /dev/null @@ -1,61 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: ingest - labels: - role: ingest - app: LocalEGA -spec: - replicas: 1 - selector: 
- matchLabels: - app: ingest - template: - metadata: - labels: - app: ingest - role: ingest - spec: - containers: - - name: ingest - image: nbisweden/ega-base:latest - imagePullPolicy: Always - command: ["gosu", "lega", "ega-ingest"] - env: - - name: POSTGRES_PASSWORD - valueFrom: - secretKeyRef: - name: lega-db-secret - key: postgres_password - - name: LEGA_PASSWORD - valueFrom: - secretKeyRef: - name: lega-password - key: password - - name: S3_ACCESS_KEY - valueFrom: - secretKeyRef: - name: s3-keys - key: access_key - - name: S3_SECRET_KEY - valueFrom: - secretKeyRef: - name: s3-keys - key: secret_key - volumeMounts: - - name: inbox - mountPath: /ega/inbox - - name: ingest-conf - mountPath: /etc/ega - restartPolicy: Always - volumes: - - name: ingest-conf - configMap: - name: lega-config - items: - - key: conf.ini - path: conf.ini - defaultMode: 0744 - - name: inbox - persistentVolumeClaim: - claimName: inbox-storage diff --git a/deployments/kube/yml/keys/cm.keyserver.yml b/deployments/kube/yml/keys/cm.keyserver.yml deleted file mode 100644 index 87cf8439..00000000 --- a/deployments/kube/yml/keys/cm.keyserver.yml +++ /dev/null @@ -1,7 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: lega-keyserver-config -data: - keys.ini: |- - diff --git a/deployments/kube/yml/keys/deploy.keyserver.yml b/deployments/kube/yml/keys/deploy.keyserver.yml deleted file mode 100644 index 37625655..00000000 --- a/deployments/kube/yml/keys/deploy.keyserver.yml +++ /dev/null @@ -1,68 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: keys - labels: - role: keyserver - app: LocalEGA -spec: - replicas: 1 - selector: - matchLabels: - app: keys - template: - metadata: - labels: - app: keys - role: keyserver - spec: - containers: - - name: keyserver - image: nbisweden/ega-base:latest - imagePullPolicy: Always - command: ["gosu","lega","ega-keyserver","--keys","/etc/ega/keys.ini"] - env: - - name: POSTGRES_PASSWORD - valueFrom: - secretKeyRef: - name: lega-db-secret - key: postgres_password - - name: LEGA_PASSWORD - valueFrom: - secretKeyRef: - name: lega-password - key: password - ports: - - name: lega-keys - containerPort: 8443 - protocol: TCP - volumeMounts: - - name: keyserver-conf - mountPath: /etc/ega - volumes: - - name: keyserver-conf - projected: - sources: - - configMap: - name: lega-config - items: - - key: conf.ini - path: conf.ini - - key: ssl.cert - path: ssl.cert - - key: ssl.key - path: ssl.key - mode: 0744 - - configMap: - name: lega-keyserver-config - items: - - key: keys.ini - path: keys.ini - mode: 0744 - - configMap: - name: keyserver-secret - items: - - key: ega.sec - path: pgp/ega.sec - - key: ega2.sec - path: pgp/ega2.sec diff --git a/deployments/kube/yml/keys/secret.keyserver.yml b/deployments/kube/yml/keys/secret.keyserver.yml deleted file mode 100644 index a0e77261..00000000 --- a/deployments/kube/yml/keys/secret.keyserver.yml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: v1 -kind: Secret -metadata: - name: lega-pass-secret -type: Opaque -data: - pgp_password: ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: keyserver-secret -data: - ega.sec: |- - - ega2.sec: |- - diff --git a/deployments/kube/yml/keys/svc.keyserver.yml b/deployments/kube/yml/keys/svc.keyserver.yml deleted file mode 100644 index 14c56188..00000000 --- a/deployments/kube/yml/keys/svc.keyserver.yml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: keys - labels: - app: keys -spec: - ports: - - port: 8443 - targetPort: 8443 - protocol: TCP - selector: - app: keys 
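The keyserver manifests above reference secrets with empty `data` fields (for example `lega-pass-secret`, whose `pgp_password` value must be set before the pods start). A minimal sketch of one way to fill such a value with the `kubernetes` Python client; the `localega` namespace and the interactive prompt are assumptions for illustration, not part of the manifests:

```python
import base64
from getpass import getpass

from kubernetes import client, config


def b64(value: str) -> str:
    """Kubernetes stores Secret data base64-encoded."""
    return base64.b64encode(value.encode()).decode()


def fill_keyserver_secret(namespace: str = "localega") -> None:
    """Patch the empty pgp_password field of lega-pass-secret."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    api = client.CoreV1Api()
    pgp_password = getpass("PGP key passphrase: ")
    api.patch_namespaced_secret(
        name="lega-pass-secret",
        namespace=namespace,
        body={"data": {"pgp_password": b64(pgp_password)}},
    )
    print(f"lega-pass-secret patched in namespace {namespace}")


if __name__ == "__main__":
    fill_keyserver_secret()
```

The same pattern could be applied to the other placeholder secrets defined below (`lega-db-secret`, `lega-password`, `s3-keys`, `cega-creds`), or the values can simply be base64-encoded by hand into the YAML before `kubectl create`.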
diff --git a/deployments/kube/yml/lega-config/cm.lega.yml b/deployments/kube/yml/lega-config/cm.lega.yml deleted file mode 100644 index 78276686..00000000 --- a/deployments/kube/yml/lega-config/cm.lega.yml +++ /dev/null @@ -1,49 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: lega-config -data: - ssl.cert: |- - - ssl.key: |- - - conf.ini: |- - [DEFAULT] - log = console - - [keyserver] - port = 8443 - - [quality_control] - keyserver_endpoint = https://keys.localega.svc.cluster.local:8443/retrieve/%s/private - - [inbox] - location = /ega/inbox/%s - mode = 2750 - - [vault] - driver = S3Storage - url = http://minio.localega.svc.cluster.local:9000 - access_key = fwgUUn5sSdf4mK5t - secret_key = k5ayEvcpv9yN8kH1FZeJXqgshLiU2Byx - #region = lega - - - [outgestion] - # Just for test - keyserver_endpoint = https://keys.localega.svc.cluster.local:8443/retrieve/%s/private - - ## Connecting to Local EGA - [broker] - host = mq.localega.svc.cluster.local - connection_attempts = 30 - # delay in seconds - retry_delay = 10 - - [postgres] - host = db.localega.svc.cluster.local - user = lega - try = 30 - - [eureka] - endpoint = http://cega-eureka.localega.svc.cluster.local:8761 diff --git a/deployments/kube/yml/lega-config/pv.lega.yml b/deployments/kube/yml/lega-config/pv.lega.yml deleted file mode 100644 index d3686c0b..00000000 --- a/deployments/kube/yml/lega-config/pv.lega.yml +++ /dev/null @@ -1,29 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: localega-db-pv - labels: - type: local -spec: - storageClassName: postgres - capacity: - storage: 0.5Gi - accessModes: - - ReadWriteMany - hostPath: - path: "/mnt/data/db" ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: localega-inbox-pv - labels: - type: local -spec: - storageClassName: inbox - capacity: - storage: 0.5Gi - accessModes: - - ReadWriteOnce - hostPath: - path: "/mnt/data/inbox" diff --git a/deployments/kube/yml/lega-config/pvc.lega.yml b/deployments/kube/yml/lega-config/pvc.lega.yml deleted file mode 100644 index e3284d6f..00000000 --- a/deployments/kube/yml/lega-config/pvc.lega.yml +++ /dev/null @@ -1,23 +0,0 @@ -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: inbox-storage -spec: - storageClassName: inbox - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 0.5Gi ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: db-storage -spec: - storageClassName: postgres - accessModes: - - ReadWriteMany - resources: - requests: - storage: 0.5Gi diff --git a/deployments/kube/yml/lega-config/secret.lega.yml b/deployments/kube/yml/lega-config/secret.lega.yml deleted file mode 100644 index d3df50a6..00000000 --- a/deployments/kube/yml/lega-config/secret.lega.yml +++ /dev/null @@ -1,33 +0,0 @@ -apiVersion: v1 -kind: Secret -metadata: - name: lega-db-secret -type: Opaque -data: - postgres_password: ---- -apiVersion: v1 -kind: Secret -metadata: - name: lega-password -type: Opaque -data: - password: ---- -apiVersion: v1 -kind: Secret -metadata: - name: s3-keys -type: Opaque -data: - access_key: - secret_key: ---- -apiVersion: v1 -kind: Secret -metadata: - name: cega-creds -type: Opaque -data: - credentials: - # secret_key: diff --git a/deployments/kube/yml/mq/cm.lega-mq.yml b/deployments/kube/yml/mq/cm.lega-mq.yml deleted file mode 100644 index 59a52265..00000000 --- a/deployments/kube/yml/mq/cm.lega-mq.yml +++ /dev/null @@ -1,116 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: mq-config -data: - rabbitmq.config: |- - %% -*- mode: erlang -*- - %% - 
[{rabbit,[{loopback_users, [ ] }, - {tcp_listeners, [ 5672 ] }, - {ssl_listeners, [ ] }, - {hipe_compile, false }, - {default_vhost, "/"}, - {default_user, "guest"}, - {default_pass, "guest"}, - {default_permissions, [".*", ".*",".*"]}, - {default_user_tags, [administrator]}, - {disk_free_limit, "1GB"}]}, - {rabbitmq_management, [ { listener, [ { port, 15672 }, { ssl, false }] }, - { load_definitions, "/etc/rabbitmq/defs.json"} ]} - ]. - defs.json: |- - ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: mq-entrypoint -data: - mq.sh: |- - #!/bin/bash - - set -e - set -x - - [[ -z "${CEGA_CONNECTION}" ]] && echo 'Environment CEGA_CONNECTION is empty' 1>&2 && exit 1 - - # Initialization - cp --remove-destination /temp/rabbitmq.config /etc/rabbitmq/rabbitmq.config - cp --remove-destination /temp/defs.json /etc/rabbitmq/defs.json - rabbitmq-plugins enable --offline rabbitmq_federation - rabbitmq-plugins enable --offline rabbitmq_federation_management - rabbitmq-plugins enable --offline rabbitmq_shovel - rabbitmq-plugins enable --offline rabbitmq_shovel_management - - chmod 640 /etc/rabbitmq/rabbitmq.config - chmod 640 /etc/rabbitmq/defs.json - - # Problem of loading the plugins and definitions out-of-orders. - # Explanation: https://github.com/rabbitmq/rabbitmq-shovel/issues/13 - # Therefore: we run the server, with some default confs - # and then we upload the cega-definitions through the HTTP API - - # We cannot add those definitions to defs.json (loaded by the - # management plugin. See /etc/rabbitmq/rabbitmq.config) - # So we use curl afterwards, to upload the extras definitions - # See also https://pulse.mozilla.org/api/ - - # dest-exchange-key is not set for the shovel, so the key is re-used. - - # For the moment, still using guest:guest - cat > /etc/rabbitmq/defs-cega.json <&1 && exit 1 - echo "Central EGA connections loaded" - } & - - exec rabbitmq-server diff --git a/deployments/kube/yml/mq/sts.lega-mq.yml b/deployments/kube/yml/mq/sts.lega-mq.yml deleted file mode 100644 index dd0351ae..00000000 --- a/deployments/kube/yml/mq/sts.lega-mq.yml +++ /dev/null @@ -1,65 +0,0 @@ -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: mq - labels: - role: broker - app: LocalEGA -spec: - replicas: 1 - serviceName: mq - selector: - matchLabels: - app: mq - template: - metadata: - labels: - app: mq - role: broker - spec: - containers: - - name: mq - image: rabbitmq:3.6.14-management - imagePullPolicy: Always - command: ["/script/mq.sh"] - env: - - name: CEGA_CONNECTION - value: amqp://:@:5672/ - ports: - - name: lega-mq - containerPort: 15672 - protocol: TCP - - containerPort: 5672 - name: amqp - volumeMounts: - - name: mq-entrypoint - mountPath: /script - - name: rabbitmq - mountPath: /etc/rabbitmq - - name: mq-temp - mountPath: /temp - volumes: - - name: rabbitmq - persistentVolumeClaim: - claimName: rabbitmq - - name: mq-entrypoint - configMap: - name: mq-entrypoint - defaultMode: 0744 - - name: mq-temp - configMap: - name: mq-config - items: - - key: defs.json - path: defs.json - - key: rabbitmq.config - path: rabbitmq.config - defaultMode: 0744 - volumeClaimTemplates: - - metadata: - name: rabbitmq - spec: - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 0.5Gi # make this bigger in production diff --git a/deployments/kube/yml/mq/svc.lega-mq.yml b/deployments/kube/yml/mq/svc.lega-mq.yml deleted file mode 100644 index c1064766..00000000 --- a/deployments/kube/yml/mq/svc.lega-mq.yml +++ /dev/null @@ -1,34 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - 
name: lega-mq-management - labels: - app: mq -spec: - type: NodePort - ports: - - port: 15672 - targetPort: 15672 - protocol: TCP - name: http - selector: - app: mq ---- -apiVersion: v1 -kind: Service -metadata: - # The required headless service for StatefulSets - name: mq - labels: - app: mq -spec: - ports: - - port: 5672 - targetPort: 5672 - name: amqp - - port: 4369 - name: epmd - - port: 25672 - name: rabbitmq-dist - selector: - app: mq diff --git a/deployments/kube/yml/postgres/cm.postgres.yml b/deployments/kube/yml/postgres/cm.postgres.yml deleted file mode 100644 index 00572e43..00000000 --- a/deployments/kube/yml/postgres/cm.postgres.yml +++ /dev/null @@ -1,79 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: lega-db-config -data: - user: lega - dbname: lega ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: initsql -data: - db.sql: |- - \connect lega - - SET TIME ZONE 'Europe/Stockholm'; - - CREATE TYPE status AS ENUM ('Received', 'In progress', 'Completed', 'Archived', 'Error'); - -- CREATE TYPE hash_algo AS ENUM ('md5', 'sha256'); - - CREATE EXTENSION pgcrypto; - - -- ################################################## - -- FILES - -- ################################################## - CREATE TABLE files ( - id SERIAL, PRIMARY KEY(id), UNIQUE (id), - elixir_id TEXT NOT NULL, - inbox_path TEXT NOT NULL, - status status, - vault_path TEXT, - vault_filesize INTEGER, - stable_id TEXT, - header TEXT, -- crypt4gh - created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT clock_timestamp(), - last_modified TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT clock_timestamp() - ); - - CREATE FUNCTION insert_file(inpath files.inbox_path%TYPE, - eid files.elixir_id%TYPE, - sid files.stable_id%TYPE, - status files.status%TYPE) - RETURNS files.id%TYPE AS $insert_file$ - #variable_conflict use_column - DECLARE - file_id files.id%TYPE; - BEGIN - INSERT INTO files (inbox_path,elixir_id,stable_id,status) - VALUES(inpath,eid,sid,status) RETURNING files.id - INTO file_id; - RETURN file_id; - END; - $insert_file$ LANGUAGE plpgsql; - - -- ################################################## - -- ERRORS - -- ################################################## - CREATE TABLE errors ( - id SERIAL, PRIMARY KEY(id), UNIQUE (id), - file_id INTEGER REFERENCES files (id) ON DELETE CASCADE, - hostname TEXT, - error_type TEXT NOT NULL, - msg TEXT NOT NULL, - from_user BOOLEAN DEFAULT FALSE, - occured_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT clock_timestamp() - ); - - CREATE FUNCTION insert_error(fid errors.file_id%TYPE, - h errors.hostname%TYPE, - etype errors.error_type%TYPE, - msg errors.msg%TYPE, - from_user errors.from_user%TYPE) - RETURNS void AS $set_error$ - BEGIN - INSERT INTO errors (file_id,hostname,error_type,msg,from_user) VALUES(fid,h,etype,msg,from_user); - UPDATE files SET status = 'Error' WHERE id = fid; - END; - $set_error$ LANGUAGE plpgsql; diff --git a/deployments/kube/yml/postgres/deploy.postgres.yml b/deployments/kube/yml/postgres/deploy.postgres.yml deleted file mode 100644 index 0a1b54ac..00000000 --- a/deployments/kube/yml/postgres/deploy.postgres.yml +++ /dev/null @@ -1,72 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: db - labels: - role: database - app: LocalEGA -spec: - replicas: 1 - selector: - matchLabels: - app: db - template: - metadata: - labels: - app: db - role: database - spec: - containers: - - name: postgresql - image: postgres:latest - imagePullPolicy: Always - env: - - name: POSTGRES_USER - valueFrom: - configMapKeyRef: - name: lega-db-config - 
key: user - - name: POSTGRES_PASSWORD - valueFrom: - secretKeyRef: - name: lega-db-secret - key: postgres_password - - name: POSTGRES_DB - valueFrom: - configMapKeyRef: - name: lega-db-config - key: dbname - ports: - - name: postgres - containerPort: 5432 - volumeMounts: - - mountPath: /docker-entrypoint-initdb.d - name: initdb - readOnly: true - livenessProbe: - exec: - command: - - pg_isready - - -h - - localhost - - -U - - postgres - initialDelaySeconds: 30 - timeoutSeconds: 5 - readinessProbe: - exec: - command: - - pg_isready - - -h - - localhost - - -U - - postgres - initialDelaySeconds: 5 - timeoutSeconds: 1 - volumes: - - name: data - persistentVolumeClaim: - claimName: db-storage - - name: initdb - configMap: - name: initsql diff --git a/deployments/kube/yml/postgres/svc.postgres.yml b/deployments/kube/yml/postgres/svc.postgres.yml deleted file mode 100644 index 84e13fb0..00000000 --- a/deployments/kube/yml/postgres/svc.postgres.yml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: db - labels: - app: db -spec: - ports: - - port: 5432 - targetPort: 5432 - protocol: TCP - selector: - app: db diff --git a/deployments/kube/yml/s3/sc.minio.yml b/deployments/kube/yml/s3/sc.minio.yml deleted file mode 100644 index 470cceeb..00000000 --- a/deployments/kube/yml/s3/sc.minio.yml +++ /dev/null @@ -1,6 +0,0 @@ -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: miniosc -provisioner: kubernetes.io/no-provisioner -volumeBindingMode: WaitForFirstConsumer diff --git a/deployments/kube/yml/s3/sts.minio.yml b/deployments/kube/yml/s3/sts.minio.yml deleted file mode 100644 index ac790abe..00000000 --- a/deployments/kube/yml/s3/sts.minio.yml +++ /dev/null @@ -1,58 +0,0 @@ -apiVersion: apps/v1beta1 -kind: StatefulSet -metadata: - name: minio - labels: - app: LocalEGA -spec: - serviceName: minio - replicas: 1 - template: - metadata: - labels: - app: minio - spec: - containers: - - name: minio - env: - - name: MINIO_ACCESS_KEY - valueFrom: - secretKeyRef: - name: s3-keys - key: access_key - - name: MINIO_SECRET_KEY - valueFrom: - secretKeyRef: - name: s3-keys - key: secret_key - image: minio/minio:latest - imagePullPolicy: Always - args: - - server - - "--config-dir=/data/config" - - /data/storage - # - http://minio.localega.svc.cluster.local/data - ports: - - containerPort: 9000 - livenessProbe: - httpGet: - path: /minio/health/live - port: 9000 - initialDelaySeconds: 120 - periodSeconds: 20 - # These volume mounts are persistent. Each pod in the PetSet - # gets a volume mounted based on this field. - volumeMounts: - - name: data - mountPath: /data - # These are converted to volume claims by the controller - # and mounted at the paths mentioned above. 
- volumeClaimTemplates: - - metadata: - name: data - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 10Gi diff --git a/deployments/kube/yml/s3/svc.minio.yml b/deployments/kube/yml/s3/svc.minio.yml deleted file mode 100644 index 2a17fb7d..00000000 --- a/deployments/kube/yml/s3/svc.minio.yml +++ /dev/null @@ -1,28 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: minio - labels: - app: minio -spec: - clusterIP: None - ports: - - port: 9000 - name: minio - selector: - app: minio ---- -apiVersion: v1 -kind: Service -metadata: - name: minio-service - labels: - app: minio -spec: - type: LoadBalancer - ports: - - port: 9000 - targetPort: 9000 - name: web - selector: - app: minio diff --git a/deployments/kube/yml/verify/sts.verify.yml b/deployments/kube/yml/verify/sts.verify.yml deleted file mode 100644 index 6288c4ee..00000000 --- a/deployments/kube/yml/verify/sts.verify.yml +++ /dev/null @@ -1,57 +0,0 @@ -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: verify - labels: - role: verify - app: LocalEGA -spec: - replicas: 1 - serviceName: verify - selector: - matchLabels: - app: verify - template: - metadata: - labels: - app: verify - role: verify - spec: - containers: - - name: verify - image: nbisweden/ega-base:latest - imagePullPolicy: Always - command: ["gosu", "lega", "ega-verify"] - env: - - name: POSTGRES_PASSWORD - valueFrom: - secretKeyRef: - name: lega-db-secret - key: postgres_password - - name: LEGA_PASSWORD - valueFrom: - secretKeyRef: - name: lega-password - key: password - - name: S3_ACCESS_KEY - valueFrom: - secretKeyRef: - name: s3-keys - key: access_key - - name: S3_SECRET_KEY - valueFrom: - secretKeyRef: - name: s3-keys - key: secret_key - volumeMounts: - - name: verify-conf - mountPath: /etc/ega - restartPolicy: Always - volumes: - - name: verify-conf - configMap: - name: lega-config - items: - - key: conf.ini - path: conf.ini - defaultMode: 0744
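With the verify StatefulSet above, all LocalEGA components (keys, inbox, ingest, verify, mq, db, minio) are defined. A minimal smoke test, assuming the `kubernetes` Python client and that the resources were created in a `localega` namespace (adjust as needed), waits until every pod reports Running and Ready:

```python
import time

from kubernetes import client, config


def wait_for_pods(namespace: str = "localega", timeout: int = 300) -> bool:
    """Poll the namespace until all pods are Running and their containers Ready."""
    config.load_kube_config()
    api = client.CoreV1Api()
    deadline = time.time() + timeout
    while time.time() < deadline:
        pods = api.list_namespaced_pod(namespace).items
        not_ready = [
            p.metadata.name
            for p in pods
            if p.status.phase != "Running"
            or not all(c.ready for c in (p.status.container_statuses or []))
        ]
        if pods and not not_ready:
            print(f"All {len(pods)} pods in '{namespace}' are ready")
            return True
        print(f"Waiting for: {', '.join(not_ready) or 'pods to appear'}")
        time.sleep(10)
    return False


if __name__ == "__main__":
    raise SystemExit(0 if wait_for_pods() else 1)
```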