Sprint Goals
January 17th (Thursday), 10:30 CEST
- Milestone 4:
- Ensure that the deployed solution provides the required functionality (Juha, Stefan, Jocke)
- Data API:
- Remove POSIX filesystem support from the RES microservice (Juha, Stefan)
- Create development environment for the Data API (Martin)
- Publish more integration tests done at EBI (Anand, Alexander)
- Implement unit tests for RES microservice REST controller (Anand)
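RES itself is a Java service, so the real unit tests would live in its codebase; as a language-neutral illustration, here is a pytest-style smoke-test sketch against a locally running RES instance. The port, the /file endpoint, and the query parameter are assumptions, not the actual RES API.

```python
# Illustrative pytest smoke tests against a locally running RES instance.
# The port, the /file endpoint, and the query parameter are assumptions.
import requests

RES_URL = "http://localhost:8081"  # assumed local RES instance

def test_file_requires_an_identifier():
    # Asking for a file without saying which one should be rejected.
    resp = requests.get(f"{RES_URL}/file")
    assert resp.status_code in (400, 404)

def test_file_streams_known_object():
    # A pre-loaded test object should come back with a 200 and some bytes.
    resp = requests.get(f"{RES_URL}/file", params={"sourceKey": "test-object"})
    assert resp.status_code == 200
    assert resp.content
```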
- Deployment:
- Staging environment (Amgad)
- Data-in:
- Integrate Inbox with S3 backend (Dmytro); see the sketch after this list
- Merge the code updates done in CRG to the repo (Fred, Oscar)
- Merge the renaming PR (Fred, Johan, Dmytro)
- Speed up the Travis pipeline in EGA-Archive/LocalEGA repo (Fred)
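For the S3 inbox item above, a minimal sketch of what the upload path could look like with boto3; the bucket name, endpoint, and key layout are assumptions, not the agreed design.

```python
# Minimal sketch of pushing an inbox upload to an S3 backend with boto3.
# Bucket name, endpoint, and key layout are illustrative assumptions.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # e.g. a local MinIO standing in for S3
    aws_access_key_id="access",
    aws_secret_access_key="secret",
)

def store_inbox_file(username: str, filepath: str) -> str:
    """Upload one inbox file and return its object key."""
    key = f"{username}/{os.path.basename(filepath)}"  # illustrative key layout
    s3.upload_file(filepath, "lega-inbox", key)       # 'lega-inbox' bucket is assumed
    return key
```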
December 20th (Thursday), 10:30 CEST
- NorTwinCan
- Implement the solution which fulfills the use case requirements (Juha, Stefan, Teemu, Jocke)
- M4
- Set up deployment of EGA-DATA-API and LocalEGA together (Jocke, Stefan)
- Do a (live) video demo of the setup at Sprint Review 🤞
- Development environment
- PR splitting for ega-archive/LocalEGA (Fred)
- Scope the robustness tests and extend the list at https://github.com/NBISweden/LocalEGA/wiki/Testing-LocalEGA (optional, depending on availability)
- Data-out
- Push the existing performance-testing code and continue the testing (optional, depending on availability)
- Index files lifecycle
- Create a comprehensive list of different scenarios (Jordi, Fred, Oscar)
- Deployment
- Address the TSD deployment blocker by implementing something for Inbox that will dump files in S3 (Amgad, Dmytro)
- Security
- Organise a meeting to discuss the security requirements
December 7 (Friday), 10:30 CEST
- Development environment
- Organise a working group that takes care of merging the PR to EGA-Archive master branch (Fred, Stefan, Johan, Dmytro, Jocke - optional)
- Nordic Twin Cancer Tryggve use case
- Organise a hackathon at CSC and implement the solution which fulfills the use case requirements (Juha, Stefan, Teemu)
- Deployment
- Integrate Mediator components to LocalEGA Docker Swarm deployment (Dmytro, Amgad)
- Continue the deployment work at CSC (Stefan, Juha)
- Data-out
- Update the performance tests (Anand)
- Security
- Organise a meeting to discuss the security requirements (Juha)
- Index files lifecycle
- Create a comprehensive list of different scenarios (Jordi, Fred, Oscar)
November 22, 10:30 CEST
- Development environment
- Continue designing continuous integration that takes into account all of our deployment options (Juha, Dmytro)
- Plan how to move development to EGA-archive GitHub organisation (what is needed, how to merge codebases, etc) (All)
- Deployment
- Develop the proxy component RabbitMQ-http for TSD deployment (Amgad, Dmytro)
- Continue the deployment testing in production-like environment at CSC (Stefan, Juha, Teemu)
- Data-in
- Continue correlation ID tracking (Johan, Fred); see the sketch after this list
- Plan what is the lifetime of index files (Jordi, Fred, Oscar)
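For the correlation ID item above, a minimal sketch of attaching a correlation ID to MQ messages with pika; the exchange and routing key are placeholders, not our actual broker layout.

```python
# Sketch: tag every ingestion message with a correlation ID so one file's
# journey can be followed across services. Exchange/routing key are placeholders.
import json
import uuid

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

def publish_event(payload, correlation_id=None):
    """Publish a message, reusing the caller's correlation ID or minting one."""
    corr_id = correlation_id or str(uuid.uuid4())
    channel.basic_publish(
        exchange="lega",              # assumed exchange name
        routing_key="files.inbox",    # assumed routing key
        body=json.dumps(payload),
        properties=pika.BasicProperties(correlation_id=corr_id),
    )
    return corr_id
```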
- Data-out
- Integrate the Data API with the new database schema (Anand)
- Separate the htsget functionality from DataEdge (Fred, Oscar); see the sketch after this list
- Implement permissions at the file level (Fred, Oscar)
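For the htsget item above, a sketch of the generic htsget client flow (fetch the JSON ticket, then the data URLs); the base URL, file id, and auth handling are illustrative, not our deployed endpoints.

```python
# Sketch of the htsget protocol flow: the server first returns a JSON
# "ticket" listing URLs; the client then fetches and concatenates them.
import requests

def htsget_reads(base_url, file_id, token):
    headers = {"Authorization": f"Bearer {token}"}
    ticket = requests.get(f"{base_url}/reads/{file_id}", headers=headers)
    ticket.raise_for_status()
    chunks = []
    for part in ticket.json()["htsget"]["urls"]:
        part_headers = {**headers, **part.get("headers", {})}
        chunks.append(requests.get(part["url"], headers=part_headers).content)
    return b"".join(chunks)
```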
- REMS integration
- Record the video that shows the functionality
- Security:
- Continue writing security requirements document (Jordi, Fred, Oscar)
- Organise a meeting to discuss current ideas (All)
- Produce first draft of security review document (Martin)
- NorTwinCan use case
- Start implementing the solution (Juha, Stefan, Teemu)
November 8th, 10:30 CEST
- Development environment
- Design continuous integration that takes into account all of our deployment options (Juha, Dmytro)
- Deployment
- Integrate Data API Helm charts to Local EGA deployment (Jocke, Stefan)
- Update the documentation (especially to match current environment variables) (Jocke, Stefan)
- Share lessons learned at CSC and EBI (Stefan, Juha, Anand)
- Develop the proxy component RabbitMQ-http for TSD deployment (Amgad, Dmytro)
- Continue the deployment testing in production-like environment at CSC (Stefan, Juha, Teemu)
- Data-in:
- Agree on database schema (Anand, Jordi, Fred)
- Continue correlation ID tracking (Johan, Fred)
- Design the new interface for the Key service and agree on the endpoints with all parties (Juha, Stefan)
- Data-out:
- If the databases are merged, integrate the Data API with the new DB schema
- Finish the OpenAPI specifications for the Data-Edge and RES services
- Explore the possibility of separating the htsget part from the Data-Edge service
- Security:
- Continue security requirements planning document (Jordi, Fred) and focus on: environment variables and Docker secrets (Fred, Stefan), network topology (Juha, Fred)
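As one possible pattern for the environment variables vs Docker secrets discussion, a sketch that prefers a /run/secrets file and falls back to an environment variable; the secret and variable names are illustrative.

```python
# Sketch: prefer a Docker secret file over a plain environment variable.
# /run/secrets is where Docker Swarm mounts secrets; names are illustrative.
import os

def get_secret(name: str) -> str:
    """Read /run/secrets/<name> if present, else fall back to $<NAME>."""
    secret_file = f"/run/secrets/{name}"
    if os.path.exists(secret_file):
        with open(secret_file) as f:
            return f.read().strip()
    return os.environ[name.upper()]

db_password = get_secret("db_password")
```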
- REMS integration:
- Record a video that shows the REMS integration with the Data API (data authorisation and access) (Teemu, Stefan, Juha)
October 25, 10:30 CEST
- Deployment
- Continue the Helm chart deployment work and integrate the Data API Helm charts with ours (Jocke, Fred)
- Update the documentation (especially to match current environment variables) (Jocke, others)
- Continue the deployment testing in production-like environment at CSC (Stefan, Juha, Teemu)
- Start deployment work at TSD and work on the component that enables the communication between the components at TSD (Dmytro, Amgad; Fred to provide test CEGA credentials and access to test CEGA MQ broker)
- Data ingestion
- Design the new interface for the Key service and agree on the endpoints with all parties (Juha, Stefan)
- Agree on database schema (Anand, Jordi, Fred)
- (If the previous is done) Fix the integration tests broken by the database schema changes (Fred, Stefan)
- Continue correlation ID tracking (Johan)
- Data API
- Create OpenAPI specification for RES and Data Edge service (Jordi et al.)
- Continue production deployment at EBI (Anand)
- Security
- Start drafting security requirements for Local EGA (baseline) (Jordi)
October 11, 10:30 CEST
- Deployment:
- Helm chart for LocalEGA
- Add Data API Helm charts
- Test the deployment setup we have in testing in the new OpenShift cluster(s)
- Data Ingestion:
- Define Keyserver OpenAPI specification
- One database for Data Ingestion and Data API - waiting for the PR on the schema
- Update the LocalEGA database schema
- Includes OpenAPI for the FileDatabase service
- Correlation ID and logging, to follow what happens
- Data API:
- Production Deployment at EBI
- Add Hystrix annotations to functions; demo at Sprint Review (see the circuit-breaker sketch below)
- OpenAPI for RES Service and Data Edge Service
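Hystrix is a Java library, so the annotations themselves belong in the Data API code; as a rough analogue of the same circuit-breaker idea in Python, here is a sketch using the pybreaker library (service names and thresholds are illustrative).

```python
# Rough Python analogue of the Hystrix circuit-breaker idea, using pybreaker.
import pybreaker
import requests

res_breaker = pybreaker.CircuitBreaker(fail_max=5, reset_timeout=30)

@res_breaker
def fetch_from_res(url: str) -> bytes:
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    return resp.content

# After five consecutive failures the breaker opens and calls fail fast
# until the 30-second reset window has passed.
```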
- Stitching:
- Add the rest of the Data API services to docker-compose
- Ensure Crypt4GH can be used together with the Data Ingestion (see the sketch below)
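A sketch of what Crypt4GH encryption could look like on the ingestion side, assuming the public crypt4gh Python package (its API names may differ between versions; the key file names are placeholders).

```python
# Sketch of Crypt4GH encryption on the ingestion path, assuming the public
# `crypt4gh` Python package; exact API names may differ between versions.
from crypt4gh.keys import get_private_key, get_public_key
from crypt4gh.lib import encrypt

seckey = get_private_key("sender.sec", callback=lambda: "passphrase")
pubkey = get_public_key("recipient.pub")

with open("sample.bam", "rb") as infile, open("sample.bam.c4gh", "wb") as outfile:
    # Method 0 (X25519 + ChaCha20-Poly1305) is the only method the spec defines.
    encrypt([(0, seckey, pubkey)], infile, outfile)
```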
September 20, 10:30 CEST
- Deployment:
- Helm charts for LocalEGA deployment
- Draw a scheme about the Deployment at NBIS
- Write a design document about deployment - we are using this document: LocalEGA Deployment Planning
- Get Martin up to speed
- Data Ingestion:
- Clean up comments and documentation
- Stable IDs
- Update the LocalEGA database schema, synchronize with EBI
- Use one database for LocalEGA
- Define FileDatabase interface with OpenAPI - worth considering if we have time
- Data API:
- Continue with Data API deployment in EBI
- Deployment in Production Environment
- Integrate ELK stack for Data API
- Add Hystrix Annotation to all relevant functions
- htsget API in DataEdge: error message when index file is missing
- Integrate Data API with LocalEGA
- Documentation on the Data API part
- Start stitching a general docker-compose for LocalEGA (Data Ingestion and Data API)
September 6, 10:30 CEST - https://cscfi.zoom.us/j/335469331
- Deployment:
- Helm charts for LocalEGA deployment on the LocalEGA-deploy-k8s repository
- Automated deployment of Data In in SE environment
- Draw a scheme about the Deployment at NBIS
- Add information to the LocalEGA Deployment Document about NBIS plans
- Integrate Data Out into the current deployment (Swarm and Kubernetes/OpenShift)
- Data API
- Continue with Data API deployment in EBI
- Integrate Data API with LocalEGA
- Start Documentation on the Data API part
August 24, 10:30 CET
- Deployment
- Move deployments to separate repositories (including Terraform)
- Continue OpenShift deployment planning and test deployment in production-like environment (especially verify network configuration)
- Get Jocke up to speed
- REMS
- Implement the plan for using REMS to authorise data use
- Data API
- Continue with Data API deployment in EBI
- Integrate ELK stack for Data API
- Integrate Data API to ingestion and make sure that it works with Crypt4GH (replicate what we had in the demo)
August 9th, 10:30 CET
- Make Local EGA containers work in Docker Swarm and deploy to test environment in Norway
- Kubernetes deployment
- Work together with CSC's cloud team on open questions related to the production environment
- Finish testing in Rahti (use separate images)
- Data API
- Create OpenAPI specification about Key service interface
- Kubernetes deployment in EBI
- Finish database schema planning
- Integrate new Key service to Local EGA ingestion
- Make it possible to disable Netflix OSS
- Make debugging more human-friendly
- GitHub
- Travis integration for all repos to Local EGA (NBIS) Slack
- Create a picture of how REMS is connected to Local EGA
July 19th 09:00
- Continue Deployment
- Test the functionality of the deployed solution (if it is still there)
- Deploy to SUNET
- FUSE Inbox alternative (cron job); see the sketch below
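A sketch of the cron-job idea above: a script that scans the inbox for files that have stopped growing and would hand them to ingestion. The paths, settle time, and notification hook are assumptions, not the decided design.

```python
# Sketch of a cron-driven alternative to the FUSE inbox: find files that
# have been quiet for a while and emit an event for each of them.
import os
import time

INBOX = "/ega/inbox"          # assumed inbox mount
SETTLE_SECONDS = 60           # consider a file "done" after a quiet minute

def find_settled_files():
    now = time.time()
    for root, _dirs, files in os.walk(INBOX):
        for name in files:
            path = os.path.join(root, name)
            if now - os.path.getmtime(path) > SETTLE_SECONDS:
                yield path

if __name__ == "__main__":
    for path in find_settled_files():
        print(f"would notify ingestion about {path}")  # hook the MQ call here
```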
- (EGA) Data out API
- Make it possible to disable Netflix OSS
- Make debugging more human-friendly
- Bring the code to the ega-data-api repo and create a hierarchical Maven project
- Database schema planning
- Fix the Calendar
- Make it suitable for UK time
- (Bonus) Write a design document about deployment https://github.com/NBISweden/LocalEGA/issues/314
July 5th 09:00
- Quality control: finish the PR
- Deployment
- Study possibilities for replacing the FUSE-based notification mechanism in ingestion with something else
- Continue deployment work in Rahti and in SUNET
- Draw a picture about LEGA storage architecture
June 21st 09:00
- Deployment
- Separate production deployment from development (including documentation)
- Make deployment work in SUNET (deploying the cluster, providing the configuration files, shared volumes, network, etc)
- Make deployment work in CSC (Rahti or cPouta)
- Explore deployment options for TSD
- Integrate Local EGA cryptor and S3 interface to the ingestion
- Start implementing the "GA4GH cryptor model" to Local EGA ingestion
- Write unit tests for Local EGA cryptor
- Create mockups of Data Out API
May 31st 09:00
- Config server
- Implement support for Spring config server
- Get rid of mounting files inside containers
- Deployment
- Make containers work without root permissions
- Get rid of capabilities and privileged devices in docker-compose file
- Implement first version of GA4GH Crypt encryption/decryption
- Make a release, write release notes and tag a version
- (Implement support for storing files using S3 interface)
May 16th 09:00
- Deployment:
- Compare different deployment methods (e.g. Docker Swarm, Kubernetes, OpenStack; also in terms of configs)
- Investigate network separation with Docker
- Tests: continue with integration/unit testing
- Documenting the communication between LEGA <-> CEGA (e.g. messages)
- Make a release, write release notes and tag a version
April 26 09:00
- Tests
- Review the test list and implement missing ones (unit tests and integration tests)
- Fix issues with existing tests (e.g. with big files)
- Deployment
- Provide a mock of S3 interface for Docker
- Create a plan for implementing S3 interface
- Compare different deployment methods
- LEGA crypto
- Stabilise the code (e.g. memory usage)
- Stabilise the key infrastructure (e.g. no plain text passphrases)
- Write unit tests
- Make a release, write release notes and tag a version
April 12 09:00
- Prepare a demo setup
- Fix the problem in EGA Data API
- Push CEGA to implement submission trigger from LEGA so we can demo the user interface
- Plan and prepare the file upload
- Figure out where to run the demo
- Prepare the FUSE client access
- (Plan and prepare file encryption for the demo (e.g. LEGA "cryptor"))
- Record a backup video
- Restructure repositories:
- Add unit tests in the same language as the LEGA implementation
- Move integration tests to another repository
- Move deployments to another repository and document them
March 22 09:00
- Connect Vault of LEGA ingestion to data out
- Study how Vault volume could be shared between data in and data out (due Tuesday 13th)
- Use Central EGA stable ids for ingested files
- Implement the solution
- Use the data-in key service in data-out
- Study how to share Docker-Compose network (due Tuesday 13th)
- Integrate data out to data in key service
- Rounding up for the demo
- Update documentation
- Merge all existing code
- Unit tests for PGP script
March 8 09:00
- Read encrypted data from LEGA Vault
- Integrate LEGA ingestion micro services with EGA Data API (mount LEGA Vault)
- Read unencrypted test file from anywhere (LEGA Vault)
- Combine the two steps above
- Data in: CEGA Submission portal
- Make a list of changes related to adding LEGA specificities to CEGA Submission portal
- Trigger new submission from CEGA Submission portal
- Show file status in user interface
- Wrap up
- GnuPG alternative
- EGA Data API bootstrap script (reviewed and merged before Friday 29th)
Feb 22 09:00
- Implement "DATA OUT"
- Bootstrap script
- Fix JWT token authentication (see the sketch after this list)
- Decrypt file from our Vault
- Decrypt file from "EGA data api database"
- Connect EGA data api database to LEGA vault
- Bootstrap script
- Improve robustness
- Microservice "discovery/handling/monitoring/..."
- Including error handling & service registration
- Alternative to GnuPG
Uppsala E10:2309 https://sunet.zoom.us/j/214422954
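For the JWT item above, a sketch of token validation with PyJWT; the key source, algorithm, and audience are assumptions about the setup, not its actual configuration.

```python
# Sketch of validating a bearer token with PyJWT; issuer/audience/key source
# are illustrative assumptions about the Data Out setup.
import jwt  # PyJWT

PUBLIC_KEY = open("token-signing.pub").read()  # assumed RSA public key file

def validate_token(token: str) -> dict:
    """Return the decoded claims, raising jwt.InvalidTokenError on failure."""
    return jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],
        audience="ega-data-api",   # illustrative audience
    )
```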
Feb 8 09:00
- Implement "DATA OUT"
- Meeting with Alexander Senf
- Bootstrap script
- Decrypt file from our Vault
- Complete the Skeikampen Rundt trip
Jan 25 09:00
- Implement "DATA OUT"
- Demonstrate reading archived files in vault using EGA DATA API
- Documentation/Roadmap
- Progress CEGA interface side
- Define tests to implement for DATA OUT
Jan 11 09:00
- Implement deployment in OpenShift
- Research "DATA OUT"
- Integrate EGA DATA API in "DATA OUT"
- Add a business logic monitoring tool
- Rework the test list
Dec 14 09:00
- Demonstrate Deployment on OpenStack with Terraform
- Add a business logic monitoring tool
- Implement deployment in OpenShift
- Research "DATA OUT"
- Integrate EGA DATA API in "DATA OUT"
- Make tests configurable
- Rework the test list
November 30 09:00
- Bring the terraform branch up to date
- Merge Juha's branch into NBISweden
- Remove blockers for OpenShift issues #182 and #183
- Investigate how to integrate EGA DATA API
- Settle the information about Error handling
- General Documentation
- Testing file ingestions F.1 - F.5
E10:3309
Join from PC, Mac, Linux, iOS or Android: https://sunet.zoom.us/j/214422954
November 16 09:00
- Bring the terraform branch up to date
- Prepare information about the communication between CEGA & LEGA
- Test authentication
- Data OUT (HACKATHON)
- Merge Juha's branch into NBISweden
E10:3309
Web browser: https://vconf.kth.se/. Set SciLifeLab as conference room. Pincode: 8800;
SIP/H323: connect to [email protected]
November 2 09:00
- Juha to find out decision information about OpenShift
- Dmytro to find out decision information about Jenkins
- Figure out if we can use SUNET for the production system in Sweden.
- Error/monitor messaging from ingestion process.
- Write contribution guidelines
- Fix KNOX testing environment.
E10:3309
Web browser: https://vconf.kth.se/. Set SciLifeLab as conference room. Pincode: 8800;
SIP/H323: connect to [email protected]
October 19 at 09:00
- Nanjiang to create an extensive list of tests
- and show a limited demo
- Jonas to show a hardware budget
- Juha to find out decision information about OpenShift
- Dmytro to find out decision information about Jenkins
- Decide on what CI to use.
- Convert "The Oscar List" into doable tasks.
October 5 at 09:00
- EGA auth - Fred
- Review the documentation of docker-compose.
- Continue setting up tests in OpenShift (docker)
- Start sketching hardware requirements - Jonas
- Document Installation procedures
Sprint review 5 October, 9:00
- Clean up provisioning with Terraform (on SSC)
- User management
- Proper C code review
- Expiry date support
- Automatic home directory creation
- Continue setting up tests in OpenShift (docker)
- Test ingestion on "big data"
- Start sketching hardware requirements
- Document
- Installation procedures
Sprint review 21 September, 9:00
- Clean up provisioning with Terraform (on SSC)
- User management
- Proper C code review
- Expiry date support
- Investigate if ssh keys can be stored in database
- Document
- Code (prio 1)
- Installation procedures
- General architecture
- Continue setting up tests in OpenShift (docker)
- Test ingestion on "big data"
- Start sketching hardware requirements
Sprint review 7 September, 9:00
- Deploy on SSC with Terraform
- Test ingestion on "big data"
- Document
- Code (prio 1)
- Installation procedures
- General architecture
- Continue setting up tests in OpenShift
- Start sketching hardware requirements
Sprint review 24 August, 9:00
- Provisioning: docker-compose
- ~~Deployment: Ansible~~
- Documentation
- SFTP user creation with MQ
Sprint review 14 July, 9:00
- Deployment
- Ansible? docker-compose?...others?
- fixing bugs
- SFTP user creation with MQ
Sprint review 21 June, 9:00
- Error handling
- Start hooking up AAI
- submission account creation
- ingestion trigger
- ingestion confirmation dialogue
Sprint review 12 June, 14:00
- Deployment in VMs
- AAI auth for every part
- Error handling strategy
- File Naming
- Vault: file naming and the database update (requires stable database schemas)
- back and forth to CentralEGA for "global" naming
- demonstrate a submission all the way
- Lock inbox instead of moving to staging area
- deploy a real demo with fake data all the way
- Start setting up deployment strategy
- First iteration of database schema
- (Fix KNOX)
- finish the re-encryption pipeline