This document contains the installation and configuration information required to deploy the OGC Resource Server.
In order to connect the OGC Resource Server with AWS S3, PostgreSQL, RabbitMQ, the DX Catalogue Server, the DX AAA Server, the DX Auditing Server, etc., please refer to Configurations. It contains the information that should be updated as per the deployment.
This section explains the dependencies and their scope. The dependencies are expected to be met before starting the deployment of the OGC Resource Server.
Software Name | Purpose |
---|---|
PostGIS | For storing geospatial metadata and information related to processes, feature collections, tiles, STAC assets, coverages, resources and users |
AWS S3 | To serve map tiles as well as STAC asset files |
RabbitMQ | To publish auditing related data to auditing server via RabbitMQ exchange |
DX Authentication Authorization and Accounting (AAA) Server | Used to download the certificate for JWT token decoding and to get user info |
DX Catalogue Server | Used to fetch the list of resources and provider related information |
DX Auditing Server | Used for logging and auditing the access for metering purposes |
- Make a config file based on the template in example-config/config-example.json
- Set up AWS S3 for serving tiles and STAC assets
- Set up PostGIS for storing information related to geospatial data
- Set up RabbitMQ for publishing the auditing data
- Set up the database using Flyway
- AWS S3 is used to serve map tiles as well as STAC asset files
- An S3 bucket can be set up by following the AWS S3 documentation, after which the S3 bucket name, region name, access key and secret key can be added to the config.
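A rough sketch of creating such a bucket with the AWS CLI (the bucket name and region below are placeholders; the AWS CLI is assumed to be installed and configured):

```bash
# Create an S3 bucket for serving tiles and STAC assets
# (bucket name and region are placeholders -- replace with your own)
aws s3api create-bucket \
  --bucket my-ogc-assets-bucket \
  --region ap-south-1 \
  --create-bucket-configuration LocationConstraint=ap-south-1

# The bucket name, region, and the access key/secret key of an IAM user with
# read/write access to this bucket are then added to the server config.
```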
- PostGIS is an extension for PostgreSQL that adds support for geographic objects, allowing users to store and query spatial data.
- To set up PostgreSQL, refer to the setup and installation instructions available here
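Once PostgreSQL is up, the PostGIS extension needs to be enabled on the server's database. A minimal sketch, assuming a placeholder database name `ogc_db`:

```bash
# Enable the PostGIS extension on the server's database (ogc_db is a placeholder name)
sudo -u postgres psql -d ogc_db -c "CREATE EXTENSION IF NOT EXISTS postgis;"
# Confirm that the extension is available
sudo -u postgres psql -d ogc_db -c "SELECT postgis_version();"
```

The server's database contains the following tables: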
Table Name | Purpose |
---|---|
collections_details | To store metadata about collections, including title, description, and bounding box |
processes_table | To store details of processes including input, output, and execution modes |
ri_details | To store information related to resource instance (RI) details, including role and access type |
roles | To store user roles, such as provider, consumer, or delegate |
jobs_table | To store job details, including status, type, progress, and timestamps, related to different processes |
collection_type | To store types associated with collections, based on the type column from collections_details |
tilematrixset_metadata | To store metadata for tile matrix sets, including scale, cell size, and matrix dimensions |
tilematrixsets_relation | To store the relation between collections and tile matrix sets |
collection_supported_crs | To store the Coordinate Reference Systems (CRS) supported by specific collections |
crs_to_srid | To map Coordinate Reference Systems (CRS) to Spatial Reference Identifiers (SRID) |
stac_collections_assets | To store assets linked to collections, such as thumbnails, data, and metadata, including their size, type, and role |
collection_coverage | To store the coverage schema and associated hrefs related to collections, helping define the spatial/temporal extent of collections |
- Auditing is done using the DX Auditing Server, which uses PostgreSQL to store the audit logs of the OGC Resource Server
- The schema for the auditing table in PostgreSQL is available here - postgres auditing table schema
Table Name | Purpose | DB |
---|---|---|
auditing_ogc | To store audit logs for operations in the OGC Resource Server | PostgreSQL |
- RabbitMQ is used to push the audit logs, which are consumed by the auditing server
- To set up RabbitMQ, refer to the setup and installation instructions available here
- After deploying RabbitMQ, ensure that the following prerequisites are met. If they are not, log in to the RabbitMQ management UI and create the following (a command-line sketch is given after the tables below)
Type | Name | Details |
---|---|---|
vHost | IUDX-INTERNAL | Create a vHost in RabbitMQ |
Exchange Name | Type of exchange | features | Details |
---|---|---|---|
auditing | direct | durable | Create an exchange in vHost IUDX-INTERNAL to allow audit information to be published |
Exchange Name | Queue Name | vHost | routing key | Details |
---|---|---|---|---|
auditing | direct | IUDX-INTERNAL | # | Create a queue in vHost IUDX-INTERNAL to allow audit information to be consumed. Ensure that the queue is bound to the auditing exchange
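As a command-line alternative to the management UI, the vHost, exchange, queue and binding can be created roughly as follows (assuming the RabbitMQ management plugin and `rabbitmqadmin` are available; the queue name below is a placeholder, so use the queue your auditing server actually consumes from):

```bash
# Create the vHost used for audit traffic
rabbitmqctl add_vhost IUDX-INTERNAL

# Declare the durable direct exchange that audit messages are published to
rabbitmqadmin --vhost=IUDX-INTERNAL declare exchange name=auditing type=direct durable=true

# Declare a durable queue and bind it to the auditing exchange with routing key '#'
# (the queue name here is a placeholder)
rabbitmqadmin --vhost=IUDX-INTERNAL declare queue name=auditing-queue durable=true
rabbitmqadmin --vhost=IUDX-INTERNAL declare binding source=auditing destination=auditing-queue routing_key="#"
```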
- Database flyway migrations help in updating the schema, permissions, grants, triggers etc., to the latest version
- Each flyway schema file is versioned with the format `V<majorVersion>_<minorVersion>__<description>.sql`, e.g. `V1_1__init-tables.sql`
- Flyway is used to manage the database schema and handle migrations. The migration files are located at src/main/resources/db/migrations
- The following prerequisites are needed before running `flyway`:
  - An admin user - a database user who has create schema/table privileges for the database. It can be the super user.
  - A normal user - the database user that will be configured to make queries from the server (e.g. `CREATE USER ogc WITH PASSWORD 'randompassword';`)
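A rough sketch of creating the two users with `psql` (the admin user name, passwords and database name are placeholders; the database is assumed to already exist):

```bash
# Create the Flyway admin user and the server user
# (user names, passwords and the ogc_db database name are placeholders)
sudo -u postgres psql <<'EOF'
CREATE USER flyway_admin WITH PASSWORD 'adminpassword';
GRANT CREATE ON DATABASE ogc_db TO flyway_admin;  -- lets the admin user create schemas
CREATE USER ogc WITH PASSWORD 'randompassword';   -- the user the server connects as
EOF
```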
flyway.conf must be updated with the required data:

- `flyway.url` - the database connection URL
- `flyway.user` - the username of the admin user
- `flyway.password` - the password of the admin user
- `flyway.schemas` - the name of the schema under which the tables are created
- `flyway.placeholders.ogcUser` - the username of the server user
Please refer here for more information about Flyway config parameters.
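A minimal sketch of a `flyway.conf`, using the placeholder database and users from the example above; replace every value to match your deployment:

```bash
# Write an example flyway.conf (all values below are placeholders)
cat > flyway.conf <<'EOF'
flyway.url=jdbc:postgresql://localhost:5432/ogc_db
flyway.user=flyway_admin
flyway.password=adminpassword
flyway.schemas=public
flyway.placeholders.ogcUser=ogc
EOF
```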
After this, the `info` command can be run to test the config. Then, the `migrate` command can be run to set up the database. At the /ogc-resource-server directory, run:

```
mvn flyway:info -Dflyway.configFiles=flyway.conf
mvn flyway:migrate -Dflyway.configFiles=flyway.conf
```
- Install Java 11 and maven
- Set environment variables
```
export LOG_LEVEL=INFO
```
- Use the maven exec plugin based starter to start the server
```
mvn clean compile exec:java@ogc-resource-server
```
- The server will be up on port 8080. To change the port, add `httpPort:<desired_port_number>` to the config in the `ApiServerVerticle` module. See example-config/config-example.json for an example.
- Install Java 11 and maven
- Set environment variables
```
export LOG_LEVEL=INFO
```
- Use maven to package the application as a JAR
```
mvn clean package -Dmaven.test.skip=true
```
- 2 JAR files would be generated in the `target/` directory
  - `ogc-resource-server-dev-0.0.1-SNAPSHOT-fat.jar` - non-clustered vert.x and does not contain micrometer metrics
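A rough sketch of running the generated fat JAR (the `-c` config option and the config path are assumptions; check the project's start scripts for the exact option expected):

```bash
# Run the dev fat JAR with a config file
# NOTE: the -c flag and the config path are assumptions; consult the project's
# start scripts/docs for the exact way to pass the config
java -jar target/ogc-resource-server-dev-0.0.1-SNAPSHOT-fat.jar -c configs/config.json
```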
- Install docker and docker-compose
- Clone this repo
- Build the images
```
./docker/build.sh
```
- Modify the `docker-compose.yml` file to map the config file you just created
- Start the server in production (prod) or development (dev) mode using docker-compose
```
docker-compose up prod
```
- The server will be up on port 8080. To change the port, add `httpPort:<desired_port_number>` to the config in the `ApiServerVerticle` module. See example-config/config-example.json for an example
A client SDK generated using OpenAPI Generator is located at client-sdk. To generate a version of the SDK derived from the latest version of the OpenAPI spec at https://geoserver.dx.ugix.org.in, download the OpenAPI Generator JAR file and run:
```
java -jar openapi-generator-cli.jar generate -i <URL> -g python --additional-properties removeOperationIdPrefix=true,removeOperationIdPrefixDelimiter=-,removeOperationIdPrefixCount=6 -o client-sdk --global-property models,modelTests=false,apis,apiTests=false,supportingFiles=README.md:requirements.txt:setup.py:setup.cfg:openapi_client:api_client.py:api_response.py:exceptions.py:__init__.py:configuration.py:py.types:rest.py
```
where `<URL>` can be:

- https://geoserver.dx.ugix.org.in/api for OGC APIs
- https://geoserver.dx.ugix.org.in/stac/api for STAC APIs
- https://geoserver.dx.ugix.org.in/metering/api for metering/auditing APIs
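For example, to generate the SDK for the OGC APIs, the same command is run with the first URL substituted:

```bash
# Generate the Python client SDK from the OGC API spec
java -jar openapi-generator-cli.jar generate -i https://geoserver.dx.ugix.org.in/api -g python \
  --additional-properties removeOperationIdPrefix=true,removeOperationIdPrefixDelimiter=-,removeOperationIdPrefixCount=6 \
  -o client-sdk \
  --global-property models,modelTests=false,apis,apiTests=false,supportingFiles=README.md:requirements.txt:setup.py:setup.cfg:openapi_client:api_client.py:api_response.py:exceptions.py:__init__.py:configuration.py:py.types:rest.py
```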
- Apache Log4j 2 is used for asynchronous logging and for logging messages to the console in a specific format
- For log formatting, adding appenders, adding custom logs, and setting log levels, log4j2.xml can be updated: link
- Please find the reference to Log4j 2 here
- Run the server through either docker, maven or redeployer
- Run the unit tests and generate a surefire report
```
mvn clean test-compile surefire:test surefire-report:report
```
- Jacoco reports are stored in `./target/`
Integration tests for the OGC Resource Server are handled using Rest Assured. These tests verify server functionality by interacting with the system through HTTP requests and responses. Follow the steps below to run the integration tests:
- **Install Prerequisites**
  - Docker (ensure Docker is installed and running on your system)
  - Maven (ensure Maven is installed and configured correctly on your system)
- **Update Docker Compose Configuration**
  - Before running the tests, update the necessary values in the docker-compose.test.yml file. Ensure correct paths are set for the configuration files, and that all required volumes and environment variables are properly configured
- **Build Docker Images**
  - First, build the Docker images required for running the tests:
    ```
    sudo docker build -t s3-mock-modded -f docker/s3_mock_modded_to_443.dockerfile .
    sudo docker build -t ogc-test -f docker/test.dockerfile .
    ```
- **Run Docker Compose**
  - Start the Docker containers needed for testing:
    ```
    sudo docker compose -f docker-compose.test.yml up integ-test
    ```
- **Run Integration Tests**
  - Use Maven to run the integration tests. The `mvn verify` command will execute all the tests within the project, skipping unit tests and shaded JAR creation:
    ```
    mvn verify -DskipUnitTests=true -DskipBuildShadedJar=true -DintTestHost=localhost -DintTestPort=8443 -Pit
    ```
- **Test Reports**
  - Maven will generate reports after running the integration tests. You can check the results under the `./target/` directory
- Compliance testing ensures that the OGC Resource Server correctly implements OGC standards, which are essential for reliability and interoperability between systems
- These tests validate the server's adherence to OGC specifications, helping ensure seamless integration with other geospatial systems
- Successfully passing these tests confirms that the server is compliant with OGC standards, providing confidence in the server's capabilities and ensuring it meets the standards required for modern geospatial applications
- For more details and to review the compliance test results for our server, visit: OGC Resource Server Compliance Test Reports
- JMeter is used for performance testing and load testing of the application
- Please find the reference to JMeter here
- Command to generate an HTML report at `target/jmeter`:
```
rm -r -f target/jmeter && jmeter -n -t jmeter/<file-name>.jmx -l target/jmeter/sample-reports.csv -e -o target/jmeter/
```
- For security testing, Zed Attack Proxy (ZAP) scanning is done to discover security risks and vulnerabilities so that they can be addressed
- A report is generated that classifies vulnerabilities as high risk, medium risk, low risk and false positive
- Please find the reference to ZAP here
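A minimal sketch of running a ZAP baseline scan from Docker against a deployed server (the target URL is an example and the image tag may differ in your setup):

```bash
# Run a ZAP baseline scan and write an HTML report to the current directory
# (mounting the working directory so the report is accessible outside the container)
docker run --rm -v "$(pwd)":/zap/wrk/:rw -t ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py -t https://geoserver.dx.ugix.org.in -r zap-report.html
```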