diff --git a/.github/workflows/publish-docker-image.yml b/.github/workflows/publish-docker-image.yml index c9c2974..085555c 100644 --- a/.github/workflows/publish-docker-image.yml +++ b/.github/workflows/publish-docker-image.yml @@ -36,7 +36,7 @@ jobs: uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7 with: images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }} - tags: type=raw,value=3.3 + tags: type=raw,value=3.4 # This step uses the `docker/build-push-action` action to build the image, based on your repository's `Dockerfile`. If the build succeeds, it pushes the image to GitHub Packages. # It uses the `context` parameter to define the build's context as the set of files located in the specified path. For more information, see "[Usage](https://github.com/docker/build-push-action#usage)" in the README of the `docker/build-push-action` repository. # It uses the `tags` and `labels` parameters to tag and label the image with the output from the "meta" step. diff --git a/API.md b/API.md index 980467d..96a010f 100644 --- a/API.md +++ b/API.md @@ -1,6 +1,6 @@ **EcoSonar API** -New Postman Collection available with all endpoints : +New Postman Collection available with all endpoints : [![Run in Postman](https://run.pstmn.io/button.svg)](https://app.getpostman.com/run-collection/9592977-29c7010f-0efd-4063-b76a-5b0f455b1829?action=collection%2Ffork&collection-url=entityId%3D9592977-29c7010f-0efd-4063-b76a-5b0f455b1829%26entityType%3Dcollection%26workspaceId%3Df7ed92ee-00aa-4dc1-95aa-9f7d2da44e68) @@ -8,247 +8,685 @@ Swagger User Interface available at the link : `[ECOSONAR-API-URL]/swagger/` Locally, available at this address : `http://localhost:3002/swagger/` ----- +--- + +## **EcoSonar URL Configuration - GET URLs FROM PROJECT** -**EcoSonar URL Configuration - GET URLs FROM PROJECT** ----- ![GET URLs FROM PROJECT](./images/get-urls-from-project.webp) -* **URL** +- **URL** `/api/all?projectName=` -* **Method:** +- **Method:** `GET` - -* **URL 
Params** - PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. +- **URL Params** + + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. + + **Required:** - **Required:** - - `PROJECT_NAME=[string]` + `PROJECT_NAME=[string]` -* **Data Params** +- **Data Params** None -* **Success Response:** +- **Success Response:** - * **Code:** 200
+ - **Code:** 200
**Content:** `[ - "url1", - "url2", - "url3"]` - -* **Error Response:** +"url1", +"url2", +"url3"]` + +- **Error Response:** When you don't have any URLs assigned to a project into EcoSonar - * **Code:** 400 BAD REQUEST
- **Content:** `{ - "error": "Your project has no url assigned into EcoSonar. You must at least add one url if you want to analyse ecodesign practices." - }` +- **Code:** 400 BAD REQUEST
+ **Content:** `{ +"error": "Your project has no url assigned into EcoSonar. You must at least add one url if you want to analyse ecodesign practices." +}` - OR +OR EcoSonar API is not able to request to the MongoDB Database : - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
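As a quick sanity check, the endpoint above can be exercised with curl. The base URL and project key below are placeholders for your own deployment:

```shell
# Placeholder values: point these at your EcoSonar deployment and Sonarqube project key.
ECOSONAR_API="http://localhost:3002"
PROJECT_NAME="my-project-key"

# Returns the list of URLs registered for the project (HTTP 400 if none are configured).
curl -s "${ECOSONAR_API}/api/all?projectName=${PROJECT_NAME}"
```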
+ +## **EcoSonar URL Configuration - INSERT URLs IN PROJECT** -**EcoSonar URL Configuration - INSERT URLs IN PROJECT** ----- ![INSERT URLs IN PROJECT](./images/insert-urls-in-project.webp) -* **URL** +- **URL** `/api/insert` -* **Method:** +- **Method:** `POST` - -* **URL Params** - None +- **URL Params** + + None -* **Data Params** +- **Data Params** PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. `{ - "projectName" : "PROJECT_NAME", - "urlName": ["url1", "url2"] - }` + "projectName" : "PROJECT_NAME", + "urlName": ["url1", "url2"] +}` + +- **Success Response:** -* **Success Response:** + - **Code:** 200
- * **Code:** 200
- -* **Error Response:** +- **Error Response:** When you have validation errors in the list of urls you want to insert (url invalid or duplicated), with the error index corresponding to the index url - * **Code:** 400 BAD REQUEST
- **Content:** `{ +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": [ "Url has an invalid syntax", "URL was duplicated or already inserted" ] }` - OR +OR EcoSonar API is not able to request to the MongoDB Database : - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
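A sketch of the insert call with curl, assuming the API is reachable on localhost and using made-up example URLs:

```shell
# Placeholder base URL for your own EcoSonar deployment.
ECOSONAR_API="http://localhost:3002"

# Registers two URLs in the project; a 400 response lists one validation error per URL.
curl -s -X POST "${ECOSONAR_API}/api/insert" \
  -H "Content-Type: application/json" \
  -d '{
        "projectName": "my-project-key",
        "urlName": ["https://www.example.com/", "https://www.example.com/about"]
      }'
```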
+ +## **EcoSonar URL Configuration - DELETE URL IN PROJECT** -**EcoSonar URL Configuration - DELETE URL IN PROJECT** ----- ![DELETE URL IN PROJECT](./images/delete-url-in-project.webp) You can delete one url at a time. -* **URL** +- **URL** `/api/delete` -* **Method:** +- **Method:** `DELETE` - -* **URL Params** - None +- **URL Params** -* **Data Params** + None + +- **Data Params** PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. `{ - "projectName" : "PROJECT_NAME", - "urlName" : "url_to_delete" - }` + "projectName" : "PROJECT_NAME", + "urlName" : "url_to_delete" +}` + +- **Success Response:** -* **Success Response:** + - **Code:** 200
- * **Code:** 200
- -* **Error Response:** +- **Error Response:** When the url can't be found in your project - * **Code:** 400 BAD REQUEST
- **Content:** `{ - "error": "url_to_delete in PROJECT_NAME not found" - }` +- **Code:** 400 BAD REQUEST
+ **Content:** `{ + "error": "url_to_delete in PROJECT_NAME not found" +}` - OR +OR EcoSonar API is not able to request to the MongoDB Database : - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
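Since only one URL can be deleted per call, a loop over several URLs would need one request each. A single deletion might look like this (placeholder host and values):

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL

# Removes a single URL from the project's EcoSonar configuration.
curl -s -X DELETE "${ECOSONAR_API}/api/delete" \
  -H "Content-Type: application/json" \
  -d '{
        "projectName": "my-project-key",
        "urlName": "https://www.example.com/about"
      }'
```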
-**EcoSonar URL Configuration - GET CRAWLER RESULT** ----- -![GET CRAWLER RESULT](./images/get-crawler-result.webp) +## **EcoSonar URL Configuration - DELETE PROJECT** -* **URL** +![DELETE PROJECT](./images/delete-project.webp) + + +- **URL** + + `/api/project` + +- **Method:** + + `DELETE` + +- **URL Params** + + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. + + **Required:** + + `PROJECT_NAME=[string]` + +- **Data Params** + +None + +- **Success Response:** + + - **Code:** 200
+ +- **Error Response:** + +EcoSonar API is not able to request to the MongoDB Database : + +- **Code:** 500 Internal Server Error
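Deleting a whole project can be sketched as follows; the project key is passed as a query parameter rather than in a body (placeholder values):

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL
PROJECT_NAME="my-project-key"

# Deletes the project and its EcoSonar configuration.
curl -s -X DELETE "${ECOSONAR_API}/api/project?projectName=${PROJECT_NAME}"
```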
+
+## **EcoSonar URL Configuration - LAUNCH CRAWLER**
+
+![LAUNCH CRAWLER](./images/launch-crawler.webp)
+
+- **URL**

  `/api/crawl`

-* **Method:**
+- **Method:**

  `POST`
-
-* **URL Params**
+
+- **URL Params**

  None

-* **Data Params**
-
-  PROJECT_NAME should match the Project Key defined in your Sonarqube Project.
-  homepage_url is the home page of your website, from which the crawler will start discovering all pages within your website
+- **Data Params**
+  PROJECT_NAME should match the Project Key defined in your Sonarqube Project.
+  homepage_url is the home page of your website, from which the crawler will start discovering all pages within your website.
+  save is a boolean: if true, the results will be saved in the database and the pages will be audited by EcoSonar. If false, they will be saved in a temporary collection to be reviewed by a user before being applied to the EcoSonar configuration.

-`
-{
+`{
  "projectName": "PROJECT_NAME",
  "mainUrl": "homepage_url",
-}
-`
+  "saveUrls": "save"
+}`
+
+- **Success Response:**
+
+  - **Code:** 202
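Launching the crawler can be sketched with curl as below; the host and site URL are placeholders, and `saveUrls` is set to `true` so the discovered pages go straight into the EcoSonar configuration:

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL

# Starts a crawl from the site's home page; HTTP 202 is returned immediately
# because the crawl runs asynchronously.
curl -s -X POST "${ECOSONAR_API}/api/crawl" \
  -H "Content-Type: application/json" \
  -d '{
        "projectName": "my-project-key",
        "mainUrl": "https://www.example.com/",
        "saveUrls": true
      }'
```

With `saveUrls` set to `false`, the results land in the temporary collection and can be reviewed later through the GET `/api/crawl` endpoint.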
+ +## **EcoSonar URL Configuration - GET CRAWLER RESULT** + +![GET CRAWLER RESULT](./images/get-crawler-result.webp) + +- **URL** + + `/api/crawl` + +- **Method:** + + `GET` + +- **URL Params** + + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. -* **Success Response:** + **Required:** - * **Code:** 200
+ `PROJECT_NAME=[string]` + +- **Data Params** + None + +- **Success Response:** + + - **Code:** 200
**Content:** `[ - "url1", - "url2", - "url3"]` - -* **Error Response:** +"url1", +"url2", +"url3"]` + +- **Error Response:** + +EcoSonar API failed to get crawler results: + +- **Code:** 400 BAD REQUEST
+ **Content:** `No crawled urls were saved for this project` + +OR + +- **Code:** 500 Internal Server Error
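Fetching the crawler's findings for review is a plain GET (placeholder host and project key):

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL
PROJECT_NAME="my-project-key"

# Lists the URLs discovered by the last crawl for this project.
curl -s "${ECOSONAR_API}/api/crawl?projectName=${PROJECT_NAME}"
```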
+ +## **EcoSonar Login Configuration - SAVE LOGIN FOR PROJECT** + +![SAVE LOGIN FOR PROJECT](./images/save-login-for-project.webp) + +- **URL** + + `/api/login/insert?projectName=` + +- **Method:** + + `POST` + +- **URL Params** + + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. + + **Required:** + + `PROJECT_NAME=[string]` -EcoSonar API failed to launch crawler: +- **Data Params** + `{ + "login": { + "authentication_url": "", + "steps": [] + } +}` + +- **Success Response:** + + - **Code:** 201
+ +- **Error Response:** + +EcoSonar API is not able to save into the MongoDB Database : + +- **Code:** 500 Internal Server Error
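Saving login credentials can be sketched like this; the host, authentication URL, and steps are placeholders for your own setup:

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL
PROJECT_NAME="my-project-key"

# Stores the login configuration used by EcoSonar to authenticate before auditing.
curl -s -X POST "${ECOSONAR_API}/api/login/insert?projectName=${PROJECT_NAME}" \
  -H "Content-Type: application/json" \
  -d '{
        "login": {
          "authentication_url": "https://www.example.com/login",
          "steps": []
        }
      }'
```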
+ +## **EcoSonar Login Configuration - GET LOGIN FOR PROJECT** + +![GET LOGIN FOR PROJECT](./images/get-login-for-project.webp) + +- **URL** + + `/api/login/find?projectName=` + +- **Method:** + + `GET` + +- **URL Params** + + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. + + **Required:** + + `PROJECT_NAME=[string]` + +- **Data Params** + + None + +- **Success Response:** + + - **Code:** 200
+ **Content:** `{ + "authentication_url": "", + "steps": [] +}` + +- **Error Response:** + +When you don't have any login registered for your project into EcoSonar + +- **Code:** 400 BAD REQUEST
+ **Content:** `{ + "error": "Your project does not have login saved into database." +}` + +OR + +EcoSonar API is not able to request to the MongoDB Database : + +- **Code:** 500 Internal Server Error
- * **Code:** 500 Internal Server Error
+## **EcoSonar Login Configuration - DELETE LOGIN FOR PROJECT** + +![DELETE LOGIN FOR PROJECT](./images/delete-login-for-project.webp) + +- **URL** + + `/api/login?projectName=` + +- **Method:** + + `DELETE` + +- **URL Params** + + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. + + **Required:** + + `PROJECT_NAME=[string]` + +- **Data Params** + + None + +- **Success Response:** + + - **Code:** 200
+ +- **Error Response:** + +When you don't have any login registered for your project into EcoSonar and you want still to delete it + +- **Code:** 400 BAD REQUEST
+ **Content:** `{ + "error": "Project not found" +}` + +OR + +EcoSonar API is not able to request to the MongoDB Database : + +- **Code:** 500 Internal Server Error
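Removing a saved login is a DELETE with the project key in the query string (placeholder values):

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL
PROJECT_NAME="my-project-key"

# Deletes the login configuration; a 400 "Project not found" is returned if none exists.
curl -s -X DELETE "${ECOSONAR_API}/api/login?projectName=${PROJECT_NAME}"
```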
+ +## **EcoSonar Proxy Configuration - SAVE PROXY FOR PROJECT** + +![SAVE PROXY FOR PROJECT](./images/save-proxy-for-project.webp) + +- **URL** + + `/api/proxy/insert?projectName=` + +- **Method:** + + `POST` + +- **URL Params** + + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. + + **Required:** + + `PROJECT_NAME=[string]` + +- **Data Params** + `{ + "proxy": { + "ipAddress": "", + "port": "" + } +}` + +- **Success Response:** + + - **Code:** 201
+ +- **Error Response:** + +EcoSonar API is not able to save into the MongoDB Database : + +- **Code:** 500 Internal Server Error
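A proxy configuration can be registered with a call of this shape (host, IP, and port below are made-up examples):

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL
PROJECT_NAME="my-project-key"

# Stores the proxy EcoSonar should route its audits through for this project.
curl -s -X POST "${ECOSONAR_API}/api/proxy/insert?projectName=${PROJECT_NAME}" \
  -H "Content-Type: application/json" \
  -d '{
        "proxy": {
          "ipAddress": "10.0.0.1",
          "port": "3128"
        }
      }'
```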
+ +## **EcoSonar Proxy Configuration - GET PROXY FOR PROJECT** + +![GET PROXY FOR PROJECT](./images/get-proxy-for-project.webp) + +- **URL** + + `/api/proxy/find?projectName=` + +- **Method:** + + `GET` + +- **URL Params** + + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. + + **Required:** + + `PROJECT_NAME=[string]` + +- **Data Params** + + None + +- **Success Response:** + + - **Code:** 200
+ **Content:** `{ + "ipAddress": "", + "port": "" +}` + +- **Error Response:** + +When you don't have any proxy registered for your project into EcoSonar + +- **Code:** 400 BAD REQUEST
+ **Content:** `{ + "error": "Your project does not have proxy configuration saved into database." +}` + +OR + +EcoSonar API is not able to request to the MongoDB Database : + +- **Code:** 500 Internal Server Error
+ +## **EcoSonar Proxy Configuration - DELETE PROXY FOR PROJECT** + +![DELETE PROXY FOR PROJECT](./images/delete-proxy-for-project.webp) + +- **URL** + + `/api/proxy?projectName=` + +- **Method:** + + `DELETE` + +- **URL Params** + + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. + + **Required:** + + `PROJECT_NAME=[string]` + +- **Data Params** + + None + +- **Success Response:** + + - **Code:** 200
+ +EcoSonar API is not able to request to the MongoDB Database : + +- **Code:** 500 Internal Server Error
+ +## **EcoSonar USER FLOW Configuration - SAVE USER FLOW for URL** + +![SAVE USER FLOW for URL](./images/save-user-flow-for-url.webp) + +- **URL** + + `/api/user-flow/insert` + +- **Method:** + + `POST` + +- **URL Params** + +None + +- **Data Params** + +`{ + "url": "", + "userFlow": { + "steps": [ + ] + } +}` + +- **Success Response:** + + - **Code:** 200
+    **Content:** `{
+  "steps": [
+  ]
+}`
+
+- **Error Response:**
+
+When you try to add a user flow to a URL that does not exist:
+
+- **Code:** 400 BAD REQUEST
+ **Content:** `{ + "error": "Url not found" +}` + +OR + +EcoSonar API is not able to request to the MongoDB Database : + +- **Code:** 500 Internal Server Error
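Attaching a user flow to an already-registered URL can be sketched as below; the host and URL are placeholders, and the empty `steps` array would hold your recorded flow:

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL

# Saves a user flow for one audited URL; fails with 400 if the URL is unknown.
curl -s -X POST "${ECOSONAR_API}/api/user-flow/insert" \
  -H "Content-Type: application/json" \
  -d '{
        "url": "https://www.example.com/checkout",
        "userFlow": {
          "steps": []
        }
      }'
```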
+
+## **EcoSonar USER FLOW Configuration - GET USER FLOW for URL**
+
+![GET USER FLOW for URL](./images/get-user-flow-for-url.webp)
+
+- **URL**
+
+  `/api/user-flow/find`
+
+- **Method:**
+
+  `GET`
+
+- **URL Params**
+
+None
+
+- **Data Params**
+
+  PROJECT_NAME should match the Project Key defined in your Sonarqube Project.
+
+`{
+  "url": "",
+  "projectName": "PROJECT_NAME"
+}`
+
+- **Success Response:**
+
+  - **Code:** 200
+ **Content:** `{ + "steps": [ + ] +}` + +- **Error Response:** + +When you don't have any user flow registered for the url into EcoSonar + +- **Code:** 400 BAD REQUEST
+ **Content:** `{ + "error": "Your project does not have user flow saved into database." +}` + +OR + +EcoSonar API is not able to request to the MongoDB Database : + +- **Code:** 500 Internal Server Error
+ +## **EcoSonar USER FLOW Configuration - DELETE USER FLOW FOR URL** + +![DELETE USER FLOW FOR URL](./images/delete-user-flow-for-url.webp) + +- **URL** + + `/api/user-flow` + +- **Method:** + + `DELETE` + +- **URL Params** + +None + +- **Data Params** + +`{ + "url": "" +}` + +- **Success Response:** + + - **Code:** 200
+ +EcoSonar API is not able to request to the MongoDB Database : + +- **Code:** 500 Internal Server Error
+ +## **EcoSonar LAUNCH ANALYSIS** -**EcoSonar LAUNCH ANALYSIS** ----- ![LAUNCH ANALYSIS](./images/launch-analysis.webp) EcoSonar analysis is launched through this API call either directly with a curl command or Postman request or through a Sonarqube Analysis. API call is done asynchronously to avoid performance issue ( ~ 3 seconds to analyse one page) -* **URL** +- **URL** `/api/greenit/insert` -* **Method:** +- **Method:** `POST` - -* **URL Params** - None +- **URL Params** -* **Data Params** + None + +- **Data Params** PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. `{ - "projectName" : "PROJECT_NAME" - }` + "projectName" : "PROJECT_NAME" +}` -* **Success Response:** +- **Success Response:** - * **Code:** 202
+ - **Code:** 202
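The asynchronous analysis described above boils down to a single POST; a minimal curl sketch, with placeholder host and project key:

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL

# Triggers an EcoSonar analysis of every URL registered for the project.
# HTTP 202 is returned at once; the audit itself runs in the background
# (roughly 3 seconds per page).
curl -s -X POST "${ECOSONAR_API}/api/greenit/insert" \
  -H "Content-Type: application/json" \
  -d '{ "projectName": "my-project-key" }'
```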
+ +## **EcoSonar ANALYSIS - RETRIEVE ANALYSIS PER PROJECT** -**EcoSonar ANALYSIS - RETRIEVE ANALYSIS PER PROJECT** ----- ![RETRIEVE ANALYSIS PER PROJECT](./images/retrieve-analysis-per-project.webp) -* **URL** +- **URL** `/api/project?projectName=` -* **Method:** +- **Method:** `GET` - -* **URL Params** - PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. +- **URL Params** - **Required:** - - `PROJECT_NAME=[string]` + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. -* **Data Params** + **Required:** - None + `PROJECT_NAME=[string]` -* **Success Response:** +- **Data Params** - * **Code:** 200
+ None + +- **Success Response:** + + - **Code:** 200
**Content:** `{ "allowW3c": "true", "deployments": { @@ -351,57 +789,57 @@ EcoSonar analysis is launched through this API call either directly with a curl } } }` - -* **Error Response:** -When no analysis has been done yet on your project +- **Error Response:** - * **Code:** 400 BAD REQUEST
- **Content:** `{ +When no analysis has been done yet on your project + +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": "No analysis found for PROJECT_NAME" }` - OR +OR When an error occured when generating the report - * **Code:** 400 BAD REQUEST
- **Content:** `{ +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": "Error during generation of PROJECT_NAME analysis" }` EcoSonar API is not able to request to the MongoDB Database : - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
+ +## **EcoSonar ANALYSIS - RETRIEVE ANALYSIS PER URL** -**EcoSonar ANALYSIS - RETRIEVE ANALYSIS PER URL** ----- ![RETRIEVE ANALYSIS PER URL](./images/retrieve-analysis-per-url.webp) -* **URL** +- **URL** `/api/greenit/url` -* **Method:** +- **Method:** `POST` - -* **URL Params** - None +- **URL Params** -* **Data Params** + None + +- **Data Params** PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. `{ - "projectName" : "PROJECT_NAME", - "urlName": "url_to_retrieve" - }` + "projectName" : "PROJECT_NAME", + "urlName": "url_to_retrieve" +}` -* **Success Response:** +- **Success Response:** - * **Code:** 200
+ - **Code:** 200
**Content:** `{ "deployments": { "greenit": [ @@ -503,160 +941,158 @@ EcoSonar API is not able to request to the MongoDB Database : } } }` - -* **Error Response:** -When no analysis has been done yet on your url +- **Error Response:** - * **Code:** 400 BAD REQUEST
- **Content:** `{ +When no analysis has been done yet on your url + +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": "No lighthouse and greenit analysis found for url url_to_retrieve in project PROJECT_NAME" }` - OR +OR EcoSonar API is not able to request to the MongoDB Database : - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
+ +## **EcoSonar ANALYSIS - GET PROJECT SCORES** -**EcoSonar ANALYSIS - GET PROJECT SCORES** ----- ![GET PROJECT SCORES](./images/get-project-scores.webp) Retrieve current scores from EcoIndex, Lighthouse Performance and Accessibility and W3C Validator for the project -* **URL** +- **URL** `/api/ecosonar/scores?projectName=` -* **Method:** +- **Method:** `GET` - -* **URL Params** - PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. +- **URL Params** - **Required:** - - `PROJECT_NAME=[string]` + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. + + **Required:** -* **Data Params** + `PROJECT_NAME=[string]` - None +- **Data Params** + + None -* **Success Response:** +- **Success Response:** - * **Code:** 200
+ - **Code:** 200
**Content:** ` - { - "ecoIndex": 0, - "perfScore": 0, - "accessibilityScore": 0, - "w3cScore": 0 - }` +{ + "ecoIndex": 0, + "perfScore": 0, + "accessibilityScore": 0, + "w3cScore": 0 +}` -* **Error Response:** +- **Error Response:** When no analysis found for the project - * **Code:** 400 BAD REQUEST
- **Content:** `{ +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": "No Analysis found for project PROJECT_NAME" }` - OR +OR EcoSonar API is not able to request to the MongoDB Database : - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
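Fetching the four current scores for a project is a single GET (placeholder host and project key):

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL
PROJECT_NAME="my-project-key"

# Returns ecoIndex, perfScore, accessibilityScore and w3cScore for the project.
curl -s "${ECOSONAR_API}/api/ecosonar/scores?projectName=${PROJECT_NAME}"
```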
+ +## **EcoSonar ANALYSIS - GET AVERAGE OF ALL SCORES FOR PROJECTS REGISTERED IN ECOSONAR AT A DEFINED DATE** -**EcoSonar ANALYSIS - GET AVERAGE OF ALL SCORES FOR PROJECTS REGISTERED IN ECOSONAR AT A DEFINED DATE** ----- ![GET AVERAGE OF ALL SCORES FOR PROJECTS REGISTERED IN ECOSONAR AT A DEFINED DATE](./images/get-average-all-scores-projects.webp) Retrieve all EcoSonar projects average for all scores (EcoIndex, Google Lighthouse and W3C Validator). You can retrieve the scores at a date defined or for last analysis made if no date defined -* **URL** +- **URL** `/api/ecosonar/info` or `/api/ecosonar/info?date=` -* **Method:** +- **Method:** `GET` - -* **URL Params** - DATE is optional : if no date defined, will look at latest analysis otherwise search for the latest analysis made before that date. +- **URL Params** + + DATE is optional : if no date defined, will look at latest analysis otherwise search for the latest analysis made before that date. - **Optional:** - - `DATE=[string]` with format YYYY-MM-DD + **Optional:** -* **Data Params** + `DATE=[string]` with format YYYY-MM-DD - None +- **Data Params** -* **Success Response:** + None + +- **Success Response:** - * **Code:** 200
+ - **Code:** 200
**Content:** ` - { - "nbProjects": 0, - "ecoIndex": 0, - "perfScore": 0, - "accessibilityScore": 0, - "w3cScore": 0 - }` +{ + "nbProjects": 0, + "ecoIndex": 0, + "perfScore": 0, + "accessibilityScore": 0, + "w3cScore": 0 +}` -* **Error Response:** +- **Error Response:** If date format is wrong - * **Code:** 400 BAD REQUEST
- **Content:** `{ +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": 'Bad date format: YYYY-MM-DD' }` - OR +OR EcoSonar API is not able to request to the MongoDB Database or other internal error: - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
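The date-filtered average query can be sketched as follows; the host and date are placeholders, and omitting `date` falls back to the latest analysis:

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL

# Average scores across all projects, using the latest analysis made before the date.
curl -s "${ECOSONAR_API}/api/ecosonar/info?date=2024-01-31"

# Same call without a date: averages over each project's most recent analysis.
curl -s "${ECOSONAR_API}/api/ecosonar/info"
```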
+ +## **EcoSonar ANALYSIS - GET ALL PROJECTS SCORES FROM DATE DEFINED** -**EcoSonar ANALYSIS - GET ALL PROJECTS SCORES FROM DATE DEFINED** ----- ![GET ALL PROJECTS SCORES FROM DATE DEFINED](./images/get-all-scores-projects.webp) Retrieve all EcoSonar projects and return the scores for each of them at the date defined, if date not filled it would be the latest analysis. -* **URL** +- **URL** `/api/project/all` or `/api/ecosonar/info?date=` or `/api/ecosonar/info?filterName=` - or + or `/api/ecosonar/info?date=&filterName=` - -* **Method:** +- **Method:** `POST` - -* **URL Params** - DATE is optional : if no date defined, will look at latest analysis otherwise search for the latest analysis made before that date. - FILTER-NAME is optional : retrieve projects whose name contains the string 'filterName' (case insensitive) if filled +- **URL Params** + + DATE is optional : if no date defined, will look at latest analysis otherwise search for the latest analysis made before that date. + FILTER-NAME is optional : retrieve projects whose name contains the string 'filterName' (case insensitive) if filled + **Optional:** - **Optional:** - - `DATE=[string]` with format YYYY-MM-DD - `FILTER-NAME=[string]` + `DATE=[string]` with format YYYY-MM-DD + `FILTER-NAME=[string]` -* **Data Params** +- **Data Params** CATEGORY in "filterScore" can take the following enum : ecoIndex, perfScore, accessScore, w3cScore. "score" is a value from 0 to 100, it will be the threshold for the CATEGORY. @@ -665,97 +1101,98 @@ Retrieve all EcoSonar projects and return the scores for each of them at the dat "order" can take the value "asc" or "desc" if you want to sort your projects according to the type. 
`{ - "filterScore" : { - "cat": "CATEGORY", - "score": 0, - "select": "upper" - }, - "sortBy": { - "type": "CATEGORY", - "order": "asc" - } - }` + "filterScore" : { + "cat": "CATEGORY", + "score": 0, + "select": "upper" + }, + "sortBy": { + "type": "CATEGORY", + "order": "asc" + } +}` -* **Success Response:** +- **Success Response:** - * **Code:** 200
- **Content:** + - **Code:** 200
+ **Content:** `{ - "nbProjects": 0, - "projects": { - "PROJECT": { - "ecoIndex": 0, - "perfScore": 0, - "accessScore": 0, - "w3cScore": 0, - "nbUrl": 0 - },` - -* **Error Response:** + "nbProjects": 0, + "projects": { + "PROJECT": { + "ecoIndex": 0, + "perfScore": 0, + "accessScore": 0, + "w3cScore": 0, + "nbUrl": 0 +},` + +- **Error Response:** If date format is wrong - * **Code:** 400 BAD REQUEST
- **Content:** `{ +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": 'Bad date format: YYYY-MM-DD' }` - OR +OR EcoSonar API is not able to request to the MongoDB Database or other internal error: - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
+ +## **EcoSonar ANALYSIS - RETRIEVE ECOSONAR AUDIT IN EXCEL FORMAT FOR PROJECT** -**EcoSonar ANALYSIS - RETRIEVE ECOSONAR AUDIT IN EXCEL FORMAT FOR PROJECT** ----- Retrieve audits from GreenIt-Analysis, Google Lighthouse and W3C Validator aggregated per project in an Excel format. -* **URL** +- **URL** `/api/export` -* **Method:** +- **Method:** `POST` - -* **URL Params** - None +- **URL Params** + + None -* **Data Params** +- **Data Params** PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. `{ - "projectName" : "PROJECT_NAME" - }` + "projectName" : "PROJECT_NAME" +}` -* **Success Response:** +- **Success Response:** - * **Code:** 200
+ - **Code:** 200
**Content:** Excel file with the exported audit for the project -* **Error Response:** +- **Error Response:** When an error occured during file generation - * **Code:** 400 BAD REQUEST
- **Content:** `{ +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": "Export Audit is not possible because urls were not inserted into project or analysis for project could not be retrieved" }` - OR +OR EcoSonar API is not able to request to the MongoDB Database : - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
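Because this endpoint streams back a binary Excel file, the curl sketch below writes the response to disk; host, project key, and output filename are placeholders:

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL

# Downloads the aggregated audit as an Excel workbook.
# --fail keeps curl from saving an error body (400/500) as a .xlsx file.
curl -s --fail -X POST "${ECOSONAR_API}/api/export" \
  -H "Content-Type: application/json" \
  -d '{ "projectName": "my-project-key" }' \
  -o ecosonar-audit.xlsx
```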
+ +## **EcoSonar ANALYSIS - SAVE PROCEDURE FOR THE PROJECT** -**EcoSonar ANALYSIS - SAVE PROCEDURE FOR THE PROJECT** ----- ![SAVE PROCEDURE FOR THE PROJECT](./images/save-procedure-for-the-project.webp) Procedure in Ecosonar are the configuration chosen by delivery teams to sort the EcoSonar recommandations related to ecodesign. You have 3 different configurations available in EcoSonar: + - `scoreImpact` : best practices will be sorted by descending order of implementation (best practices not implemented returned first) - `quickWins` : best practices will be sorted by ascending order of difficulty (best practices easy to implement returned first) - `highestImpact` : best practices will be sorted by order of impact to improve EcoSonar scores (best practices most efficient returned first) @@ -764,124 +1201,124 @@ You have 3 different configurations available in EcoSonar: `/api/procedure` -* **Method:** +- **Method:** `POST` - -* **URL Params** - None +- **URL Params** -* **Data Params** + None + +- **Data Params** PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. selected_procedure can take 3 values : `scoreImpact`, `quickWins`, `highestImpact` `{ - "projectName" : "PROJECT_NAME", - "selectedProcedure": "selected_procedure" - }` + "projectName" : "PROJECT_NAME", + "selectedProcedure": "selected_procedure" +}` -* **Success Response:** +- **Success Response:** - * **Code:** 200
+ - **Code:** 200
**Content:** `{ - "procedure": "quickWins" - }` - -* **Error Response:** + "procedure": "quickWins" +}` + +- **Error Response:** When no procedure have been registered for your project - * **Code:** 400 BAD REQUEST
- **Content:** `{ +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": "Procedure is not defined in project PROJECT_NAME" }` - OR +OR EcoSonar API is not able to request to the MongoDB Database : - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
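Choosing one of the three sorting procedures is a small POST; a sketch with placeholder host and project key, using `quickWins` as the example value:

```shell
ECOSONAR_API="http://localhost:3002"   # placeholder deployment URL

# Sets the procedure used to sort ecodesign recommendations for the project.
# selectedProcedure must be one of: scoreImpact, quickWins, highestImpact.
curl -s -X POST "${ECOSONAR_API}/api/procedure" \
  -H "Content-Type: application/json" \
  -d '{
        "projectName": "my-project-key",
        "selectedProcedure": "quickWins"
      }'
```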
+ +## **EcoSonar ANALYSIS - RETRIEVE PROCEDURE SAVED FOR THE PROJECT** -**EcoSonar ANALYSIS - RETRIEVE PROCEDURE SAVED FOR THE PROJECT** ----- ![RETRIEVE PROCEDURE SAVED FOR THE PROJECT](./images/retrieve-procedure-saved-for-the-project.webp) Procedure in Ecosonar are the configuration chosen by delivery teams to sort the EcoSonar recommandations related to ecodesign. This request will return you the procedure chosen for this project. -* **URL** +- **URL** `/api/procedure?projectName=` -* **Method:** +- **Method:** `GET` - -* **URL Params** - PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. +- **URL Params** + + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. + + **Required:** - **Required:** - - `PROJECT_NAME=[string]` + `PROJECT_NAME=[string]` -* **Data Params** +- **Data Params** - None + None -* **Success Response:** +- **Success Response:** - * **Code:** 200
+ - **Code:** 200
**Content:** `{ - "procedure": "quickWins" - }` - -* **Error Response:** + "procedure": "quickWins" +}` + +- **Error Response:** When no procedure have been registered for your project - * **Code:** 400 BAD REQUEST
- **Content:** `{ +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": "Procedure is not defined in project PROJECT_NAME" }` - OR +OR EcoSonar API is not able to request to the MongoDB Database : - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
+ +## **EcoSonar ANALYSIS - RETRIEVE BEST PRACTICES PER PROJECT** -**EcoSonar ANALYSIS - RETRIEVE BEST PRACTICES PER PROJECT** ----- ![RETRIEVE BEST PRACTICES PER PROJECT](./images/retrieve-best-practices-per-project.webp) Retrieve audits from GreenIt-Analysis and Google Lighthouse aggregated per project. -* **URL** +- **URL** `/api/bestPractices/project?projectName=` -* **Method:** +- **Method:** `GET` - -* **URL Params** - PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. +- **URL Params** + + PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. + + **Required:** - **Required:** - - `PROJECT_NAME=[string]` + `PROJECT_NAME=[string]` -* **Data Params** +- **Data Params** - None + None -* **Success Response:** +- **Success Response:** - * **Code:** 200
+ - **Code:** 200
**Content:** `{ "ecodesign": { "printStyleSheet": { @@ -925,52 +1362,52 @@ Retrieve audits from GreenIt-Analysis and Google Lighthouse aggregated per proje } } }` - -* **Error Response:** -When no analysis has been done yet on your project +- **Error Response:** - * **Code:** 400 BAD REQUEST
- **Content:** `{ +When no analysis has been done yet on your project + +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": "No analysis found for PROJECT_NAME" }` - OR +OR EcoSonar API is not able to request to the MongoDB Database : - * **Code:** 500 Internal Server Error
+- **Code:** 500 Internal Server Error
+ +## **EcoSonar ANALYSIS - RETRIEVE BEST PRACTICES PER URL** -**EcoSonar ANALYSIS - RETRIEVE BEST PRACTICES PER URL** ----- ![RETRIEVE BEST PRACTICES PER URL](./images/retrieve-best-practices-per-url.webp) Retrieve audits from GreenIt-Analysis and Google Lighthouse per url audited. -* **URL** +- **URL** `/api/bestPractices/url` -* **Method:** +- **Method:** `POST` - -* **URL Params** - None +- **URL Params** + + None -* **Data Params** +- **Data Params** PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. `{ - "projectName" : "PROJECT_NAME", - "urlName": "url_to_retrieve" - }` + "projectName" : "PROJECT_NAME", + "urlName": "url_to_retrieve" +}` -* **Success Response:** +- **Success Response:** - * **Code:** 200
+ - **Code:** 200
**Content:** `{ "ecodesign": { "printStyleSheet": { @@ -1014,363 +1451,82 @@ Retrieve audits from GreenIt-Analysis and Google Lighthouse per url audited. } } }` - -* **Error Response:** -When no analysis has been done yet on your project +- **Error Response:** - * **Code:** 400 BAD REQUEST
- **Content:** `{ +When no analysis has been done yet on your project + +- **Code:** 400 BAD REQUEST
+ **Content:** `{ "error": "No analysis found for url url_to_retrieve into project PROJECT_NAME" }` - OR +OR EcoSonar API is not able to request to the MongoDB Database : - * **Code:** 500 Internal Server Error
- -**EcoSonar Login Configuration - SAVE LOGIN AND PROXY FOR PROJECT** ----- -![SAVE LOGIN AND PROXY FOR PROJECT](./images/save-login-and-proxy-for-project.webp) - -* **URL** - - `/api/login/insert?projectName=` - -* **Method:** - - `POST` - -* **URL Params** - - PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. - - **Required:** - - `PROJECT_NAME=[string]` - -* **Data Params** - `{ - "login": { - "authentication_url": "", - "steps": [] - }, - "proxy": { - "ipAddress": "", - "port": "" - } -}` - -* **Success Response:** - - * **Code:** 201
- -* **Error Response:** - -EcoSonar API is not able to save into the MongoDB Database : +- **Code:** 500 Internal Server Error
- * **Code:** 500 Internal Server Error
+## **EcoSonar Infos - GET VERSION** -**EcoSonar Login Configuration - GET LOGIN FOR PROJECT** ----- -![GET LOGIN FOR PROJECT](./images/get-login-for-project.webp) +![GET VERSION](./images/get-version.webp) -* **URL** +- **URL** - `/api/login/find?projectName=` + `/api/version` -* **Method:** +- **Method:** `GET` - -* **URL Params** - PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. - - **Required:** - - `PROJECT_NAME=[string]` - -* **Data Params** +- **URL Params** None -* **Success Response:** - - * **Code:** 200
- **Content:** `{ - "authentication_url": "", - "steps": [] -}` - -* **Error Response:** - -When you don't have any login registered for your project into EcoSonar - - * **Code:** 400 BAD REQUEST
- **Content:** `{ - "error": "Your project does not have login saved into database." -}` - - OR - -EcoSonar API is not able to request to the MongoDB Database : - - * **Code:** 500 Internal Server Error
- -**EcoSonar Login Configuration - GET PROXY FOR PROJECT** ----- -![GET PROXY FOR PROJECT](./images/get-proxy-for-project.webp) - -* **URL** - - `/api/proxy/find?projectName=` - -* **Method:** - - `GET` - -* **URL Params** - - PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. - - **Required:** - - `PROJECT_NAME=[string]` - -* **Data Params** +- **Data Params** None -* **Success Response:** - - * **Code:** 200
- **Content:** `{ - "ipAddress": "", - "port": "" -}` - -* **Error Response:** - -When you don't have any proxy registered for your project into EcoSonar +- **Success Response:** - * **Code:** 400 BAD REQUEST
+ - **Code:** 200
**Content:** `{ - "error": "Your project does not have proxy configuration saved into database." + "version": "X.X" }` - OR - -EcoSonar API is not able to request to the MongoDB Database : - - * **Code:** 500 Internal Server Error
+- **Error Response:** -**EcoSonar Login Configuration - DELETE LOGIN FOR PROJECT** ----- -![DELETE LOGIN FOR PROJECT](./images/delete-login-for-project.webp) +- **Code:** 400 BAD REQUEST
-* **URL** +## **EcoSonar Infos - GET BEST PRACTICES DOCUMENTATION** - `/api/login?projectName=` +![GET BEST PRACTICES DOCUMENTATION](./images/get-best-practices-documentation.webp) -* **Method:** +- **URL** - `DELETE` - -* **URL Params** + `/api/best-practices-rules` - PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. +- **Method:** - **Required:** - - `PROJECT_NAME=[string]` + `GET` -* **Data Params** +- **URL Params** None -* **Success Response:** - - * **Code:** 200
- -* **Error Response:** - -When you don't have any login registered for your project into EcoSonar and you want still to delete it - - * **Code:** 400 BAD REQUEST
- **Content:** `{ - "error": "Project not found" -}` - - OR - -EcoSonar API is not able to request to the MongoDB Database : - - * **Code:** 500 Internal Server Error
- - -**EcoSonar Login Configuration - DELETE PROXY FOR PROJECT** ----- -![DELETE PROXY FOR PROJECT](./images/delete-proxy-for-project.webp) - -* **URL** - - `/api/proxy?projectName=` - -* **Method:** - - `DELETE` - -* **URL Params** - - PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. - - **Required:** - - `PROJECT_NAME=[string]` - -* **Data Params** +- **Data Params** None -* **Success Response:** - - * **Code:** 200
- -EcoSonar API is not able to request to the MongoDB Database : - - * **Code:** 500 Internal Server Error
- -**EcoSonar USER FLOW Configuration - GET USER FLOW for URL** ----- -![GET USER FLOW for URL](./images/get-user-flow-for-url.webp) - -* **URL** - - `/api/user-flow/find` - -* **Method:** - - `GET` - -* **URL Params** - -None - -* **Data Params** - - PROJECT_NAME should match to the Project Key defined in your Sonarqube Project. - -` -{ - "url": "", - "projectName: "PROJECT_NAME" -} -` - -* **Success Response:** - - * **Code:** 200
- **Content:** `{ - "steps": [ - ] -}` - -* **Error Response:** - -When you don't have any user flow registered for the url into EcoSonar - - * **Code:** 400 BAD REQUEST
- **Content:** `{ - "error": "Your project does not have user flow saved into database." -}` - - OR - -EcoSonar API is not able to request to the MongoDB Database : - - * **Code:** 500 Internal Server Error
- -**EcoSonar USER FLOW Configuration - SAVE USER FLOW for URL** ----- -![SAVE USER FLOW for URL](./images/save-user-flow-for-url.webp) - -* **URL** - - `/api/user-flow/insert` - -* **Method:** - - `POST` - -* **URL Params** - -None - -* **Data Params** - -` -{ - "url": "", - "userFlow": { - "steps": [ - ] - } -} -` - -* **Success Response:** - - * **Code:** 200
- **Content:** `{ - "steps": [ - ] -}` - -* **Error Response:** - -You want to add user flow to unexisting url : +- **Success Response:** - * **Code:** 400 BAD REQUEST
+ - **Code:** 200
**Content:** `{ - "error": "Url not found" + "greenitDocs": {}, + "lighthousePerformanceDocs": {}, + "lighthouseAccessbilityDocs": {} }` - OR - -EcoSonar API is not able to request to the MongoDB Database : - - * **Code:** 500 Internal Server Error
- -**EcoSonar USER FLOW Configuration - DELETE USER FLOW FOR URL** ----- -![DELETE USER FLOW FOR URL](./images/delete-user-flow-for-url.webp) - -* **URL** - - `/api/user-flow` - -* **Method:** - - `DELETE` - -* **URL Params** - -None - -* **Data Params** - -` -{ - "url": "" -} -` - -* **Success Response:** - - * **Code:** 200
- -EcoSonar API is not able to request to the MongoDB Database : +- **Error Response:** - * **Code:** 500 Internal Server Error
\ No newline at end of file +- **Code:** 400 BAD REQUEST
\ No newline at end of file
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 9a63def..262090b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,12 +4,33 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

+## Version 3.4 , 11/01/2024
+
+### Added
+
+- Delete a project in EcoSonar
+- Retrieve the EcoSonar version with a new endpoint
+- Retrieve the best practices documentation with a new endpoint
+
+### Removed
+
+### Changed
+
+- Change the way the crawler is handled. The crawler request is now asynchronous and offers two saving options: either save into a temporary table, or save directly into the URL configuration table so the URLs are audited by EcoSonar.
+- Separate saving the login and saving the proxy into two independent endpoints
+- Made it easier to launch EcoSonar locally by adding the MongoDB database setup to the Docker Compose file
+- Bug fix: inserting a login without a procedure
+- Security fix: upgrade the SonarQube dependency to version 9.4 in the EcoSonar SonarQube plugin
+
+---
+
## Version 3.3 , 07/11/2023

### Added
+
-- Integrate new EcoCode features including :
-  - additional rules for Python and PHP
-  - new languages covered : Javascript, Typescript, Android and iOS
+- Integrate new EcoCode features including :
+  - additional rules for Python and PHP
+  - new languages covered : Javascript, Typescript, Android and iOS
- Implement Swagger User Interface for a more friendly user interface of the API
- Automatically push a new Docker Image as a Github package for each new commit in the 'main' branch of the Github repository
- Add new API Endpoints to retrieve projects scores average at a selected date with filter and sorting configuration
@@ -17,6 +38,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
### Removed

### Changed
+
- Fix some security vulnerabilities

---
@@ -24,6 +46,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## Version 3.2 , 10/08/2023

### Added
+
- Include Ecocode documentation into EcoSonar website
- Update best practices documentation
- Add MongoDB Community Server connection as a potential database for EcoSonar
@@ -31,6 +54,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
### Removed

### Changed
+
- BUG FIX: user journey flow not working when some CSS selectors are hidden in the page

---
@@ -38,52 +62,57 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## Version 3.1 , 27/03/2023

### Added
+
- Integrate EcoCode functionalities : https://www.ecocode.io/.
-EcoCode is a SonarQube plugin developed by a French Open-Source Community `Green Code Initiative` (https://github.com/green-code-initiative) that will add new code smells related to Ecodesign when realizing a SonarQbe audit. Languages covered now are Java, PHP and Python.
+  EcoCode is a SonarQube plugin developed by a French Open-Source Community `Green Code Initiative` (https://github.com/green-code-initiative) that will add new code smells related to Ecodesign when running a SonarQube audit. Languages covered now are Java, PHP and Python.
- EcoSonar audit can now be exported into an Excel File to be able to share with external people the current status of the website
- Ability to retrieve EcoSonar current Scores: EcoIndex, Lighthouse Performance & Accessibility and W3C Validator
-  - Request can be called in a CI/CD Pipeline to prevent production deployment if one of the scores is below a threshold
+  - Request can be called in a CI/CD Pipeline to prevent production deployment if one of the scores is below a threshold
- Ability to sort EcoSonar Recommendations for the EcoDesign Part following 3 different configurations:
-  - `scoreImpact` : best practices will be sorted by descending order of implementation (best practices not implemented returned first)
-  - `quickWins` : best practices will be sorted by ascending order of difficulty (best practices easy to implement returned first)
-  - `highestImpact` : best practices will be sorted by order of impact to improve EcoSonar scores (best practices most efficient returned first)
+  - `scoreImpact` : best practices will be sorted by descending order of implementation (best practices not implemented returned first)
+  - `quickWins` : best practices will be sorted by ascending order of difficulty (best practices easy to implement returned first)
+  - `highestImpact` : best practices will be sorted by order of impact to improve EcoSonar scores (best practices most efficient returned first)

Goal of this feature is to help delivery teams tackle recommendations according to their priorities.

### Removed

### Changed
+
- BUG FIX: When an analysis from one of our tools failed, best practices were saved with a default value (0), which could reduce the effective score of the website for that best practice.
- BUG FIX : App should not crash if an invalid url has been inserted into the user flow configuration in the `navigate` step
- BUG FIX : Getting the user flow should be done with both parameters, url and projectName, since the same url may have been saved several times into several EcoSonar projects.
- Update EcoSonar dependencies
- Improve EcoSonar Ecodesign and Accessibility Rate
-
---
+ ---

## Version 3.0 , 09/12/2022

### Added
+
- W3C Validator Audit available for public pages:
-  - Retrieve all errors to improve ecodesign and accessibility levels of web application
-  - Scoring methodology: if an error has been resolved, it will increase your w3c score
+  - Retrieve all errors to improve ecodesign and accessibility levels of web application
+  - Scoring methodology: if an error has been resolved, it will increase your w3c score
- Environment Configuration For Sonarqube Plugin to ease deployments
- API Configuration of Login Credentials, dedicated Credentials per project and possibility to save them into database (if security allows it)
- API Configuration of Proxy Configuration per project
- API Configuration of User Flow Configuration, saved into the database (instead of yaml files)
-
### Removed

### Changed
+
- Best Practices are divided into 2 separate sections : Ecodesign and accessibility (previously it was by audit tool)
- BUG FIX: set a default browser viewport in case of unresponsive website to have the right user flow
- BUG FIX: inserting the analysis to the wrong url if one url of the batch fails
- BUG FIX: possibility to create a browser for one url audit if user flow is enabled to allow a better cookie management (however performance of the API will decrease - more time to audit all pages)

---
+---
+
## Version 2.3 , 04/10/2022

### Added
@@ -96,9 +125,11 @@ EcoCode is a SonarQube plugin developed by a French Open-Source Community `Green
- Retrieve best practices per URL and per project

### Removed
+
- Removing greenhouse gas emissions and water consumption
because the calculation may not be accurate for some IT Systems

### Changed
+
- Upgrade EcoIndex Calculation : https://github.com/cnumr/GreenIT-Analysis/issues/61
- Upgrading EcoSonar URL Configuration and Best practices pages to resolve some accessibility issues
- Fix bug on "Optimize Bitmap Images"
@@ -106,7 +137,9 @@ EcoCode is a SonarQube plugin developed by a French Open-Source Community `Green
- Fix bug on "Image Downloaded and not displayed"

---
+---
+
## Version 2.2 , 03/08/2022

### Added
@@ -115,11 +148,12 @@ EcoCode is a SonarQube plugin developed by a French Open-Source Community `Green
### Removed
-
### Changed

---
+---
+
## Version 2.1 , 26/07/2022

### Added
@@ -128,11 +162,12 @@ EcoCode is a SonarQube plugin developed by a French Open-Source Community `Green
### Removed
-
### Changed
+
- We have updated the way to register Lighthouse analysis : /!\ the new version is not compatible with the previous one, you should delete all your analysis before adding this new version.

---
+
## Version 2.0 , 13/07/2022

### Added
@@ -146,16 +181,16 @@ EcoCode is a SonarQube plugin developed by a French Open-Source Community `Green
- Fixing some bugs
- Keep a history of every GreenIt and Lighthouse audit made
-
### Removed
-
### Changed

-- /!\ V2 Data Model is no longer compatible with V1 version, sorry about that ...
-You will need to remove analysis saved under bestPractices collection in order to have a working API.
+
+- /!\ V2 Data Model is no longer compatible with V1 version, sorry about that ...
+  You will need to remove analysis saved under bestPractices collection in order to have a working API.
- A New Sonarqube Plugin Version has been set (2.0.0), please make sure to delete previous one (1.0.0) before launching your Sonarqube instance --- + ## Version 1.0 , 12/04/2022 ### Added @@ -168,8 +203,6 @@ You will need to remove analysis saved under bestPractices collection in order t ### Removed - ### Changed - --- diff --git a/EcoSonar-API/README.md b/EcoSonar-API/README.md index e652e1b..abf217a 100644 --- a/EcoSonar-API/README.md +++ b/EcoSonar-API/README.md @@ -1,25 +1,21 @@ # EcoSonar API EcoSonar API is an audit aggregator that will use the following open-source audit tools: + - GreenIT-Analysis CLI (https://github.com/cnumr/GreenIT-Analysis-cli) -- Google Lighthouse with a npm package (https://github.com/GoogleChrome/lighthouse/blob/HEAD/docs/readme.md#using-programmatically) +- Google Lighthouse with a npm package (https://github.com/GoogleChrome/lighthouse/blob/HEAD/docs/readme.md#using-programmatically) - W3C Validator with a npm package (https://www.npmjs.com/package/html-validator). This Audit is using right now an external API to audit websites thus can only audit public pages. By default, W3C Validator is disabled for those reasons. However, if you agree to use this external API, please check this section [Enable W3C validator Analysis](#w3c-validator) -Once the EcoSonar audit is triggered, it will launch the three analysis and store them into a MongoDB Database. -Then, the API can allow you to retrieve pre-formatted audit results using json format. A custom Sonarque Plugin has been created to display the audit directly within the Sonarqube instance. The API can also be used with any other interface that can handle json formats. 
- -API Documentation : https://github.com/Accenture/EcoSonar/blob/main/API.md - -Swagger User Interface available at the link : `[ECOSONAR-API-URL]/swagger/` - -Locally, available at this address : `http://localhost:3002/swagger/` +Once the EcoSonar audit is triggered, it will launch the three analysis and store them into a MongoDB Database. +Then, the API can allow you to retrieve pre-formatted audit results using json format. A custom SonarQube Plugin has been created to display the audit directly within the Sonarqube instance. The API can also be used with any other interface that can handle json formats. # Summary + - [To start with](#to-start-with) - [MongoDB Database](#mongodb-database) - [Installation](#installation) - [Create a MongoDB Database](#mongodb-creation) - - [Create a MongoDB Community Server](#mongodb-server) + - [Create a MongoDB Community Server](#mongodb-server) - [Create a MongoDB Atlas Database](#mongodb-atlas) - [Create MongoDB Collections](#mongodb-collections) - [Node.js](#nodejs) @@ -29,56 +25,44 @@ Locally, available at this address : `http://localhost:3002/swagger/` - [Prerequisites](#prerequisites-docker) - [Installation](#installation-api) - [Our advice for Server Deployment](#docker-deployment) - - [Add Environment setup](#mongo-setup) + - [Add Environment setup](#env-setup) - [Database configuration](#database-env-var) - [CORS Setup](#cors) - [Enable W3C validator Analysis](#w3c-validator) - [Setup User flow](#user-flow) - [API Endpoints](#api-endpoints) -- [Authentication Configuration](#auth) - - [When you have a simple login flow](#simple-login) - - [EcoSonar V2.3 and below](#old-version-login) - - [CSS Selectors](#css-slectors) - - [EcoSonar V3.0 and above](#new-version-login) - - [More complicated Login flows](#complicated-login) - - [EcoSonar V2.3 and below](#old-version-login-complicated) - - [EcoSonar V3.0 and above](#new-version-login-complicated) -- [Proxy Configuration](#proxy) - - [EcoSonar V2.3 and 
below](#old-version-proxy) - - [EcoSonar V3.0 and above](#new-version-proxy) -- [User Flow](#user-flow) - - [User Flow Creation](#creation) - - [First method : using Chrome Recorder](#chrome-recorder) - - [Second method : creating your own User Flow JSON](#custom-user-flow) - - [User Flow Integration](#integration) - - [EcoSonar V2.3 and below](#old-version-user-flow) - - [EcoSonar V3.0 and above](#new-version-user-flow) - - [User Flow Verification](#verification) - [Usage Rights](#usage-rights) + # To start with To use the tool, you must first check the prerequisites and complete the installation steps. For this, two different ways to use it: + - Either through a manual installation of Node.js - Either through Docker In both cases, it will be necessary to set up a new MongoDB database. + ## MongoDB Database + ### Installation If the MongoDB database is already created, you can skip this step and retrieve the relevant information to connect to the database (username, password, cluster, database name). + #### Create a MongoDB Database + You will need to choose the most adequate MongoDB database according to your infrastructure. -By default, we have implemented connection with +By default, we have implemented connection with + - MongoDB Community Server : https://www.mongodb.com/try/download/community - MongoDB Atlas : https://www.mongodb.com/atlas - Azure CosmosDB : https://azure.microsoft.com/en-us/products/cosmos-db/#overview @@ -86,77 +70,94 @@ By default, we have implemented connection with For any other MongoDB Database, you will need to set up a new database connection in the file `EcoSonar-API/configuration/database.js`. + ##### Create a MongoDB Community Server 1. First you need to install MongoDB Server Community and it is recommended also to install MongoDB Compass for visualization purposes. 
You can select the following default setup: - ![MongoDB Server Installation](../images/mongodb-install.webp) +![MongoDB Server Installation](../images/mongodb-install.webp) 2. Once installation on your laptop is over, you can open MongoDB Compass. You can create a new connection with the default settings: - ![MongoDB Database Creation](../images/mongodb-connstring.webp) +![MongoDB Database Creation](../images/mongodb-connstring.webp) 3. Once you are connected, create a database called ‘EcoSonar’. You might also be required to set at least one collection during database initialization. If so, create collection called ‘bestpractices’. The other collections will be created automatically when you will first launch the API connected to the database. - ![MongoDB Database Creation](../images/mongodb-dbcreate.webp) +![MongoDB Database Creation](../images/mongodb-dbcreate.webp) + ##### Create a MongoDB Atlas Database 1. Open MongoDB Cloud : https://www.mongodb.com/fr-fr/cloud 2. Create an account 3. Click on "build a database" --> choose free one - - In "cloud provider & region" choose the closest region in which EcoSonar API is deployed - - In "cluster" put the name of our database (here "EcoSonar") + - In "cloud provider & region" choose the closest region in which EcoSonar API is deployed + - In "cluster" put the name of our database (here "EcoSonar") 4. Click on "Create cluster" 5. Click on "connect" 6. Authorize access 7. Create a username and a password 8. Create a connection with application - - node.js - - version 4.0 or later - - close + - node.js + - version 4.0 or later + - close + ##### Create MongoDB Collections -EcoSonar database will contain the following MongoDB collections: +EcoSonar database will contain the following MongoDB collections: + - bestpractices - greenits - lighthouses - projects - urlsprojects - w3cs +- tempurlsprojects Collections are created automatically when the project is first launched. 
However, if you chose Azure CosmosDB for MongoDB as your database, then you will need to create the following collections with related indexes before starting the project, otherwise it will fail. Please find below the different indexes that need to be added for each collection:
+
1. bestpractices : `_id`, `idAnalysisBestPractices`, `dateAnalysisBestPractices`
2. greenits : `_id`, `idGreenAnalysis`, `dateGreenAnalysis`
3. lighthouses : `_id`, `idLighthouseAnalysis`, `dateLighthouseAnalysis`
4. projects : `_id`
5. urlsprojects : `_id`, `idKey`
6. w3cs : `_id`, `idW3cAnalysis`, `dateW3cAnalysis`
+7. tempurlsprojects: `_id`, `idKey`

-## Node.js
+
+## Option 1 : launch Node.js server
+
### Prerequisites
-
- - Node.js https://nodejs.org/fr/ (at least v16)
+
+- Node.js https://nodejs.org/fr/ (at least v16)
+
### Installation
-1. Retrieve source code :
+
+1. Retrieve source code :
+
```
git clone https://github.com/Accenture/EcoSonar
```
+
2. Go into the Folder EcoSonar-API
3. Install npm packages :
+
```
npm install
```
+
4. Launch the API
+
```
npm start
```
@@ -164,26 +165,34 @@

API can be reached at: http://localhost:3000

-## Docker
+
+## Option 2 : launch a Docker container
+
### Prerequisites
-
- - Docker Desktop for Windows (Note : a licence is now required if you need to use Docker Desktop at an Enterprise Level)
- - Docker Installed if you are using Mac or Linux
+
+- Docker (Note : for Windows, a licence is now required if you need to use Docker Desktop at an Enterprise Level)
+
### Installation

-1. Retrieve source code :
-  ```
-  git clone https://github.com/Accenture/EcoSonar
-  ```
+1. Retrieve source code :
+
```
git clone https://github.com/Accenture/EcoSonar
```
+
2. Go into the Folder EcoSonar-API
-3. Build Docker image :
-  ```
-  docker build -t imageName .
-  ```
+3. Build Docker image :
+
```
docker build -t imageName .
```
+
4.
Launch a Docker container :
+
```
docker container run -d -p 3000:3000 imageName
```
@@ -191,24 +200,31 @@

API can be reached at: http://localhost:3000

+
#### Our advice for Server Deployment
+
Rather than building and running the image manually on the server, we recommend setting up a CI/CD pipeline with the following steps:
+
1. Build the Docker image
2. Push the Docker image into the Docker Registry
3. Stop the server
4. Deploy the server using the newly imported image and correct API configuration
5. Start the server
-
+
+
### Add Environment setup

-If you want to run locally the EcoSonar API, you can add an `.env` file at the root of the project, it will contain the local environment variables of the project.
-Then choose among the variables below the ones required and add it into `.env` file.
+You will need to set up some environment variables to run the API.
+Locally, you can add an `.env` file in the folder `EcoSonar-API`; it will contain the local environment variables of the project.
+Then choose among the variables below the ones required and add them to the `.env` file or to the application settings of your deployed server.
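For example, a minimal local `.env` for a MongoDB Community Server running with default settings might look like the sketch below (the exact variables for each supported database type are listed in the following sections):

```
ECOSONAR_ENV_DB_TYPE = 'MongoDB'
ECOSONAR_ENV_CLUSTER = 'localhost'
ECOSONAR_ENV_DB_PORT = '27017'
ECOSONAR_ENV_DB_NAME = 'EcoSonar'
```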
+ #### Database configuration ##### MongoDB Community Server + ``` ECOSONAR_ENV_DB_TYPE = 'MongoDB' ECOSONAR_ENV_CLUSTER = 'localhost' or '127.0.0.1' @@ -217,59 +233,78 @@ ECOSONAR_ENV_DB_NAME = 'EcoSonar' ``` ##### MongoDB Atlas + ``` ECOSONAR_ENV_DB_TYPE= 'MongoDB_Atlas' ECOSONAR_ENV_CLUSTER = #cluster ECOSONAR_ENV_DB_NAME = 'EcoSonar' ECOSONAR_ENV_USER = #user -ECOSONAR_ENV_CLOUD_PROVIDER= 'local' (the password will be retrieved from the environment variables) +ECOSONAR_ENV_CLOUD_PROVIDER= 'local' (the database password will be retrieved from the environment variables) ECOSONAR_ENV_PASSWORD = #password ``` ###### Azure CosmosDB + ``` ECOSONAR_ENV_DB_TYPE= 'CosmosDB' ECOSONAR_ENV_CLUSTER = #cluster ECOSONAR_ENV_DB_PORT = #port ECOSONAR_ENV_DB_NAME = 'EcoSonar' ECOSONAR_ENV_USER = #user -ECOSONAR_ENV_CLOUD_PROVIDER= 'AZURE' (the password will be retrieved from the Azure Key Vault) or ‘local’ (the password will be retrieved from the environment variables) -ECOSONAR_ENV_PASSWORD = #password (if ECOSONAR_ENV_CLOUD_PROVIDER=’local’) -ECOSONAR_ENV_KEY_VAULT_NAME= #keyVaultName (if ECOSONAR_ENV_CLOUD_PROVIDER=’AZURE’) -ECOSONAR_ENV_SECRET_NAME = #keyVaultSecretName (if ECOSONAR_ENV_CLOUD_PROVIDER=’AZURE’) +ECOSONAR_ENV_CLOUD_PROVIDER= 'AZURE' (the database password will be retrieved from the Azure Key Vault) or ‘local’ (the database password will be retrieved from the environment variables) +ECOSONAR_ENV_PASSWORD = #password (required only if ECOSONAR_ENV_CLOUD_PROVIDER=’local’) +ECOSONAR_ENV_KEY_VAULT_NAME= #keyVaultName (required only if ECOSONAR_ENV_CLOUD_PROVIDER=’AZURE’) +ECOSONAR_ENV_SECRET_NAME = #keyVaultSecretName (required only if ECOSONAR_ENV_CLOUD_PROVIDER=’AZURE’) ``` -##### Other database configuration possible + +##### Other database or password manager configuration possible If you are not using the same MongoDB database than us, you can develop your own. 
Please refer to `EcoSonar-API/configuration/database.js` to set up a different connection string to your database and `EcoSonar-API/configuration/retrieveDatabasePasswordFromCloud.js` for another password manager solution.
We would be very happy if you want to share this new setup in a Pull Request in the Github Repository to enrich the community.
+
#### CORS Setup
+
To improve API Security, CORS options need to be configured to allow any other application to send requests to the API.
To configure it, you can add the following environment variable in your Application Configuration to allow requests coming from your frontend interface:
+
+```
+ECOSONAR_ENV_SONARQUBE_SERVER_URL = url of the Sonarqube Server instantiated or any other frontend interface used to retrieve the audits
```
-ECOSONAR_ENV_SONARQUBE_SERVER_URL = url of the Sonarqube Server instantiated or any other frontend interface
```
+
+For local development purposes only, you can add additional URLs authorized to reach the API using the variable `ECOSONAR_ENV_LOCAL_DEV_SERVER_URL`. You can declare several urls but you have to add `;` between each.
+
+```
+ECOSONAR_ENV_LOCAL_DEV_SERVER_URL = #URL1;#URL2
```
+
#### Enable W3C validator Analysis
+
W3C Validator needs to make a request to an external API to audit your url. It means that only 'public' pages can be audited right now. We have raised an issue to the team in charge of W3C Auditor to be able to audit also pages protected by authentication. To be continued...

In the environment variable, you can set the following parameter to request an audit through the external API or not:
+
```
-ECOSONAR_ENV_ALLOW_EXTERNAL_API = take `true`or `false`
+ECOSONAR_ENV_ALLOW_EXTERNAL_API = `true` or `false`
```

-#### Setup User flow
+
+#### Setup User flow
+
If your projects require setting up a user flow to access some of your web pages, you should enable this parameter to run audits on a dedicated browser to ensure cookies are correctly configured.
However, it will increase the audit time of your project. + ``` -ECOSONAR_ENV_USER_JOURNEY_ENABLED = take `true`or `false` +ECOSONAR_ENV_USER_JOURNEY_ENABLED = `true`or `false` ``` + # API Endpoints 1. With Postman @@ -282,327 +317,12 @@ For Swagger User Interface : `[ECOSONAR-API-URL]/swagger/` Locally, available at this address : `http://localhost:3002/swagger/` -3. Additional documentation +3. Additional API documentation https://github.com/Accenture/EcoSonar/blob/main/API.md - -# Authentication Configuration - -In order to audit pages that can be accessed only through an authentication service (intranet pages for example), -you need to add authentication credentials into EcoSonar API to allow auditing dedicated pages. - - -## When you have a simple login flow : username, password and click on a button - - -### EcoSonar V2.3 and below -To implement that, you can create a YAML file login.yaml at the root of the folder `EcoSonar-API` and use the following format -if the CSS selector of you input field is `input[name=username]` or `input[type=email]`, password field `input[name=password]`, `input[type=password]`, `input[id=password]` and button `button[type=submit]` : - -``` -authentication_url: authenticationPage -username: yourUsername -password: yourPassword -``` -or if one of the CSS Selector does not match the default CSS Selectors : - -``` -authentication_url:authenticationPage -username: yourUsername -password: yourPassword -loginButtonSelector: CSS_Selector_Button -usernameSelector: CSS_Selector_Login -passwordSelector: CSS_Selector_Password -``` - - -#### CSS Selectors - -CSS Selectors are patterns in HTML code to apply some style (doc ). For exemple, to find the css selector of  loginButtonSelector: -Go to the login page of your website -Right click on the login button -Select inspect -Choose css selectors you want (class, type, name, id, ....) 
- -More Information : - -documentation: https://github.com/cnumr/GreenIT-Analysis-cli/blob/072987f7d501790d1a6ccc4af6ec06937b52eb13/README.md#commande -code: https://github.com/cnumr/GreenIT-Analysis-cli/blob/072987f7d501790d1a6ccc4af6ec06937b52eb13/cli-core/analysis.js#L198 - - -### EcoSonar V3.0 and above - -You can directly configure your login credentials at a project level in the API. -Be careful your login credentials will then be saved into the database, please check with your security team if you are allowed to do so. - -You can use the Endpoint "Save Login and Proxy" and enter the following body: - -``` -{ - "login": { - "authentication_url": "authenticationPage", - "username": "yourUsername", - "password": "yourPassword" - } -} -``` -or - -``` -{ - "login": { - "authentication_url": "authenticationPage", - "username": "yourUsername", - "password": "yourPassword", - "loginButtonSelector": "CSS_Selector_Button", - "usernameSelector": "CSS_Selector_Login", - "passwordSelector": "CSS_Selector_Password" - } -} -``` - -## More complicated Login flows - -When the Username and password are not in the same page, or you need other user inputs to complete authentication - - -### EcoSonar V2.3 and below -If the authentication of the website required steps or a page change, you must follow these requirements: - -1. Create a YAML file login.yaml at the root of the repo -2. Add authentication_url key and value is required -3. Add steps key is required -4. Fill steps part as follow - -To choose you authentification_url, you can either set it to the page in which you need to perform the authentification steps or pick the page that can only be accessed after being authenticated. - -(To help you to create steps, you can use on Google Chrome Tool "Recorder". 
(inspector -> recorder -> start a new recording) and save json file, then you can extract steps type, target, selectors) - -Each step is a description of an action made by a regular user: -- "click" -> Click on a button for example: "submit" or "next" -type: "click" (required) -selector: CSS Selector of the field or button (required) -- "change" -> to fill a field like username or password -type: "change" (required) -selector: CSS Selector of the field or button (required) -value: value of the password or username (required) -/!\ CSS Selectors with "aria" label are not read by EcoSonar. - -Example of login.yaml file. to access into an account - -``` -authentication_url: authenticationPage -steps: -  -   type: "click" -      selectors: -          - "#input-email" -  -   type: "change" -      value: "my email" -      selectors: -          - "#input-email" -  -   type: "click" -      selectors: -        - "#lookup-btn" -  -   type: "change" -      value: "my password" -      selectors: -          - "#input-password" -  -   type: "click" -      selectors: -        - "#signin-button" -``` - - -### EcoSonar V3.0 and above - -You can use directly to configure your login credentials at a project level in the API. - -You can use the Endpoint "Save Login and Proxy" and enter the following body: - -``` -{ - "login": { - "authentication_url": "authenticationPage", - "steps" : [ ....] - } -} -``` - - -# Proxy Configuration - -For some websites, you may need to configure a proxy in order to access it. -You need to seperate the analysis that are made with or without a proxy into several EcoSonar projects. - - -## EcoSonar V2.3 and below -To implement that, you can create a YAML file proxy.yaml at the root of the repo. 
-
-Please find below the configuration format:
-
-```
-ipaddress: ipAddress
-port: port
-projectName: (optional)
- - PROJECT_NAME_1
- - PROJECT_NAME_2
-```
-
-ipaddress: IP address of your proxy
-port: port of your proxy
-
-projectName: list of EcoSonar projects (corresponding to the Sonarqube projectKey) that need a proxy to audit the registered pages. If no projectName has been added but the proxy.yaml file exists, the proxy will be applied by default to all your projects.
-
-
-## EcoSonar V3.0 and above
-
-You can configure your proxy directly at project level through the API.
-
-You can use the endpoint "Save Login and Proxy" and enter the following body:
-
-```
-{
-  "proxy": {
-    "ipAddress": "ipAddress",
-    "port": "port"
-  }
-}
-```
-
-
-# User Flow
-
-In order to audit some pages, you may need to go through a user flow to get access to them (for example, fill in a form). Otherwise, without that context, the page cannot be accessed.
-We have added this functionality into EcoSonar.
-
-
-## User Flow Creation
-
-
-### First method: using Chrome Recorder
-
-If your business allows the use of the Chrome browser, we highly recommend this method.
-Chrome has a native panel called "Recorder" that allows you to record, replay and measure user flows (https://developer.chrome.com/docs/devtools/recorder/).
-
- ![Chrome Recorder](../images/chrome-recorder.webp)
-
- To access this panel, right-click in your browser, select Inspect, then choose Recorder in the DevTools panel.
-
-To start recording a user flow, click on the button "Start new recording", choose a name, then click on "Start a new recording".
-
- ![Start Chrome Recorder](../images/chrome-start-recorder.webp)
-
- The Chrome browser will then register every interaction made with the page and save it into the user flow.
-
-For example, say we want to audit this page: http://www.ecometer.org/job?url=https%3A%2F%2Fwww.accenture.com%2Ffr-fr. It is only accessible if you launch an analysis of the website with Ecometer:
-1. Navigate to the page: http://www.ecometer.org/
-2. Change the input to contain your URL.
-3. Click on the button "Analyse" to launch the analysis.
-
- ![Chrome Recorder User flow](../images/chrome-recorder-result.webp)
-
-Chrome Recorder will register the user flow by saving every step/interaction.
-
-To make sure your user flow is correct and can be used by EcoSonar, use the "Replay" button starting from the initial page to verify the user flow automation is set up correctly. You should get the same result as with your previous manual run.
-
-/!\ Be careful that "click" steps are not duplicated in your user flow (same element triggered twice), otherwise it may not behave as expected. You can remove a step in the Recorder by clicking on the 3 dots.
-
-Once you have validated your user flow, export it using the export button and choose JSON.
-
- ![Chrome Recorder User flow export](../images/save-chrome-recorder.webp)
-
-
-### Second method: creating your own User Flow JSON
-
-If you are not allowed to use the Chrome browser, you can manually edit the user flow JSON file created by Chrome Recorder.
-It should have the following format:
-```
-{
-  "steps": [
-    {
-      "type": "navigate",
-      "url": "http://www.ecometer.org/"
-    },
-    {
-      "type": "click",
-      "selectors": [
-        [
-          "body > div.container.begin > div > form > input.url"
-        ]
-      ]
-    },
-    {
-      "type": "change",
-      "value": "https://www.accenture.com/fr-fr",
-      "selectors": [
-        [
-          "body > div.container.begin > div > form > input.url"
-        ]
-      ]
-    },
-    {
-      "type": "click",
-      "selectors": [
-        [
-          "body > div.container.begin > div > form > input.button"
-        ]
-      ]
-    },
-    {
-      "type": "scroll",
-      "distancePercentage": 50
-    }
-  ]
-}
-```
-
-EcoSonar handles 4 kinds of browser interactions:
-1. Navigate to a URL
-It should have "type" = "navigate" and "url" set to the URL you want to go to
-2. Change an input field
-"type" = "change", "value": value to be set in the input field, "selectors": list of CSS selectors to find the right input field
-3. Click on a button
-"type" = "click", "selectors": list of CSS selectors to find the right button
-4. Scroll in the page and stop at a certain percentage of the page.
-It will scroll down 100px at a time until the scroll limit has been reached. For example, if the page is 1080px high and we want to stop at the middle of the page (so distancePercentage = 50%), it will iterate every 100 pixels until the window has scrolled 540px.
-"type" = "scroll" and "distancePercentage" = value between 0 and 100
-
-
-## User Flow Integration
-
-
-### EcoSonar v2.3 and below
-
-Once you have defined the JSON file matching your user flow, follow these instructions:
-
-1. Create a folder "userJourney", if it does not exist yet, at the root of the folder `EcoSonar-API`.
-2. Paste the JSON file created into the folder "userJourney" and rename it with the URL you wish to audit. Please remove the special characters `:` `?` `/` from the URL in order to save the JSON. To retrieve the user flow, we match it against the URL registered through EcoSonar URL Configuration. Do not forget this step, otherwise EcoSonar won't audit the right page.
-3. Deploy EcoSonar-API with all relevant user flows.
-4. Launch a new EcoSonar audit and verify there are no technical errors in the application logs. Correct them if needed.
-
-
-### EcoSonar v3.0 and above
-
-With version 3.0, you can configure the user flow directly through the API provided (no need to reboot the instance anymore).
-You can use the endpoint "Save User Flow" and enter the following body:
-
-```
-{
-  "url": "urlToAudit",
-  "userFlow": {
-    "steps": [ .... ]
-  }
-}
-```
-
-
-## User Flow Verification
-
-To verify that the pages you audit are the correct ones, we suggest using both Chrome extensions Green-IT Analysis (https://chrome.google.com/webstore/detail/greenit-analysis/mofbfhffeklkbebfclfaiifefjflcpad?hl=fr) and Google Lighthouse (https://chrome.google.com/webstore/detail/lighthouse/blipmdconlkpinefehnmjammfjpmpbjk?hl=fr) and comparing their results to the EcoSonar audits. They should be almost identical.
-If that is not the case, do not hesitate to contact us so we can help you.
-
+
 # Usage Rights
 
-This tool uses an API that does not allow its use for commercial purposes.
\ No newline at end of file
+This tool uses an API that does not allow its use for commercial purposes.
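The file-naming rule described in the User Flow Integration section above (strip `:`, `?` and `/` from the audited URL before saving the JSON) can be sketched as a small helper. This is an illustrative sketch only; `userFlowFileName` is a hypothetical name and not part of the EcoSonar-API code:

```javascript
// Hypothetical helper: derive the userJourney JSON file name for a URL.
// EcoSonar matches the file against the URL registered through EcoSonar
// URL Configuration, so the only transformation applied here is dropping
// the characters that are invalid in file names (':', '?', '/').
function userFlowFileName (url) {
  return url.replace(/[:?/]/g, '') + '.json'
}

console.log(userFlowFileName('http://www.ecometer.org/'))
// httpwww.ecometer.org.json
```

Deriving the name programmatically avoids the silent mismatch that occurs when the file name does not exactly match the registered URL.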
diff --git a/EcoSonar-API/dataBase/bestPracticesRepository.js b/EcoSonar-API/dataBase/bestPracticesRepository.js index 6cf1c65..7ae9387 100644 --- a/EcoSonar-API/dataBase/bestPracticesRepository.js +++ b/EcoSonar-API/dataBase/bestPracticesRepository.js @@ -6,16 +6,14 @@ const BestPracticesRepository = function () { /** * Insert best practices * @param {Array} reports array containing the result of greenIt analysis (including metrics and best practices) - * @param {Array} urlIdList array of urls ID - * @param {String} projectName name of the project */ - this.insertBestPractices = async function (arrayToInsert) { - if (arrayToInsert.length > 0) { arrayToInsert = await checkValues(arrayToInsert) } + this.insertBestPractices = async function (reports) { + if (reports.length > 0) { reports = checkValues(reports) } return new Promise((resolve, reject) => { - if (arrayToInsert.length > 0) { + if (reports.length > 0) { bestpractices - .insertMany(arrayToInsert) + .insertMany(reports) .then(() => { resolve() }) @@ -33,8 +31,7 @@ const BestPracticesRepository = function () { /** * deletion of one or more analysis of best practices on the table bestPractices - * @param {name of the project} projectNameReq - * @returns + * @param {string} projectNameReq */ this.delete = async function (projectNameReq) { let empty = false @@ -73,8 +70,8 @@ const BestPracticesRepository = function () { /** * find All analysis of best practices for a project on the table bestPractices - * @param {name of the project} projectNameReq - * @returns + * @param {string} projectNameReq + * @returns {Array} best practices reports for the last analysis run on project */ this.findAll = async function (projectNameReq) { let hasNoUrl = false @@ -118,9 +115,9 @@ const BestPracticesRepository = function () { /** * find analysis of best practices for an URL on the table bestPractices - * @param {name of the project} projectName - * @param {url} urlName - * @returns + * @param {string} projectName + * @param 
{string} urlName + * @returns {Array} best practices reports for the last analysis run on URL */ this.find = async function (projectName, urlName) { let hasNoUrl = false @@ -153,24 +150,42 @@ const BestPracticesRepository = function () { } /** + * Deletion of all best practices analysis for a project + * @param {string} urlIdKeyList list of id key representing url saved + */ + this.deleteProject = async function (urlIdKeyList) { + return new Promise((resolve, reject) => { + bestpractices.deleteMany({ idUrl: { $in: urlIdKeyList } }) + .then((result) => { + console.log(`DELETE URLS PROJECT - On best practices ${result.deletedCount} objects removed`) + resolve() + }) + .catch((error) => { + console.error('\x1b[31m%s\x1b[0m', error) + reject(new SystemError()) + }) + }) + } +} + +/** * * @param {Array} arrayToInsert * @param {Array} urlIdList - * @param {String} projectName + * @param {string} projectName * @returns an array cleaned of analysis containing undefined and NaN to avoid mongoose rejecting every GreenIt Best Practices insertion * This function check if best practices exists for each url of the report (arrayToInsert), if true then also update urlIdList array to match */ - async function checkValues (arrayToInsert) { - const arrayToInsertSanitized = [] - for (const analysis of arrayToInsert) { - if (analysis.bestPractices) { - arrayToInsertSanitized.push(analysis) - } else { - console.log(`BEST PRACTICES INSERT - Best practices for url ${analysis.url} cannot be inserted due to presence of NaN or undefined values`) - } +function checkValues (arrayToInsert) { + const arrayToInsertSanitized = [] + for (const analysis of arrayToInsert) { + if (analysis.bestPractices) { + arrayToInsertSanitized.push(analysis) + } else { + console.log(`BEST PRACTICES INSERT - Best practices for url ${analysis.url} cannot be inserted due to presence of NaN or undefined values`) } - return arrayToInsertSanitized } + return arrayToInsertSanitized } const bestPracticesRepository = new 
BestPracticesRepository() diff --git a/EcoSonar-API/dataBase/greenItRepository.js b/EcoSonar-API/dataBase/greenItRepository.js index 7efd697..9e9676f 100644 --- a/EcoSonar-API/dataBase/greenItRepository.js +++ b/EcoSonar-API/dataBase/greenItRepository.js @@ -5,17 +5,16 @@ const formatGreenItAnalysis = require('../services/format/formatGreenItAnalysis' const GreenItRepository = function () { /** - * insertion of one or more analysis - * @param {arrayToInsert} reports - * @returns + * insertion of one or more greenit analysis + * @param {Array} reports reports to add */ - this.insertAll = async function (arrayToInsert) { - if (arrayToInsert.length > 0) { arrayToInsert = await checkValues(arrayToInsert) } + this.insertAll = async function (reports) { + if (reports.length > 0) { reports = await checkValues(reports) } return new Promise((resolve, reject) => { - if (arrayToInsert.length > 0) { + if (reports.length > 0) { greenits - .insertMany(arrayToInsert) + .insertMany(reports) .then(() => { resolve() }) @@ -33,8 +32,8 @@ const GreenItRepository = function () { } /** - * find all EcoIndex analysis - * @returns + * find all GreenIT analysis saved in EcoSonar + * @returns greenit reports */ this.findAllAnalysis = async function () { return new Promise((resolve, reject) => { @@ -42,17 +41,18 @@ const GreenItRepository = function () { .then((res) => { resolve(res) }) - .catch(() => { + .catch((error) => { + console.error('\x1b[31m%s\x1b[0m', error) reject(new SystemError()) }) }) } /** - * find analysis for one url : OK - * @param {project Name} projectNameReq - * @param {url Name} urlNameReq - * @returns + * find last greenit analysis for one url + * @param {string} projectNameReq project name + * @param {string} urlNameReq URL of the page analyzed + * @returns last greenit analysis of a url */ this.findAnalysisUrl = async function (projectNameReq, urlNameReq) { let res @@ -112,10 +112,9 @@ const GreenItRepository = function () { } /** - * find analysis for one Project - 
* @param {project Name} projectNameReq - * @param {all deployment = true} alldeployment - * @returns + * find last greenit analysis for one Project + * @param {string} projectNameReq name of the project + * @returns last greenit analysis of a project */ this.findAnalysisProject = async function (projectNameReq) { let stringErr = null @@ -160,9 +159,9 @@ const GreenItRepository = function () { } /** - * find EcoIndex from last analysis for one Project - * @param {project Name} projectNameReq - * @returns + * find scores from last analysis for one Project + * @param {string} projectNameReq name of the project + * @returns scores of last analysis */ this.findScoreProject = async function (projectNameReq) { let stringErr = null @@ -206,6 +205,24 @@ const GreenItRepository = function () { } }) } + + /** + * Deletion of all greenIt analysis for a project + * @param {string} urlIdKeyList list of id key representing url saved + */ + this.deleteProject = async function (urlIdKeyList) { + return new Promise((resolve, reject) => { + greenits.deleteMany({ idUrlGreen: { $in: urlIdKeyList } }) + .then((result) => { + console.log(`DELETE URLS PROJECT - On GreenIt ${result.deletedCount} objects removed`) + resolve() + }) + .catch((error) => { + console.error('\x1b[31m%s\x1b[0m', error) + reject(new SystemError()) + }) + }) + } } /** @@ -219,8 +236,13 @@ async function checkValues (arrayToInsert) { if (!Object.values(analysis).includes(undefined) || !Object.values(analysis).includes(NaN)) { arrayToInsertSanitized.push(analysis) } else { - const urlInfos = await urlsprojects.find({ idKey: analysis.idUrlGreen }) - console.log(`GREENIT INSERT - Url ${urlInfos[0].urlName} cannot be inserted due to presence of NaN or undefined values`) + await urlsprojects.find({ idKey: analysis.idUrlGreen }) + .then((result) => { + console.warn(`GREENIT INSERT - Url ${result[0].urlName} cannot be inserted due to presence of NaN or undefined values`) + }) + .catch((error) => { + console.error(error) + 
}) } } return arrayToInsertSanitized diff --git a/EcoSonar-API/dataBase/lighthouseRepository.js b/EcoSonar-API/dataBase/lighthouseRepository.js index 9961776..30f9ae7 100644 --- a/EcoSonar-API/dataBase/lighthouseRepository.js +++ b/EcoSonar-API/dataBase/lighthouseRepository.js @@ -5,9 +5,8 @@ const formatLighthouseAnalysis = require('../services/format/formatLighthouseAna const LighthouseRepository = function () { /** - * insertion of one or more analysis on the table lighthouses - * @param {list of url analysis} lighthouseMetricsReports - * @returns + * insertion of one or more lighthouse analysis on the table lighthouses + * @param {Array} lighthouseMetricsReports lightouse reports */ this.insertAll = function (lighthouseMetricsReports) { return new Promise((resolve, reject) => { @@ -33,8 +32,8 @@ const LighthouseRepository = function () { } /** - * find all Lighthouse analysis - * @returns + * find all Lighthouse analysis saved in EcoSonar + * @returns all ligthouse reports */ this.findAllAnalysis = async function () { return new Promise((resolve, reject) => { @@ -42,17 +41,18 @@ const LighthouseRepository = function () { .then((res) => { resolve(res) }) - .catch(() => { + .catch((error) => { + console.error('\x1b[31m%s\x1b[0m', error.message) reject(new SystemError()) }) }) } /** - * find analysis for one url in a project - * @param {project Name} projectNameReq - * @param {url Name} urlNameReq - * @returns + * find last analysis for one url in a project + * @param {string} projectNameReq name of the project + * @param {string} urlNameReq url id key representing the url saved in database + * @returns last ligthouse analysis for url */ this.findAnalysisUrl = async function (projectNameReq, urlNameReq) { let urlMatching @@ -144,9 +144,9 @@ const LighthouseRepository = function () { } /** - * find analysis for one Project - * @param {project Name} projectNameReq - * @returns + * find last lighthouse analysis for one Project + * @param {string} projectNameReq 
project name + * @returns last lighthouse analysis for the project */ this.findAnalysisProject = async function (projectNameReq) { let stringErr = null @@ -222,8 +222,8 @@ const LighthouseRepository = function () { /** * find Lighthouse Scores for one Project - * @param {project Name} projectNameReq - * @returns + * @param {string} projectNameReq project name + * @returns ligthouse score for last analysis in the project */ this.findScoreProject = async function (projectNameReq) { let stringErr = null @@ -284,6 +284,24 @@ const LighthouseRepository = function () { } }) } + + /** + * Deletion of all lighthouses analysis for a project + * @param {Array} urlIdKeyList list of id key representing urls saved + */ + this.deleteProject = async function (urlIdKeyList) { + return new Promise((resolve, reject) => { + lighthouses.deleteMany({ idUrlLighthouse: { $in: urlIdKeyList } }) + .then((result) => { + console.log(`DELETE URLS PROJECT - On Lighthouse ${result.deletedCount} objects removed`) + resolve() + }) + .catch((error) => { + console.error('\x1b[31m%s\x1b[0m', error) + reject(new SystemError()) + }) + }) + } } const lighthouseRepository = new LighthouseRepository() diff --git a/EcoSonar-API/dataBase/models/tempurlsproject.js b/EcoSonar-API/dataBase/models/tempurlsproject.js new file mode 100644 index 0000000..079a84a --- /dev/null +++ b/EcoSonar-API/dataBase/models/tempurlsproject.js @@ -0,0 +1,19 @@ +const mongoose = require('mongoose') +const Schema = mongoose.Schema + +const tempUrlsProjectSchema = new Schema({ + idKey: { + type: String, + required: true, + unique: true + }, + projectName: { + type: String, + required: true, + unique: true + }, + urlsList: [String] +}) + +const tempurlsProject = mongoose.model('tempurlsprojects', tempUrlsProjectSchema) +module.exports = tempurlsProject diff --git a/EcoSonar-API/dataBase/projectsRepository.js b/EcoSonar-API/dataBase/projectsRepository.js index 951ae5f..8f1f0c5 100644 --- a/EcoSonar-API/dataBase/projectsRepository.js 
+++ b/EcoSonar-API/dataBase/projectsRepository.js @@ -4,8 +4,9 @@ const urlsProject = require('./models/urlsprojects') const ProjectsRepository = function () { /** - * get all projects in database - * @returns an array with the projectName for all projects founded + * get all projects in database that match a regexp + * @param {string} filterName regexp for the project name + * @returns an array with the projectName for all projects found */ this.findAllProjectsNames = async function (filterName) { let query = {} @@ -17,7 +18,8 @@ const ProjectsRepository = function () { .then((res) => { resolve(res) }) - .catch(() => { + .catch((error) => { + console.error('\x1b[31m%s\x1b[0m', error.message) reject(new SystemError()) }) }) @@ -25,8 +27,8 @@ const ProjectsRepository = function () { /** * add a new procedure for a project - * @param {projectName} : the name of the project - * @param {procedure} : the procedure to add + * @param {string} projectName the name of the project + * @param {string} procedure the procedure to add */ this.createProcedure = function (projectName, procedure) { return new Promise((resolve, reject) => { @@ -45,10 +47,8 @@ const ProjectsRepository = function () { /** * update the procedure of a project - * @param {projectName} : the name of the project - * @param {selectedProcedure} : the new procedure to update - * @param {loginCredentials} : the login credentials to be set when analysing the project - * @returns + * @param {string} projectName the name of the project + * @param {string} selectedProcedure the new procedure to update */ this.updateProjectProcedure = async function (projectName, selectedProcedure) { return new Promise((resolve, reject) => { @@ -69,16 +69,13 @@ const ProjectsRepository = function () { /** * Create login Configuration to be saved in the project - * @param {projectName} projectName is the name of the project - * @param {procedure} procedure is the procedure to be saved in a specified enumeration - * @param 
{loginCredentials} loginCredentials is the login credentials to be set when analysing the project - * @param {proxy} proxy is the proxy configuration to be set when analysing the project - * @returns + * @param {string} projectName is the name of the project + * @param {JSON} loginCredentials is the login credentials to be set when analysing the project */ - this.createLoginConfiguration = async function (projectName, loginCredentials, proxy) { + this.createLoginConfiguration = async function (projectName, loginCredentials) { const loginMap = (loginCredentials !== undefined && loginCredentials !== null) ? new Map(Object.entries(loginCredentials)) : {} return new Promise((resolve, reject) => { - projects.create({ projectName, login: loginMap, proxy }) + projects.create({ projectName, login: loginMap }) .then(() => { resolve() }) .catch((error) => { console.error('PROJECTS REPOSITORY - login creation failed') @@ -89,18 +86,34 @@ const ProjectsRepository = function () { }) } + /** + * Create proxy configuration to be saved in the project + * @param {string} projectName is the name of the project + * @param {string} proxy is the proxy configuration to be set when analysing the project + */ + this.createProxyConfiguration = async function (projectName, proxy) { + return new Promise((resolve, reject) => { + projects.create({ projectName, proxy }) + .then(() => { resolve() }) + .catch((error) => { + console.error('PROJECTS REPOSITORY - proxy creation failed') + console.error('\x1b[31m%s\x1b[0m', error) + const systemError = new SystemError() + reject(systemError) + }) + }) + } + /** * Update login Configuration to be saved in the project - * @param {projectName} projectName is the name of the project - * @param {procedure} procedure is the procedure to be saved in a specified enumeration - * @param {loginCredentials} loginCredentials is the login credentials to be set when analysing the project - * @param {proxy} proxy is the proxy configuration to be set when analysing 
the project - * @returns + * @param {string} projectName is the name of the project + * @param {string} procedure is the procedure to be saved in a specified enumeration + * @param {JSON} loginCredentials is the login credentials to be set when analysing the project */ - this.updateLoginConfiguration = async function (projectName, procedure, loginCredentials, proxy) { + this.updateLoginConfiguration = async function (projectName, procedure, loginCredentials) { const loginMap = new Map(Object.entries(loginCredentials)) return new Promise((resolve, reject) => { - projects.updateOne({ projectName }, { login: loginMap, proxy, procedure }) + projects.updateOne({ projectName }, { login: loginMap, procedure }) .then(() => { resolve() }) .catch((error) => { console.error('PROJECTS REPOSITORY - login update failed') @@ -111,10 +124,29 @@ const ProjectsRepository = function () { }) } + /** + * Update proxy configuration to be saved in the project + * @param {string} projectName is the name of the project + * @param {string} procedure is the procedure to be saved in a specified enumeration + * @param {JSON} proxy is the proxy configuration to be set when analysing the project + */ + this.updateProxyConfiguration = async function (projectName, procedure, proxy) { + return new Promise((resolve, reject) => { + projects.updateOne({ projectName }, { proxy, procedure }) + .then(() => { resolve() }) + .catch((error) => { + console.error('PROJECTS REPOSITORY - proxy update failed') + console.error('\x1b[31m%s\x1b[0m', error) + const systemError = new SystemError() + reject(systemError) + }) + }) + } + /** * find project settings in the table projects - * @param {projectName} projectName - * @returns + * @param {string} projectNameReq name of the project + * @returns project settings */ this.getProjectSettings = async function (projectName) { let systemError = null @@ -139,8 +171,9 @@ const ProjectsRepository = function () { /** * Deletion of login credentials for project - * @param 
{name of the project} projectNameReq - * @returns + * @param {string} projectNameReq name of the project + * @param {string} procedureRegistered procedure registered for the project + * @param {JSON} proxyRegistered proxy registered for the project */ this.deleteLoginCredentials = async function (projectNameReq, procedureRegistered, proxyRegistered) { let systemError = null @@ -166,8 +199,9 @@ const ProjectsRepository = function () { /** * Deletion of proxy configuration for project - * @param {name of the project} projectNameReq - * @returns + * @param {string} projectNameReq name of the project + * @param {string} procedureRegistered procedure registered for the project + * @param {JSON} loginRegistered login registered for the project */ this.deleteProxyConfiguration = async function (projectNameReq, procedureRegistered, loginRegistered) { let systemError = null @@ -190,6 +224,23 @@ const ProjectsRepository = function () { } }) } + + /** + * Deletion of one project based on his name + * @param {string} projectNameReq name of the project + */ + this.deleteProjectPerProjectName = async function (projectNameReq) { + return new Promise((resolve, reject) => { + projects.deleteOne({ projectName: projectNameReq }) + .then(() => { + console.log(`DELETE URLS PROJECT - project ${projectNameReq} deleted`) + resolve() + }).catch((error) => { + console.error('\x1b[31m%s\x1b[0m', error) + reject(new SystemError()) + }) + }) + } } const projectsRepository = new ProjectsRepository() diff --git a/EcoSonar-API/dataBase/tempurlsProjectRepository.js b/EcoSonar-API/dataBase/tempurlsProjectRepository.js new file mode 100644 index 0000000..7dfcda3 --- /dev/null +++ b/EcoSonar-API/dataBase/tempurlsProjectRepository.js @@ -0,0 +1,79 @@ +const uniqid = require('uniqid') +const tempurlsproject = require('./models/tempurlsproject') +const SystemError = require('../utils/SystemError') + +const TempUrlsProjectRepository = function () { + /** + * insertion of urls crawled for the project in 
the collection temporaryurlsProject + * @param {string} projectName project name + * @param {Array} urls urls crawled to be saved + */ + this.create = async function (projectName, urls) { + return new Promise((resolve, reject) => { + tempurlsproject.create({ + idKey: uniqid(), + projectName, + urlsList: urls + }) + .then(() => { resolve() }) + .catch((err) => { + console.error('\x1b[31m%s\x1b[0m', err) + reject(new SystemError()) + }) + }) + } + + /** + * update of urls crawled for the project in the collection temporaryurlsProject + * @param {string} projectName project name + * @param {Array} urls urls crawled to be saved + */ + this.updateUrls = async function (projectName, urls) { + return new Promise((resolve, reject) => { + tempurlsproject.updateOne({ projectName }, { urlsList: urls }) + .then(() => { resolve() }) + .catch((err) => { + console.error('\x1b[31m%s\x1b[0m', err) + reject(new SystemError()) + }) + }) + } + + /** + * get urls crawled for the project in the collection temporaryurlsProject + * @param {string} projectName project name + * @param {Array} urls urls crawled to be saved + * @returns list of urls crawled + */ + this.findUrls = async function (name) { + return new Promise((resolve, reject) => { + tempurlsproject.findOne({ projectName: name }) + .then((result) => { resolve(result) }) + .catch((err) => { + console.error('\x1b[31m%s\x1b[0m', err) + reject(new SystemError()) + }) + }) + } + + /** + * deletion of all temporary urls for the project + * @param {string} projectName name of the project + */ + this.deleteProject = async function (projectName) { + return new Promise((resolve, reject) => { + tempurlsproject.deleteOne({ projectName }) + .then((result) => { + console.log(`DELETE URLS PROJECT - On tempurlsproject project ${projectName} removed`) + resolve() + }) + .catch((error) => { + console.error('\x1b[31m%s\x1b[0m', error) + reject(new SystemError()) + }) + }) + } +} + +const tempurlsProjectRepository = new TempUrlsProjectRepository() 
+module.exports = tempurlsProjectRepository diff --git a/EcoSonar-API/dataBase/urlsProjectRepository.js b/EcoSonar-API/dataBase/urlsProjectRepository.js index 212c22c..81fca3a 100644 --- a/EcoSonar-API/dataBase/urlsProjectRepository.js +++ b/EcoSonar-API/dataBase/urlsProjectRepository.js @@ -8,9 +8,9 @@ const SystemError = require('../utils/SystemError') const UrlsProjectRepository = function () { /** - * insertion of one or more url on the table urlsProject : OK - * @param {*} values - * @returns + * insertion of one or more url on the table urlsProject + * @param {string} projectName project Name + * @param {Array} urlList urls to be saved */ this.insertAll = async function (projectName, urlList) { const urlsProjects = [] @@ -48,7 +48,7 @@ const UrlsProjectRepository = function () { } /** - * deletion on the table urlsProject : OK + * deletion on the table urlsProject * @param {id url} key */ this.delete = async function (projectNameReq, urlNameReq) { @@ -84,14 +84,33 @@ const UrlsProjectRepository = function () { } /** - * display all urls of a project : OK - * @param {name of the project} projectName + * deletion of all urls of a project + * @param {Array} urlIdKeyList list of id key representing url saved */ - this.findAll = async function (projectNameReq, insert) { + this.deleteProject = async function (urlIdKeyList) { + return new Promise((resolve, reject) => { + urlsprojects.deleteMany({ idKey: { $in: urlIdKeyList } }) + .then((result) => { + console.log(`DELETE URLS PROJECT - On urlsprojects ${result.deletedCount} objects removed`) + resolve() + }) + .catch((error) => { + console.error('\x1b[31m%s\x1b[0m', error) + reject(new SystemError()) + }) + }) + } + + /** + * list all urls of a project + * @param {string} projectName name of the project + * @param {boolean} getUrlNameOnly retrieve only parameter url from the collection + */ + this.findAll = async function (projectNameReq, getUrlNameOnly) { return new Promise((resolve, reject) => { let res try { - if 
(insert) { + if (getUrlNameOnly) { res = urlsprojects.find({ projectName: projectNameReq }) } else { res = urlsprojects.find({ projectName: projectNameReq }, { urlName: 1 }) @@ -107,9 +126,8 @@ const UrlsProjectRepository = function () { /** * Insert or Update user flow for a specific url - * @param {urlObject} urlsProject previously registered - * @param {userFlow} user flow to be saved - * @returns + * @param {JSON} urlObject urlProject previously registered + * @param {JSON} userFlow user flow to be saved */ this.insertUserFlow = async function (urlObject, userFlow) { const userflowMap = new Map(Object.entries(userFlow)) @@ -126,8 +144,9 @@ const UrlsProjectRepository = function () { /** * find user flow for url to be audited - * @param {urlName} url to find user flow - * @returns + * @param {string} projectName project name + * @param {string} url url name + * @returns user flow for the project and url defined */ this.getUserFlow = async function (projectName, urlName) { let systemError = null @@ -150,8 +169,7 @@ const UrlsProjectRepository = function () { /** * Deletion of user flow for a specified url - * @param {urlName} url to delete user flow - * @returns + * @param {string} urlName url to delete user flow */ this.deleteUserFlow = async function (projectName, urlName) { let systemError = null diff --git a/EcoSonar-API/dataBase/w3cRepository.js b/EcoSonar-API/dataBase/w3cRepository.js index cb31c21..9264acc 100644 --- a/EcoSonar-API/dataBase/w3cRepository.js +++ b/EcoSonar-API/dataBase/w3cRepository.js @@ -7,8 +7,7 @@ const formatW3cAnalysis = require('../services/format/formatW3cAnalysis') const W3cRepository = function () { /** * Insert the w3c analysis for a project - * @param {reportsW3c} reportW3c is a list a the report for the w3c analysis - * @returns + * @param {reportsW3c} reportW3c is a list of the report for the w3c analysis */ this.insertAll = function (reportsW3c) { return new Promise((resolve, reject) => { @@ -34,8 +33,8 @@ const W3cRepository 
= function () { } /** - * find all w3c analysis - * @returns + * find all w3c analysis saved in EcoSonar + * @returns list of w3c analysis */ this.findAllAnalysis = async function () { return new Promise((resolve, reject) => { @@ -43,17 +42,18 @@ const W3cRepository = function () { .then((res) => { resolve(res) }) - .catch(() => { + .catch((error) => { + console.error('\x1b[31m%s\x1b[0m', error.message) reject(new SystemError()) }) }) } /** - * find analysis for one url in a project - * @param {project Name} projectNameReq - * @param {url Name} urlNameReq - * @returns + * find last w3c analysis for one url in a project + * @param {string} projectNameReq project Name + * @param {string} urlNameReq url Name + * @returns last w3c analysis for the URL */ this.findAnalysisUrl = async function (projectNameReq, urlNameReq) { let urlMatching @@ -129,9 +129,9 @@ const W3cRepository = function () { } /** - * find w3c analysis for one Project - * @param {project Name} projectName - * @returns + * find last w3c analysis for one Project + * @param {string} projectName project Name + * @returns last w3c analysis for the project */ this.findAnalysisProject = async function (projectName) { let error = null @@ -203,10 +203,10 @@ const W3cRepository = function () { } /** - * find analysis of w3c best practices for an URL on the table w3cs - * @param {name of the project} projectName - * @param {url} urlName - * @returns + * find last analysis of w3c best practices for an URL on the table w3cs + * @param {string} projectName name of the project + * @param {string} urlName url + * @returns last w3c analysis */ this.find = async function (projectName, urlName) { let hasNoUrl = false @@ -245,8 +245,7 @@ const W3cRepository = function () { /** * Deletion of one or more w3c analysis on the w3cs collection - * @param {name of the project} projectNameReq - * @returns + * @param {string} projectNameReq name of the project */ this.delete = async function (projectNameReq) { let empty = false 
@@ -282,6 +281,24 @@ const W3cRepository = function () { } }) } + + /** + * Deletion of all w3c analysis for a project + * @param {string} urlIdKeyList list of id key representing url saved + */ + this.deleteProject = async function (urlIdKeyList) { + return new Promise((resolve, reject) => { + w3cs.deleteMany({ idUrlW3c: { $in: urlIdKeyList } }) + .then((result) => { + console.log(`DELETE URLS PROJECT - On W3C ${result.deletedCount} objects removed`) + resolve() + }) + .catch((error) => { + console.error('\x1b[31m%s\x1b[0m', error.message) + reject(new SystemError()) + }) + }) + } } const w3cRepository = new W3cRepository() diff --git a/EcoSonar-API/package.json b/EcoSonar-API/package.json index dbaa9dc..e75d895 100644 --- a/EcoSonar-API/package.json +++ b/EcoSonar-API/package.json @@ -1,6 +1,6 @@ { "name": "ecosonar-api", - "version": "3.2", + "version": "3.4", "description": "Ecodesign and accessibility tool to help developpers minimize carbon footprint of their web-application", "repository": { "type": "git", diff --git a/EcoSonar-API/routes/app.js b/EcoSonar-API/routes/app.js index 625fe8c..671524b 100644 --- a/EcoSonar-API/routes/app.js +++ b/EcoSonar-API/routes/app.js @@ -15,6 +15,8 @@ const asyncMiddleware = require('../utils/AsyncMiddleware') const swaggerUi = require('swagger-ui-express') const swaggerSpec = require('../swagger') const projectService = require('../services/projectService') +const packageJson = require('../package.json') +const bestPracticesServices = require('../services/bestPracticesService') dotenv.config() @@ -32,8 +34,10 @@ const sonarqubeServerUrl = process.env.ECOSONAR_ENV_SONARQUBE_SERVER_URL || '' const whitelist = [sonarqubeServerUrl] if (process.env.ECOSONAR_ENV_CLOUD_PROVIDER === 'local') { - const localServer = process.env.ECOSONAR_ENV_LOCAL_DEV_SERVER_URL || '' - whitelist.push(localServer) + const localServers = process.env.ECOSONAR_ENV_LOCAL_DEV_SERVER_URL?.split(';') || [] + for (const localServer of localServers) { + 
whitelist.push(localServer) + } } const corsOptions = { @@ -130,7 +134,7 @@ app.get('/api/all', asyncMiddleware(async (req, res, _next) => { app.post('/api/insert', asyncMiddleware(async (req, res, _next) => { const projectName = req.body.projectName const urlsList = req.body.urlName - console.log('INSERT URLS PROJECT - insert urls into project ' + projectName) + console.log(`INSERT URLS PROJECT - insert urls into project ${projectName}`) urlConfigurationService.insert(projectName, urlsList) .then(() => { console.log('INSERT URLS PROJECT - insert succeeded') @@ -198,17 +202,17 @@ app.delete('/api/delete', asyncMiddleware(async (req, res, _next) => { * post: * tags: * - "Login Configuration" - * summary: "Save Login and Proxy For Project" - * description: Insert login credentials and proxy configuration for a project. + * summary: "Save Login For Project" + * description: Insert login credentials for a project. * parameters: * - name: projectName * in: query * description: The name of the project * required: true * type: string - * - name: login and proxy + * - name: login * in: body - * description: The login credentials and proxy settings + * description: The login credentials settings * required: true * schema: * type: object @@ -222,6 +226,52 @@ app.delete('/api/delete', asyncMiddleware(async (req, res, _next) => { * type: array * items: * type: object + * responses: + * 201: + * description: Success. + * 500: + * description: System error. 
+ */ +app.post('/api/login/insert', asyncMiddleware(async (req, res, _next) => { + const projectName = req.query.projectName + const loginCredentials = req.body.login + console.log('INSERT LOGIN CREDENTIALS - insert credentials into project ' + projectName) + loginProxyConfigurationService.insertLoginCredentials(projectName, loginCredentials) + .then(() => { + console.log('INSERT LOGIN CREDENTIALS - insert succeeded') + return res.status(201).send() + }) + .catch((error) => { + if (error instanceof SystemError) { + return res.status(500).send() + } + console.log('INSERT LOGIN CREDENTIALS - insertion failed') + return res.status(400).json({ error }) + }) +})) + +// API CRUD PROXY Configuration +/** + * @swagger + * /api/proxy/insert: + * post: + * tags: + * - "Proxy Configuration" + * summary: "Save Proxy For Project" + * description: Insert proxy configuration for a project. + * parameters: + * - name: projectName + * in: query + * description: The name of the project + * required: true + * type: string + * - name: proxy + * in: body + * description: The proxy settings + * required: true + * schema: + * type: object + * properties: * proxy: * type: object * properties: @@ -232,26 +282,23 @@ app.delete('/api/delete', asyncMiddleware(async (req, res, _next) => { * responses: * 201: * description: Success. - * 400: - * description: Insertion failed. * 500: * description: System error. 
*/ -app.post('/api/login/insert', asyncMiddleware(async (req, res, _next) => { +app.post('/api/proxy/insert', asyncMiddleware(async (req, res, _next) => { const projectName = req.query.projectName - const loginCredentials = req.body.login const proxyConfiguration = req.body.proxy - console.log('INSERT LOGIN - PROXY CREDENTIALS - insert credentials into project ' + projectName) - loginProxyConfigurationService.insert(projectName, loginCredentials, proxyConfiguration) + console.log('INSERT PROXY - insert proxy credentials into project ' + projectName) + loginProxyConfigurationService.insertProxyConfiguration(projectName, proxyConfiguration) .then(() => { - console.log('INSERT LOGIN CREDENTIALS - insert succeeded') + console.log('INSERT PROXY CREDENTIALS - insert succeeded') return res.status(201).send() }) .catch((error) => { if (error instanceof SystemError) { return res.status(500).send() } - console.log('INSERT LOGIN CREDENTIALS - insertion failed') + console.log('INSERT PROXY CREDENTIALS - insertion failed') return res.status(400).json({ error }) }) })) @@ -300,7 +347,7 @@ app.get('/api/login/find', asyncMiddleware(async (req, res, _next) => { * /api/proxy/find: * get: * tags: - * - "Login Configuration" + * - "Proxy Configuration" * summary: "Get Proxy For Project" * description: Find proxy configuration for a project. * parameters: @@ -340,7 +387,7 @@ app.get('/api/proxy/find', asyncMiddleware(async (req, res, _next) => { * delete: * tags: * - "Login Configuration" - * summary: "Delete Login for Project" + * summary: "Delete Login For Project" * description: Delete login credentials for a project. * parameters: * - name: projectName @@ -378,8 +425,8 @@ app.delete('/api/login', asyncMiddleware(async (req, res, _next) => { * /api/proxy: * delete: * tags: - * - "Login Configuration" - * summary: "Delete Proxy for Project" + * - "Proxy Configuration" + * summary: "Delete Proxy For Project" * description: Delete proxy configuration for a project. 
* parameters: * - name: projectName @@ -419,7 +466,7 @@ app.delete('/api/proxy', asyncMiddleware(async (req, res, _next) => { * post: * tags: * - "User Flow Configuration" - * summary: "Save User Flow for URL" + * summary: "Save User Flow For URL" * description: Insert new user flow for a url in a project. * parameters: * - name: userFlow @@ -473,7 +520,7 @@ app.post('/api/user-flow/insert', asyncMiddleware(async (req, res, _next) => { * post: * tags: * - "User Flow Configuration" - * summary: "Get User Flow for URL" + * summary: "Get User Flow For URL" * description: Find user flow for a URL. * parameters: * - name: userFlow @@ -518,7 +565,7 @@ app.post('/api/user-flow/find', asyncMiddleware(async (req, res, _next) => { * delete: * tags: * - "User Flow Configuration" - * summary: "Delete User Flow for URL" + * summary: "Delete User Flow For URL" * description: Delete user flow for a URL * parameters: * - name: userFlow @@ -938,7 +985,7 @@ app.post('/api/bestPractices/url', asyncMiddleware(async (req, res, _next) => { * post: * tags: * - "URL Configuration" - * summary: "Get Crawler Result" + * summary: Launch crawling website * description: Crawl the given website to find all pages related. * parameters: * - name: crawledUrl @@ -952,24 +999,59 @@ app.post('/api/bestPractices/url', asyncMiddleware(async (req, res, _next) => { * type: string * mainUrl: * type: string + * saveUrls: + * type: boolean * responses: - * 200: - * description: Success. - * 500: - * description: System error. + * 202: + * description: Crawler started. 
*/ app.post('/api/crawl', asyncMiddleware(async (req, res, _next) => { const projectName = req.body.projectName const mainUrl = req.body.mainUrl + const saveUrls = req.body.saveUrls console.log(`CRAWLER - Running crawler from ${mainUrl}`) - crawlerService.crawl(projectName, mainUrl) + crawlerService.launchCrawl(projectName, mainUrl, saveUrls) + console.log('CRAWLER - Crawler started') + return res.status(202).send() +})) + +/** + * @swagger + * /api/crawl: + * get: + * tags: + * - "URL Configuration" + * summary: "Get URLs crawled" + * description: Get all URLs already crawled for the project + * parameters: + * - name: projectName + * in: query + * description: project name + * required: true + * type: string + * responses: + * 200: + * description: Success. + * 400: + * description: No urls crawled saved. + * 500: + * description: System error. + */ +app.get('/api/crawl', asyncMiddleware(async (req, res, _next) => { + const projectName = req.query.projectName + console.log(`CRAWLER - Retrieve all urls crawled for ${projectName}`) + crawlerService.retrieveCrawledUrl(projectName) .then((results) => { - console.log(`CRAWLER - ${results.length} URL retrieved`) + console.log(`CRAWLER - ${results.length} URLs retrieved for project ${projectName}`) return res.status(200).json(results) }) - .catch(() => { - console.log('CRAWLER - Crawler has encountered an error') - return res.status(500).send() + .catch((error) => { + if (error instanceof SystemError) { + console.log(`CRAWLER - Urls for ${projectName} retrieving has encountered an error`) + return res.status(500).send() + } + console.log(`CRAWLER - No Urls saved for ${projectName}`) + return res.status(400).json(error.message) }) })) @@ -1076,4 +1158,91 @@ app.post('/api/export', asyncMiddleware(async (req, res, _next) => { return res.status(400).json({ error: error.message }) }) })) + +/** + * @swagger + * /api/version: + * get: + * tags: + * - "EcoSonar Infos" + * summary: "Get version of Ecosonar" + * description: 
Retrieve the version of Ecosonar used. + * responses: + * 200: + * description: Success. + * 400: + * description: EcoSonar version could not be retrieved. + */ +app.get('/api/version', asyncMiddleware(async (_req, res, _next) => { + try { + console.log('GET VERSION - Version of Ecosonar retrieved') + return res.status(200).json({ version: packageJson.version }) + } catch (error) { + console.log('GET VERSION - Version of Ecosonar could not be retrieved') + return res.status(400).json({ error: error.message }) + } +})) + +/** + * @swagger + * /api/best-practices-rules: + * get: + * tags: + * - "EcoSonar Infos" + * summary: "Get all practices documentation" + * description: Retrieve documentation for all best practices in EcoSonar. + * responses: + * 200: + * description: Success. + * 400: + * description: Documentation could not be retrieved. + */ +app.get('/api/best-practices-rules', asyncMiddleware(async (req, res, _next) => { + console.log('GET BEST PRACTICES - Best practices rules to be retrieved') + + bestPracticesServices.getAllBestPracticesRules() + .then((bestPracticesRules) => { + console.log('GET BEST PRACTICES - Best practices rules has been retrieved') + return res.status(200).send(bestPracticesRules) + }) + .catch((error) => { + console.log('GET BEST PRACTICES - Best practices rules could not be retrieved') + return res.status(400).json({ error: error.message }) + }) +})) + +/** + * @swagger + * /api/project: + * delete: + * tags: + * - "URL Configuration" + * summary: "Delete Project " + * description: Delete project and all related urls & analysis + * parameters: + * - name: projectName + * in: query + * description: The name of the project + * required: true + * type: string + * responses: + * 200: + * description: Success. + * 500: + * description: System error. 
+ */ +app.delete('/api/project', asyncMiddleware(async (req, res, _next) => { + const projectName = req.query.projectName + console.log(`DELETE PROJECT - Delete project ${projectName}`) + projectService.deleteProject(projectName) + .then(() => { + console.log(`DELETE PROJECT - Project ${projectName} deletion succeeded`) + return res.status(200).send() + }) + .catch(() => { + console.log(`DELETE PROJECT - Project ${projectName} deletion failed`) + return res.status(500).send() + }) +})) + module.exports = app diff --git a/EcoSonar-API/services/bestPracticesService.js b/EcoSonar-API/services/bestPracticesService.js new file mode 100644 index 0000000..637cf1f --- /dev/null +++ b/EcoSonar-API/services/bestPracticesService.js @@ -0,0 +1,24 @@ +const greenItData = require('../utils/bestPractices/greenItData.json') +const lighthouseAccessibilityData = require('../utils/bestPractices/lighthouseAccessibilityData.json') +const lighthousePerformanceData = require('../utils/bestPractices/lighthousePerformanceData.json') + +class BestPracticesServices { } + +BestPracticesServices.prototype.getAllBestPracticesRules = async function () { + const allBestPracticesRules = { + greenitDocs: {}, + lighthousePerformanceDocs: {}, + lighthouseAccessbilityDocs: {} + } + try { + allBestPracticesRules.greenitDocs = greenItData + allBestPracticesRules.lighthousePerformanceDocs = lighthousePerformanceData + allBestPracticesRules.lighthouseAccessbilityDocs = lighthouseAccessibilityData + return allBestPracticesRules + } catch (error) { + return error + } +} + +const bestPracticesServices = new BestPracticesServices() +module.exports = bestPracticesServices diff --git a/EcoSonar-API/services/crawler/crawlerService.js b/EcoSonar-API/services/crawler/crawlerService.js index 5c34b57..344ae00 100644 --- a/EcoSonar-API/services/crawler/crawlerService.js +++ b/EcoSonar-API/services/crawler/crawlerService.js @@ -2,8 +2,9 @@ const cheerio = require('cheerio') const puppeteer = require('puppeteer') const 
authenticationService = require('../authenticationService') const urlConfigurationService = require('../urlConfigurationService') +const tempUrlsProjectRepository = require('../../dataBase/tempurlsProjectRepository') -class CrawlerService {} +class CrawlerService { } /** * VARIABLES @@ -18,8 +19,10 @@ let seenUrls * * @param {*} projectName the name of the project * @param {*} mainUrl the main url used to start crawling + * @param {*} savedAsPermanent boolean value that mentions if urls crawled should be saved permanently or temporary + * launch crawling of the website and save the urls crawled according to context */ -CrawlerService.prototype.crawl = async function (projectName, mainUrl) { +CrawlerService.prototype.launchCrawl = async function (projectName, mainUrl, savedAsPermanent) { let crawledUrls = [] let projectUrls = [] seenUrls = [] @@ -49,7 +52,7 @@ CrawlerService.prototype.crawl = async function (projectName, mainUrl) { await authenticationService.loginIfNeeded(browser) await recursiveCrawl(mainUrl, browser, crawledUrls) } catch (error) { - console.log(error) + console.error(error) } finally { browser.close() } // Get all the URL already registered in the project to avoid crawling them again @@ -57,7 +60,7 @@ CrawlerService.prototype.crawl = async function (projectName, mainUrl) { .then((result) => { projectUrls = result }).catch(() => { - console.log(`CRAWLER - Project ${projectName} has 0 url`) + console.log('An error occured when retrieving urls saved to be audited for the project or no urls were saved') }) // Removing mainUrl and aliases from return list if already exist in the project if (projectUrls.includes(mainUrl) || projectUrls.includes(mainUrl + '/') || projectUrls.includes(mainUrl.slice(0, -1)) || crawledUrls.includes(mainUrl + '/')) { @@ -71,7 +74,7 @@ CrawlerService.prototype.crawl = async function (projectName, mainUrl) { crawledUrls = crawledUrls.filter((url) => (!projectUrls.includes(url) && !projectUrls.includes(url + '/') && 
!projectUrls.includes(url.slice(0, -1)))) - return crawledUrls + saveUrlsCrawled(projectName, crawledUrls, savedAsPermanent) } /** @@ -144,14 +147,66 @@ async function recursiveCrawl (url, browser, crawledUrls) { /** * - * @param {string} link any link found by the scrawler + * @param {string} projectName project name + * @param {string} urlsList list of urls crawled + * @param {string} savedAsPermanent boolean if urls crawled should be saved as temporary or permanent + * Save the urls crawled as temporary or permanent + */ +async function saveUrlsCrawled (projectName, urlsList, savedAsPermanent) { + if (savedAsPermanent) { + urlConfigurationService.insert(projectName, urlsList) + .then(() => { + console.log('CRAWLER - Crawled URLs are saved and added to the project') + }) + .catch((error) => { + console.error(error) + console.error('CRAWLER - Crawled URLs could not be saved') + }) + } else { + let temporaryUrlsAlreadySaved = null + await tempUrlsProjectRepository.findUrls(projectName) + .then((result) => { + temporaryUrlsAlreadySaved = result + }) + .catch((error) => { + console.error(error) + console.error('CRAWLER - Crawled URLs could not be saved') + }) + + if (temporaryUrlsAlreadySaved === null) { + await tempUrlsProjectRepository.create(projectName, urlsList) + .then(() => { + console.log('CRAWLER - Crawled URLs are saved temporary') + }) + .catch((error) => { + console.error(error) + console.error('CRAWLER - Crawled URLs could not be saved temporary') + }) + } else { + temporaryUrlsAlreadySaved = temporaryUrlsAlreadySaved.urlsList + const crawledNotSavedUrls = urlsList.filter(e => !temporaryUrlsAlreadySaved.includes(e)) + const urlsToUpdate = crawledNotSavedUrls.concat(temporaryUrlsAlreadySaved) + await tempUrlsProjectRepository.updateUrls(projectName, urlsToUpdate) + .then(() => { + console.log('CRAWLER - Crawled URLs temporary are updated') + }) + .catch((error) => { + console.error(error.message) + console.error('CRAWLER - Crawled URLs could not be 
updated temporary')
+      })
+    }
+  }
+}
+
+/**
+ *
+ * @param {string} link any link found by the crawler
+ * @returns a formatted link
  * Avoid issue with relatives URL by formatting links to be crawled. Ex : given '/about' when crawling, function will construct an usable URL with websiteProtocol and websitePrefix to be : 'https://nameofthewebsite.com/about'
  */
- CrawlerService.prototype.getUrl = function (link) {
+CrawlerService.prototype.getUrl = function (link) {
   // Exclude links that are outside website
-  if ((link.includes(websiteProtocol) || link.includes(alternativeProtocol)) && link.slice(0, webSitePrefixWithProtocol.length) !== webSitePrefixWithProtocol) {
+  if ((link.includes(websiteProtocol) || link.includes(alternativeProtocol)) && !link.startsWith(webSitePrefixWithProtocol)) {
     return undefined
   } else {
     // If the link is part of the website
@@ -183,5 +238,26 @@ CrawlerService.prototype.checkUrl = function (url) {
   } else return true
 }
 
+/**
+ *
+ * @param {string} projectName project name
+ * retrieve the temporary urls saved from last crawling in the database for this project
+ * @returns a list of urls crawled saved
+ */
+CrawlerService.prototype.retrieveCrawledUrl = async function (projectName) {
+  return new Promise((resolve, reject) => {
+    tempUrlsProjectRepository.findUrls(projectName)
+      .then((result) => {
+        if (result && result.urlsList.length > 0) {
+          resolve(result.urlsList)
+        } else {
+          reject(new Error('No crawled urls were saved for this project'))
+        }
+      }).catch((error) => {
+        reject(error)
+      })
+  })
+}
+
 const crawlerService = new CrawlerService()
 module.exports = crawlerService
diff --git a/EcoSonar-API/services/loginProxyConfigurationService.js b/EcoSonar-API/services/loginProxyConfigurationService.js
index ec88871..ba7bf71 100644
--- a/EcoSonar-API/services/loginProxyConfigurationService.js
+++ b/EcoSonar-API/services/loginProxyConfigurationService.js
@@ -4,9 +4,10 @@ const retrieveLoginProxyYamlConfigurationService = require('./yamlConfiguration/
 class LoginProxyConfigurationService {}
-LoginProxyConfigurationService.prototype.insert = async function (projectName, loginCredentials, proxy) { +LoginProxyConfigurationService.prototype.insertLoginCredentials = async function (projectName, loginCredentials) { let projectSettingsRegistered = null let systemError = null + const succesMessage = 'INSERT LOGIN CONFIGURATION - Success' await projectsRepository.getProjectSettings(projectName) .then((projectSettings) => { projectSettingsRegistered = projectSettings @@ -16,17 +17,55 @@ LoginProxyConfigurationService.prototype.insert = async function (projectName, l } }) if (projectSettingsRegistered === null) { - await projectsRepository.createLoginConfiguration(projectName, loginCredentials, proxy) + await projectsRepository.createLoginConfiguration(projectName, loginCredentials) .then(() => { - console.log('INSERT LOGIN - PROXY CONFIGURATION - Success') + console.log(succesMessage) }) .catch(() => { systemError = new SystemError() }) } else { - await projectsRepository.updateLoginConfiguration(projectName, projectSettingsRegistered.procedure, loginCredentials, proxy) + await projectsRepository.updateLoginConfiguration(projectName, projectSettingsRegistered.procedure, loginCredentials) .then(() => { - console.log('UPDATE LOGIN - PROXY CONFIGURATION - Success') + console.log(succesMessage) + }) + .catch(() => { + systemError = new SystemError() + }) + } + return new Promise((resolve, reject) => { + if (systemError !== null) { + reject(systemError) + } else { + resolve() + } + }) +} + +LoginProxyConfigurationService.prototype.insertProxyConfiguration = async function (projectName, proxy) { + let projectSettingsRegistered = null + let systemError = null + const succesMessage = 'INSERT PROXY CONFIGURATION - Success' + await projectsRepository.getProjectSettings(projectName) + .then((projectSettings) => { + projectSettingsRegistered = projectSettings + }).catch((error) => { + if (error instanceof SystemError) { + systemError = new SystemError() + } + }) + if 
(projectSettingsRegistered === null) { + await projectsRepository.createProxyConfiguration(projectName, proxy) + .then(() => { + console.log(succesMessage) + }) + .catch(() => { + systemError = new SystemError() + }) + } else { + await projectsRepository.updateProxyConfiguration(projectName, projectSettingsRegistered.procedure, proxy) + .then(() => { + console.log(succesMessage) }) .catch(() => { systemError = new SystemError() diff --git a/EcoSonar-API/services/projectService.js b/EcoSonar-API/services/projectService.js index afee04a..bb1d3fe 100644 --- a/EcoSonar-API/services/projectService.js +++ b/EcoSonar-API/services/projectService.js @@ -1,9 +1,15 @@ const SystemError = require('../utils/SystemError') const retrieveAnalysisService = require('../services/retrieveAnalysisService') +const projectsRepository = require('../dataBase/projectsRepository') +const w3cRepository = require('../dataBase/w3cRepository') +const lighthouseRepository = require('../dataBase/lighthouseRepository') +const greenItRepository = require('../dataBase/greenItRepository') +const bestPracticesRepository = require('../dataBase/bestPracticesRepository') +const urlsProjectRepository = require('../dataBase/urlsProjectRepository') +const tempurlsProjectRepository = require('../dataBase/tempurlsProjectRepository') const scores = ['ecoIndex', 'perfScore', 'accessScore', 'w3cScore'] -class ProjectService { -} +class ProjectService { } /** * get an average of all score for all projects of the database of last analysis @@ -23,11 +29,9 @@ ProjectService.prototype.getAllInformationsAverage = async function (date) { resultformatted[scoreType] += result.projects[project][scoreType] scoreNbProject[scoreType] += 1 } - } else { - if (result.projects[project][scoreType] !== null) { - resultformatted[scoreType] = result.projects[project][scoreType] - scoreNbProject[scoreType] = 1 - } + } else if (result.projects[project][scoreType] !== null) { + resultformatted[scoreType] = 
result.projects[project][scoreType]
+        scoreNbProject[scoreType] = 1
       }
     }
   })
@@ -50,9 +54,9 @@ function selectRightAnalysisByDateAndUrl (searchDate, projectsAnalysis, urlField
     acc[obj[urlFieldName]].push(obj)
     return acc
   }, {})
-  Object.keys(groupedAnalysisByIdKeys).forEach(UrlAnalysisId => {
-    const retainedAnalysis = filterPerDate(searchDate, groupedAnalysisByIdKeys[UrlAnalysisId])
-    allAnalysisPerUrl[UrlAnalysisId] = retainedAnalysis
+  Object.keys(groupedAnalysisByIdKeys).forEach(id => {
+    const retainedAnalysis = filterPerDate(searchDate, groupedAnalysisByIdKeys[id])
+    allAnalysisPerUrl[id] = retainedAnalysis
   })
   return allAnalysisPerUrl
 }
@@ -247,16 +251,52 @@ ProjectService.prototype.getAllProjectInformations = async function (date, sortB
       }
     })
   } else {
-      return new Promise((_resolve, reject) => {
-        reject(new Error(error))
-      })
+      return Promise.reject(new Error(error))
     }
   } else {
-    return new Promise((_resolve, reject) => {
-      reject(new Error(error))
-    })
+    return Promise.reject(new Error(error))
   }
 }
 
+/**
+ * Delete all parts of the project (project, urls and analysis)
+ * @param {string} projectName name of the project to delete
+ */
+ProjectService.prototype.deleteProject = async function (projectName) {
+  let urlsProjects = []
+  let systemError = false
+  try {
+    await urlsProjectRepository.findAll(projectName, true)
+      .then((result) => {
+        urlsProjects = result.map((e) => e.idKey)
+      })
+  } catch (error) {
+    console.error('\x1b[31m%s\x1b[0m', error.message)
+    systemError = true
+  }
+  if (systemError) {
+    return Promise.reject(new SystemError())
+  }
+  try {
+    await lighthouseRepository.deleteProject(urlsProjects)
+    await greenItRepository.deleteProject(urlsProjects)
+    await w3cRepository.deleteProject(urlsProjects)
+    await bestPracticesRepository.deleteProject(urlsProjects)
+    await urlsProjectRepository.deleteProject(urlsProjects)
+    await tempurlsProjectRepository.deleteProject(projectName)
+    await projectsRepository.deleteProjectPerProjectName(projectName)
+  } catch
(error) { + console.error('\x1b[31m%s\x1b[0m', error.message) + systemError = true + } + return new Promise((resolve, reject) => { + if (systemError) { + reject(new SystemError()) + } else { + resolve() + } + }) +} + const projectService = new ProjectService() module.exports = projectService diff --git a/EcoSonar-API/services/urlConfigurationService.js b/EcoSonar-API/services/urlConfigurationService.js index a12c299..037a1c9 100644 --- a/EcoSonar-API/services/urlConfigurationService.js +++ b/EcoSonar-API/services/urlConfigurationService.js @@ -1,17 +1,29 @@ +const tempUrlsProjectRepository = require('../dataBase/tempurlsProjectRepository') const urlsProjectRepository = require('../dataBase/urlsProjectRepository') const SystemError = require('../utils/SystemError') class UrlConfigurationService { } + +/** + * @param {String} projectName is the name of the project + * Retrieve the urls saved for the project (those who will be audited when running an analysis) + * @return list of urls saved + * @return list of errors (duplication or validation) + */ UrlConfigurationService.prototype.getAll = function (projectName) { return new Promise((resolve, reject) => { - urlsProjectRepository.findAll(projectName, false).then((resultats) => { - if (resultats.length === 0) { - reject(new Error('Your project has no url assigned into EcoSonar. You must at least add one url if you want to analyse ecodesign practices.')) - } - const resultatsFormatted = resultats.map((res) => res.urlName) - resolve(resultatsFormatted) - }) + urlsProjectRepository.findAll(projectName, false) + .then((results) => { + if (results.length === 0) { + reject(new Error('Your project has no url assigned into EcoSonar. 
You must at least add one url if you want to analyse ecodesign practices.')) + } + const resultatsFormatted = results.map((res) => res.urlName) + resolve(resultatsFormatted) + }) + .catch((error) => { + reject(error) + }) } ) } @@ -22,17 +34,18 @@ UrlConfigurationService.prototype.getAll = function (projectName) { * This function will do 2 checks : * 1 - Verify into database if URL isn't already registered * 2- Verify that every URLs in the list is different + * 3- Insert into database + * 4- Remove crawled urls that have been saved * Later a check about syntax is made (using REGEX) * URLs are trimmed to avoid issues with copy-paste adding whitespace and tab charactes * @reject in case of error, the function reject error type and description */ - UrlConfigurationService.prototype.insert = async function (projectName, urlList) { // Initializing parameters - const notInsertedArray = [] - const errorArray = [] + let systemError = false - let urlAlreadyAddedInProject + let urlAlreadyAddedInProject = [] + let errorRegexp = [] // Retrieving URLs in database for project await urlsProjectRepository.findAll(projectName, true) .then((urlListResult) => { urlAlreadyAddedInProject = urlListResult.map((res) => res.urlName) }) @@ -42,6 +55,41 @@ UrlConfigurationService.prototype.insert = async function (projectName, urlList) } }) + const { errorArray, notInsertedArray } = verifyNoDuplication(urlList, urlAlreadyAddedInProject) + + if (notInsertedArray.length === 0 && !systemError) { + urlsProjectRepository.insertAll(projectName, urlList) + .catch((error) => { + if (error instanceof SystemError) { + systemError = true + } + errorRegexp = error + }) + } + + if (notInsertedArray.length === 0 && !systemError && errorRegexp.length === 0) { + systemError = await removeTemporaryUrlsThatWereSaved(projectName, urlList) + } + + return new Promise((resolve, reject) => { + if (notInsertedArray.length > 0) { + console.log('URL CONFIGURATION SERVICE - Some urls are duplicated') + 
reject(errorArray)
+    } else if (errorRegexp.length > 0) {
+      console.log('URL CONFIGURATION SERVICE - Some urls are invalid')
+      reject(errorRegexp)
+    } else if (systemError) {
+      console.log('URL CONFIGURATION SERVICE - An error occurred when reaching the database')
+      reject(new SystemError())
+    } else {
+      resolve()
+    }
+  })
+}
+
+function verifyNoDuplication (urlList, urlAlreadyAddedInProject) {
+  const errorArray = []
+  const notInsertedArray = []
   let index = 0
   while (index < urlList.length) {
     const newList = urlList.filter((url) => url.trim() === urlList[index].trim())
@@ -53,20 +101,35 @@ UrlConfigurationService.prototype.insert = async function (projectName, urlList)
     }
     index++
   }
+  return { errorArray, notInsertedArray }
+}
 
-  return new Promise((resolve, reject) => {
-    if (notInsertedArray.length === 0 && !systemError) {
-      urlsProjectRepository.insertAll(projectName, urlList)
-        .then(() => resolve())
-        .catch((error) => { reject(error) })
-    } else if (systemError) {
-      reject(new SystemError())
-    } else {
-      reject(errorArray)
-    }
-  })
+async function removeTemporaryUrlsThatWereSaved (projectName, urlList) {
+  // Retrieve all urls crawled to remove saved urls
+  let systemError = false
+  await tempUrlsProjectRepository.findUrls(projectName)
+    .then((crawledList) => {
+      const urlsCrawledList = crawledList ?
crawledList.urlsList : [] + if (urlsCrawledList.length > 0) { + // remove urls that were saved previously in the temporary url list + const crawledNotSaved = urlsCrawledList.filter(e => !urlList.includes(e)) + tempUrlsProjectRepository.updateUrls(projectName, crawledNotSaved) + .catch(() => { + systemError = true + }) + } + }) + .catch(() => { + systemError = true + }) + return systemError } +/** + * @param {String} projectName is the name of the project + * @param {String} urlName is the url in the project to be deleted + * Delete a url into a project + */ UrlConfigurationService.prototype.delete = async function (projectName, urlName) { return new Promise((resolve, reject) => { urlsProjectRepository.delete(projectName, urlName) diff --git a/EcoSonar-API/utils/bestPractices/greenItData.json b/EcoSonar-API/utils/bestPractices/greenItData.json new file mode 100644 index 0000000..80e8b45 --- /dev/null +++ b/EcoSonar-API/utils/bestPractices/greenItData.json @@ -0,0 +1,146 @@ +{ + "addExpiresOrCacheControlHeaders": { + "title": "Add Expires or Cache-Control HTTP Headers", + "description": "You should reach 100% of resources cached.", + "correction": "The Expires and Cache-Control headers determine how long a browser should keep a resource in its cache. You should therefore use them, and configure them correctly for CSS style sheets, JavaScript scripts and images. Ideally, these elements should be kept as long as possible so that the browser does not request them again from the server. This saves HTTP requests, bandwidth and CPU power server-side.Here is a configuration example for Expires and Cache-Control headers for the Apache web server:
<IfModule mod_headers.c>
    <FilesMatch "\.(ico|jpe?g|png|gif|swf|css|gz)$">
        Header set Cache-Control "max-age=2592000, public"
    </FilesMatch>
    <FilesMatch "\.(html|htm)$">
        Header set Cache-Control "max-age=7200, public"
    </FilesMatch>
</IfModule>
Government Regulation: RGESN 6.3 : Does the digital service use caching mechanisms for all transferred content under its control? See : Cache Header Cache-Control HeadersExpires Headers", + "titleData": "{0}% of resources cached" + }, + "compressHttp": { + "title": "Compress ressources", + "description": "To get an A score, reach 100 % of compression ratio.", + "correction": "You can compress the content of HTML pages to minimize bandwidth consumption between the client and the server. All modern browsers (for smartphones, tablets, notebook and desktop computers) accept HTML compressed via gzip or Deflate. The easiest way to do so is to configure the web server so that it compresses the HTML data stream, either on-the-fly or automatically, as it leaves the server. This practice (on-the-fly compression) is only beneficial for a HTML data stream as it is constantly evolving. When possible, we recommend that you manually compress static resources (e.g. CSS and JavaScript libraries) all in one go. With Apache, the Deflate and gzip compression methods offer considerable savings. A typical 26 KB HTML file is reduced to 6 KB after being compressed with gzip. If your frontend framework is one of the following: React / Angular / Vue.js If your front-end framework is React Government Regulation; RGESN 6.4 : Has the digital service implemented compression techniques on all transferred resources under its control? See : Http Compress How to compress HTML code ?", + "titleData": "{0}% of resources compressed" + }, + "domainsNumber": { + "title": "Limit the number of domains", + "description": "You should limit the number of domains to 3 per page.", + "correction": " When a website or online service hosts a web page's components across several domains, the browser must establish an HTTP connection with every single one. Once the HTML page has been retrieved, the browser calls the sources as it traverses the DOM (Document Object Model). 
Some resources are essential for the page to work. If they are hosted on another domain which is slow, it may increase the page's render time. You should therefore, when possible, group all resources on a single domain. The only exception to this is for static resources (style sheets, images, etc.), which should be hosted on a separate domain to avoid sending one or multiple cookies for each browser GET HTTP request. This reduces response time and unnecessary bandwidth consumption. For a corporate website with heavy traffic, it is better to have two domains: - the application server at www.domain.tld - the cookieless media server at media.domain.tld By doing so, you minimize the number of domains while also avoiding unnecessarily sending a cookie for each GET HTTP request for a static resource. Best practices n°82 from '115 bonnes pratiques d'écoconception web v4' See : What is a domain number and how does it work Minimize the number of domains ", + "titleData": "{0} domain(s) found per page on average" + }, + "dontResizeImageInBrowser": { + "title": "Don't resize image in browser", + "description": "There should be no image resized in browser in your web application. You are resizing an image if you are using the attributes HEIGHT and WIDTH in the HTML tag of the image.", + "correction": "Do not resize images using HTML height and width attributes. Doing so sends images in their original size, wasting bandwidth and CPU power. A PNG-24 350 x 300 px image is 41 KB. If you resized the same image file using HTML and displayed it as a 70 x 60 px thumbnail, it would still be 41 KB, when it should be no more than 3 KB! Meaning 38 KB downloaded for nothing. The best solution is to resize images using software such as Photoshop, without using HTML. When content added by the website's users has no specific added value, it is best to prevent them from being able to insert images using a WYSIWYG editor e.g., CKEditor.
Non Compliant : ”loading Compliant : ”loading Government Regulation: RGESN 5.2 Does the digital service offer image content whose level of compression is appropriate for the content and viewing context?See : How to resize image outside the browser ", + "titleData": "{0} image(s) resized in browser found in the whole project" + }, + "emptySrcTag": { + "title": "Avoid empty src tag", + "description": "You should have no empty src tags.", + "correction": "If there is an image tag with an empty src attribute, the browser will call the directory in which the page is located, generating unnecessary, additional HTTP requests.The following image tag will request the foo directory's index file from the server: ”” for a page located at: http://domain.tld/foo/bar.htmlSee : Delete image tags with empty src attributes", + "titleData": "{0} empty src tags found in the whole project" + }, + "externalizeCss": { + "title": "Externalize css", + "description": "You should reach 0 inline stylesheets.", + "correction": "Ensure that CSS files and JavaScript code are separate from the page's HTML code, except for any configuration variables for JavaScript objects. If you include CSS or JavaScript code in the body of the HTML file, and it is used for several pages (or even the whole website), then the code must be sent for each page requested by the user, therefore increasing the volume of data sent. However, if the CSS and JavaScript code are in their own separate files, the browser can avoid requesting them again by storing them in its local cache.
Non compliant :
<style type=\"text/css\" media=\"screen\"> p { color: #333; margin: 2px 0 } /* All the website's CSS declarations */ </style>
Compliant :
<link href=\"css/styles.css\" rel=\"stylesheet\">Best practices n°42 from '115 bonnes pratiques d'écoconception web v4' See : Externalize CSS files How to create a css external stylesheet", + "titleData": "{0} inline stylesheet(s) found in the whole project" + }, + "externalizeJs": { + "title": "Externalize js", + "description": "You should have no inline JavaScript script in HTML code.", + "correction": "Ensure that JavaScript code is separate from the page's HTML code, except for any configuration variables for JavaScript objects. If you include JavaScript code in the body of the HTML file, and it is used for several pages (or even the whole website), then the code must be sent for each page requested by the user, therefore increasing the volume of data sent. However, if the JavaScript code is in its own separate file, the browser can avoid requesting it again by storing it in its local cache.
Non Compliant :
File 1 - Avoid using script directly in HTML : <script type=\"text\/javascript\"> alert(\"Hello Jean !\") <\/script><button onclick=\"alert('Hello Jeanne')\">Say Hello Jeanne</button>
Compliant :
Instead use a JavaScript file where the function is defined : script: function sayHello(name) { alert(\"Hello \" + name + \" !\") }<script type=\"text\/javascript\">sayHello('Jean')<\/script>Best practices n°42 from '115 bonnes pratiques d'écoconception web v4'See : Externalize JS Files Externalize JavaScript Files", + "titleData": "{0} inline javascripts found in the whole project" + }, + "httpError": { + "title": "Avoid HTTP request errors", + "description": "You should not have HTTP errors when loading your pages.", + "correction": "Requests with HTTP errors consume resources unnecessarily.Best practices n°89 from '115 bonnes pratiques d'écoconception web v4' See : How to troubleshoot common http error code", + "titleData": "{0} HTTP error(s) in the whole project" + }, + "httpRequests": { + "title": "Limit the number of HTTP requests", + "description": "A page's download time client-side directly correlates to the number and size of files the browser must download. You should not have more than 25 requests to load one page of your application.", + "correction": "For each file, the browser sends an HTTP GET request to the server. It waits for the response, and then downloads the resource as soon as it is available. Depending on the type of web server you use, the more requests per page there are, the fewer pages the server can handle. Reducing the number of requests per page is key to reducing the number of HTTP servers needed to run the website, and consequently its environmental impact.There are several ways to reduce the number of requests per page: - Combine static files e.g., CSS and JavaScript libraries - Use a CSS sprite to group the interface's images - Favor glyphs over images and, in general, vectors over bitmaps.
- Fill in the browser cache as much as possible.Potential saving: reduced server load, thus minimizing the economic and environmental footprint by reducing the amount of equipment needed (from HTTP servers to application servers and RDBMS).Best practices n°47 from '115 bonnes pratiques d'écoconception web v4'Government Regulation: RGESN 6.2 : Does the digital service have a limit of requests per screen? See : Limit HTTP Requests", + "titleData": "{0} HTTP request(s) on average" + }, + "imageDownloadedNotDisplayed": { + "title": "Do not download unnecessary images", + "description": "You should not request images from the server if they are not going to be displayed in your web application.", + "correction": "Downloading images that will not necessarily be visible consumes resources unnecessarily. For example, images that are displayed only after a user action.Government Regulation: RGESN 5.2 Does the digital service offer image content whose level of compression is appropriate for the content and viewing context?", + "titleData": "{0} image(s) downloaded but not displayed in the whole project" + }, + "jsValidate": { + "title": "Validate js", + "description": "You shouldn't have JavaScript errors when launching your web application.", + "correction": "JSLint is a JavaScript code quality tool that checks that the JavaScript syntax used will be understood by all browsers. The code produced thus complies with coding rules which enables interpreters to quickly and easily run the code. The CPU is therefore used for a shorter time.Install eslint to analyse your code npm install eslint --save-dev or yarn add -D eslintOnce installed, you can launch eslint project configuration npx eslint --initThe command prompt will guide you through the linter configuration adapted to your project.
The file name .eslintrc defines the linter configuration and can be modified according to your requirement.To launch the linter analysis : eslint --ext .js .To let the linter fix issues automatically :eslint --fix --ext .js .Best practices n°82 from '115 bonnes pratiques d'écoconception web v4'See : Check JavaScript code", + "titleData": "{0} JavaScript error(s) found in the whole project" + }, + "maxCookiesLength": { + "title": "Max cookies length", + "description": "Maximum size of your cookies should be the smallest possible as it is sent with each request. Your cookies should not be longer than 512 bytes.", + "correction": "A cookie makes it possible to maintain a state between the internet user's browser and the remote web server thanks to identifiers. This text file is transferred in each http request. It is therefore necessary to optimize its size as much as possible and delete it as soon as its presence is no longer mandatory. You can automatically delete a cookie when it is no longer useful, by specifying an expiry date, as follows: Set-Cookie: user_myvariable=myvalue; expires=Wed, 12 Dec 2012 07:40:20 UTC See : Using HTTP cookies ", + "titleData": "Max cookies length = {0} Bytes, in the whole project" + }, + "minifiedCss": { + "title": "Minified css", + "description": "You should reach 100% of your CSS files minified.", + "correction": "You can use the Yahoo's YUI Compressor specialized filters to :- remove comments and white spaces - remove the last semi-colon - remove extra semi-colons - remove empty declarations - remove units of measure when using 0 values and reduce multiple 0s into one - remove 0 for values less than 1 - convert RGB colors into hex values and reduce 6-digit hex values to 3-digit values - remove extra charsets - optimize the alpha layer's opacity values in Internet Explorer - replace none with 0Government Regulation: RGESN 6.4 : Has the digital service implemented compression techniques on all transferred resources under its control? 
See : CSS Minification ", + "titleData": "{0}% of minified stylesheet" + }, + "minifiedJs": { + "title": "Minified js", + "description": "You should reach 100% of your JS files minified.", + "correction": "Use a tool such as YUI Compressor to :- remove unnecessary white spaces - remove unnecessary line breaks - remove unnecessary semi-colons - shorten local variable names This operation can be automated using Google Apache speed moduleA standard 248 KB JavaScript file will be 97KB after being minifiedGovernment Regulation: RGESN 6.4 : Has the digital service implemented compression techniques on all transferred resources under its control?See : JavaScript minification ", + "titleData": "{0}% of minified JavaScript" + }, + "noCookieForStaticRessources": { + "title": "No cookie for static resources", + "description": "You should have 0 static resources with cookies.", + "correction": "Images, CSS stylesheets, and JavaScript files must be hosted on a cookie-free domain. This prevents the browser from sending a cookie for each resource ... when it is unnecessary. Indeed, although transferred in each http request, the cookie is useless for static content, since it is used to maintain a state between the Internet user's browser and the remote application server thanks to the identifiers contained in the text file. It is therefore preferable to store this type of content on a specific domain name, for example static.domainname.com. For static resources, a cookie is unnecessary, so it unnecessarily consumes bandwidth. To avoid this, we can use a different domain for static resources or restrict the scope of cookies created. Web Giants are using a dedicated domain to serve static resources which does not require cookies. For example, Yahoo!
uses the domain called yimg.com, Youtube ytimg.com and Amazon images-amazon.comGovernment Regulation: RGESN 6.11 : Does the digital service host the transferred static resources of which it is the issuer on the same domain?See : Serve static content from a cookieless domain. Why do you need a cookie-less domain", + "titleData": "{0} static resource(s) with cookie in the whole project" + }, + "noRedirect": { + "title": "Avoid redirect", + "description": "Redirections should be avoided as much as possible as they slow down response and drain resources unnecessarily.", + "correction": "These redirections can take place on various levels: HTML code, JavaScript code, HTTP server and application server. Best practices n°112 from '115 bonnes pratiques d'écoconception web v4' See : Avoid redirections Avoid multiple page redirects", + "titleData": "{0} redirect(s) in the whole project" + }, + "optimizeBitmapImages": { + "title": "Optimize bitmap images", + "description": "You should optimize all your bitmap images.", + "correction": "The first step is to choose the correct format between bitmap (e.g., JPEG, PNG & GIF) and vector (SVG). Bitmaps should only be used for photos and interface elements that are not possible through icons or CSS styling.The choice of bitmap format depends on the image's characteristics: black and white or color, color palette, need for transparency, etc. For these characteristics, the ability to use lossy compression on the image is more suited to JPEG and WebP (Google); while the need for transparency and/or lossless compression is more suited to GIF or PNG.Tools such as pngcrush, ImageMagick and jpegtran will help you reduce the size of images as much as possible.Potential saving: At least 25% saved by fine-tuning the color palette and the compression ratio, and up to over 80% compared to an uncompressed bitmap.
WebP is on average 30% smaller than JPEG.Government Regulation: RGESN 5.2 Does the digital service offer image content whose level of compression is appropriate for the content and viewing context? See : Optimize Bitmap images", + "titleData": "{0} bitmap image to optimize in the whole project" + }, + "optimizeSvg": { + "title": "Optimize svg images", + "description": "All SVG images added into your web application should be optimized.", + "correction": "Svg images are less heavy than bitmap images, nevertheless they can be optimized and minified via tools (for example, svgo).Government Regulation: RGESN 5.2 Does the digital service offer image content whose level of compression is appropriate for the content and viewing context? See : Tools for optimizing SVG", + "titleData": "{0} images to optimize in the whole project" + }, + "plugins": { + "title": "Do not use plugins", + "description": "You should use no plugin in your web application if possible.", + "correction": "Avoid using plugins (Flash Player, Java and Silverlight virtual machines, etc.) because they can be a heavy drain on resources (CPU and RAM).This is especially true with Adobe's Flash Player, to such an extent that Apple decided to not install the technology on its mobile devices to maximize battery life. Favor standard technology such as HTML5 and ECMAScript. See : Limit plugins ", + "titleData": "{0} plugin found in the whole project" + }, + "printStyleSheet": { + "title": "Provide print stylesheet", + "description": "Each of your pages should have at least one print style sheet to get an A Score.", + "correction": "In addition to the benefits for the user, this style sheet reduces the number of pages printed, and therefore indirectly minimizes the website's ecological footprint. It should be as streamlined as possible and employ an ink-light typeface e.g., Century Gothic. Also consider hiding the header, footer, menu and sidebar, as well as deleting all images except those needed for content. 
This print style sheet makes for a cleaner print by trimming down what is displayed on the screen. Best practices n°31 from '115 bonnes pratiques d'écoconception web v4' See : Print Stylesheet CSS Printer Friendly Pages. How to set up a print stylesheet Print stylesheet guide", + "titleData": "{0} print stylesheet found in the whole project" + }, + "socialNetworkButton": { + "title": "Do not use standard social buttons", + "description": "You should not use social media buttons.", + "correction": "Social networks like Facebook, Twitter and Pinterest provide plugins to install on web pages to get a share button and a like counter. These plugins consume unnecessary resources; it is better to use direct links.Best practices n°59 from '115 bonnes pratiques d'écoconception web v4'", + "titleData": "{0} standard(s) social button(s) found in the whole project" + }, + "styleSheets": { + "title": "Limit Stylesheet files", + "description": "You should have at most 2 stylesheets per page.", + "correction": "Minimize the number of CSS files to reduce the number of HTTP requests. If several style sheets are used on all the website's pages, concatenate them into one single file.Some CMS and frameworks offer ways to do such optimization automatically. The HTTP server can also be configured to compress and reduce the size of style sheets.With the Apache web server, simply add the following line in the .htaccess configuration file:# compress css:AddOutputFilterByType DEFLATE text/cssThis instruction activates the Deflate mode which compresses all the style sheets between the server and the HTTP client. Learn more about DeflateBest practices n°35 from '115 bonnes pratiques d'écoconception web v4'", + "titleData": "{0} stylesheet(s) found on average" + }, + "useETags": { + "title": "Use ETags", + "description": "ETags limit the number of server requests and avoid unnecessary use of bandwidth.
At least 95 % of your resources should have ETags to have a B Score.", + "correction": "An ETag is a signature attached to a server response. If the client requests a URL (HTML page, image, style sheet, etc.) whose ETag is identical to the one it already has, the web server will reply that it does not need to download the resource and that it should use the one it already possesses. Using ETags saves huge amounts of bandwidth.Refer to the File Etag documentation for Apache: Apache documentation on EtagSee : ETags Use ETags Header", + "titleData": "{0}% of resources with ETags " + }, + "useStandardTypefaces": { + "title": "Use Standard Typefaces", + "description": "You should reduce the size of custom fonts used.", + "correction": "Use standard typefaces as they already exist on the user's computer, and therefore do not need to be downloaded. This saves bandwidth and improves the website's render time. When possible, use typefaces such as : Courrier New Georgia Arial Comic Impact Tahoma Trebuchet MS Times New Roman Verdana Segoe UI Best practices n°32 from '115 bonnes pratiques d'écoconception web v4' See : Use standard typefaces List of typeface included with MacOS X Revised Font Stack", + "titleData": "{0} KB custom fonts found in the whole project" + } +} \ No newline at end of file diff --git a/EcoSonar-API/utils/bestPractices/lighthouseAccessibilityData.json b/EcoSonar-API/utils/bestPractices/lighthouseAccessibilityData.json new file mode 100644 index 0000000..070391e --- /dev/null +++ b/EcoSonar-API/utils/bestPractices/lighthouseAccessibilityData.json @@ -0,0 +1,272 @@ +{ + "ariaAllowedAttr": { + "title": "`[aria-*]` attributes match their roles", + "description": "Lighthouse flags mismatches between ARIA roles and aria-* attributes.", + "correction": "Each ARIA `role` supports a specific subset of `aria-*` attributes. Mismatching these invalidates the `aria-*` attributes. 
An ARIA role attribute can be added to an element to instruct assistive technologies to treat the element as something other than its native HTML element type. For example, an < a > element with role='button' is to be treated as a button, not a link.Some ARIA property and state attributes are allowed only for certain ARIA roles. When an assistive technology encounters a mismatch between an element's role and its state or property attributes, it might ignore attributes or respond in an unexpected way. As a result, people who use assistive technologies might find the element difficult or impossible to use. See : [aria-*] attributes do not match their roles aria-allowed-attr", + "titleData": "{0} attribute(s) do not match their roles" + }, + "ariaCommandName": { + "title": "`button`, `link`, and `menuitem` elements have accessible names", + "description": "Lighthouse flags custom ARIA items whose names aren't accessible to assistive technologies.", + "correction": "When an element doesn't have an accessible name, screen readers announce it with a generic name, making it unusable for users who rely on screen readers. ARIA buttons, links, and menuitems are custom controls corresponding respectively to HTML < button >, < a >, and < menuitem > elements. An accessible name is a word or phrase coded in a way that assistive technologies can associate it with a specific user interface object. Assistive technologies can then refer to the object by name, not just by type. When an ARIA button, link, or menuitem doesn't have an accessible name, people who use assistive technologies have no way of knowing its purpose.
See :ARIA items do not have accessible names", + "titleData": "{0} element(s) have no accessible name" + }, + "ariaHiddenBody": { + "title": "`[aria-hidden=\"true\"]` is not present on the document `<body>`", + "description": "Lighthouse flags pages whose <body> element has an aria-hidden='true' attribute.", + "correction": "Assistive technologies, like screen readers, work inconsistently when `aria-hidden='true'` is set on the document `< body >`. In some browsers, the attribute aria-hidden='true' hides an element and all its children from assistive technologies. Users can still use the keyboard to navigate to any focusable child elements in the <body>, but their content is inaccessible to people who use assistive technologies. For example, screen readers are silent. See :[aria-hidden='true'] is present on the document aria-hidden-body", + "titleData": "{0} element(s) have a hidden body" + }, + "ariaHiddenFocus": { + "title": "`[aria-hidden=\"true\"]` elements do not contain focusable descendants", + "description": "Lighthouse flags focusable elements that have parents with the aria-hidden='true' attribute.", + "correction": "Focusable descendants within an `[aria-hidden='true']` element prevent those interactive elements from being available to users of assistive technologies like screen readers. In some browsers, the attribute aria-hidden='true' hides an element and all its children from assistive technologies. Users can still use the keyboard to navigate to any focusable child elements, but their content is inaccessible to people who use assistive technologies. For example, screen readers are silent. (An element is focusable if it can receive input focus via scripting, mouse interaction, or keyboard tabbing.)
See :[aria-hidden='true'] elements contain focusable descendants aria-hidden-focus", + "titleData": "{0} element(s) contain focusable descendants" + }, + "ariaRequiredAttr": { + "title": "`[role]`s have all required `[aria-*]` attributes", + "description": "Lighthouse flags ARIA roles that don't have the required states and properties.", + "correction": " Some ARIA roles have required attributes that describe the state of the element to screen readers. See : [role]s do not have all required [aria-*] attributes aria-required-attr", + "titleData": "{0} element(s) have no required attribute" + }, + "ariaRoles": { + "title": "`[role]` values are valid", + "description": "Lighthouse flags ARIA roles with invalid values.", + "correction": "An ARIA role attribute can be added to an element to instruct assistive technologies to treat the element as something other than its native HTML element type. For example, an < a > element with role='button' will be treated as a button, not as a link.When an assistive technology encounters an element whose role attribute has an invalid value, it might ignore the element or respond to it in an unexpected way.
As a result, people who use assistive technologies might find the element difficult or impossible to detect or use.See : [role] values are not valid aria-roles", + "titleData": "{0} [role] value(s) are not valid" + }, + "ariaValidAttrValue": { + "title": "`[aria-*]` attributes have valid values", + "description": "Lighthouse flags ARIA attributes with invalid values.", + "correction": "When an assistive technology encounters an element with an invalid ARIA attribute value, it might ignore the attribute or respond to it in an unexpected way.As a result, people who use assistive technologies might find the element difficult or impossible to use.See : [aria-*] attributes do not have valid values aria-valid-attr-value", + "titleData": "{0} [aria-*] attribute(s) are not valid" + }, + "ariaValidAttr": { + "title": "`[aria-*]` attributes are valid and not misspelled", + "description": "Lighthouse flags invalid ARIA attributes.", + "correction": "When an assistive technology encounters an element with an invalid ARIA attribute name, it might ignore the attribute or respond to it in an unexpected way. As a result, people who use assistive technologies might find the element difficult or impossible to use.See : [aria-*] attributes are not valid or misspelled aria-valid-attr", + "titleData": "{0} [aria-*] attribute(s) are not valid or misspelled" + }, + "bypass": { + "title": "The page contains a heading, skip link, or landmark region", + "description": "Lighthouse flags pages that don't provide a way to skip repetitive content.", + "correction": "Web pages typically begin with blocks of content that repeat across multiple pages, such as banners and site navigation menus. A person who uses a mouse can visually skim past that repeated content and access a link or other control within the primary content with a single click.Similarly, a bypass mechanism allows keyboard users to navigate directly to the page's main content. 
Otherwise, reaching the primary content could require dozens of keystrokes. People with limited mobility could find this task difficult or painful, and people who use screen readers could find it tedious to listen as each repeated element is announced.See : The page does not contain a heading, skip link, or landmark region bypass", + "titleData": "At least one page does not contain a heading, skip link or landmark region", + "titleDataSuccess": "All pages contain a heading, skip link or landmark region" + }, + "colorContrast": { + "title": "Background and foreground colors don't have a sufficient contrast ratio.", + "description": "Lighthouse flags text whose background and foreground colors don't have a sufficiently high contrast ratio.", + "correction": "Most people find it easier to read text when it has a sufficiently high contrast against its background. People with visual disabilities, low vision, limited color perception, or presbyopia are likely to find text unreadable when contrast is too low.See : Background and foreground colors do not have a sufficient contrast ratio color-contrast", + "titleData": "At least one page's background and foreground colors do not have a sufficient contrast ratio", + "titleDataSuccess": "Background and foreground colors have a sufficient contrast ratio" + }, + "documentTitle": { + "title": "Document has a `<title>` element", + "description": "Lighthouse flags pages without a <title> element in the page's <head>.", + "correction": "Typically, the first thing a user learns about a web page is its title. The title is displayed in the browser tab and in search engine results, and it's announced by assistive technologies as soon as a user navigates to a page.
A descriptive page title helps everyone, especially users of assistive technologies, determine whether a page contains information relevant to their current needs.See : Document doesn't have a title element document-title", + "titleData": "{0} page(s) do not have a <title> element" + }, + "duplicateIdActive": { + "title": "`[id]` attributes on active, focusable elements are unique", + "description": "Lighthouse flags focusable elements that have duplicate ids.", + "correction": "When multiple active, focusable elements share the same id attribute, both scripting (such as JavaScript) and assistive technologies are likely to act only on the first and ignore the others. As a consequence, both functionality and accessibility can be degraded. (An element is focusable if it can receive input focus via scripting, mouse interaction, or keyboard tabbing. It's active if it is not marked as disabled.)See :[id] attributes on active, focusable elements are not unique duplicate-id-active", + "titleData": "{0} [id] attribute(s) on active, focusable elements are not unique" + }, + "duplicateIdAria": { + "title": "ARIA IDs are unique", + "description": "Lighthouse flags elements that share an ID referred to by another element's aria-labelledby attribute.", + "correction": "Labels and ARIA relationship attributes (such as aria-controls, aria-labelledby, and aria-owns) depend on unique id values to identify specific UI components. 
When multiple elements in a web page share the same id value, assistive technologies are likely to recognize only the first, and ignore others. See : ARIA IDs are not unique duplicate-id-aria", + "titleData": "{0} ARIA id(s) are not unique" + }, + "headingOrder": { + "title": "Heading elements appear in a sequentially-descending order", + "description": "Lighthouse flags pages whose headings skip one or more levels.", + "correction": "Properly ordered headings that do not skip levels convey the semantic structure of the page, making it easier to navigate and understand when using assistive technologies. See : Heading elements are not in a sequentially-descending order", + "titleData": "{0} heading element(s) are not in a sequentially-descending order" + }, + "htmlHasLang": { + "title": "`<html>` element has a `[lang]` attribute", + "description": "Lighthouse flags pages whose <html> element doesn't have a lang attribute.", + "correction": "When a web page's primary language is programmatically identified, browsers and assistive technologies can render the text more accurately; screen readers can use the correct pronunciation; visual browsers can display the correct characters; media players can show captions correctly; and automated translation is enabled. All users find it easier to understand the page's content. See : <html> element does not have a [lang] attribute html-has-lang", + "titleData": "{0} <html> element(s) do not have a [lang] attribute" + }, + "htmlLangValid": { + "title": "`<html>` element has a valid value for its `[lang]` attribute", + "description": "Lighthouse flags pages whose <html> element doesn't have a valid value for its lang attribute.", + "correction": "When a web page's primary language is programmatically identified, browsers and assistive technologies can render the text more accurately; screen readers can use the correct pronunciation; visual browsers can display the correct characters; media players can show captions correctly; and
automated translation is enabled. All users find it easier to understand the page's content. See : <html> element does not have a valid value for its [lang] attribute html-lang-valid", + "titleData": "{0} <html> element(s) do not have a valid value for its [lang] attribute" + }, + "imageAlt": { + "title": "Image elements have `[alt]` attributes", + "description": "Lighthouse flags <img> elements that don't have alt attributes.", + "correction": "Because assistive technologies can't interpret an image directly, they rely on alternative text to communicate the image's meaning to users. If an image has (non-empty) alternative text, the image is identified as meaningful, and its alternative text is presented to the user. If an image has an empty alt attribute, the image is identified as decorative and ignored. If an image has no alternative text at all, the image is presumed to be meaningful, and its filename is likely to be presented to the user. See : Image elements do not have [alt] attributes image-alt", + "titleData": "{0} image element(s) do not have [alt] attributes" + }, + "label": { + "title": "Form elements have associated labels", + "description": "Lighthouse flags form elements that don't have associated labels.", + "correction": "A form control is an interactive HTML element used for user input. Form controls include buttons, checkboxes, text fields, color pickers, and more. An accessible name is a word or phrase coded in a way that assistive technologies can associate it with a specific user interface object.
Assistive technologies can then refer to the object by its name, not just by type (role). When a form control doesn't have an accessible name, people who use assistive technologies have no way of knowing its specific purpose. See : Form elements do not have associated labels label", + "titleData": "{0} element(s) have no associated labels" + }, + "linkName": { + "title": "Links have a discernible name", + "description": "Lighthouse flags links that don't have discernible names.", + "correction": "An accessible name is a word or phrase coded in a way that assistive technologies can associate it with a specific user interface object. Assistive technologies can then refer to the object by name, not just by type. When a link doesn't have an accessible name, people who use assistive technologies have no way of knowing its purpose. See : Links do not have a discernible name link-name", + "titleData": "{0} link(s) do not have a discernible name" + }, + "list": { + "title": "Lists contain only `<li>` elements and script supporting elements", + "description": "Lighthouse flags lists that contain content elements that shouldn't be in a list.", + "correction": "In a properly structured list, all content is contained within list items. Content includes text and other HTML elements. Certain non-content elements are also allowed. When an assistive technology encounters a list that's poorly structured or contains disallowed elements, it might respond in an unexpected way.
As a result, people who use assistive technologies might find it difficult to interpret the list. See : Lists do not contain only < li > elements and script supporting elements (< script > and < template >) list", + "titleData": "{0} list(s) do not contain only <li> elements and script supporting elements (<script> and <template>)" + }, + "listItem": { + "title": "List items (`<li>`) are contained within `<ul>` or `<ol>` parent elements", + "description": "Lighthouse flags list items (<li>) that aren't contained in <ul> or <ol> parent elements.", + "correction": "In a properly structured list, all list items (< li > elements) are contained by a < ul >, < ol >, or < menu > parent element. When an assistive technology encounters a list that's poorly structured, it might respond in an unexpected way. As a result, people who use assistive technologies might find it difficult to interpret the list. See : List items (< li >) are not contained within < ul > or < ol > parent elements listitem", + "titleData": "{0} list item(s) <li> are not contained within <ul> or <ol> parent elements" + }, + "tabIndex": { + "title": "No element has a `[tabindex]` value greater than 0", + "description": "Lighthouse flags elements that have a tabindex value greater than 0.", + "correction": "A value greater than 0 implies an explicit navigation ordering. Although technically valid, this often creates frustrating experiences for users who rely on assistive technologies. See : Some elements have a [tabindex] value greater than 0", + "titleData": "{0} element(s) have a [tabindex] value greater than 0" + }, + "tdHeadersAttr": { + "title": "Td Headers Attributes", + "description": "Lighthouse flags tables that have more than one table header per column.", + "correction": "In a table, a header cell and a data cell are programmatically related if they are coded in a way that assistive technologies can accurately determine their relationship.
When a data cell has a headers attribute that points to a cell in a different table, the programmatic relationship isn't defined in a way that assistive technologies can recognize. As a result, assistive technology users can't tell which header cell goes with a given data cell. See : Cells in a < table > element that use the [headers] attribute refer to an element ID not found within the same table td-headers-attr", + "titleData": "{0} cell(s) in a <table> element that use the [headers] attribute refer to an element id not found within the same table" + }, + "validLang": { + "title": "[lang] attributes have a valid value", + "description": "Lighthouse flags elements that have a lang attribute with an invalid value.", + "correction": "Sometimes a web page written in one language has a passage in a different language. When the language of such a passage is correctly identified (by a lang attribute on the containing element), browsers and assistive technologies can render the text more accurately; screen readers can use the correct pronunciation; visual browsers can display the correct characters; and media players can show captions correctly. All users find it easier to understand the content. See : [lang] attributes do not have a valid value valid-lang", + "titleData": "{0} element(s) do not have a valid lang attribute" + }, + "ariaInputFieldName": { + "title": "ARIA input fields have accessible names", + "description": "Lighthouse flags custom ARIA items whose names aren't accessible to assistive technologies.", + "correction": "When an input field doesn't have an accessible name, screen readers announce it with a generic name, making it unusable for users who rely on screen readers.
See : ARIA items do not have accessible names aria-input-field-name", + "titleData": "{0} ARIA input field(s) do not have accessible names" + }, + "ariaMeterName": { + "title": "ARIA `meter` elements have accessible names", + "description": "Lighthouse flags custom ARIA items whose names aren't accessible to assistive technologies.", + "correction": "When an element doesn't have an accessible name, screen readers announce it with a generic name, making it unusable for users who rely on screen readers. An ARIA meter is a custom control corresponding to the HTML < meter > element. A meter represents either a scalar value within a known range, or a fractional value. For example, a meter might represent the unused portion of total storage capacity. An accessible name is a word or phrase coded in a way that assistive technologies can associate it with a specific user interface object. Assistive technologies can then refer to the object by name, not just by type. When an ARIA meter doesn't have an accessible name, people who use assistive technologies have no way of knowing its purpose. See : ARIA items do not have accessible names aria-meter-name", + "titleData": "{0} ARIA meter element(s) do not have accessible names" + }, + "ariaProgressbarName": { + "title": "ARIA `progressbar` elements have accessible names", + "description": "Lighthouse flags custom ARIA items whose names aren't accessible to assistive technologies.", + "correction": "When a `progressbar` element doesn't have an accessible name, screen readers announce it with a generic name, making it unusable for users who rely on screen readers. An ARIA progressbar is a custom control corresponding to the HTML < progress > element. A progressbar represents progress on a task that takes a long time to complete. An accessible name is a word or phrase coded in a way that assistive technologies can associate it with a specific user interface object.
Assistive technologies can then refer to the object by name, not just by type. When an ARIA progressbar doesn't have an accessible name, people who use assistive technologies have no way of knowing its purpose. See : ARIA items do not have accessible names aria-progress-bar-name", + "titleData": "{0} ARIA progressbar element(s) do not have accessible names" + }, + "ariaRequiredChildren": { + "title": "ARIA Required children", + "description": "Lighthouse flags ARIA roles that don't have the required child roles.", + "correction": "An ARIA role attribute can be added to an element to instruct assistive technologies to treat the element as something other than its native HTML element type. For example, a < ul > element with role='listbox' is to be treated as a listbox control, not as a static list. Some ARIA 'parent' roles identify composite controls that always include managed controls, identified by 'child' roles. For example, role='listbox' identifies a composite control that manages a set of managed controls identified by role='option'. People who use assistive technologies might find it difficult or impossible to use a composite control if its managed controls lack the required child role. See : Elements with an ARIA [role] that require children to contain a specific [role] are missing some or all of those required children aria-required-children", + "titleData": "{0} element(s) have missing child roles" + }, + "ariaRequiredParent": { + "title": "`[role]`s are contained by their required parent element", + "description": "Lighthouse flags ARIA child roles that aren't contained by the required parent.", + "correction": "An ARIA role attribute can be added to an element to instruct assistive technologies to treat the element as something other than its native HTML element type.
For example, an < li > element with role='option' is to be treated as a selectable option in a listbox control, not as a static list item. Some ARIA 'child' roles identify managed controls that are always part of a larger composite control, identified by a 'parent' role. For example, role='option' identifies a child control that is managed by a parent control identified by role='listbox'. People who use assistive technologies might find it difficult or impossible to use a child control if its managing control lacks the required parent role. See : [role]s are not contained by their required parent element aria-required-parent", + "titleData": "{0} element(s) have missing parent roles" + }, + "ariaToggleFieldName": { + "title": "ARIA toggle fields have accessible names", + "description": "Lighthouse flags custom ARIA items whose names aren't accessible to assistive technologies.", + "correction": "When a toggle field doesn't have an accessible name, screen readers announce it with a generic name, making it unusable for users who rely on screen readers. See : ARIA items do not have accessible names aria-toggle-field-name", + "titleData": "Not all ARIA toggle fields have accessible names", + "titleDataSuccess": "ARIA toggle fields have accessible names" + }, + "ariaTooltipName": { + "title": "ARIA `tooltip` elements have accessible names", + "description": "Lighthouse flags custom ARIA items whose names aren't accessible to assistive technologies.", + "correction": "An ARIA tooltip is a contextual popup with text describing an interface element. The tooltip typically becomes visible when the mouse hovers over, or focus is received by, the owning element. An accessible name is a word or phrase coded in a way that assistive technologies can associate it with a specific user interface object. Assistive technologies can then refer to the object by name, not just by type.
When an ARIA tooltip doesn't have an accessible name, people who use assistive technologies have no way of knowing its purpose. See : ARIA items do not have accessible names aria-tooltip-name", + "titleData": "Not all ARIA tooltip elements have accessible names", + "titleDataSuccess": "ARIA tooltip elements have accessible names" + }, + "ariaTreeitemName": { + "title": "ARIA `treeitem` elements have accessible names", + "description": "Lighthouse flags custom ARIA items whose names aren't accessible to assistive technologies.", + "correction": "When an element doesn't have an accessible name, screen readers announce it with a generic name, making it unusable for users who rely on screen readers. See : ARIA items do not have accessible names", + "titleData": "Not all ARIA treeitem elements have accessible names", + "titleDataSuccess": "ARIA treeitem elements have accessible names" + }, + "buttonName": { + "title": "Buttons have an accessible name", + "description": "Lighthouse flags buttons that don't have text content or an aria-label property.", + "correction": "An accessible name is a word or phrase coded in a way that assistive technologies can associate it with a specific user interface object. Assistive technologies can then refer to the object by name, not just by type. When a button doesn't have an accessible name, people who use assistive technologies have no way of knowing its purpose. See : Buttons do not have an accessible name button-name", + "titleData": "{0} button(s) do not have an accessible name" + }, + "definitionList": { + "title": "Definition List", + "description": "Lighthouse flags <dl> elements that don't contain properly ordered <dt> and <dd> groups, <script>, or <template> elements.", + "correction": "A definition list is a list of terms (words or phrases), and their definitions.
A definition list can contain only certain element types, and it requires a specific structure. When an assistive technology encounters a definition list that's poorly structured or contains invalid elements, it might respond in an unexpected way. As a result, people who use assistive technologies might find it difficult to interpret the list. See : < dl > elements do not contain only properly ordered < dt > and < dd > groups, < script >, or < template > elements definition-list", + "titleData": "{0} `<dl>`'s don't contain only properly-ordered `<dt>` and `<dd>` groups, `<script>`, `<template>` or `<div>` elements" + }, + "dlItem": { + "title": "Definition list items are wrapped in `<dl>` elements", + "description": "Lighthouse reports when definition list items are not wrapped in <dl> elements.", + "correction": "A definition list is a list of terms (words or phrases), and their definitions. The < dt > and < dd > elements must be contained by a < dl > element. When an assistive technology encounters a definition list that's poorly structured, it might respond in an unexpected way. As a result, people who use assistive technologies might find it difficult to interpret the list. See : Definition list items are not wrapped in < dl > elements dlitem", + "titleData": "{0} definition list item(s) are not wrapped in <dl> elements" + }, + "formFieldMultipleLabels": { + "title": "No form fields have multiple labels", + "description": "Lighthouse flags form elements that have more than one label.", + "correction": "Form fields with multiple labels can be confusingly announced by assistive technologies like screen readers, which use either the first, the last, or all of the labels.
See : Form fields have multiple labels", + "titleData": "{0} form field(s) have multiple labels" + }, + "frameTitle": { + "title": "`<frame>` or `<iframe>` elements have a title", + "description": "Lighthouse flags <frame> and <iframe> elements that don't have titles.", + "correction": "A < frame > or < iframe > is used to embed one HTML document within another. An accessible name is a word or phrase coded in a way that assistive technologies can associate it with a specific user interface object. Assistive technologies can then refer to the object by name, not just by type. People with good vision can glance at a < frame > or < iframe > element to get a good idea of its content. People who use assistive technologies rely on the frame's accessible name to determine whether it contains information relevant to their current needs. See : < frame > or < iframe > elements do not have a title frame-title", + "titleData": "{0} <frame> or <iframe> element(s) do not have a title" + }, + "inputImageAlt": { + "title": "`<input type=\"image\">` elements have `[alt]` text", + "description": "Lighthouse flags < input type='image' > elements that don't have alt text.", + "correction": "An image button is an < input > element with type='image'. Alternative text is a word or phrase that (1) is coded in a way that assistive technologies can associate it with a specific non-text object, and (2) conveys the same information as the non-text object. Because assistive technologies can't interpret an image directly, they rely on alternative text to communicate the image button's purpose.
When an image button doesn't have alternative text, people who use assistive technologies have no way of knowing its purpose. See : < input type='image' > elements do not have [alt] text input-image-alt", + "titleData": "{0} < input type='image' > element(s) do not have [alt] text" + }, + "metaRefresh": { + "title": "The document does not use `<meta http-equiv=\"refresh\">`", + "description": "Lighthouse flags pages that contain a <meta> tag with the http-equiv='refresh' attribute.", + "correction": "Using http-equiv='refresh' in a < meta > element causes a web page to refresh automatically at a specified time interval. An automatic page refresh can be disorienting. If a refresh causes input focus to move unexpectedly back to its original state, it can be especially frustrating for people who use screen readers and other keyboard users. See : The document uses < meta http-equiv='refresh' > meta-refresh", + "titleData": "At least one document uses <meta http-equiv='refresh'>", + "titleDataSuccess": "No document uses <meta http-equiv='refresh'>" + }, + "metaViewport": { + "title": "Meta Viewport", + "description": "Lighthouse flags pages that disable browser zooming.", + "correction": "Using content='user-scalable=no' in a < meta name='viewport' > element disables zooming in some browsers. Users are forced to view the text at the specified size. Most people find it easier to read text when it is sufficiently large. People with visual disabilities, low vision, or limited color perception are likely to find text unreadable when it's too small. See : [user-scalable='no'] is used in the < meta name='viewport' > element or the [maximum-scale] attribute is less than 5 meta-viewport", + "titleData": "{0} `[user-scalable=\"no\"]` is used in the `<meta name=\"viewport\">` element or the `[maximum-scale]` attribute is less than 5.
" + }, + "objectAlt": { + "title": "`<object>` elements have `[alt]` text", + "description": "Lighthouse flags <object> elements that don't have alternative text.", + "correction": "An < object > element is used to embed multimedia content in a web page. It can also be used to embed one web page inside another. Alternative text is a word or phrase that (1) is coded in a way that assistive technologies can associate it with a specific non-text object, and (2) conveys the same information as the non-text object. Because assistive technologies can't interpret objects directly, they rely on alternative text to communicate the meaning of non-text content to users. See : < object > elements do not have alt text object-alt", + "titleData": "{0} <object> element(s) do not have [alt] text" + }, + "thHasDataCells": { + "title": "<th> elements have data cells they describe.", + "description": "Lighthouse flags <th> elements and elements with [role='columnheader'/'rowheader'] that don't have the data cells they describe.", + "correction": "When people with good vision see a table with a row or column header that has no associated data cells, they can tell at a glance that the data is missing. People who use assistive technologies must explore a table deliberately to discover its contents; they are likely to have difficulty interpreting a table with missing data cells. See : < th > elements and elements with [role='columnheader'/'rowheader'] do not have data cells they describe th-has-data-cells", + "titleData": "{0} <th> elements and elements with [role='columnheader'/'rowheader'] do not have the data cells they describe" + }, + "videoCaption": { + "title": "Video Caption", + "description": "Lighthouse flags <video> elements that are missing a <track> element with the attribute kind='captions'.", + "correction": "When a video provides a caption, it is easier for deaf and hearing-impaired users to access its information.
See : < video > elements do not contain a < track > element with [kind=\"captions\"]", + "titleData": "{0} <video> element(s) are missing a <track> element with the attribute kind='captions'" + }, + "accessKeys": { + "title": "`[accesskey]` values are unique", + "description": "Lighthouse flags pages with duplicate access keys.", + "correction": "Access keys let users quickly focus a part of the page. For proper navigation, each access key must be unique. See : [accesskey] values are not unique", + "titleData": "{0} value(s) are not unique" + } +} \ No newline at end of file diff --git a/EcoSonar-API/utils/bestPractices/lighthousePerformanceData.json b/EcoSonar-API/utils/bestPractices/lighthousePerformanceData.json new file mode 100644 index 0000000..c6e9052 --- /dev/null +++ b/EcoSonar-API/utils/bestPractices/lighthousePerformanceData.json @@ -0,0 +1,169 @@ +{ + "viewport": { + "title": "Has a Viewport meta tag", + "description": "Lighthouse flags pages without a viewport meta tag.", + "correction": "A `< meta name=\"viewport\" >` not only optimizes your app for mobile screen sizes, but also prevents a 300 millisecond delay to user input. Government Regulation: RGESN 1.6 : Does the digital service adapt to different types of terminal displays? See : Does not have a < meta name=\"viewport\" > tag with width or initial-scale Viewport meta tag Lighthouse: Use a < meta name=\"viewport\" > tag with width or initial-scale", + "titleData": "have a missing `<meta name=\"viewport\">` tag", + "titleDataSuccess": "have viewport meta tag" + }, + "serverResponseTime": { + "title": "Initial server response time was short", + "description": "This audit fails when the browser waits more than 600 ms for the server to respond to the main document request.", + "correction": "Keep the server response time for the main document short because all other requests depend on it.
Government Regulation: RGESN 6.3 : Does the digital service use caching mechanisms for all transferred content under its control? RGESN 7.1 : Does the digital service use a server caching system for the most used data? See : Reduce server response times (TTFB) Lighthouse: Reduce initial server response time 7 Ways to Reduce Server Response Time 8 Ways to Effectively Reduce Server Response Time", + "titleData": "Root document took {0}ms on average" + }, + "mainthreadWorkBreakdown": { + "title": "Minimize main-thread work", + "description": "Lighthouse flags pages that keep the main thread busy for longer than 4 seconds during load.", + "correction": "Consider reducing the time spent parsing, compiling and executing JS. You may find delivering smaller JS payloads helps with this. Government Regulation: RGESN 6.8 : Does the digital service avoid triggering the loading of unused assets and content for each feature? See : Minimize main thread work Lighthouse: Minimize main-thread work How do I minimize main thread work? How to minimize main thread work in React Component", + "titleData": "{0}s on average" + }, + "bootupTime": { + "title": "Reduce JavaScript execution time", + "description": "Lighthouse shows a warning when JavaScript execution takes longer than 2 seconds. The audit fails when execution takes longer than 3.5 seconds.", + "correction": "Consider reducing the time spent parsing, compiling and executing JS. You may find delivering smaller JS payloads helps with this. Government Regulation: RGESN 6.8 : Does the digital service avoid triggering the loading of unused assets and content for each feature?
See : Reduce JavaScript execution time Lighthouse: Reduce JavaScript execution time How to Reduce Javascript Execution Time Reduce script evaluation time in 4 steps", + "titleData": "{0}s on average" + }, + "fontDisplay": { + "title": "Ensure text remains visible during webfont load", + "description": "Lighthouse flags any font URLs that may flash invisible text.", + "correction": "Leverage the font-display CSS feature to ensure text is user-visible while webfonts are loading. See : Ensure text remains visible during webfont load Lighthouse: Ensure text remains visible during webfont load How to Ensure Text Remains Visible During Webfont Load", + "titleData": "text is invisible during at least one page load", + "titleDataSuccess": "no text is invisible" + }, + "thirdPartySummary": { + "title": "Minimize third-party usage", + "description": "Reduce the presence of third-party code.", + "correction": "Third-party code can significantly impact load performance. Limit the number of redundant third-party providers and try to load third-party code after your page has primarily finished loading. Best practices n°82 from '115 bonnes pratiques d'écoconception web v4' See : Loading Third-Party JavaScript Lighthouse: Reduce the impact of third-party code Reduce the Impact of Third-Party Code", + "titleData": "Third-party code blocked the main thread for {0}ms on average" + }, + "thirdPartyFacades": { + "title": "Some third-party resources can be lazy loaded with a facade", + "description": "Some third-party embeds can be lazy loaded. Consider replacing them with a facade until they are required.", + "correction": "Some third-party embeds can be lazy loaded. Consider replacing them with a facade until they are required. Government Regulation: RGESN 6.8 : Does the digital service avoid triggering the loading of unused assets and content for each feature?
See : Lazy load third-party resources with facades Lighthouse: Lazy load third-party resources with facades Lazy load third-party resources with facades", + "titleData": "{0} facade alternative(s) available" + }, + "lcpLazyLoaded": { + "title": "Don't lazy load Largest Contentful Paint image", + "description": "Above-the-fold images that are lazily loaded render later in the page lifecycle, which can delay the largest contentful paint.", + "correction": "Lazy-loading is a technique to defer downloading a resource until it's needed, which conserves data and reduces network contention for critical assets. It became a web standard in 2019 and today loading='lazy' for images is supported by most major browsers. That sounds great, but is there such a thing as too much lazy loading? What we found is that lazy-loading can be an amazingly effective tool for reducing unneeded image bytes, but overuse can negatively affect performance. Concretely, our analysis shows that more eagerly loading images within the initial viewport, while liberally lazy-loading the rest, can give us the best of both worlds: fewer bytes loaded and improved Core Web Vitals. Government Regulation: RGESN 6.6 : Does the digital service provide a progressive loading mechanism for graphics and media that require it? See : The performance effects of too much lazy-loading Lighthouse: Don't lazy load Largest Contentful Paint image", + "titleData": "At least one largest contentful image was lazy loaded", + "titleDataSuccess": "No largest contentful image was lazy loaded" + }, + "nonCompositedAnimations": { + "title": "Avoid non-composited animations", + "description": "Animations which are not composited can be janky and increase Cumulative Layout Shift.", + "correction": "Animations which are not composited can be janky and increase CLS.
See : Avoid non-composited animations Lighthouse: Avoid non-composited animations Non-composited animations", + "titleData": "{0} animated elements found" + }, + "domSize": { + "title": "Avoids an excessive DOM size", + "description": "Lighthouse warns when the body element has more than ~800 nodes and errors when it has more than ~1,400 nodes. Be careful, EcoSonar rules are stricter: you should have fewer than 475 nodes to reach a B Score.", + "correction": "A large DOM will increase memory usage, cause longer style calculations, and produce costly layout reflows. Best practices n°12 from '115 bonnes pratiques d'écoconception web v4' See : Avoid an excessive DOM size Reduce the scope and complexity of style calculations Minimizing browser reflow", + "titleData": "{0} elements on average" + }, + "usesLongCacheTtl": { + "title": "Serve static assets with an efficient cache policy", + "description": "Lighthouse flags all static resources that aren't cached.", + "correction": "A long cache lifetime can speed up repeat visits to your page. Government Regulation: RGESN 6.9 : Does the digital service use client-side storage of some resources to avoid unnecessary network exchanges? See : Serve static assets with an efficient cache policy Lighthouse: Serve static assets with an efficient cache policy How to Serve Assets With an Efficient Cache Policy on WordPress", + "titleData": "{0} resources found" + }, + "usesResponsiveImages": { + "title": "Properly size images", + "description": "The Opportunities section of your Lighthouse report lists all images in your page that aren't appropriately sized, along with the potential savings in kibibytes (KiB).", + "correction": "Serve images that are appropriately-sized to save cellular data and improve load time. Government Regulation: RGESN 5.2 : Does the digital service offer image content whose level of compression is appropriate for the content and viewing context?
See : Properly size images Optimize bitmaps How to properly size images", + "titleData": "Potential savings of {0} KiB" + }, + "offscreenImages": { + "title": "Defer offscreen images (or lazy-loading)", + "description": "The Opportunities section of your Lighthouse report lists all offscreen or hidden images in your page along with the potential savings in kibibytes (KiB). Consider lazy-loading these images after all critical resources have finished loading to lower Time to Interactive.", + "correction": "Consider lazy-loading offscreen and hidden images after all critical resources have finished loading to lower time to interactive.Government Regulation: RGESN 6.6 : Le service numérique propose-t-il un mécanisme de chargement progressif pour les éléments graphiques et les médias le nécessitant ? See : Defer offscreen images Learn how to fix 'defer offscreen images' Lighthouse: Defer offscreen images How to Defer Offscreen Images", + "titleData": "Potential savings of {0} KiB" + }, + "unusedCssRules": { + "title": "Reduce unused CSS", + "description": "The Opportunities section of your Lighthouse report lists all stylesheets with unused CSS with a potential savings of 2 KiB or more. Remove the unused CSS to reduce unnecessary bytes consumed by network activity.", + "correction": "Reduce unused rules from stylesheets and defer CSS not used for above-the-fold content to decrease bytes consumed by network activity.Government Regulation: RGESN 6.8 : Does the digital service avoid triggering the loading of unused assets and content for each feature? See : Remove unused CSS How Do You Remove Unused CSS From a Site? 
4 Ways to Remove Unused CSS", + "titleData": "Potential savings of {0} KiB" + }, + "unusedJavascript": { + "title": "Reduce unused JavaScript", + "description": "Lighthouse flags every JavaScript file with more than 20 kibibytes of unused code.", + "correction": "Reduce unused JavaScript and defer loading scripts until they are required to decrease bytes consumed by network activity. Government Regulation: RGESN 6.8 : Does the digital service avoid triggering the loading of unused assets and content for each feature? See :Remove unused JavaScript A Lifehack for removing unused JS/CSS.. or just unminify How to Remove Unused JavaScript", + "titleData": "Potential savings of {0} KiB" + }, + "usesOptimizedImages": { + "title": "Efficiently encode images", + "description": "The Opportunities section of your Lighthouse report lists all unoptimized images, with potential savings in kibibytes (KiB).", + "correction": "Optimized images load faster and consume less cellular data.Government Regulation: RGESN 5.2 Does the digital service offer image content whose level of compression is appropriate for the content and viewing context? See : Efficiently encode images Lighthouse: Efficiently encode images", + "titleData": "Potential savings of {0} KiB" + }, + "modernImageFormats": { + "title": "Serve images in modern formats", + "description": "The Opportunities section of your Lighthouse report lists all images in older image formats, showing potential savings gained by serving AVIF versions of those images.", + "correction": "Image formats like WebP and AVIF often provide better compression than PNG or JPEG, which means faster downloads and less data consumption.Government Regulation: RGESN 5.1 : Does the digital service use a file format adapted to the content and viewing context of each image content? 
See : Serve images in modern formats How to Serve Next-Gen Image Formats in Modern Browsers Using Modern Image Formats: AVIF And WebP", + "titleData": "Potential savings of {0} KiB" + }, + "usesTextCompression": { + "title": "Enable text compression", + "description": "The Opportunities section of your Lighthouse report lists all text-based resources that aren't compressed.", + "correction": "Text-based resources should be served with compression (gzip, deflate or brotli) to minimize total network bytes.Government Regulation: RGESN 6.4 : Has the digital service implemented compression techniques on all transferred resources under its control? See :Enable text compression Lighthouse: Enable text compression How to Enable GZIP Compression for Faster Web Pages Enable Text Compression", + "titleData": "Potential savings of {0} KiB" + }, + "usesHttp2": { + "title": "Use HTTP/2", + "description": "Lighthouse lists all resources not served over HTTP/2.", + "correction": "HTTP/2 offers many benefits over HTTP/1.1, including binary headers and multiplexing.Regulation: Best practices n°83 from '115 bonnes pratiques d'écoconception web v4' See : Does not use HTTP/2 for all of its resources Lighthouse: Use HTTP/2 for all resources Use HTTP/2 for all of its resources", + "titleData": "{0} requests not served via HTTP/2" + }, + "efficientAnimatedContent": { + "title": "Use video formats for animated content", + "description": "The Opportunities section of your Lighthouse report lists all animated GIFs, along with estimated savings in seconds achieved by converting these GIFs to video.", + "correction": "Large GIFs are inefficient for delivering animated content. Consider using MPEG4/WebM videos for animations and PNG/WebP for static images instead of GIF to save network bytes. 
See : Use video formats for animated content Lighthouse: Use video formats for animated content", + "titleData": "Potential savings of {0} KiB" + }, + "legacyJavascript": { + "title": "Avoid serving legacy JavaScript to modern browsers", + "description": "Lighthouse identifies the polyfills and transforms that should not be present if using the module/nomodule pattern.", + "correction": "Polyfills and transforms enable legacy browsers to use new JavaScript features. However, many aren't necessary for modern browsers. For your bundled JavaScript, adopt a modern script deployment strategy using module/nomodule feature detection to reduce the amount of code shipped to modern browsers, while retaining support for legacy browsers. See : Lighthouse: Avoid serving legacy JavaScript to modern browsers Deploying ES2015+ Code in Production Today", + "titleData": "Potential savings of {0} KiB" + }, + "totalByteWeight": { + "title": "Avoid enormous network payloads", + "description": "Lighthouse shows the total size in kibibytes (KiB) of all resources requested by your page. The largest requests are presented first.", + "correction": "Large network payloads cost users real money and are highly correlated with long load times.Government Regulation: RGESN 6.4 : Has the digital service implemented compression techniques on all transferred resources under its control? See : Avoid enormous network payloads Lighthouse: Avoid enormous network payloads", + "titleData": "Total size was {0} KiB in average" + }, + "noDocumentWrite": { + "title": "Avoids document.write()", + "description": "Lighthouse flags calls to document.write() that weren't blocked by Chrome.", + "correction": "For users on slow connections, external scripts dynamically injected via `document.write()` can delay page load by tens of seconds.Government Regulation: RGESN 4.1 : Is the digital service usable via a low-speed connection? 
See : Uses document.write() Intervening against document.write()", + "titleData": "have at least one document.write()", + "titleDataSuccess": "have no document.write()" + }, + "layoutShiftElements": { + "title": "Avoid large layout shifts", + "description": "These DOM elements contribute most to the Cumulative Layout Shift of the page.", + "correction": "Large layout shifts can create a frustrating experience for your visitors as they make your page appear visually jarring, as page elements appear suddenly, move around, and affect how your visitors interact with the page. Avoiding large layout shifts is essential in creating a smooth and streamlined experience for your visitors. Regarding image dimensions, you should not be resizing image within your browser. Images should have by default the width and height that you want to set up. See : Lighthouse: Avoid large layout shifts Optimize Cumulative Layout Shift", + "titleData": "{0} elements found" + }, + "usesPassiveEventListeners": { + "title": "Does not use passive listeners to improve scrolling performance", + "description": "Consider marking your touch and wheel event listeners as `passive` to improve your page's scroll performance.", + "correction": "Consider marking your touch and wheel event listeners as `passive` to improve your page's scroll performance. 
See : Use passive listeners to improve scrolling performance Improving scroll performance with passive event listeners", + "titleData": "at least one page does not have passive listeners", + "titleDataSuccess": "have passive listeners" + }, + "duplicatedJavascript": { + "title": "Remove duplicate modules in JavaScript bundles", + "description": "If the wasted bandwidth exceeds 1 KB, this audit triggers.", + "correction": "Remove large, duplicate JavaScript modules from bundles to reduce unnecessary bytes consumed by network activity.Government Regulation: RGESN 6.8 : Does the digital service avoid triggering the loading of unused assets and content for each feature? See : Lighthouse: Remove duplicate modules in JavaScript bundles Reduce webpack bundle size by eliminating duplicated", + "titleData": "Potential savings of {0} KiB" + }, + "unminifiedJavascript": { + "title": "Minify JavaScript", + "description": "The Opportunities section of your Lighthouse report lists all unminified JavaScript files, along with the potential savings in kibibytes (KiB) when these files are minified.", + "correction": "Minifying JavaScript files can reduce payload sizes and script parse time.Best practices n°77 from '115 bonnes pratiques d'écoconception web v4' See : Minifiy Javascript Minify Javascript files Minification", + "titleData": "Potential savings of {0} KiB" + } +} \ No newline at end of file diff --git a/EcoSonar-SonarQube/README.md b/EcoSonar-SonarQube/README.md index 609d82a..5f7d26e 100644 --- a/EcoSonar-SonarQube/README.md +++ b/EcoSonar-SonarQube/README.md @@ -1,8 +1,10 @@ # Plugin SonarQube EcoSonar -## Introduction -This plugin aims to embed EcoSonar Audits, Recommendations as well as Configuration. +## Introduction + +This plugin aims to embed EcoSonar Audits, Recommendations as well as Configuration. 
 It fulfills three purposes :
+
 - enable automatic trigger of EcoSonar Analysis each time a Sonarqube analysis is done
 - static code analysis with green coding rules implemented by EcoCode project
 - add EcoSonar audit reports directly into Sonarqube projet User Interface
@@ -10,29 +12,18 @@ It fulfills three purposes :
 ## Getting Started
 
 ### Prerequisites
-- Sonarqube- minimum version 9.4
-https://docs.sonarqube.org/latest/setup/install-server/
-https://docs.sonarqube.org/latest/setup/install-cluster/
-No constraint on the edition type. Please check with your infrastructure team which edition are you allowed to use.
-- If Sonarqube version is 9.9 or above, choose Java– version 17, otherwise Java – version 11
-- Maven - 3.8.3
-
-### Build the SonarQube Plugin
-
-#### EcoSonar V2.3 and below
-To trigger and retrieve EcoSonar audits, you need to set up in the plugin configuration the URL to reach the EcoSonar API.
-Please change in both files `src/main/java/com/ls/api/GreenITAnalysis.java` and `src/main/js/config/axiosConfiguration.js`, the parameter called `baseUrlHosted` to set it with the EcoSonar API Server you use.
-
-To build the plugin JAR file:
-
-```
-mvn clean package
-```
+- Sonarqube - minimum version 9.4
+  https://docs.sonarqube.org/latest/setup/install-server/
+  https://docs.sonarqube.org/latest/setup/install-cluster/
+  No constraint on the edition type. Please check with your infrastructure team which edition you are allowed to use.
+- Java : version 17 if Sonarqube version is 9.9 or above, otherwise version 11
+- Maven - 3.8.3
 
-#### EcoSonar V3.0 and above
+### Build the SonarQube Plugin related to EcoSonar
 
-To build the plugin JAR file:
+To build the plugin JAR file, first you need to retrieve the URL of the deployed server for EcoSonar API.
+
+Then run the following commands:
 
 For Windows:
 
@@ -48,20 +39,20 @@ export REACT_APP_BASE_URL_ECOSONAR_API=#EcoSonar-API-URL
 mvn clean package -Durl=#EcoSonar-API-URL
 ```
 
-EcoSonar-API-URL should be replaced in local by `http://localhost:3000` and by the EcoSonar API URL for a deployed version.
+If you are running EcoSonar locally, EcoSonar-API-URL should be by default `http://localhost:3000`.
 
-### Install Sonarqube Plugins Manually
+### Install Sonarqube Plugins (EcoSonar + Ecocode) manually
 
-1. Copy the file located at the following path `target/ecosonar-X-SNAPSHOT.jar`.
-2. Go to your Sonarqube folder `extensions/plugins/` and paste the JAR.
-3. Retrieve all JAR files available in the `ecocode` folder (there should be 6, one by language):
-4. Go to your Sonarqube folder `extensions/plugins/` and paste the JAR files to add the EcoCode Sonarqube plugins.
+1. Copy the file located at the following path `EcoSonar-SonarQube/target/ecosonar-X-SNAPSHOT.jar`.
+2. Go to your Sonarqube folder `extensions/plugins/` and paste the JAR.
+3. Retrieve all JAR files available in the `EcoSonar-SonarQube/ecocode` folder (there should be 6, one per language):
+4. Go to your Sonarqube folder `extensions/plugins/` and paste the JAR files to add the EcoCode Sonarqube plugins.
 
 To finally launch Sonarqube with the plugin, run the shell script: `bin/windows-x86-64/StartSonar.bat`.
 
- ![Ecosonar Plugin Sonarqube](../images/ecosonar-plugin.webp)
+![Ecosonar Plugin Sonarqube](../images/ecosonar-plugin.webp)
 
-The Sonarqube instance startup logs are located in the file `logs/web.log`
+The Sonarqube instance startup logs are located in the file `logs/web.log`
 
 Official documentation about installing a SonarQube plugin: https://docs.sonarqube.org/latest/setup/install-plugin/.
@@ -73,4 +64,4 @@ Set up a CI/CD pipeline to build the executable and automatically add it in the
 
 Check this link : https://docs.sonarqube.org/latest/extend/developing-plugin/
 
-Otherwise feel free to use our code as example with respect of licence.
\ No newline at end of file
+Otherwise feel free to use our code as example with respect of licence.
diff --git a/EcoSonar-SonarQube/package.json b/EcoSonar-SonarQube/package.json
index 865c2a3..88257b1 100644
--- a/EcoSonar-SonarQube/package.json
+++ b/EcoSonar-SonarQube/package.json
@@ -1,6 +1,6 @@
 {
   "name": "ecosonar-plugin",
-  "version": "3.3.0",
+  "version": "3.4",
   "description": "Ecodesign and accessibility tool to help developpers minimize carbon footprint of their web-application",
   "main": "index.js",
   "scripts": {
@@ -13,7 +13,7 @@
   "license": "GNU",
   "dependencies": {
     "apexcharts": "^3.37.0",
-    "axios": "^1.3.2",
+    "axios": "^1.6.2",
     "classnames": "^2.3.2",
     "file-saver": "^2.0.5",
     "focus-trap-react": "^10.0.2",
diff --git a/EcoSonar-SonarQube/pom.xml b/EcoSonar-SonarQube/pom.xml
index 5d6c7d0..c39ca7a 100644
--- a/EcoSonar-SonarQube/pom.xml
+++ b/EcoSonar-SonarQube/pom.xml
@@ -6,7 +6,7 @@
 
   <groupId>com.ls</groupId>
   <artifactId>ecosonar</artifactId>
-  <version>3.3</version>
+  <version>3.4</version>
 
   <packaging>sonar-plugin</packaging>
 
@@ -20,7 +20,7 @@
     <dependency>
       <groupId>org.sonarsource.sonarqube</groupId>
       <artifactId>sonar-plugin-api</artifactId>
-      <version>8.0</version>
+      <version>9.4.0.54424</version>
       <scope>provided</scope>
     </dependency>
   </dependencies>
diff --git a/EcoSonar-SonarQube/src/main/js/ecosonar_bestpractices_page/components/BestPracticesBody.js b/EcoSonar-SonarQube/src/main/js/ecosonar_bestpractices_page/components/BestPracticesBody.js
index f4d8052..0efa9f8 100644
--- a/EcoSonar-SonarQube/src/main/js/ecosonar_bestpractices_page/components/BestPracticesBody.js
+++ b/EcoSonar-SonarQube/src/main/js/ecosonar_bestpractices_page/components/BestPracticesBody.js
@@ -1,7 +1,7 @@
 import React, { useState }
from 'react'
 import AccordionManager from './Accordion/AccordionManager'
 import BestPracticesFilters from './BestPracticesFilters/BestPracticesFilters'
-import { allTools, auditTypes, defaultSelectedComplianceLevel, greenITTool, lighthouseAccessibility, lighthousePerformanceTool, w3cValidator, setTools } from './BestPracticesFilters/Filters'
+import { allTools, auditTypes, defaultSelectedComplianceLevel, greenITTool, lighthouseAccessibility, lighthousePerformanceTool, setTools, w3cValidator } from './BestPracticesFilters/Filters'
 
 export default function BestPracticesBody (props) {
   const {
diff --git a/EcoSonar-SonarQube/src/main/js/ecosonar_configuration_page/components/ConfigurationPage.js b/EcoSonar-SonarQube/src/main/js/ecosonar_configuration_page/components/ConfigurationPage.js
index 4247f22..d0a9fd4 100644
--- a/EcoSonar-SonarQube/src/main/js/ecosonar_configuration_page/components/ConfigurationPage.js
+++ b/EcoSonar-SonarQube/src/main/js/ecosonar_configuration_page/components/ConfigurationPage.js
@@ -1,6 +1,5 @@
 import React from 'react'
 import { getUrlsConfiguration } from '../../services/configUrlService'
-import { crawl } from '../../services/crawlerService'
 import AddUrlForm from './AddUrlForm'
 import CrawlerPage from './Crawler/CrawlerPage'
 import DeleteUrlForm from './DeleteUrlForm'
@@ -17,10 +16,7 @@ export default class ConfigurationPage extends React.PureComponent {
       error: '',
       indexToDelete: 0,
       urls: [],
-      crawledUrls: [],
-      crawlerLoading: false,
-      displayCrawler: false,
-      hasCrawled: false
+      displayCrawler: false
     }
   }
 
@@ -30,17 +26,11 @@ export default class ConfigurationPage extends React.PureComponent {
     })
     getUrlsConfiguration(this.props.project.key)
       .then((urls) => {
-        this.setState({ urls })
-        this.setState({
-          loading: false
-        })
+        this.setState({ urls, loading: false })
       })
       .catch((result) => {
         if (result instanceof Error) {
-          this.setState({ error: result.message })
-          this.setState({
-            loading: false
-          })
+          this.setState({ error: result.message, loading: false })
         }
       })
   }
@@ -80,27 +70,15 @@ export default class ConfigurationPage extends React.PureComponent {
   addNewUrls = (urlsAdded) => {
     this.setState({
-      urls: this.state.urls.concat(urlsAdded)
-    })
-    this.setState({ crawledUrls: [] })
-    this.setState({ displayCrawler: false })
-    this.setState({ hasCrawled: false })
-    this.setState({ error: '' })
-  }
-
-  setMainUrl = async (url) => {
-    this.setState({ crawlerLoading: true })
-    await crawl(this.state.projectName, url.trim()).then((response) => {
-      this.setState({ crawlerLoading: false })
-      this.setState({ crawledUrls: response })
-      this.setState({ hasCrawled: true })
+      urls: this.state.urls.concat(urlsAdded),
+      crawledUrls: [],
+      displayCrawler: false,
+      error: ''
     })
   }
 
   setDisplayCrawler = () => {
     this.setState({ displayCrawler: !this.state.displayCrawler })
-    this.setState({ crawledUrls: [] })
-    this.setState({ hasCrawled: false })
   }
 
   checkUrl = () => {
@@ -114,12 +92,8 @@ export default class ConfigurationPage extends React.PureComponent {
     return (
       <div className='boxed-group'>
         <CrawlerPage
-          setMainUrl={this.setMainUrl}
-          crawledUrls={this.state.crawledUrls}
           projectName={this.state.projectName}
           addNewUrls={this.addNewUrls}
-          crawlerLoading={this.state.crawlerLoading}
-          hasCrawled={this.state.hasCrawled}
           setDisplayCrawler={this.setDisplayCrawler}
         />
       </div>
@@ -131,30 +105,30 @@ export default class ConfigurationPage extends React.PureComponent {
 
     return (
       <main role='main' aria-hidden='true'>
-        <div className='page' aria-hidden='true'>
-          <div className='page-header' role='banner' aria-label='configuration page presentation'>
-            <h1 className='page-title'>URL Configuration for project {this.state.projectName}</h1>
-            <div className='page-actions' aria-hidden={this.state.openCreate}>
-              <button
-                className='basic-button'
-                disabled={this.state.displayCrawler}
-                onClick={this.handleCreateOpen}
-                type='button'
-                aria-haspopup='dialog'
-                aria-label='add new urls'
-                aria-controls='dialog'
-              >
-                Add new URLs
-              </button>
-              {this.state.openCreate && <AddUrlForm isDisplayed={this.state.openCreate} projectName={this.state.projectName} onClose={this.handleCreateClose} onSubmitSuccess={this.addNewUrls} />}
+      <div className='page' aria-hidden='true'>
+        <div className='page-header' role='banner' aria-label='configuration page presentation'>
+          <h1 className='page-title'>URL Configuration for project {this.state.projectName}</h1>
+          <div className='page-actions' aria-hidden={this.state.openCreate}>
+            <button
+              className='basic-button'
+              disabled={this.state.displayCrawler}
+              onClick={this.handleCreateOpen}
+              type='button'
+              aria-haspopup='dialog'
+              aria-label='add new urls'
+              aria-controls='dialog'
+            >
+              Add new URLs
+            </button>
+            {this.state.openCreate && <AddUrlForm isDisplayed={this.state.openCreate} projectName={this.state.projectName} onClose={this.handleCreateClose} onSubmitSuccess={this.addNewUrls} />}
+          </div>
+
+          <p className='page-description'>
+            In order to analyse your code and monitor key ecodesign metrics, you will need to set every route defined for your web application.
+            <br />
+            EcoSonar will then analyse all pages of your web app and will guide you to set up practices optimizing ressources.
+          </p>
         </div>
-
-        <p className='page-description'>
-          In order to analyse your code and monitor key ecodesign metrics, you will need to set every route defined for your web application.
-          <br />
-          EcoSonar will then analyse all pages of your web app and will guide you to set up practices optimizing ressources.
-        </p>
-        </div>
       {!this.state.loading ? this.checkUrl() : <div className="loader"></div>}
 
       {this.state.deleting && (
@@ -166,9 +140,8 @@ export default class ConfigurationPage extends React.PureComponent {
           onCloseDelete={this.onCloseDelete}
         />
       )}
-      </div>
+      </div>
       </main>
-
     )
   }
 }
diff --git a/EcoSonar-SonarQube/src/main/js/ecosonar_configuration_page/components/Crawler/CrawlerPage.js b/EcoSonar-SonarQube/src/main/js/ecosonar_configuration_page/components/Crawler/CrawlerPage.js
index 5bc7b37..80b8b97 100644
--- a/EcoSonar-SonarQube/src/main/js/ecosonar_configuration_page/components/Crawler/CrawlerPage.js
+++ b/EcoSonar-SonarQube/src/main/js/ecosonar_configuration_page/components/Crawler/CrawlerPage.js
@@ -1,13 +1,23 @@
 import React from 'react'
 import { insertUrlsConfiguration } from '../../../services/configUrlService'
 import CrawledUrlItem from './CrawledUrlItem'
+import { crawl, getCrawl } from '../../../services/crawlerService'
 
 export default function CrawlerPage (props) {
-  const { setMainUrl, crawledUrls, projectName, addNewUrls, crawlerLoading, hasCrawled, setDisplayCrawler } = props
+  const {
+    projectName,
+    addNewUrls,
+    setDisplayCrawler
+  } = props
   const [url, setUrl] = React.useState('')
   const [checkedUrls, setCheckedUrls] = React.useState([])
   const [globalError, setGlobalError] = React.useState('')
   const [allChecked, setAllChecked] = React.useState(false)
+  const [autoSaveUrlsResult, setAutoSaveUrlsResult] = React.useState(false)
+  const [crawledUrls, setCrawledUrls] = React.useState([])
+  const [crawlerLoading, setCrawlerLoading] = React.useState(false)
+  const [crawlerLaunched, setCrawlerLaunched] = React.useState(false)
+  const [crawlerErrorMessage, setCrawlerErrorMessage] = React.useState('')
 
   const handleChangeSetUrl = (event) => {
     setUrl(event.target.value)
@@ -17,7 +27,9 @@ export default function CrawlerPage (props) {
     if (!checkedUrls.includes(checkedUrl)) {
       setCheckedUrls((checkedUrlList) => [...checkedUrlList, checkedUrl])
     } else {
-      setCheckedUrls((checkedUrlList) =>
checkedUrlList.filter((urlObject) => urlObject !== checkedUrl))
+      setCheckedUrls((checkedUrlList) =>
+        checkedUrlList.filter((urlObject) => urlObject !== checkedUrl)
+      )
     }
   }
 
@@ -33,7 +45,7 @@ export default function CrawlerPage (props) {
       })
   }
 
-  function compareArray (crawledUrlsObject, checkedUrlsObject) {
+  const compareArray = (crawledUrlsObject, checkedUrlsObject) => {
     for (let i = 0; i < crawledUrlsObject.length; i++) {
       if (crawledUrlsObject[i] !== checkedUrlsObject[i]) {
         return false
@@ -53,89 +65,189 @@ export default function CrawlerPage (props) {
     }
   }
 
-  const checkCrawledAndNoUrl = () => {
-    return hasCrawled && crawledUrls.length === 0
+  const handleChangeAutoSaveUrlsResult = () => {
+    setAutoSaveUrlsResult(!autoSaveUrlsResult)
+  }
+
+  const launchCrawler = (url, autoSave) => {
+    setCrawlerLoading(true)
+    setCrawlerErrorMessage('')
+    crawl(projectName, url.trim(), autoSave).then(() => {
+      setCrawlerLoading(false)
+      setCrawlerLaunched(true)
+    })
+      .catch((error) => {
+        setCrawlerLoading(false)
+        if (error instanceof Error) {
+          setCrawlerErrorMessage(error.message)
+        }
+      })
+  }
+
+  const getCrawlerResult = async () => {
+    setCrawlerLoading(true)
+    setCrawlerErrorMessage('')
+    await getCrawl(projectName).then((response) => {
+      setCrawlerLoading(false)
+      setCrawledUrls(response)
+      setCrawlerLaunched(false)
+    }).catch((error) => {
+      setCrawlerLoading(false)
+      if (error instanceof Error) {
+        setCrawlerErrorMessage(error.message)
+      }
+    })
   }
 
   return (
     <div>
+      <div className="url-list-button">
+        <p className="crawler-message">
+          I want to automatically search for all pages within my website
+        </p>
+      </div>
+      <div className="crawler-buttons">
+        <input
+          className="input-crawler"
+          name="url"
+          type="text"
+          onChange={handleChangeSetUrl}
+          id="webhook-url"
+          placeholder="Add the homepage from my website"
+          aria-label="add the homepage from my website"
+        />
+        <label
+          className="switch"
+          htmlFor="checkbox"
+          style={{ position: 'unset' }}
+        >
+          <input
+            type="checkbox"
+            checked={autoSaveUrlsResult}
+            aria-checked={autoSaveUrlsResult}
+            tabIndex={0}
+            aria-labelledby="checkbox"
+            onChange={() => handleChangeAutoSaveUrlsResult()}
+            id="checkbox"
+          />
+          <div></div>
+          <p className="crawler-message">
+            Save URLs to be audited by EcoSonar
+          </p>
+        </label>
+        <button
+          className="basic-button"
+          aria-label="launch-crawler"
+          onClick={() => launchCrawler(url, autoSaveUrlsResult)}
+          disabled={crawlerLoading || url === '' || crawledUrls.length > 0}
+        >
+          <span>Launch Crawler</span>
+        </button>
+        <button
+          className="basic-button"
+          aria-label="return to url list"
+          onClick={() => setDisplayCrawler()}
+          disabled={crawlerLoading}
+        >
+          <span>Return to url list</span>
+        </button>
+      </div>
       {crawledUrls.length === 0 && (
         <div>
-          <div className='crawler-input'>
-            <p className='crawler-message'>I want to automatically search for all pages within my website</p>
-            <input
-              className='input'
-              name='url'
-              type='text'
-              onChange={handleChangeSetUrl}
-              id='webhook-url'
-              placeholder='add the homepage from my website'
-              aria-label='add the homepage from my website'
-            />
-
-            <div className='crawler-buttons'>
-              <button className='basic-button' aria-label='find pages' onClick={() => setMainUrl(url)} disabled={crawlerLoading}>
-                <span>Find pages</span>
-              </button>
-              <button className='basic-button' aria-label='return to url list' onClick={() => setDisplayCrawler()} disabled={crawlerLoading}>
-                <span>Return to url list</span>
-              </button>
-            </div>
+          <div>
+            <p className="crawler-message">
+              If you previously crawled your website, click directly on the button below.
+            </p>
+            <button
+              className="basic-button"
+              aria-label="return to url list"
+              onClick={() => getCrawlerResult()}
+              disabled={crawlerLoading}
+            >
+              <span>Get Crawled URLs</span>
+            </button>
+          </div>
+          <div>
+            {crawlerLaunched && (
+              <div className="crawler-loading">
+                <p>
+                  Ecosonar crawler is running. Process can take several minutes
+                  according to the size of the website. If you enabled the Save
+                  option, URLs crawled will be saved automatically in the URL
+                  Configuration list. Otherwise by default, the URLs crawled
+                  will be saved in a temporary database and made available to
+                  you by clicking on the button Get Crawler Result.
+                </p>
+              </div>
+            )}
           </div>
-          {crawlerLoading && (
-            <div className='crawler-loading'>
-              <div className="loader"></div>
-              <p>Ecosonar crawler is running. Process can take several minutes according to the size of the website. Leave this page open.</p>
-            </div>
-          )}
         </div>
       )}
-
+      <p className='text-danger' role='alert'>
+        {crawlerErrorMessage}
+      </p>
       {crawledUrls.length > 0 && (
-        <div className='crawled-url-list'>
-          <table className='data-zebra' role='presentation'>
+        <div className="crawled-url-list">
+          <table className="data-zebra" role="presentation">
            <tbody>
              <tr>
                <td>
-                  <p className='crawler-message'>
-                    I want to automatically search for all pages within my website : <span className='url-name'>{url}</span>
+                  <p className="crawler-message">
+                    Find below the URLs previously crawled for this project. Please select those you wish EcoSonar to audit.
                  </p>
                </td>
              </tr>
-              <tr className='data-zebra-thead'>
-                <td className='head-url'>{'URL'}</td>
+              <tr className="data-zebra-thead">
+                <td className="head-url">{'URL'}</td>
                <td>
                  <input
-                    type='checkbox'
-                    onChange={() => { selectAll() }}
+                    type="checkbox"
+                    onChange={() => {
+                      selectAll()
+                    }}
                    checked={allChecked}
-                    aria-label='select all'
+                    aria-label="select all"
                  ></input>
                </td>
              </tr>
              {crawledUrls.map((crawledUrl, index) => {
-                return <CrawledUrlItem key={'url-' + index} index={index} url={crawledUrl} handleChangeCheckedUrls={handleChangeCheckedUrls} checkedUrls={checkedUrls} setAllChecked={setAllChecked} />
+                return (
+                  <CrawledUrlItem
+                    key={'url-' + index}
+                    index={index}
+                    url={crawledUrl}
+                    handleChangeCheckedUrls={handleChangeCheckedUrls}
+                    checkedUrls={checkedUrls}
+                    setAllChecked={setAllChecked}
+                  />
+                )
              })}
            </tbody>
          </table>
-          <div className='crawler-buttons'>
-            <button className='basic-button' aria-label='Cancel' onClick={() => setDisplayCrawler()}>
+          <div className="crawler-buttons">
+            <button
+              className="basic-button"
+              aria-label="Cancel"
+              onClick={() => setDisplayCrawler()}
+            >
              <span>Cancel</span>
            </button>
-            <button className='basic-button' aria-label='Validate list' disabled={!checkedUrls.length} onClick={() => validateList()}>
-              <span>Validate list</span>
+            <button
+              className="basic-button"
+              aria-label="Validate list"
+              disabled={!checkedUrls.length}
+              onClick={() => validateList()}
+            >
+              <span>Validate</span>
            </button>
          </div>
          {globalError !== '' && (
-            <p className='text-danger' role='alert'>
+            <p className="text-danger" role="alert">
              {globalError}
            </p>
          )}
        </div>
      )}
-      {checkCrawledAndNoUrl() && (
-        <p className='crawler-message-no-more-url'>No url detected</p>
-      )}
    </div>
  )
}
diff --git a/EcoSonar-SonarQube/src/main/js/services/crawlerService.js b/EcoSonar-SonarQube/src/main/js/services/crawlerService.js
index 8ddf233..8774f34 100644
--- a/EcoSonar-SonarQube/src/main/js/services/crawlerService.js
+++
b/EcoSonar-SonarQube/src/main/js/services/crawlerService.js @@ -2,20 +2,35 @@ import { axiosInstance } from '../config/axiosConfiguration' import formatError from '../format/formatError' import errors from '../utils/errors.json' -export function crawl (projectName, mainUrl) { +export function crawl (projectName, mainUrl, saveUrls) { return new Promise((resolve, reject) => { - axiosInstance.post('/api/crawl', { projectName, mainUrl }, { timeout: 600000 }) + axiosInstance.post('/api/crawl', { projectName, mainUrl, saveUrls }, { timeout: 600000 }) + .then(() => { + resolve() + console.log('CRAWLER SERVICE - crawling started') + }) + .catch((error) => { + console.error(error) + console.error('CRAWLER SERVICE - unknown error occured : ', error.message) + reject(new Error(formatError(errors.errorCrawling, mainUrl))) + }) + }) +} + +export function getCrawl (projectNameReq) { + return new Promise((resolve, reject) => { + axiosInstance.get('/api/crawl', { params: { projectName: projectNameReq } }, { timeout: 600000 }) .then((response) => { - console.log('CRAWLER SERVICE - URL retrieved') + console.log(`CRAWLER SERVICE - ${response.data.length} URLs retrieved for project ${projectNameReq}`) resolve(response.data) }) .catch((error) => { if (error.response && error.response.status === 400) { - reject(new Error(formatError(errors.errorCrawling, mainUrl))) + reject(new Error(formatError(errors.errorCrawlingEmpty, projectNameReq))) } else { console.error(error) console.error('CRAWLER SERVICE - unknown error occured : ', error.message) - reject(new Error(formatError(errors.errorCrawling, mainUrl))) + reject(new Error(formatError(errors.errorGetCrawling, projectNameReq))) } }) }) diff --git a/EcoSonar-SonarQube/src/main/js/styles/_settings.scss b/EcoSonar-SonarQube/src/main/js/styles/_settings.scss index 4acf8d9..3649dd3 100644 --- a/EcoSonar-SonarQube/src/main/js/styles/_settings.scss +++ b/EcoSonar-SonarQube/src/main/js/styles/_settings.scss @@ -97,7 +97,6 @@ body { 
.basic-button { display: inline-flex; align-items: center; - overflow: hidden; white-space: nowrap; text-overflow: ellipsis; justify-content: center; @@ -164,6 +163,10 @@ body { cursor: not-allowed; box-shadow: none; } + + span { + padding: 3px 3px; + } } .boxed-group { border: 1px solid #e6e6e6; diff --git a/EcoSonar-SonarQube/src/main/js/styles/components/_urlConfig.scss b/EcoSonar-SonarQube/src/main/js/styles/components/_urlConfig.scss index c90a6ca..0c4dbb6 100644 --- a/EcoSonar-SonarQube/src/main/js/styles/components/_urlConfig.scss +++ b/EcoSonar-SonarQube/src/main/js/styles/components/_urlConfig.scss @@ -16,6 +16,7 @@ .modal-header-config { padding: 32px 32px 0; + .modal-title-config { margin: 0; font-size: 16px; @@ -24,9 +25,11 @@ overflow-wrap: break-word; } } + .modal-body-config { padding: 20px 32px; } + .modal-footer-config { padding: 20px 32px; border-top: 1px solid $medium-grey; @@ -45,35 +48,43 @@ margin-right: 2%; margin-bottom: 1%; } + .input { max-width: 30%; margin-right: 2%; margin-bottom: 4%; } + .basic-button { margin-bottom: 1%; } } + .crawler-loading { text-align: center; } + .crawled-url-list { display: flex; flex-direction: column; + .basic-button { max-width: 10%; margin-top: 3%; margin-left: auto; margin-right: 5%; text-align: right; + span { overflow: hidden; white-space: nowrap; text-overflow: ellipsis; } } + .crawler-message { margin-bottom: 1%; + span { text-decoration: underline; } @@ -83,6 +94,7 @@ .crawler-message-no-more-url { text-align: center; } + .crawler-input-no-url-assigned { display: flex; justify-content: space-around; @@ -91,6 +103,7 @@ .url-list-button { display: flex; align-items: center; + .basic-button { margin-left: 3%; } @@ -99,10 +112,17 @@ .crawler-buttons { display: flex; justify-content: right; - margin-bottom: 1%; + margin-bottom: 5%; + flex-direction: row; + align-items: center; .basic-button { margin-left: 5%; } + .crawler-message { + font-size: $small-font; + margin-right: 2%; + margin-bottom: 1%; + } } 
.button-delete {
@@ -125,3 +145,20 @@
   border-color: $red-error;
   color: $red-error;
 }
+
+.unset {
+  position: unset;
+}
+
+.input-crawler {
+  margin-right: 8px;
+  width: calc(100% - 54px);
+  height: 24px;
+  padding: 0 6px;
+  border: 1px solid $light-grey;
+  box-sizing: border-box;
+  border-radius: 2px;
+  background: $white;
+  color: $dark-grey;
+  transition: border-color 0.2s ease;
+}
\ No newline at end of file
diff --git a/EcoSonar-SonarQube/src/main/js/utils/errors.json b/EcoSonar-SonarQube/src/main/js/utils/errors.json
index 7989bee..c4e6a97 100644
--- a/EcoSonar-SonarQube/src/main/js/utils/errors.json
+++ b/EcoSonar-SonarQube/src/main/js/utils/errors.json
@@ -15,6 +15,8 @@
   "errorRetrievingAnalysisforURL": "An error occured while retrieving analysis for {1} in project {0}, please try again.",
   "errorRetrievingBestPractices": "An error occured while retrieving best practices for project {0}, please try again.",
   "errorCrawling": "An error occured while crawling from url {0}, please try again",
+  "errorGetCrawling": "An error occurred while retrieving the URLs crawled from project {0}, please try again",
+  "errorCrawlingEmpty": "No crawled URLs were saved for project {0}",
   "errorW3cAnalysisNotFound": "The W3C Analysis could not be processed for project {0}, check with technical team to resolve this issue",
   "errorW3cLastAnalysisNotFoundForURL": "The W3C Analysis could not be processed for {0} in project {1}, check with technical team to resolve this issue",
   "errorSystemAddProcedure": "An error occured while adding procedure in project {0}, please try again.",
diff --git a/README.md b/README.md
index bcf0c62..fddfc7f 100644
--- a/README.md
+++ b/README.md
@@ -4,88 +4,98 @@
 Our official website : https://ecosonar.org
 
-User guide: https://github.com/Accenture/EcoSonar/blob/main/USER_GUIDE.md
+User guide: https://github.com/Accenture/EcoSonar/blob/main/USER_GUIDE.md
 
 ## Main objectives of EcoSonar:
 
-- Raising the awareness of delivery teams to environmental
issues: enabling development teams to consider the environmental impact of digital technology during development and to promote knowledge of best eco-design and accessibility practices. -- Helping developers to implement best eco-design and accessibility practices with: - - Static Code Analysis with SonarQube, and dedicated green coding rules with the addition of Plugin EcoCode (https://www.ecocode.io/) - - Dynamic Code Analysis with EcoSonar API using three open-source tools to analyze the application as it is rendered on a web browser (Green-IT Analysis/EcoIndex, Google Lighthouse and W3C Validator). -- Get an environmental & performance monitoring solution to allow continuous improvement of delivery teams. +- Raising the awareness of delivery teams to environmental issues: enabling development teams to consider the environmental impact of digital technology during development and to promote knowledge of best eco-design and accessibility practices. +- Helping developers to implement best eco-design and accessibility practices with: + - Static Code Analysis with SonarQube, and dedicated green coding rules with the addition of Plugin EcoCode (https://www.ecocode.io/) + - Dynamic Code Analysis with EcoSonar API using three open-source tools to analyze the application as it is rendered on a web browser (Green-IT Analysis/EcoIndex, Google Lighthouse and W3C Validator). +- Get an environmental & performance monitoring solution to allow continuous improvement of delivery teams. 
# Summary + - [EcoSonar Architecture ](#archi) - [Prerequisites](#prerequisites) - - [Infrastructure Requirements](#infra) -- [EcoSonar Configuration](#configuration) + - [Infrastructure Requirements](#infra) +- [EcoSonar Local Installation](#installation) - [EcoCode Configuration](#ecocode-config) - [Audit Tools](#audit) - - [GreenIT-Analysis/EcoIndex](#greenit-cnumr) - - [Google Lighthouse](#ligthhouse) - - [W3C Validator](#w3c) - - [Ecocode](#ecocode) + - [GreenIT-Analysis/EcoIndex](#greenit-cnumr) + - [Google Lighthouse](#ligthhouse) + - [W3C Validator](#w3c) + - [Ecocode](#ecocode) - [About](#about) <a name="archi"></a> + ## EcoSonar Architecture The EcoSonar tool consists of: -- A containerized Node.js API - - run a GreenIT-Analysis/EcoIndex, Google Lighthouse and W3C Validator analysis for a project containing a list of predefined URLs. - - store audits in MongoDB Database - - retrieve audits through API calls. +- A containerized Node.js API + + - run a GreenIT-Analysis/EcoIndex, Google Lighthouse and W3C Validator analysis for a project containing a list of predefined URLs. + - store audits in MongoDB Database + - retrieve audits through API calls. -- Sonarqube plugins - - Able to configure and retrieve EcoSonar audit reports on dynamic rendering analysis. - - Launch an EcoSonar analysis by calling the API when a Sonarqube analysis is triggered. - - Add new eco-design coding rules in Sonarqube default configuration with EcoCode plugins. +- Sonarqube plugins + - Able to configure and retrieve EcoSonar audit reports on dynamic rendering analysis. + - Launch an EcoSonar analysis by calling the API when a Sonarqube analysis is triggered. + - Add new eco-design coding rules in Sonarqube default configuration with EcoCode plugins. 
- ![Architecture](./images/ecosonar-architecture.webp) Example of Architecture deployed on Azure: - ![Ecosonar Architecture Azure](./images/ecosonar-architecture-azure.webp) - -<a name="nodeprerequisitesjs"></a> -## Prerequisites -- Node.js (minimum version 16) - -For Sonarqube plugin only: -- Sonarqube- minimum version 9.4 -https://docs.sonarqube.org/latest/setup/install-server/ -https://docs.sonarqube.org/latest/setup/install-cluster/ -No constraint on the edition type. Please check with your infrastructure team which edition are you allowed to use. -- If Sonarqube version is 9.9 or above, choose Java– version 17, otherwise Java – version 11 -- Maven - 3.8.3 +![Ecosonar Architecture Azure](./images/ecosonar-architecture-azure.webp) <a name="infra"></a> + ### Infrastructure Requirements -- Docker Registry: storage of the Ecosonar API Docker image + +- Docker Registry: storage of the Ecosonar API Docker image. You can directly use our Github Package which host a Docker image of EcoSonar - Docker server with RAM > 4Gb necessary for the analysis by Google Lighthouse - MongoDB database - Private network: protects the data stored in the database and makes it only accessible to the specified services. - Subnet associated with the private network: connection between the database and the API - Password Manager: store the password to access the database from the API and credentials used to audits pages requiring authentication -<a name="configuration"></a> -## EcoSonar Configuration +<a name="installation"></a> + +## EcoSonar Local Installation + +To install EcoSonar locally, you have two options: + +1. Use Docker +Run the following commands: +``` +cd EcoSonar-SonarQube +export REACT_APP_BASE_URL_ECOSONAR_API=http://localhost:3000 +mvn clean package -Durl=http://localhost:3000 + +cd .. 
+docker-compose build
+docker-compose up
+```
 
-To setup the EcoSonar-API, follow the instructions available here: https://github.com/Accenture/EcoSonar/blob/main/EcoSonar-API/README.md
+2. Launch each component separately.
+Follow the instructions in both README files:
+https://github.com/Accenture/EcoSonar/blob/main/EcoSonar-API/README.md
+https://github.com/Accenture/EcoSonar/blob/main/EcoSonar-SonarQube/README.md
 
-For EcoSonar Sonarqube plugin : https://github.com/Accenture/EcoSonar/blob/main/EcoSonar-SonarQube/README.md
 <a name="ecocode-config"></a>
+
 ## Ecocode Configuration
 
 For specific details on Ecocode, please look at their GitHub repository: https://github.com/green-code-initiative/ecoCode .
 
-You will find here https://github.com/Accenture/EcoSonar/tree/main/EcoSonar-SonarQube/ecocode the EcoCode Sonarqube plugin that needs to be imported into your Sonarqube instance.
-To install plugins, you can follow the same instructions provided for EcoSonar Sonarqube plugin https://github.com/Accenture/EcoSonar/tree/main/EcoSonar-SonarQube#install-sonarqube-plugins and copy/paste the jar files into the same `extensions/plugins/` folder.
-
+You will find here `https://github.com/Accenture/EcoSonar/tree/main/EcoSonar-SonarQube/ecocode` the EcoCode Sonarqube plugin that needs to be imported into your Sonarqube instance.
+To install the ecocode plugins, please follow the instructions from the EcoSonar SonarQube plugin: https://github.com/Accenture/EcoSonar/blob/main/EcoSonar-SonarQube/README.md
 
 When using Sonarqube as code analysis, a default Quality Profile is set up for each language.
If you want to use EcoCode rules related to eco-design, you will have to:
+
- create a new Quality Profile based on the default one: click on the Setting icon for the language you wish to extend, then click on `Extend` and create the new Quality Profile
 
 ![EcoSonar Quality Profile Creation](./images/java-quality-profile.webp)
@@ -102,56 +112,64 @@ When using Sonarqube as code analysis, a default Quality Profile is set up for e
 
 ![EcoSonar Default Quality Profile](./images/ecosonar-default-quality-profile.webp)
 
+Do the same setup for each language you wish to use with Ecocode rules.
 
 <a name="audit"></a>
+
 ## Audit Tools
 
 <a name="greenit-cnumr"></a>
+
 ### GreenIT-Analysis/EcoIndex
 
 EcoIndex makes it possible to become aware of the environmental impact of the Internet and to propose concrete solutions. You enter a URL into the EcoIndex, which then calculates the performance and environmental footprint of the tested page represented by a score out of 100 and a rating from A to G (the higher the rating, the better!).
 
 Several criteria are taken into account by our calculation method:
 
-- The complexity of the page: the DOM (Document Object Model) represents the structure and the elements of an HTML web page. The more elements the DOM contains, the more complex the page is to decipher, and therefore to display for the browser. Concretely, all this means a greater effort to provide on the part of the processor of your computer to display the page, which reduces the life of your equipment.
-- The weight of data transferred: before appearing on your screen, a web page is a set of data stored on a server. When you access a page, your browser sends a request to the server to communicate this data to it, in order to format it and display it on your screen. Only here: the transport of this data, more or less heavy, from the server to the browser requires energy.
-- The number of HTTP requests: this criterion makes it possible to take into account the effort made by the servers to display the tested page. The greater the number of requests for the same page, the more servers will be needed to serve this page. -Official website: https://www.ecoindex.fr/ +- The complexity of the page: the DOM (Document Object Model) represents the structure and the elements of an HTML web page. The more elements the DOM contains, the more complex the page is to decipher, and therefore to display for the browser. Concretely, all this means a greater effort to provide on the part of the processor of your computer to display the page, which reduces the life of your equipment. +- The weight of data transferred: before appearing on your screen, a web page is a set of data stored on a server. When you access a page, your browser sends a request to the server to communicate this data to it, in order to format it and display it on your screen. Only here: the transport of this data, more or less heavy, from the server to the browser requires energy. +- The number of HTTP requests: this criterion makes it possible to take into account the effort made by the servers to display the tested page. The greater the number of requests for the same page, the more servers will be needed to serve this page. + +Official website: https://www.ecoindex.fr/ Chrome extension: https://chrome.google.com/webstore/detail/greenit-analysis/mofbfhffeklkbebfclfaiifefjflcpad?hl=fr -GitHub Link: https://github.com/cnumr/GreenIT-Analysis-cli +GitHub Link: https://github.com/cnumr/GreenIT-Analysis-cli <a name="ligthhouse"></a> + ### Google Lighthouse Lighthouse is an open-source, automated tool for improving the quality of web pages. You can run it against any web page, public or requiring authentication. It has audits for performance, accessibility and more. By default, Lighthouse produces a report in JSON or HTML. 
We will store the JSON report provided in the database to be able to monitor the various performances afterwards. It is also possible to customize this report to obtain only the desired metrics. -Official website: https://developer.chrome.com/docs/lighthouse/overview/ +Official website: https://developer.chrome.com/docs/lighthouse/overview/ -Chrome extension: https://chrome.google.com/webstore/detail/lighthouse/blipmdconlkpinefehnmjammfjpmpbjk +Chrome extension: https://chrome.google.com/webstore/detail/lighthouse/blipmdconlkpinefehnmjammfjpmpbjk -GitHub Link: https://github.com/GoogleChrome/lighthouse +GitHub Link: https://github.com/GoogleChrome/lighthouse <a name="w3c"></a> + ### W3C Validator The Markup Validator is a free service by W3C that helps check the validity of Web documents. Validating Web documents is an important step which can dramatically help improving and ensuring their quality, and it can save a lot of time and money. Validating Web Pages is also an important accessibility best practices to resolve (RGAA, criteria 8.2). If the HTML code is not well formatted, the browser will dynamically correct a certain number of elements to best display the pages causing problems. These dynamic corrections consume resources unnecessarily each time the pages concerned are loaded. -Official website: https://validator.w3.org/ +Official website: https://validator.w3.org/ -GitHub Link: https://github.com/zrrrzzt/html-validator +GitHub Link: https://github.com/zrrrzzt/html-validator <a name="ecocode"></a> + ### EcoCode -EcoCode is a SonarQube plugin developed by a french open-source community called Green Code Initiative. Their goal is to share best practices of development, be aware of environmental responsibility when programming, and together construct rules and metrics for assigning to mobile and web applications an "environmental label". 
They have defined a list of green coding rules to be checked through a Sonarqube analysis to reduce RAM or CPU usage of software application.
+EcoCode is a SonarQube plugin developed by a French open-source community called Green Code Initiative. Their goal is to share best practices of development, be aware of environmental responsibility when programming, and together construct rules and metrics for assigning to mobile and web applications an "environmental label". They have defined a list of green coding rules to be checked through a Sonarqube analysis to reduce the RAM or CPU usage of software applications.
 
 Official website: https://www.ecocode.io/
 
-GitHub Link: https://github.com/green-code-initiative
+GitHub Link: https://github.com/green-code-initiative
 
 <a name="about"></a>
+
 ## About
 
 To get more info on EcoSonar, you can contact ecosonar-team@accenture.com and have a look at our new website : https://ecosonar.org.
@@ -172,4 +190,4 @@ To know more on ecodesign best practices, EcoIndex Calculator and how an ecodesi
 
 https://blog.octo.com/sous-le-capot-de-la-mesure-ecoindex/
 
-https://blog.octo.com/une-bonne-pratique-vers-un-numerique-plus-responsable-mesurer-le-ressenti-des-internautes/
\ No newline at end of file
+https://blog.octo.com/une-bonne-pratique-vers-un-numerique-plus-responsable-mesurer-le-ressenti-des-internautes/
diff --git a/ROADMAP.md b/ROADMAP.md
deleted file mode 100644
index 9e84433..0000000
--- a/ROADMAP.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Roadmap
-
-1) GHG Emissions Calculation using SCI Methodology (https://github.com/Green-Software-Foundation/sci/blob/main/Software_Carbon_Intensity/Software_Carbon_Intensity_Specification.md)
-
-2) Integration of WebSite Analytics : weighted-average of EcoSonar scores according to page frequency, detection of pages not visited that could be decomissionned, etc.
- -3) More green coding rules diff --git a/USER_GUIDE.md b/USER_GUIDE.md index 241db61..f6f4d37 100644 --- a/USER_GUIDE.md +++ b/USER_GUIDE.md @@ -2,24 +2,46 @@ # EcoSonar, the eco-design audit tool - USER GUIDE +## Summary + +- [Project Configuration](#project-config) +- [[OPTIONAL] Configure Authentication for your project](#auth) +- [[OPTIONAL] Configure Proxy for your project](#proxy) +- [[OPTIONAL] Configure User flow for each URL](#user-flow) +- [Launching an EcoSonar Analysis](#launch) +- [Retrieve an EcoSonar Analysis of your project](#get-analysis) +- [Retrieve EcoSonar recommendations](#get-recos) +- [Retrieve Green Coding Rules to implement](#get-green-code-smells) + +<a name="project-config"></a> + ## Project Configuration To realize an EcoSonar audit on a web-based application, you will need first to configure which URLs you want to audit. We recommend you to choose the Sonarqube project linked to your frontend code repository if you wish to launch EcoSonar audits directly in your CI/CD pipeline. Once Sonarqube project chosen to embed your future reports, you will need to go into the page called "EcoSonar URL Configuration". ![EcoSonar Configuration Page Access](./images/ecosonar-configuration-page.webp) +![EcoSonar URL Page](./images/ecosonar-url-page.webp) You will have two options to enter the pages you want to audit. 1. Automatically with a crawler -We have implemented a crawler that will detect automatically all pages from you website. It will be looking for "href" attributes to detect all redirections in your website. We suggest you to use this crawler when you want for the first time deploy to EcoSonar within your project. You will only need to enter the homepage of your website to retrieve all pages that can be accessible. - ![EcoSonar URL Configuration](./images/ecosonar-url-configuration.webp) + +We have implemented a crawler that will detect automatically all pages from you website. 
It looks for "href" attributes to detect all redirections in your website. We suggest using this crawler the first time you deploy EcoSonar within your project.
+
+You will need to enter the homepage of your website to retrieve all pages that are accessible.
+Then you have two options:
+
+- save the results in a temporary database; you can retrieve them by clicking on the button `Get crawled URLs` and then choose which ones you want EcoSonar to audit. This option is enabled by default.
+- save the results in the database so that EcoSonar will audit them once an analysis is triggered. This option is enabled by selecting `Save urls as to be audited by EcoSonar`.
+
 ![EcoSonar URL Crawler Setup Configuration](./images/ecosonar-url-crawler-setup.webp)
 
-![EcoSonar URL Crawler Result Configuration](./images/ecosonar-url-crawler-result.webp)
-Once you have retrieved all pages detected automatically, you can choose the ones you wish to audit and validate the list.
+After waiting a few minutes for the crawler to find all pages within the website, you can retrieve them by clicking on the button `Get crawled URLs` (for option 1 only). Then, select the different pages you wish EcoSonar to audit. You will then find them in the initial URL Configuration page.
+
+![EcoSonar URL Crawler Result Configuration](./images/ecosonar-url-crawler-result.webp)
 
 2. Manually
 
@@ -29,6 +51,321 @@ Thanks to a configuration popup, you can enter manually the pages to audit. This
 
 ![EcoSonar Configuration Page](./images/ecosonar-configure-urls.webp)
 
+<a name="auth"></a>
+
+## [OPTIONAL] Configure Authentication for your project
+
+In order to audit pages that can be accessed only through an authentication service (intranet pages for example),
+you need to add authentication credentials into the EcoSonar API to allow auditing dedicated pages.
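As a reference for the two crawler options described earlier, the choice maps to the `saveUrls` flag that the SonarQube plugin now sends to `POST /api/crawl` (see the `crawlerService` diff above). The sketch below only builds the request payload; the helper name is ours, not part of EcoSonar:

```javascript
// Hypothetical helper illustrating the body sent by
// crawlerService.crawl(projectName, mainUrl, saveUrls) in the diff above.
function buildCrawlRequest (projectName, mainUrl, saveUrls) {
  return {
    method: 'POST',
    path: '/api/crawl',
    // saveUrls = false (default): crawled URLs go to a temporary store and
    // are fetched later with GET /api/crawl?projectName=... ("Get crawled URLs").
    // saveUrls = true: crawled URLs are saved directly as URLs to be audited.
    body: { projectName, mainUrl, saveUrls: Boolean(saveUrls) }
  }
}
```

For example, `buildCrawlRequest('my-project-key', 'https://example.com', true)` yields the body used when `Save urls as to be audited by EcoSonar` is selected.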
+
+### When you have a simple login flow: username, password and click on a button
+
+#### EcoSonar V2.3 and below
+
+To implement that, you can create a YAML file login.yaml at the root of the folder `EcoSonar-API` and use the following format
+if the CSS selector of your input field is `input[name=username]` or `input[type=email]`, the password field `input[name=password]`, `input[type=password]` or `input[id=password]`, and the button `button[type=submit]`:
+
+```
+authentication_url: authenticationPage
+username: yourUsername
+password: yourPassword
+```
+
+or if one of the CSS selectors does not match the default CSS selectors:
+
+```
+authentication_url: authenticationPage
+username: yourUsername
+password: yourPassword
+loginButtonSelector: CSS_Selector_Button
+usernameSelector: CSS_Selector_Login
+passwordSelector: CSS_Selector_Password
+```
+
+##### CSS Selectors
+
+CSS selectors are patterns used to target elements in HTML code, for example to apply some style (doc ). For example, to find the CSS selector of `loginButtonSelector`:
+
+1. Go to the login page of your website
+2. Right-click on the login button
+3. Select "Inspect"
+4. Choose the CSS selectors you want (class, type, name, id, ...)
+
+More information:
+
+documentation: https://github.com/cnumr/GreenIT-Analysis-cli/blob/072987f7d501790d1a6ccc4af6ec06937b52eb13/README.md#commande
+code: https://github.com/cnumr/GreenIT-Analysis-cli/blob/072987f7d501790d1a6ccc4af6ec06937b52eb13/cli-core/analysis.js#L198
+
+#### EcoSonar V3.0 and above
+
+You can directly configure your login credentials at a project level in the API.
+Be careful: your login credentials will then be saved into the database, so please check with your security team that you are allowed to do so.
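The simple login flow above can be sketched as follows — the helper is our illustration of the documented behaviour, not EcoSonar code; selectors are only included when they differ from the defaults listed above:

```javascript
// Hypothetical helper: builds the "login" part of the "Save Login and Proxy"
// body. When no selectors are given, EcoSonar falls back to its documented
// defaults (input[name=username], input[name=password], button[type=submit], ...).
function buildLoginConfig (authenticationUrl, username, password, selectors = {}) {
  return {
    login: {
      authentication_url: authenticationUrl,
      username,
      password,
      ...selectors // e.g. { loginButtonSelector: '#signin-button' }
    }
  }
}
```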
+
+You can use the Endpoint "Save Login and Proxy" and enter the following body:
+
+```
+{
+  "login": {
+    "authentication_url": "authenticationPage",
+    "username": "yourUsername",
+    "password": "yourPassword"
+  }
+}
+```
+
+or
+
+```
+{
+  "login": {
+    "authentication_url": "authenticationPage",
+    "username": "yourUsername",
+    "password": "yourPassword",
+    "loginButtonSelector": "CSS_Selector_Button",
+    "usernameSelector": "CSS_Selector_Login",
+    "passwordSelector": "CSS_Selector_Password"
+  }
+}
+```
+
+### More complicated login flows
+
+Use this configuration when the username and password are not on the same page, or when other user inputs are needed to complete authentication.
+
+#### EcoSonar V2.3 and below
+
+If the authentication of the website requires several steps or a page change, you must follow these requirements:
+
+1. Create a YAML file login.yaml at the root of the repo
+2. Add the `authentication_url` key and its value (required)
+3. Add the `steps` key (required)
+4. Fill in the `steps` section as follows
+
+To choose your authentication_url, you can either set it to the page in which you need to perform the authentication steps, or pick the page that can only be accessed after being authenticated.
+
+(To help you create the steps, you can use the Google Chrome "Recorder" tool (Inspector -> Recorder -> Start a new recording), save the JSON file, and then extract the step type, target and selectors.)
+
+Each step is a description of an action made by a regular user:
+
+- "click" -> click on a button, for example "submit" or "next"
+  type: "click" (required)
+  selector: CSS Selector of the field or button (required)
+- "change" -> fill in a field like username or password
+  type: "change" (required)
+  selector: CSS Selector of the field or button (required)
+  value: value of the password or username (required)
+
+/!\ CSS selectors with "aria" labels are not read by EcoSonar.
+
+Example of login.yaml file
to access an account:
+
+```
+authentication_url: authenticationPage
+steps:
+  - type: "click"
+    selectors:
+      - "#input-email"
+  - type: "change"
+    value: "my email"
+    selectors:
+      - "#input-email"
+  - type: "click"
+    selectors:
+      - "#lookup-btn"
+  - type: "change"
+    value: "my password"
+    selectors:
+      - "#input-password"
+  - type: "click"
+    selectors:
+      - "#signin-button"
+```
+
+#### EcoSonar V3.0 and above
+
+You can directly configure your login flow at a project level in the API.
+
+You can use the Endpoint "Save Login and Proxy" and enter the following body:
+
+```
+{
+  "login": {
+    "authentication_url": "authenticationPage",
+    "steps" : [ ....]
+  }
+}
+```
+
+<a name="proxy"></a>
+
+## [OPTIONAL] Configure Proxy for your project
+
+For some websites, you may need to configure a proxy in order to access them.
+You need to separate the analyses that are made with or without a proxy into several EcoSonar projects.
+
+### EcoSonar V2.3 and below
+
+To implement that, you can create a YAML file proxy.yaml at the root of the repo.
+Please find below the configuration format:
+
+```
+ipaddress: ipAddress
+port: port
+projectName: (optional)
+  - PROJECT_NAME_1
+  - PROJECT_NAME_2
+```
+
+ipaddress : IP Address of your proxy
+port : port of your proxy
+
+projectName : list of EcoSonar projects (corresponding to the Sonarqube projectKey) that need a proxy to audit the registered pages. If no projectName has been added but the proxy.yaml file exists, then the proxy will be applied by default to all your projects.
+
+### EcoSonar V3.0 and above
+
+You can directly configure your proxy settings at a project level in the API.
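A quick sketch of assembling that proxy configuration body — the builder and its required-field guard are our own illustration, not EcoSonar behaviour:

```javascript
// Hypothetical builder for the "proxy" part of the "Save Login and Proxy" body.
// Both fields are needed for a usable proxy configuration, so we guard on them.
function buildProxyConfig (ipAddress, port) {
  if (!ipAddress || !port) {
    throw new Error('Both ipAddress and port are required to configure a proxy')
  }
  // Normalise both values to strings, matching the documented body shape.
  return { proxy: { ipAddress: String(ipAddress), port: String(port) } }
}
```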
+
+You can use the Endpoint "Save Login and Proxy" and enter the following body:
+
+```
+{
+  "proxy": {
+    "ipAddress": "ipAddress",
+    "port" : "port"
+  }
+}
+```
+
+<a name="user-flow"></a>
+
+## [OPTIONAL] Configure User flow for each URL
+
+In order to audit some pages, you may sometimes need to go through a user flow to get access to a page (for example, filling in a form). Otherwise, without that context, the page cannot be accessed.
+We have added this functionality into EcoSonar.
+
+### User Flow Creation
+
+#### First method: using Chrome Recorder
+
+If your business allows the use of the Chrome browser, we highly recommend this method.
+Chrome has a native panel called "Recorder" that allows you to record, replay and measure user flows (https://developer.chrome.com/docs/devtools/recorder/).
+
+![Chrome Recorder](./images/chrome-recorder.webp)
+
+To access this panel, right-click in your browser, select Inspect, then choose Recorder in the DevTools panel.
+
+To start recording a user flow, click on the button "Start new recording", choose a name, then click on "Start a new recording".
+
+![Start Chrome Recorder](./images/chrome-start-recorder.webp)
+
+The Chrome browser will then register every interaction made with the page and save it into the user flow.
+
+For example, we want to audit this page: http://www.ecometer.org/job?url=https%3A%2F%2Fwww.accenture.com%2Ffr-fr. It is only accessible if you are launching an analysis of the website with Ecometer:
+
+1. You need to navigate to the page: http://www.ecometer.org/
+2. You need to change the input to have your URL.
+3. You need to click on the button "Analyse" to launch the analysis.
+
+![Chrome Recorder User flow](./images/chrome-recorder-result.webp)
+
+Chrome Recorder is going to register the user flow by saving every step/interaction.
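A Chrome Recorder export usually contains more step kinds than EcoSonar replays. A hedged sketch of trimming an export down to the supported steps — the supported list reflects the four interaction types documented below, and the helper itself is ours, not EcoSonar code:

```javascript
// Step types EcoSonar can replay, per the four interactions documented below.
const SUPPORTED_TYPES = ['navigate', 'click', 'change', 'scroll']

// Hypothetical helper: keep only the steps EcoSonar understands from a
// Chrome Recorder export (exports may also contain steps such as
// "setViewport" that EcoSonar does not replay).
function toEcoSonarUserFlow (recorderExport) {
  const steps = (recorderExport.steps || []).filter((step) =>
    SUPPORTED_TYPES.includes(step.type)
  )
  return { steps }
}
```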
+
+To make sure your user flow is correct and can be used through EcoSonar, please use the "Replay" button and start from the initial page to make sure the user flow automation is set up correctly. You should get the same result as with your previous manual run.
+
+/!\ Be careful that "click" steps are not duplicated in your user flow (same element triggered twice), otherwise it may not have the expected behaviour. You can remove a step in the Recorder by clicking on the 3 dots.
+
+Once you have validated your user flow, you can export it using the export button and choose JSON.
+
+![Chrome Recorder User flow export](./images/save-chrome-recorder.webp)
+
+#### Second method: creating your own User Flow JSON
+
+If you are not allowed to use the Chrome browser, you can manually write the user flow JSON file in the format created by Chrome Recorder.
+It should have the following format:
+
+```
+{
+  "steps": [
+    {
+      "type": "navigate",
+      "url": "http://www.ecometer.org/"
+    },
+    {
+      "type": "click",
+      "selectors": [
+        [
+          "body > div.container.begin > div > form > input.url"
+        ]
+      ]
+    },
+    {
+      "type": "change",
+      "value": "https://www.accenture.com/fr-fr",
+      "selectors": [
+        [
+          "body > div.container.begin > div > form > input.url"
+        ]
+      ]
+    },
+    {
+      "type": "click",
+      "selectors": [
+        [
+          "body > div.container.begin > div > form > input.button"
+        ]
+      ]
+    },
+    {
+      "type": "scroll",
+      "distancePercentage": 50
+    }
+  ]
+}
+```
+
+EcoSonar handles 4 kinds of browser interactions:
+
+1. Navigate to a URL
+   It should have "type" = "navigate" and "url": the URL you want to go to
+2. Change an input field
+   "type" = "change", "value": the value to be set in the input field, "selectors": list of CSS selectors to find the right input field
+3. 
Click on a button
+   "type" = "click", "selectors": list of CSS selectors to find the right button
+4. Scroll in the page and stop at a certain percentage of the page.
+   It will scroll down 100px at a time until the scroll limit has been reached. For example, if the page is 1080px high and we want to stop at the middle of the page (so distancePercentage = 50%), it will iterate every 100 pixels until the window has scrolled 540px.
+   "type" = "scroll" and "distancePercentage" = a value between 0 and 100
+
+### User Flow Integration
+
+#### EcoSonar v2.3 and below
+
+Once you have been able to define the JSON file matching your user flow, follow these instructions:
+
+1. Create a folder "userJourney" at the root of the folder `EcoSonar-API` if it does not exist yet.
+2. Paste the JSON file created into the folder "userJourney" and rename it with the URL you wish to audit. Please remove the special characters `:` `?` `/` from the URL in order to save the JSON. To retrieve the user flow, we match it with the URL registered through the EcoSonar URL Configuration. Do not forget this step, otherwise EcoSonar won't audit the right page.
+3. Deploy EcoSonar-API with all relevant user flows.
+4. Launch a new EcoSonar audit to verify there are no technical errors in the application logs. Correct them if needed.
+
+#### EcoSonar v3.0 and above
+
+With version 3.0, you can directly configure the user flow in the API provided (no need to reboot the instance anymore).
+You can use the Endpoint "Save User Flow" and enter the following body:
+
+```
+{
+  "url": "urlToAudit",
+  "userFlow": {
+    "steps": [ ....]
+  }
+}
+```
+
+### User Flow Verification
+
+To verify that the pages you audit are the correct ones, we suggest using both Chrome extensions Green-IT Analysis (https://chrome.google.com/webstore/detail/greenit-analysis/mofbfhffeklkbebfclfaiifefjflcpad?hl=fr) and Google Lighthouse (https://chrome.google.com/webstore/detail/lighthouse/blipmdconlkpinefehnmjammfjpmpbjk?hl=fr) and comparing their results to the EcoSonar audits. They should be almost identical.
+If that is not the case, do not hesitate to contact us so we can help you.
+
+<a name="launch"></a>
+
 ## Launching an EcoSonar Analysis
 
 If your Sonarqube project is linked to a Code Repository with the Continuous Integration pipeline, then the EcoSonar analysis will be launched at the same time as the Sonarqube analysis and will audit the pages you have registered. After the analysis has ended, you will be able to see the dashboard representing the scores of your application.
@@ -38,6 +375,10 @@ If you do not wish to correlate a Sonarqube analysis and an EcoSonar audit, you
 
 Check for the Request called `Launch an EcoSonar Analysis`
 
+Or go to the Swagger user interface available at `[ECOSONAR-API-URL]/swagger/` and choose the endpoint `Launch an EcoSonar Analysis`
+
+<a name="get-analysis"></a>
+
 ## Retrieve an EcoSonar Analysis of your project
 
 ![EcoSonar Analysis Page](./images/ecosonar-analyis-access.webp)
@@ -59,7 +400,6 @@ In the central panel, you will find all the metrics used to calculate the 3 scor
 
 ![EcoSonar Analysis Page charts](./images/ecosonar-analyis-charts.webp)
 
-
 In this first panel, you will find an average of all metrics from your website (sum of all pages). But you can be more precise in your analysis by retrieving the audit page per page with the same amount of details.
 
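As a side note on the user flow format described earlier: before saving a flow, it can be handy to sanity-check the JSON locally against the four interaction types EcoSonar handles. The sketch below is a hypothetical helper (not part of EcoSonar's API), assuming only the step shapes documented above:

```python
# Hypothetical helper (not part of EcoSonar): sanity-checks a user-flow dict
# against the four step types described above.
REQUIRED_KEYS = {
    "navigate": {"url"},
    "change": {"value", "selectors"},
    "click": {"selectors"},
    "scroll": {"distancePercentage"},
}

def validate_user_flow(flow):
    """Return a list of problems found in the user flow (empty list = OK)."""
    errors = []
    for i, step in enumerate(flow.get("steps", [])):
        step_type = step.get("type")
        if step_type not in REQUIRED_KEYS:
            errors.append(f"step {i}: unsupported type {step_type!r}")
            continue
        missing = REQUIRED_KEYS[step_type] - step.keys()
        if missing:
            errors.append(f"step {i}: missing keys {sorted(missing)}")
        if step_type == "scroll" and not 0 <= step.get("distancePercentage", -1) <= 100:
            errors.append(f"step {i}: distancePercentage must be between 0 and 100")
    return errors

def scroll_stop_px(page_height_px, distance_percentage):
    """Where the documented scroll behaviour stops, in pixels from the top."""
    return page_height_px * distance_percentage // 100
```

`scroll_stop_px` reproduces the worked example from the scroll description: a 1080px-high page with `distancePercentage` = 50 stops at 540px.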
![EcoSonar Audit per page](./images/ecosonar-audit-per-page.webp)
@@ -76,13 +416,16 @@ And then, each sheet will summarize audit results for each page of your website
 
 ![EcoSonar Export Project](./images/ecosonar-export-url.webp)
 
-## Retrieve EcoSonar recommendations
+<a name="get-recos"></a>
+
+## Retrieve EcoSonar recommendations 
 
 The last page in the EcoSonar tool is the EcoSonar Best Practices.
 
 ![EcoSonar Best Practices Page Access](./images/ecosonar-best-practices-access.webp)
 
 EcoSonar now lists audits from ecodesign and accessibility best practices coming from :
+
 - Green-IT Analysis and Google Lighthouse Performance for ecodesign purposes
 - Google Lighthouse Accessibility & W3C Validator for accessibility purposes
 
@@ -91,6 +434,7 @@ EcoSonar lists now audits from ecodesign and accessibility best practices coming
 
 ![EcoSonar Best Practices Page Accessibility](./images/ecosonar-best-practices-accessibility.webp)
 
 For each recommendation, you can find the following information:
+
 - Title of the Best Practice
 - Level of implementation (a letter from A to G) : represents whether the best practice has been implemented or not in your project (a score from 0 to 100 is also available through the API)
 - Measured metric in your project/page related to the best practice. The level of implementation has been set by comparing this value to ecodesign standards.
@@ -104,6 +448,7 @@ For each recommendation, you can find the following information:
 
 When first arriving to this page, you will have to choose the right Procedure.
 
A procedure in EcoSonar is a sorting algorithm that will sort your ecodesign best practices according to 3 different configurations:
+
 - `Score Impact` : best practices will be sorted by descending order of implementation (best practices not implemented returned first)
 - `Highest Impact` : best practices will be sorted by order of impact to improve EcoSonar scores (best practices most efficient returned first)
 - `Quick Wins` : best practices will be sorted by ascending order of difficulty (best practices easy to implement returned first)
@@ -113,6 +458,7 @@ Choose the one that will better fit with your priorities. We suggest you if you 
 
 ![EcoSonar Procedure Page](./images/ecosonar-procedure-page.webp)
 
 Once your procedure is chosen, feel free to explore the audits made for your website using the available filters:
+
 - Type of audit : `ecodesign` or `accessibility`
 - Audit Tool : `Green-IT Analysis`, `Google Lighthouse Performance`, `Google Lighthouse Accessibility` or `W3C Validator`
 - Levels : `A`, `B`, `C`, `D`, `E`, `F`, `G` and `N.A` (by default `A` and `N.A` best practices will not be displayed)
@@ -120,9 +466,11 @@ Once your procedure chosen, feel free to use and discover the several audits mad
 
 ![EcoSonar Audit Filters](./images/ecosonar-audit-filters.webp)
 
+<a name="get-green-code-smells"></a>
+
 ## Retrieve Green Coding Rules to implement
 
-EcoSonar now integrates Ecocode green coding rules to help you code greener. This functionality comes in addition to default coding rules audited through a SonarQube analysis. Right now, 3 languages are supportes : Java, PHP and Python.
+EcoSonar now integrates Ecocode green coding rules to help you code greener. This functionality comes in addition to default coding rules audited through a SonarQube analysis. Right now, 7 languages are supported : Java, PHP, Python, JavaScript, TypeScript, Android and iOS/Swift.
 
 Let's take a Java project as an example.
 
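As a side note on the procedures described earlier: each procedure is simply a different sort order over the same list of best practices. A rough sketch with made-up records (the field names are illustrative only, not EcoSonar's actual data model):

```python
# Made-up best-practice records; field names are illustrative only.
practices = [
    {"name": "Optimize images", "compliance": 20, "score_gain": 8, "difficulty": 3},
    {"name": "Minify CSS", "compliance": 90, "score_gain": 1, "difficulty": 1},
    {"name": "HTTP caching", "compliance": 50, "score_gain": 5, "difficulty": 2},
]

def apply_procedure(practice_list, procedure):
    """Sort best practices the way the chosen EcoSonar procedure is described."""
    keys = {
        "Score Impact": lambda p: p["compliance"],     # least implemented first
        "Highest Impact": lambda p: -p["score_gain"],  # most efficient first
        "Quick Wins": lambda p: p["difficulty"],       # easiest first
    }
    return sorted(practice_list, key=keys[procedure])
```
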
@@ -142,4 +490,4 @@ When you want to resolve a code smell, you can click directly into it and it wil
 
 To get some documentation on how to solve the code smell, you can click on the link `Why is this an issue ?` and a pop-up will be displayed explaining why this issue relates to an ecodesign best practice, with examples of compliant and non-compliant code to help you in your implementation.
 
-![EcoSonar Code Smell Correction](./images/ecosonar-code-smell-correction.webp)
\ No newline at end of file
+![EcoSonar Code Smell Correction](./images/ecosonar-code-smell-correction.webp)
diff --git a/images/delete-project.webp b/images/delete-project.webp
new file mode 100644
index 0000000..ca414a1
Binary files /dev/null and b/images/delete-project.webp differ
diff --git a/images/ecosonar-url-configuration.webp b/images/ecosonar-url-configuration.webp
index c2efafe..4576c4b 100644
Binary files a/images/ecosonar-url-configuration.webp and b/images/ecosonar-url-configuration.webp differ
diff --git a/images/ecosonar-url-crawler-result.webp b/images/ecosonar-url-crawler-result.webp
index f33c17a..971a5d1 100644
Binary files a/images/ecosonar-url-crawler-result.webp and b/images/ecosonar-url-crawler-result.webp differ
diff --git a/images/ecosonar-url-crawler-setup.webp b/images/ecosonar-url-crawler-setup.webp
index d93d361..62fbc8d 100644
Binary files a/images/ecosonar-url-crawler-setup.webp and b/images/ecosonar-url-crawler-setup.webp differ
diff --git a/images/ecosonar-url-page.webp b/images/ecosonar-url-page.webp
new file mode 100644
index 0000000..9d3e7d1
Binary files /dev/null and b/images/ecosonar-url-page.webp differ
diff --git a/images/get-best-practices-documentation.webp b/images/get-best-practices-documentation.webp
new file mode 100644
index 0000000..a6ef2d6
Binary files /dev/null and b/images/get-best-practices-documentation.webp differ
diff --git a/images/get-crawler-result.webp b/images/get-crawler-result.webp
index 9de78ae..003af81 100644
Binary files 
a/images/get-crawler-result.webp and b/images/get-crawler-result.webp differ diff --git a/images/get-version.webp b/images/get-version.webp new file mode 100644 index 0000000..75867c4 Binary files /dev/null and b/images/get-version.webp differ diff --git a/images/launch-crawler.webp b/images/launch-crawler.webp new file mode 100644 index 0000000..c675737 Binary files /dev/null and b/images/launch-crawler.webp differ diff --git a/images/save-login-and-proxy-for-project.webp b/images/save-login-and-proxy-for-project.webp deleted file mode 100644 index f1df744..0000000 Binary files a/images/save-login-and-proxy-for-project.webp and /dev/null differ diff --git a/images/save-login-for-project.webp b/images/save-login-for-project.webp new file mode 100644 index 0000000..338d8c4 Binary files /dev/null and b/images/save-login-for-project.webp differ diff --git a/images/save-proxy-for-project.webp b/images/save-proxy-for-project.webp new file mode 100644 index 0000000..661c1cd Binary files /dev/null and b/images/save-proxy-for-project.webp differ