Commit

added logger and cleanup
shashankshampi committed Oct 30, 2024
1 parent e7d05e3 commit adf6796
Showing 4 changed files with 114 additions and 54 deletions.
106 changes: 84 additions & 22 deletions tests-functional/README.MD
@@ -1,41 +1,103 @@

## Overview

Functional tests for `status-go`

## Table of Contents

- [Overview](#overview)
- [How to Install](#how-to-install)
- [How to Run](#how-to-run)
- [Running Tests](#running-tests)
- [Implementation Details](#implementation-details)
- [Build Status Backend](#build-status-backend)

## How to Install

1. Install [Docker](https://docs.docker.com/engine/install/) and [Docker Compose](https://docs.docker.com/compose/install/)
2. Install [Python 3.10.14](https://www.python.org/downloads/)
3. **Set up a virtual environment (recommended):**
- In `./tests-functional`, run:
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
- **Optional (for test development)**: a guide to managing Python virtual environments with `pyenv` is available [here](https://akrabat.com/creating-virtual-environments-with-pyenv/)
4. Install pre-commit hooks (optional):
```bash
pre-commit install
```

## How to Run

### Running dev RPC (Anvil with contracts)

In `./tests-functional`:
```bash
docker compose -f docker-compose.anvil.yml up --remove-orphans --build
```

This command will:
- Start an [Anvil](https://book.getfoundry.sh/reference/anvil/) container with ChainID `31337`, exposed on `0.0.0.0:8545`
- Deploy Status-im contracts to the Anvil network
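Once the containers are up, the endpoint can be sanity-checked with a minimal JSON-RPC call. A standard-library-only sketch (the URL and ChainID come from the compose setup above; the helper names are illustrative, not part of the test suite):

```python
import json
import urllib.request

ANVIL_URL = "http://127.0.0.1:8545"  # exposed by docker-compose.anvil.yml


def build_rpc_request(method, params=None, request_id=1):
    # Minimal JSON-RPC 2.0 envelope
    return {"jsonrpc": "2.0", "method": method, "params": params or [], "id": request_id}


def anvil_chain_id(url=ANVIL_URL):
    # eth_chainId returns a hex string, e.g. "0x7a69" for ChainID 31337
    payload = json.dumps(build_rpc_request("eth_chainId")).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return int(json.loads(resp.read())["result"], 16)
```

With the Anvil container running, `anvil_chain_id()` should return `31337`.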

### Running Tests

To run the tests:

1. In `./tests-functional`, start the testing containers:
```bash
docker compose -f docker-compose.anvil.yml -f docker-compose.test.status-go.yml -f docker-compose.status-go.local.yml up --build --remove-orphans
```

This command will:
- Create a container with [status-go as daemon](https://github.com/status-im/status-go/issues/5175), exposing `APIModules` on `0.0.0.0:3333`
- Configure `status-go` to use [Anvil](https://book.getfoundry.sh/reference/anvil/) as the `RPCURL` with ChainID `31337`
- Deploy all Status-im contracts to the Anvil network

2. To execute tests:
- Run all tests:
```bash
pytest
```
- Run tests marked as `wallet`:
```bash
pytest -m wallet
```
- Run a specific test:
```bash
pytest -k "test_contact_request_baseline"
```
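Markers like `wallet` are ordinary pytest marks selected by `pytest -m`. A minimal sketch of what a marked test looks like (the test body is a hypothetical placeholder; real tests issue JSON-RPC calls):

```python
import pytest

# Markers are typically registered in pytest.ini so that `pytest -m wallet`
# selects them without "unknown marker" warnings, e.g.:
#   [pytest]
#   markers =
#       wallet: tests for wallet RPC methods


@pytest.mark.wallet
def test_wallet_example():
    # placeholder assertion; a real test would validate an RPC response
    assert True
```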

## Implementation Details

- Functional tests are implemented in `./tests-functional/tests` using [pytest](https://docs.pytest.org/en/8.2.x/).
- Each test performs two types of verifications:
- **`verify_is_valid_json_rpc_response()`**: Checks for a status code `200`, a non-empty response, JSON-RPC structure, presence of the `result` field, and the expected ID.
- **`jsonschema.validate()`**: Validates that the response contains expected data, including required fields and types. Schemas are stored in `/schemas/wallet_MethodName`.
- **Schema Generation**:
- New schemas can be generated with `./tests-functional/schema_builder.py` by passing a response to the `CustomSchemaBuilder(schema_name).create_schema(response.json())` method. This should be used only during test creation.
- Search `how to create schema:` in test files for examples.
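The two verification layers described above can be sketched as follows. This is a simplified stand-in: the real suite uses `jsonschema.validate()` against schemas stored in `/schemas/wallet_MethodName`, whereas here the schema check is reduced to required fields and types:

```python
def verify_is_valid_json_rpc_response(body, expected_id):
    # Mirrors the checks listed above: JSON-RPC structure, a `result` field,
    # and the expected id (the status code 200 check lives on the HTTP layer)
    assert body.get("jsonrpc") == "2.0"
    assert "result" in body
    assert body.get("id") == expected_id


def validate_against_schema(body, required, types):
    # Reduced stand-in for jsonschema.validate(): required fields plus types
    for field in required:
        assert field in body, f"missing required field: {field}"
    for field, expected_type in types.items():
        assert isinstance(body[field], expected_type), f"wrong type for {field}"


response = {"jsonrpc": "2.0", "id": 1, "result": {"balance": "0x0"}}
verify_is_valid_json_rpc_response(response, expected_id=1)
validate_against_schema(
    response,
    required=["jsonrpc", "id", "result"],
    types={"result": dict, "id": int},
)
```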
## Build Status Backend
You can build the binary with the following command in the `status-go` root directory:
```bash
make status-backend
```

For further details on building and setting up `status-go` and `status-backend`, refer to the official documentation:
- [status-backend README](https://github.com/status-im/status-go/blob/develop/cmd/status-backend/README.md)
- [status-go cmd directory](https://github.com/status-im/status-go/tree/develop/cmd/status-backend)

Location of the binary: `cmd/status-backend/status-backend`

53 changes: 30 additions & 23 deletions tests-functional/clients/signals.py
@@ -9,53 +9,60 @@ class SignalClient:

def __init__(self, ws_url, await_signals):
self.url = f"{ws_url}/signals"

self.await_signals = await_signals
self.received_signals = {signal: [] for signal in self.await_signals}

def on_message(self, ws, signal):
logger = logging.getLogger(__name__)

signal_data = json.loads(signal)
signal_type = signal_data.get("type")

logger.info(f"Received signal: {signal_data}")

if signal_type in self.await_signals:
self.received_signals[signal_type].append(signal_data)
# logger.debug(f"Signal {signal_type} stored: {signal_data}")

def wait_for_signal(self, signal_type, expected_event=None, timeout=20):
"""
Wait for a signal of a specific type with optional expected event details.
"""
logger = logging.getLogger(__name__)
start_time = time.time()
while time.time() - start_time < timeout:
# Check if the signal list for this type has received signals
if self.received_signals.get(signal_type):
received_signal = self.received_signals[signal_type][0]
if expected_event:
# Check if the event in received_signal matches expected_event
event = received_signal.get("event", {})
if all(event.get(k) == v for k, v in expected_event.items()):
logger.info(f"Signal {signal_type} with event {expected_event} received and matched.")
return received_signal
else:
logger.debug(
f"Signal {signal_type} received but event did not match expected event: {expected_event}. Received event: {event}")
else:
logger.info(f"Signal {signal_type} received without specific event validation.")
return received_signal
time.sleep(0.2)

# If no matching signal is found within the timeout
raise TimeoutError(f"Signal {signal_type} with event {expected_event} not received in {timeout} seconds")

def _on_error(self, ws, error):
logger = logging.getLogger(__name__)
logger.error(f"WebSocket error: {error}")

def _on_close(self, ws, close_status_code, close_msg):
logger = logging.getLogger(__name__)
logger.info(f"WebSocket connection closed: {close_status_code}, {close_msg}")

def _on_open(self, ws):
logger = logging.getLogger(__name__)
logger.info("WebSocket connection opened")

def _connect(self):
ws = websocket.WebSocketApp(
self.url,
on_message=self.on_message,
on_error=self._on_error,
on_close=self._on_close
)
ws.on_open = self._on_open
ws.run_forever()
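The matching rule inside `wait_for_signal` is a subset comparison: every key in `expected_event` must appear in the received event with an equal value, while extra keys in the received event are ignored. As a standalone sketch of that predicate:

```python
def event_matches(event, expected_event):
    # Subset match: all expected keys present with equal values;
    # keys that only appear in the received event are ignored
    return all(event.get(k) == v for k, v in expected_event.items())


received = {"requestId": "", "peerId": "", "batchIndex": 0, "numBatches": 1}
assert event_matches(received, {"batchIndex": 0, "numBatches": 1})
assert not event_matches(received, {"batchIndex": 1})
```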
1 change: 0 additions & 1 deletion tests-functional/src/steps/common.py
@@ -14,7 +14,6 @@
class StepsCommon:
@pytest.fixture(scope="function", autouse=False)
def start_1_node(self):
account_data = {
**ACCOUNT_PAYLOAD_DEFAULTS,
"rootDataDir": LOCAL_DATA_DIR1,
8 changes: 0 additions & 8 deletions tests-functional/tests/test_contact_request.py
@@ -57,14 +57,12 @@ def test_contact_request_baseline(self):
# Validate contact requests
missing_contact_requests = []
for first_node, second_node, display_name, index in nodes:
result = self.send_and_wait_for_message((first_node, second_node), display_name, index, timeout_secs)
timestamp, message_id, contact_request_message, response = result

if not response:
missing_contact_requests.append((timestamp, contact_request_message, message_id))
else:
validator = ContactRequestValidator(response)
validator.run_all_validations(
expected_chat_id=first_node.pubkey,
@@ -86,15 +84,12 @@ def send_and_wait_for_message(self, nodes, display_name, index, timeout=45):
first_node_pubkey = first_node.get_pubkey(display_name)
contact_request_message = f"contact_request_{index}"

timestamp, message_id = self.send_with_timestamp(
second_node.send_contact_request, first_node_pubkey, contact_request_message
)

response = second_node.send_contact_request(first_node_pubkey, contact_request_message)

expected_event_started = {"requestId": "", "peerId": "", "batchIndex": 0, "numBatches": 1}
expected_event_completed = {"requestId": "", "peerId": "", "batchIndex": 0}

@@ -103,14 +98,11 @@ def send_and_wait_for_message(self, nodes, display_name, index, timeout=45):
first_node.wait_for_signal("history.request.completed", expected_event_completed, timeout)
except TimeoutError as e:
logging.error(f"Signal validation failed: {str(e)}")
return timestamp, message_id, contact_request_message, None

first_node.stop()
second_node.stop()

return timestamp, message_id, contact_request_message, response

def test_contact_request_with_latency(self):
