Commit

Merge pull request #29 from chip-csg/csg/merge_test_event_2.2
Merging TE2.2 CSG additions
wahajsyed authored Jun 3, 2021
2 parents bb8a6ef + ba381d0 commit 463ea78
Showing 12 changed files with 510 additions and 0 deletions.
87 changes: 87 additions & 0 deletions src/controller/python/DEVELOPMENT_GUIDE.md
@@ -0,0 +1,87 @@
# Admin Flow

* The CHIP SDK is maintained on a main branch (`master`), with release branches cut for specific deliverables like `test_event2.2`.
* CSG fork of the CHIP SDK is maintained with all existing main and release branches left untouched and pristine.
* All CSG-specific changes should be made on newly created branches (with a `csg/` prefix) based on these release branches, e.g. `test_event2.2` -> `csg/test_event2.2`.
* Only admins on the repo have access to pull in changes from the upstream repo and update all local main and release branches using the following commands or equivalent:
```bash
git fetch upstream
git pull upstream
git push origin
```
* Once updated, create a local copy of the release branch with a `csg/` prefix to hold our changes. This branch will be the main merge target for all PRs in this dev period.
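This branch-creation step can be sketched as follows; branch and remote names follow the convention above (`origin` is the CSG fork, `test_event2.2` the release branch):

```shell
# Create the CSG working copy of a release branch and publish it.
git fetch origin
git checkout -b csg/test_event2.2 origin/test_event2.2
git push -u origin csg/test_event2.2
```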

# Development Flow

* Developers must create a branch with a `csg/fix` or `csg/feature` prefix and submit their PRs to be merged only into the CSG version of the release branch.
* If the admins update the branches from upstream, developers must rebase their changes and resolve any conflicts locally before submitting the PR.
* Changes should be additive, isolated, and minimal; effort should be made to minimize the footprint of changes on the existing code.
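The rebase step above can be sketched as follows; the branch names are illustrative, not prescribed by this guide:

```shell
# Rebase a developer branch onto the refreshed CSG release branch.
git fetch origin
git checkout csg/fix/my-change
git rebase origin/csg/test_event2.2   # resolve any conflicts here
git push --force-with-lease origin csg/fix/my-change
```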

# Building

* At the root directory:
```bash
source scripts/activate.sh
gn gen out/debug
ninja -C out/debug
```
* A `.whl` file is generated at `out/debug/controller/python/chip-0.0-cp37-abi3-linux_aarch64.whl`.
* Copy this wheel file into the sample test client folder or into the Test Harness controller Dockerfile folder.
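The copy step can be sketched as below; the destination is the `test_client` folder added in this PR, but any folder containing the target Dockerfile works:

```shell
# Put the built wheel next to the test client Dockerfile so the image build can ADD it.
cp out/debug/controller/python/chip-0.0-cp37-abi3-linux_aarch64.whl \
   src/controller/python/test_client/
```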

# Testing with a real accessory
* You need to map the system D-Bus into the Docker container and run Docker with `--privileged` to be able to communicate with the real accessory.
* Launch a Docker container for the chip device controller.
Sample Dockerfile used: https://docs.google.com/document/d/1-CMDrKEBGnHhSvAUMHH7ZSRDiKhuu2iwI-GYVUjI7DY/edit?resourcekey=0-VTKXbTk9YYQmdq10xa_f7g#heading=h.yzihj7w54f7a
```bash
docker run --name docker-python-controller -v /var/run/dbus:/var/run/dbus --privileged -it <Docker Image>
```

* From the launched Python controller Docker container, run the chip device controller with the BLE adapter parameter set to `hci0`:
```bash
chip-device-ctrl --bluetooth-adapter=hci0
```
* You should now be able to run `ble-scan` and `connect -ble` actions from the chip device controller.

# Testing with virtual BLE accessory
* For this you will need to create virtual BLE interfaces in the Test Harness Docker dev container.
* You need to map the system D-Bus into the Docker container and run Docker with `--privileged` to be able to create the virtual BLE interfaces using BlueZ.

* Setting up the virtual BLE interfaces from inside a Docker container, for use by other Docker containers on the same host:
The BlueZ version in Ubuntu appears to match the one used by the CHIP repo, so we don't have to build `bluetoothd` ourselves.
`bluetoothd` starts on the host at boot, so we in fact skip the `bluetoothd` setup.

You still need to install BlueZ inside the container.

* Starting with a clean Raspberry Pi host after boot:
```bash
ubuntu@ubuntu:~$ ps -a
  PID TTY          TIME CMD
 2357 pts/0    00:00:00 ps

ubuntu@ubuntu:~$ hciconfig
hci0:   Type: Primary  Bus: UART
        BD Address: B8:27:EB:C4:E2:93  ACL MTU: 1021:8  SCO MTU: 64:1
        UP RUNNING
        RX bytes:794 acl:0 sco:0 events:52 errors:0
        TX bytes:2760 acl:0 sco:0 commands:52 errors:0
```

* Bringing up the virtual interfaces in Docker container 1:
Follow the instructions in the chip-device-controller documentation:
```bash
cd third_party/bluez/repo/
./bootstrap
./configure --prefix=/usr --mandir=/usr/share/man --sysconfdir=/etc --localstatedir=/var --enable-experimental --with-systemdsystemunitdir=/lib/systemd/system --with-systemduserunitdir=/usr/lib/systemd --enable-deprecated --enable-testing --enable-tools
make
emulator/btvirt -L -l<Number of Virtual BLE interfaces to be created> &
```
* Check `hciconfig` to confirm that the requested number of virtual BLE interfaces has been created.

* Launch a Docker container for the CHIP Linux lighting app.
Sample Dockerfile used: https://docs.google.com/document/d/1xOizHV3ZeG_mu70CJWp-tN4xYoDxRhjEnwFKzhkIiC0/edit#heading=h.9hjqswbm7x50
```bash
docker run --name docker-chip-lighting-app -v /var/run/dbus:/var/run/dbus --privileged -it <Docker Image>
```

* Launch the Linux lighting app in the docker-chip-lighting-app Docker container, specifying the ble-device parameter as follows:
```bash
./chip-lighting-app --ble-device 1 --wifi
```

* Launch the chip device controller in a different container (e.g. the docker-python-controller container mentioned above in "Testing with a real accessory").
* Run the chip device controller with `--bluetooth-adapter` set to one of the virtual BLE interfaces (e.g. `hci2`; this interface has to be different from the one used by the virtual BLE accessory):
```bash
chip-device-ctrl --bluetooth-adapter=hci2
```
* You should now be able to run `ble-scan` and `connect -ble` actions from the chip device controller.

# Testing with a hybrid setup of real and virtual BLE accessories
* With a real accessory alongside a virtual BLE accessory, you can interact with or control only one accessory at a time. In such a hybrid setup you have to switch between BLE adapters in the chip device controller to handle the respective accessories.
* To switch between BLE interfaces, run `ble-adapter-select <adapter>` (e.g. `hci1` to scan and connect to the virtual accessory, `hci0` to scan and connect to the real accessory).
164 changes: 164 additions & 0 deletions src/controller/python/chip-device-ctrl.py
@@ -43,7 +43,12 @@
from cmd import Cmd
from chip.ChipBleUtility import FAKE_CONN_OBJ_VALUE
from chip.setup_payload import SetupPayload
from xmlrpc.server import SimpleXMLRPCServer
from enum import Enum
from typing import Any, Dict, Optional

# Extend sys.path with one or more directories, relative to the location of the
# running script, in which the chip package might be found . This makes it
# possible to run the device manager shell from a non-standard install location,
@@ -71,6 +76,16 @@
elif sys.platform.startswith('linux'):
from chip.ChipBluezMgr import BluezManager as BleManager


class StatusCodeEnum(Enum):
    SUCCESS = 0
    FAILED = 1

class RPCResponseKeyEnum(Enum):
    STATUS = "status"
    RESULT = "result"
    ERROR = "error"

# The exceptions for CHIP Device Controller CLI


@@ -719,8 +734,157 @@ def do_EOF(self, line):
    def emptyline(self):
        pass

### Additions needed by the Test Harness Tool ###
# TODO: Implement a custom device manager instead of using the existing manager object
# https://github.com/chip-csg/connectedhomeip/issues/8
device_manager = DeviceMgrCmd(rendezvousAddr=None,
                              controllerNodeId=0, bluetoothAdapter=0)


# CHIP commands needed by the Harness Tool
def echo_alive(message):
    print(message)
    return message


def resolve(fabric_id: int, node_id: int) -> Dict[str, Any]:
    try:
        __check_supported_os()
        err = device_manager.devCtrl.ResolveNode(fabric_id, node_id)
        if err != 0:
            return __get_response_dict(status=StatusCodeEnum.FAILED, error=f"Failed to resolve node, with error code: {err}")

        address = device_manager.devCtrl.GetAddressAndPort(node_id)
        if address is not None:
            address = "{}:{}".format(*address)
        return __get_response_dict(status=StatusCodeEnum.SUCCESS, result={'address': address})

    except Exception as e:
        return __get_response_dict(status=StatusCodeEnum.FAILED, error=str(e))

def zcl_add_network(node_id: int, ssid: str, password: str, endpoint_id: Optional[int] = 1, group_id: Optional[int] = 0, breadcrumb: Optional[int] = 0, timeoutMs: Optional[int] = 1000) -> Dict[str, Any]:
    try:
        __check_supported_os()
        args = {}
        args['ssid'] = ssid.encode("utf-8") + b'\x00'
        args['credentials'] = password.encode("utf-8") + b'\x00'
        args['breadcrumb'] = breadcrumb
        args['timeoutMs'] = timeoutMs
        err, res = device_manager.devCtrl.ZCLSend("NetworkCommissioning", "AddWiFiNetwork", node_id, endpoint_id, group_id, args, blocking=True)
        if err != 0:
            return __get_response_dict(status=StatusCodeEnum.FAILED)
        elif res is not None:
            return __get_response_dict(status=StatusCodeEnum.SUCCESS, result=str(res))
        else:
            return __get_response_dict(status=StatusCodeEnum.SUCCESS)

    except Exception as e:
        return __get_response_dict(status=StatusCodeEnum.FAILED, error=str(e))

def zcl_enable_network(node_id: int, ssid: str, endpoint_id: Optional[int] = 1, group_id: Optional[int] = 0, breadcrumb: Optional[int] = 0, timeoutMs: Optional[int] = 1000) -> Dict[str, Any]:
    try:
        __check_supported_os()
        args = {}
        args['networkID'] = ssid.encode("utf-8") + b'\x00'
        args['breadcrumb'] = breadcrumb
        args['timeoutMs'] = timeoutMs

        err, res = device_manager.devCtrl.ZCLSend("NetworkCommissioning", "EnableNetwork", node_id, endpoint_id, group_id, args, blocking=True)
        if err != 0:
            return __get_response_dict(status=StatusCodeEnum.FAILED)
        else:
            return __get_response_dict(status=StatusCodeEnum.SUCCESS, result=str(res))

    except Exception as e:
        return __get_response_dict(status=StatusCodeEnum.FAILED, error=str(e))

def ble_scan():
    try:
        __check_supported_os()
        device_manager.do_blescan("")
        return __get_response_dict(status=StatusCodeEnum.SUCCESS, result=__get_peripheral_list())
    except Exception as e:
        return __get_response_dict(status=StatusCodeEnum.FAILED, error=str(e))

def __get_peripheral_list() -> list:
    device_list = []
    for device in device_manager.bleMgr.peripheral_list:
        device_detail = {}
        devIdInfo = device_manager.bleMgr.get_peripheral_devIdInfo(device)
        if devIdInfo is not None:
            device_detail['name'] = str(device.Name)
            device_detail['id'] = str(device.device_id)
            device_detail['rssi'] = str(device.RSSI)
            device_detail['address'] = str(device.Address)
            device_detail['pairing_state'] = devIdInfo.pairingState
            device_detail['discriminator'] = devIdInfo.discriminator
            device_detail['vendor_id'] = devIdInfo.vendorId
            device_detail['product_id'] = devIdInfo.productId
        if device.ServiceData:
            for advuuid in device.ServiceData:
                device_detail['adv_uuid'] = str(advuuid)
        device_list.append(device_detail)
    return device_list

def ble_connect(discriminator: int, pin_code: int, node_id: int) -> Dict[str, Any]:
    try:
        __check_supported_os()
        device_manager.devCtrl.ConnectBLE(discriminator, pin_code, node_id)
        return __get_response_dict(status=StatusCodeEnum.SUCCESS)
    except Exception as e:
        return __get_response_dict(status=StatusCodeEnum.FAILED, error=str(e))

def ip_connect(ip_address: str, pin_code: int, node_id: int) -> Dict[str, Any]:
    try:
        __check_supported_os()
        device_manager.devCtrl.ConnectIP(ip_address.encode("utf-8"), pin_code, node_id)
        return __get_response_dict(status=StatusCodeEnum.SUCCESS)
    except Exception as e:
        return __get_response_dict(status=StatusCodeEnum.FAILED, error=str(e))

def qr_code_parse(qr_code):
    try:
        result = SetupPayload().ParseQrCode(qr_code).Dictionary()
        return __get_response_dict(status=StatusCodeEnum.SUCCESS, result=result)
    except Exception as e:
        return __get_response_dict(status=StatusCodeEnum.FAILED, error=str(e))

def start_rpc_server():
    with SimpleXMLRPCServer(("0.0.0.0", 5000), allow_none=True) as server:
        server.register_function(echo_alive)
        server.register_function(ble_scan)
        server.register_function(ble_connect)
        server.register_function(ip_connect)
        server.register_function(zcl_add_network)
        server.register_function(zcl_enable_network)
        server.register_function(resolve)
        server.register_function(qr_code_parse)
        server.register_multicall_functions()
        print('Serving XML-RPC on 0.0.0.0 port 5000')
        try:
            server.serve_forever()
        except KeyboardInterrupt:
            print("\nKeyboard interrupt received, exiting.")
            sys.exit(0)

def __get_response_dict(status: StatusCodeEnum, result: Optional[Dict[Any, Any]] = None, error: Optional[str] = None) -> Dict[str, Any]:
    return {RPCResponseKeyEnum.STATUS.value: status.value, RPCResponseKeyEnum.RESULT.value: result, RPCResponseKeyEnum.ERROR.value: error}
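Every RPC here returns the same envelope built by `__get_response_dict`. A minimal standalone sketch of that shape (enums copied from the snippet above; the address value is illustrative):

```python
from enum import Enum
from typing import Any, Dict, Optional

class StatusCodeEnum(Enum):
    SUCCESS = 0
    FAILED = 1

class RPCResponseKeyEnum(Enum):
    STATUS = "status"
    RESULT = "result"
    ERROR = "error"

def get_response_dict(status: StatusCodeEnum,
                      result: Optional[Dict[Any, Any]] = None,
                      error: Optional[str] = None) -> Dict[str, Any]:
    # Same envelope as __get_response_dict above, minus the name mangling.
    return {RPCResponseKeyEnum.STATUS.value: status.value,
            RPCResponseKeyEnum.RESULT.value: result,
            RPCResponseKeyEnum.ERROR.value: error}

ok = get_response_dict(StatusCodeEnum.SUCCESS, result={"address": "192.0.2.1:5540"})
print(ok)  # {'status': 0, 'result': {'address': '192.0.2.1:5540'}, 'error': None}
```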

def __check_supported_os() -> bool:
    if platform.system().lower() == 'darwin':
        raise Exception(platform.system() + " not supported")
    elif sys.platform.lower().startswith('linux'):
        return True

    raise Exception("OS Not Supported")
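A self-contained round trip against a `SimpleXMLRPCServer`, mirroring how a Test Harness client would call the functions registered in `start_rpc_server`. The port is chosen dynamically here, whereas the server above binds port 5000; only `echo_alive` is reproduced:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def echo_alive(message):
    return message

# Bind port 0 so the OS picks a free port (the real server uses 5000).
server = SimpleXMLRPCServer(("127.0.0.1", 0), allow_none=True, logRequests=False)
server.register_function(echo_alive)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy(f"http://127.0.0.1:{port}", allow_none=True)
result = proxy.echo_alive("ping")
print(result)  # ping
server.shutdown()
```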

######--------------------------------------------------######

def main():
    start_rpc_server()

    # Never reach here
    optParser = OptionParser()
    optParser.add_option(
        "-r",
9 changes: 9 additions & 0 deletions src/controller/python/chip/setup_payload/setup_payload.py
@@ -95,3 +95,12 @@ def __InitNativeFunctions(self, chipLib):
        setter.Set("pychip_SetupPayload_ParseManualPairingCode",
                   c_int32,
                   [c_char_p, SetupPayload.AttributeVisitor, SetupPayload.VendorAttributeVisitor])

######----------------------------------------------------------------------------------------######

    def Dictionary(self):
        payload_dict = {}
        attributes_array = self.attributes + self.vendor_attributes
        for name, value in attributes_array:
            payload_dict[name] = value
        return payload_dict
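`Dictionary()` simply flattens the `(name, value)` pairs collected by the attribute visitors into one dict. A standalone sketch with made-up payload values (real values come from `ParseQrCode()`):

```python
# Hypothetical attribute lists, illustrating the flattening only.
attributes = [("Version", 0), ("VendorID", 9050), ("ProductID", 65279),
              ("Discriminator", 3840), ("SetUpPINCode", 20202021)]
vendor_attributes = [("VendorData", "example-blob")]

payload_dict = {}
for name, value in attributes + vendor_attributes:
    payload_dict[name] = value

print(payload_dict["Discriminator"])  # 3840
```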
25 changes: 25 additions & 0 deletions src/controller/python/test_client/.dockerignore
@@ -0,0 +1,25 @@
**/__pycache__
**/.classpath
**/.dockerignore
**/.env
**/.git
**/.gitignore
**/.project
**/.settings
**/.toolstarget
**/.vs
**/.vscode
**/*.*proj.user
**/*.dbmdl
**/*.jfm
**/azds.yaml
**/bin
**/charts
**/docker-compose*
**/Dockerfile*
**/node_modules
**/npm-debug.log
**/obj
**/secrets.dev.yaml
**/values.dev.yaml
README.md
78 changes: 78 additions & 0 deletions src/controller/python/test_client/Dockerfile
@@ -0,0 +1,78 @@
# getting base image for ubuntu
FROM ubuntu:20.10

# To override
ENV TZ=America/Los_Angeles
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Executed during Image build
USER root
RUN apt-get update

# Install Dependencies for setting up CHIP environment
RUN apt-get install -y apt-utils \
git \
gcc \
g++ \
python \
pkg-config \
libssl-dev \
libdbus-1-dev \
libglib2.0-dev \
libavahi-client-dev \
libgirepository1.0-dev \
ninja-build \
python3-venv \
python3-dev \
python3-pip \
unzip

# Additional Dependencies for Chip device controller
RUN apt-get install -y autotools-dev \
automake \
libusb-dev \
libudev-dev \
libical-dev \
libreadline-dev \
libdbus-glib-1-dev \
libcairo2-dev \
libtool \
bluez \
bluetooth \
python3-gi \
python-is-python3 \
python3 \
pi-bluetooth \
avahi-daemon \
libavahi-client-dev \
build-essential \
protobuf-compiler \
wpasupplicant \
wireless-tools \
rfkill \
libcairo2-dev \
python3-widgetsnbextension \
python3-testresources \
isc-dhcp-client \
avahi-utils \
iproute2 \
iputils-ping

ADD requirements.txt .
RUN pip3 install -r requirements.txt --no-binary :all:

# Wheel needs to be in the same dir as the dockerfile
ADD chip-0.0-cp37-abi3-linux_aarch64.whl .
RUN pip3 install chip-0.0-cp37-abi3-linux_aarch64.whl
WORKDIR /chip_tool

# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser /chip_tool
USER appuser

# Set PYTHONPATH
ENV PYTHONPATH="/usr/bin/python3"

# Executed when you start/launch a container
# CMD bash
CMD ["chip-device-ctrl"]
