
Second part of one_device_only big merge #40

Closed
wants to merge 6 commits

Conversation

rospogrigio
Owner

Rebasing #32 gave too many conflicts, so I'll close it; I restarted from scratch and applied the second part of the changes, which should be much more readable.
Problems persist in async_setup_entry and async_unload_entry (see comments), and the whole SCHEMA handling still has to be reviewed (I could not fix _convert_entity properly).
@postlund can you please review and help to fix this?
Here is the new YAML format again:

localtuya:
  - host: 192.168.1.x
    device_id: xxxxx
    local_key: xxxxx
    friendly_name: Plug Stereo Wifi
    protocol_version: 3.3
    entities:
      - platform: switch
        friendly_name: Plug Stereo
        id: 1
      - platform: switch
        friendly_name: Plug Stereo USB
        id: 7
      - platform: sensor
        friendly_name: Plug Voltage
        id: 20
        scaling: 0.1
        unit_of_measurement: 'V'

  - host: 192.168.1.x
    device_id: xxxxx
    local_key: xxxxx
    friendly_name: Tapparelle studio
    protocol_version: 3.3
    entities:
      - platform: cover
        friendly_name: Tapparelle studio Cover
        id: 1
      - platform: switch
        friendly_name: Tapparelle studio Backlight
        id: 101

This commit introduces the use of tox, which currently only verifies
that code is formatted according to the black project. More checks will
come in upcoming PRs for codespell, pydocstyle, flake8 and mypy.

A GitHub Action is included that runs tox on new pull requests, so we
get automatic feedback and can block commits that fail any check.
Another (official) action that runs hassfest is included as well, to
make sure we are compatible with Home Assistant.
@postlund
Collaborator

I have started to look at it, will continue with it as my next thing.

@postlund
Collaborator

You have a few of my commits in there, please remove them.

@rospogrigio
Owner Author

You have a few of my commits in there, please remove them.

Which commits? And how can I remove them?

Collaborator

@postlund left a comment

I have added a few comments. You can also remove setup_platform (and the import of import_from_yaml) in all platform files, as well as the module comment at the top that documents the now-unsupported YAML config format.

for interface_config in entities_to_setup:
    # this has to be done in case the device type is type_0d
    tuyainterface.add_dps_to_request(interface_config[CONF_ID])
tuyaDevice = hass.data[DOMAIN].get(config_entry.data[CONF_DEVICE_ID])
Collaborator

You should do something like this:

tuya_device = hass.data[DOMAIN][config_entry.entry_id][TUYA_DEVICE]

See comments in async_setup_entry regarding this.

Also, use snake_case for the variable name (tuya_device). Same in multiple places.

@@ -34,6 +34,17 @@

UNSUB_LISTENER = "unsub_listener"

DEVICE_SCHEMA = {
Collaborator

We need to provide a schema for the component that takes a list of these. Here is an example:

https://github.com/postlund/hass-atv-beta/blob/6e8190ce9d4ded0b03e92def88c7d583f4580efb/custom_components/apple_tv/__init__.py#L69

Owner Author

Did not get what you're saying here, maybe it's because it's late...

Collaborator

Just like we provide PLATFORM_SCHEMA for each platform to define what you are allowed to write in configuration.yaml, we now need to remove all of them and add a schema to the component that verifies the same thing, but at the "domain" level. The component schema basically validates:

localtuya:
  what_you_can_write_here

We basically want to ensure that what_you_can_write_here is a list of DEVICE_SCHEMA items. A thing that kinda passed me by is that we need to do some kind of special validation for the entities, so you can't write configuration for a light entity under switch, for instance. I'm not sure how to do that and will have to investigate a bit. It must be done dynamically though.
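
For illustration, a minimal sketch of such a component-level schema (assuming DOMAIN and DEVICE_SCHEMA as discussed above; not the final code):

import voluptuous as vol
import homeassistant.helpers.config_validation as cv

# the domain key simply takes a list of DEVICE_SCHEMA items
CONFIG_SCHEMA = vol.Schema(
    {DOMAIN: vol.All(cv.ensure_list, [DEVICE_SCHEMA])},
    extra=vol.ALLOW_EXTRA,
)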

Owner Author

Yeah, this is where I really need your help because the Schema mechanism is not clear to me. I still don't get where it is used and how the defaults are being populated, in particular when importing from YAML...

Collaborator

Yeah, it's a bit tricky since it's part of the architecture of how a component works. But the schema defined by CONFIG_SCHEMA is basically what is used to validate:

localtuya:
  ...

And it is always defined in __init__.py. For platforms, PLATFORM_SCHEMA is used to validate:

sensor:
  - platform: localtuya

And must in this case be defined in localtuya/sensor.py.
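
As a rough illustration of that split (field names here are examples, not the actual localtuya schema), a platform schema in localtuya/sensor.py could look like:

import voluptuous as vol
import homeassistant.helpers.config_validation as cv
from homeassistant.components.sensor import PLATFORM_SCHEMA

PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend(
    {
        vol.Required("host"): cv.string,
        vol.Required("device_id"): cv.string,
        vol.Required("local_key"): cv.string,
        vol.Optional("scaling", default=1.0): vol.Coerce(float),
    }
)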

The schema used by a config entry, on the other hand, is completely determined by the config flow. So there's no schema definition like for YAML.

I'm working on a schema for this, it's a bit tricky but I have an idea. Hopefully we can share some code with the config flow. You should be able to finish nonetheless, but without proper validation.

Owner Author

Yes, what I can't manage at the moment is assigning the default values for the Options.
Plus, the config flow does not work: it creates everything, but some parameter is probably missing because pytuya gets a bad response from the device.

Collaborator

Did you fix the for-loop in _convert_entity that I mentioned? It uses self.platform, which you don't set anymore, so it won't find any platform-specific options. That's probably the problem.

Owner Author

I have set it again but it still doesn't work. I don't really understand where the default values for missing parameters are assigned; it's really bonkers to me. 😢

Collaborator

I will have a look at it.

Shit kinda hit the fan though... In order to implement the schema, I need to pull configuration fields (flow_schema) from each platform in __init__.py, but since the platforms import stuff from that file we have a circular dependency. We can break things out into a separate file to fix this, though; it's not a true circular dependency. I would prefer to finish the CI stuff (fix pydocstyle, flake8 and all that) first as that will help with refactoring. It gives faster feedback since it can tell me if something is misspelled, not imported, etc. So I'm gonna focus on that, then do the refactoring, and finally we can finish this. Man, this was harder than I thought.
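
For illustration, one common way to break that kind of cycle (a hypothetical file layout, not necessarily what was done here) is to move the shared pieces into a module that neither side owns:

# const.py -- hypothetical shared module; both __init__.py and the platforms import from it
DOMAIN = "localtuya"
TUYA_DEVICE = "tuya_device"

# __init__.py / config_flow.py:     from .const import DOMAIN, TUYA_DEVICE
# switch.py / cover.py / sensor.py: from .const import DOMAIN  (plus their own flow_schema)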

Owner Author

OK, I have seen that flow_schema is created correctly and tried to pass it to schema_defaults but for some reason it crashes on
copy = schema.extend({})
My brain hurts A LOT...

return True

hosts = config[DOMAIN]
if not hosts:
Collaborator

This part should not be necessary if the component schema is a list. Then we should have an empty list if no config is provided. So basically, the loop below is the only thing that should be needed.

hass.data[DOMAIN][entry.entry_id] = {
    UNSUB_LISTENER: unsub_listener,
}
# had to comment out this because it crashes and doesn't proceed further, really don't know what's wrong with this.
Collaborator

I'm not sure where it crashes, so I can't tell you what happens. But you should use the code you have commented out and add a key for the device:

    hass.data[DOMAIN][entry.entry_id] = {
        UNSUB_LISTENER: unsub_listener,
        TUYA_DEVICE: TuyaDevice(entry.data)
    }

Everything in hass.data[DOMAIN][entry.entry_id] is what we share with the platforms. By using a dict, we can simply place anything we want there.

Owner Author

@rospogrigio Sep 22, 2020

This line keeps crashing; I don't understand why we are not able to create this dict.
Edit: fixed it, it had to go after hass.data.setdefault(DOMAIN, {}) for it to work.
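
In other words, a minimal sketch of the ordering that appears to have worked (same keys as in the snippet above):

# initialize the domain dict first, then register this entry under it
hass.data.setdefault(DOMAIN, {})
hass.data[DOMAIN][entry.entry_id] = {
    UNSUB_LISTENER: unsub_listener,
    TUYA_DEVICE: TuyaDevice(entry.data),
}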

# UNSUB_LISTENER: unsub_listener,
# }

tuyaDevice = TuyaDevice(entry.data)
Collaborator

Now we can remove this block completely.

@@ -258,7 +301,7 @@ def dps(self, dps_index):
if value is None:
    _LOGGER.warning(
        "Entity %s is requesting unknown DPS index %s",
Collaborator

I prefer to have the entity id here (e.g. switch.something) rather than just a number.
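
Presumably something along these lines (a sketch, assuming this method lives on the entity class so self.entity_id is available):

_LOGGER.warning(
    "Entity %s is requesting unknown DPS index %s", self.entity_id, dps_index
)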

@@ -2,6 +2,9 @@
import logging
from importlib import import_module

import pprint
pp = pprint.PrettyPrinter(indent=4)
Collaborator

These kinds of printouts are obviously useful during development, but shouldn't be checked into the repo. It is better to use the logging framework and log in appropriate places, so we can shut off debug logs when not needed. This applies generally.
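
For example, the general idea would be something like (a sketch, not code from this PR; entity_conf comes from the loop below):

import logging

_LOGGER = logging.getLogger(__name__)

# debug output can then be switched on/off via the Home Assistant logger configuration
_LOGGER.debug("Found entity config: %s", entity_conf)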

for entity_conf in user_input[CONF_ENTITIES]:
    # self.platform = user_input[CONF_PLATFORM]
    print('FOUND: [{}]'.format(entity_conf))
    self.entities.append(_convert_entity(entity_conf))
Collaborator

Since the entities in YAML map directly to the format in the config entry, I guess we don't need any conversion at all? We can more or less pass user_input to data when creating the config entry below. The only bastard here is CONF_YAML_IMPORT, which I'm gonna remove soon (but which we still have to keep for now). But just try adding CONF_YAML_IMPORT with its value to user_input, remove everything else, pass user_input to async_create_entry and let's see what happens.
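
Roughly along these lines (a hedged sketch of the import step; the CONF_YAML_IMPORT value and the title choice are assumptions):

async def async_step_import(self, user_input):
    """Handle import from YAML by storing the validated config unchanged."""
    user_input[CONF_YAML_IMPORT] = True  # assumption: flag the entry as YAML-imported
    return self.async_create_entry(
        title=user_input[CONF_FRIENDLY_NAME], data=user_input
    )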

Owner Author

Here is where the default values are not loaded for Options, I'd say.

super().__init__(device, config_entry, switchid, **kwargs)
self._config[CONF_OPEN_CMD] = "on"
Collaborator

What is happening here? Why are you assigning these values here? They should come as defaults from the schema. Otherwise we should just use whatever value they have.

Owner Author

I told you, it's because I could not make the Options schema work ...

Collaborator

Right, I assume you are referring to the _convert_entity stuff? It should hopefully work if you do as I wrote. Otherwise you can push the changes you have so I can try it out myself.

TuyaCache(tuyainterface, config_entry.data[CONF_FRIENDLY_NAME]),
config_entry,
tuyaDevice,
device_config[CONF_FRIENDLY_NAME],
Collaborator

This should still be config_entry, right?

@postlund
Collaborator

If you look under the "Commits" tab here, you can see some of my commits from other PRs. You should only ever see your own commits there and preferably no merge commits (IMHO). You can solve this in several ways, but since they should not conflict with any of your changes, you can probably just remove the commits with an interactive rebase:

$ git rebase -i HEAD~7

Remove the lines with my commits, save and quit. If you look with git log, you should not see them anymore. To catch up with master, you do (this is what you do instead of git merge master):

$ git fetch origin
$ git rebase origin/master

Now you should see only your commits on top of the latest commit on master and since you have re-written history, you need to force-push:

$ git push -f origin HEAD:<branch name>

That should work, I hope. In general, this is how you work without merges. An alternative way is to create a new branch from master (e.g. git checkout -b my_branch origin/master) and cherry-pick one commit at a time from the other branch.

@rospogrigio
Owner Author

OK thanks, I need some time because I broke my development environment trying to install tox locally and I am trying to set it up again so I can test things. Will let you know when I'm done...

@postlund
Collaborator

Hmm, that sounds strange. What happened? It should be sufficient to pip install tox and then just run tox. It stores everything in .tox which shouldn't conflict with anything.

@rospogrigio
Owner Author

I don't know; I was following these instructions https://developers.home-assistant.io/docs/development_testing but when I launched script/setup it tried to install HA 0.116.0 and I could not connect any longer, so I downgraded back to 0.115.2 and now it just hangs like this:

2020-09-22 22:19:26 DEBUG (MainThread) [custom_components.localtuya.pytuya] custom_components.localtuya.pytuya version 8.1.0
2020-09-22 22:19:26 DEBUG (MainThread) [custom_components.localtuya.pytuya] Python 3.7.3 (default, Jul 25 2020, 13:03:44)
[GCC 8.3.0] on linux
2020-09-22 22:19:26 DEBUG (MainThread) [custom_components.localtuya.pytuya] Using PyCrypto (3, 9, '8')
2020-09-22 22:19:26 DEBUG (MainThread) [custom_components.localtuya.pytuya] Using PyCrypto from '/root/home-assistant/homeassistant/lib/python3.7/site-packages/Crypto/__init__.py'
2020-09-22 22:19:27 INFO (MainThread) [homeassistant.components.switch] Setting up switch.localtuya
2020-09-22 22:19:27 INFO (MainThread) [homeassistant.components.cover] Setting up cover.localtuya

I'm trying to figure out what's going on...

@postlund
Collaborator

Ok, I have no idea about that, I'm afraid. One theory I have is that my change to pass True as the second argument to async_add_entities was a bad idea and we should perhaps revert it. Doing so tells Home Assistant to call update once before adding the entity. This can severely delay start up if update takes a long time, which can happen since we have many retries. But you could try adding _LOGGER.exception("error") in the except clause in update in LocalTuyaEntity and see if you get any hits.
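
That is, something like this in LocalTuyaEntity (a sketch only; the actual body of update and the status call are assumptions):

def update(self):
    """Update state of the Tuya device."""
    try:
        self._status = self._device.status()  # assumed status call; the exact code differs
    except Exception:  # pylint: disable=broad-except
        _LOGGER.exception("error")  # temporarily log the full traceback to see what fails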

@rospogrigio
Owner Author

I don't know if it's what you just wrote; anyway, I tried older commits and figured out that since commit 1e23120 (device discovery) my current environment (and also the old, broken one) doesn't proceed. Now I am checking the code...

@rospogrigio
Owner Author

Looks like the killing line is
from .discovery import discover
WTF??!? Why did this stop working??

@postlund
Collaborator

Uhm, very strange. Can you try adding some logging in that file to see if it is loaded at all?

@rospogrigio
Owner Author

rospogrigio commented Sep 22, 2020

Logging after that line is not being printed.
What do you mean by "in that file"? helpers/discovery.py? If I put logging in there, it is not printed either.
Edit: I replaced it with
from homeassistant.helpers.discovery import discover
and now everything is working as normal. I don't understand what's going on; maybe some environment variable??

@postlund
Collaborator

I meant the discovery module, yes. But you are looking at the wrong file, as we have a discovery.py in this repo. That's where I put the discovery routine.

@rospogrigio
Owner Author

rospogrigio commented Sep 22, 2020

Getting closer. The script is being loaded, and the final killer line is:
from Cryptodome.Cipher import AES
Missing package?

Edit: in pytuya, AES is loaded like this:

try:
    # raise ImportError
    import Crypto
    from Crypto.Cipher import AES  # PyCrypto
except ImportError:
    Crypto = AES = None
    import pyaes  # https://github.com/ricmoo/pyaes

I don't know if it can be of any help...

Edit2: replacing Cryptodome with Crypto solves it.
Edit3: lib/python3.7/site-packages/Cryptodome/Cipher/AES.py is present, so I don't know why it fails to load it.

@postlund
Collaborator

I was almost ready to suggest that it had to do with that. But we have the dependency specified in the manifest, so... We should throw all of that out and switch to cryptography instead. That's much better. And we should also get rid of the Python 2 and multi-library compatibility. There's no point.

@rospogrigio
Owner Author

So, how are we going to fix this?

@postlund
Collaborator

I'm not sure why the dependency is a problem; it should be installed by Home Assistant. But I will make an attempt to move to cryptography instead since that is properly maintained and modern.

I think I have cracked the component schema now. It's very hacky, so I need to clean up and see what I can share with config_flow. But it should be possible to fix later tonight. Then I can bring that together with what you have here and fix the final pieces.
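
For reference, a rough sketch of what the AES handling could look like with the cryptography package (not the actual pytuya code; assumes Tuya's AES-128-ECB with PKCS7-style padding and a reasonably recent cryptography release):

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def aes_encrypt(key: bytes, raw: bytes) -> bytes:
    """Pad to the 16-byte AES block size and encrypt with AES-ECB."""
    pad = 16 - len(raw) % 16
    raw += bytes([pad]) * pad
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return encryptor.update(raw) + encryptor.finalize()


def aes_decrypt(key: bytes, enc: bytes) -> bytes:
    """Decrypt with AES-ECB and strip the padding."""
    decryptor = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    raw = decryptor.update(enc) + decryptor.finalize()
    return raw[: -raw[-1]]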

@rospogrigio
Owner Author

OK, in the meantime I think I am beginning to get into the CONFIG_SCHEMA system... but I'm still far from the solution.

@rospogrigio
Owner Author

rospogrigio commented Sep 24, 2020

OK, I think I have something, but I don't know which is the best path to follow. I have 3 options in mind:

  1. These SCHEMAs:
SWITCH_SCHEMA = vol.Schema({
    vol.Required(CONF_PLATFORM): cv.string,
    vol.Required(CONF_FRIENDLY_NAME): cv.string,
    vol.Optional(CONF_ID, default=DEFAULT_ID): cv.string,
})

COVER_SCHEMA = vol.Schema({
    vol.Required(CONF_PLATFORM): cv.string,
    vol.Required(CONF_FRIENDLY_NAME): cv.string,
    vol.Optional(CONF_ID, default=DEFAULT_ID): cv.string,
    vol.Optional(CONF_OPEN_CMD, default=DEFAULT_OPEN_CMD): cv.string,
    vol.Optional(CONF_CLOSE_CMD, default=DEFAULT_CLOSE_CMD): cv.string,
    vol.Optional(CONF_STOP_CMD, default=DEFAULT_STOP_CMD): cv.string,
})

DEVICE_SCHEMA = vol.Schema({
    vol.Required(CONF_HOST): cv.string,
    vol.Required(CONF_DEVICE_ID): cv.string,
    vol.Required(CONF_LOCAL_KEY): cv.string,
    vol.Required(CONF_FRIENDLY_NAME): cv.string,
    vol.Required(CONF_PROTOCOL_VERSION, default=DEFAULT_PROTOCOL_VERSION): vol.Coerce(
        float
    ),
    CONF_SWITCHES: vol.All( 
        cv.ensure_list,
        [ SWITCH_SCHEMA ],
    ),
    CONF_COVERS: vol.All( 
        cv.ensure_list,
        [ COVER_SCHEMA ],
    ),
})

CONFIG_SCHEMA = vol.Schema(
    {
        DOMAIN: {
            CONF_DEVICES: vol.All( 
                cv.ensure_list,
                [ DEVICE_SCHEMA ],
            )
        },
    },
    extra=vol.ALLOW_EXTRA,
)

validate this config structure, populating the defaults for covers correctly:

  devices:
  - host: 192.168.1.26
    device_id: xxxxx
    local_key: xxxxx
    friendly_name: Presa Stereo Wifi
    protocol_version: 3.3
    switches:
      - platform: switch
        friendly_name: Presa Stereo
        id: 1
      - platform: switch
        friendly_name: Presa Stereo USB
        id: 7  

  - host: 192.168.1.39
    device_id: xxxxx
    local_key: xxxxx
    friendly_name: Tapparelle studio
    protocol_version: 3.3
    covers:
      - platform: cover
        friendly_name: Tapparelle studio Cover
        id: 1
    switches:
      - platform: switch
        friendly_name: Tapparelle studio Backlight
        id: 101

Drawback: you have to define different schemas for each platform (not too bad).

  2. have a generic ENTITY_SCHEMA, and place all the optional parameters of all the devices there.
    Drawback: each platform receives all the default values for the optional parameters, even if they are not needed (quite ugly).

  3. we could have a generic ENTITY_SCHEMA, extended with the proper OPTION_SCHEMA conditionally depending on the platform value; this would be the cleanest way, but I don't know if it's feasible (a rough sketch of the idea follows below).

I would start with 1), which could eventually move to 3) if it's feasible, but let me know your thoughts.
Also, how this fits into the config_flow is unknown to me, so maybe you can tell me which is the best way to proceed (I still think 1) is the best option ATM, though).
Let me know...
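
For illustration, a rough, hypothetical sketch of what option 3 could look like (not the comp_schema implementation; field names and defaults are examples):

import voluptuous as vol

BASE_ENTITY_SCHEMA = vol.Schema(
    {
        vol.Required("platform"): str,
        vol.Required("friendly_name"): str,
        vol.Optional("id", default="1"): vol.Coerce(str),
    },
    extra=vol.ALLOW_EXTRA,
)

# per-platform option schemas, selected dynamically from the "platform" value
PLATFORM_OPTIONS = {
    "switch": {},
    "cover": {
        vol.Optional("open_cmd", default="on"): str,
        vol.Optional("close_cmd", default="off"): str,
        vol.Optional("stop_cmd", default="stop"): str,
    },
}


def validate_entity(conf):
    """Validate the common fields, then re-validate with the platform's options added."""
    conf = BASE_ENTITY_SCHEMA(conf)
    full_schema = BASE_ENTITY_SCHEMA.extend(
        PLATFORM_OPTIONS[conf["platform"]], extra=vol.PREVENT_EXTRA
    )
    return full_schema(conf)

# DEVICE_SCHEMA could then use: vol.Optional("entities"): vol.All(cv.ensure_list, [validate_entity])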

@postlund
Collaborator

I already have a schema that validates this; that's what I meant by "cracked the component schema" 😊 It's fully dynamic, based on the same principle as the config flow.

This has more or less nothing to do with config flow. We just pass the structure here into the config flow and create a config entry based on it. It's just part of the architecture, as the config flow framework has an import source for this.

@postlund
Collaborator

My solution would correspond to 3.

@rospogrigio
Owner Author

Wonderful, I'll wait for you to finish the job then! 👍

@postlund
Collaborator

I think I sorted it out. I cleaned up my schema stuff a bit and made a commit. Then I applied your stuff on top of that, fixed some broken stuff, and it seems to work. I have only tested with a single socket power outlet (including power monitoring), both via YAML and config flow.

I'm not entirely satisfied yet; there are some things to clean up:

  • YAML accepts the DP index as an integer, whilst the config flow stores it as a string. So the import method converts these integers into strings, which isn't pretty. I'm gonna fix it so that we always store integers.
  • Code for creating the component schema ended up in config_flow.py for simplicity. This should probably be extracted and placed somewhere else.
  • Remove all YAML examples from platforms and create one example in __init__.py.

Maybe there's something more that I've forgotten (I'm very tired right now, so time for bed).
I pushed the changes to the comp_schema branch. We can probably create a PR from that or just start over in this PR but use the content on that branch instead. But feel free to have a look and give it a spin!

@postlund
Collaborator

Just pushed an update which fixes the first and last bullets. The remaining one can potentially be left as technical debt, as I don't feel it's that important that we break things out (not much benefit right now). So the comp_schema branch might very well be the finished thing right now.

@rospogrigio
Owner Author

I tested only the YAML import; it is partially working, so I'll do some debugging. What I can see is:

  1. config validation is carried out ok
  2. I get some 2020-09-25 11:16:09 WARNING (SyncWorker_38) [custom_components.localtuya.common] Entity switch.tapparelle_studio_backlight is requesting unknown DPS index 101 (also with entity None)
  3. sensors are not working

PS: wow, the code is very sleek, even if the syntax is a bit too advanced for my Python knowledge to understand it in detail. I should be able to debug it anyway. How should I proceed? Merge into this PR or work within your branch?
Thank you, great job!

@rospogrigio
Owner Author

Oh, and by the way: you are still keeping the Current/Voltage parts within the Switch entity?? I'd just remove them from there as they are supposed to be created as sensors... don't you agree?

@postlund
Collaborator

For 2), I guess the code for adding DPS for all flow fields was lost somewhere. We only add for index "1" right now. Not sure where it went.

For 3): yeah, it looks quite nice. The concept seems to hold water so far 👍 Maybe just creating a new PR from my branch and adding commits on top of that would be easier?

I would prefer to keep it unless you shout too much. Some people (including myself) might be interested in having the values, but not clutter HA with additional sensors. Since you can do automations based on attributes now and I intend to add support for that in the integration integration as well, an actual sensor for each value might not be needed. Also, if you send data to some other database (influx for instance), you can still plot those values without needing a sensor.

@rospogrigio
Owner Author

I would prefer to keep it unless you shout too much. Some people (including myself) might be interested in having the values, but not clutter HA with additional sensors. Since you can do automations based on attributes now and I intend to add support for that in the integration integration as well, an actual sensor for each value might not be needed. Also, if you send data to some other database (influx for instance), you can still plot those values without needing a sensor.

I won't shout for sure 😆 , just looks much more simple to me defining a sensor rather than a template_sensor and fiddling with attributes, but if you believe there's a plus in there let's keep it.
Regarding the thermostat, we'd better ask the owner who requested its support to launch the latest tuyadebug I pushed to master recently and post us the output: it just autodetects the device type and outputs all the detected dps.

@postlund
Collaborator

I won't shout for sure 😆 , just looks much more simple to me defining a sensor rather than a template_sensor and fiddling with attributes, but if you believe there's a plus in there let's keep it.

Then I suggest we keep it. Since most stuff in HA can read attributes as well, template sensors aren't necessarily needed (as mentioned above). But it depends on the use case. Since the fields are optional, the user can pick which way to go.

Regarding the thermostat, we'd better ask the owner who requested its support to launch the latest tuyadebug I pushed to master recently and post us the output: it just autodetects the device type and outputs all the detected dps.

Yeah, that sounds like a plan!

@postlund
Collaborator

I created #49 to get going. I have some things I want to fix, but need this merged first. So have a look again when you have time 😊

@postlund
Collaborator

postlund commented Sep 25, 2020

Yeah, right, what's not working with the sensors btw? Kthxbye

@rospogrigio
Owner Author

Actually now it's working, apart from the fact that I'd truncate to 1 digit after the decimal point, because I get numbers like 217.10000000000002 V which is not so cool 😄
I'll try some more things, such as the config flow. I also would like to investigate a strange behavior that I get sometimes with my plugs: it starts working, but as soon as I toggle one switch the device becomes unavailable and I keep getting these errors:
Failed to update status of device [192.168.1.26] Failed to receive data from 192.168.1.26. Raising Exception. Failed to update status of device [192.168.1.26] Failed to receive data from 192.168.1.26. Raising Exception. Failed to update status of device [192.168.1.26] 2020-09-25 21:00:13 ERROR (SyncWorker_29) [custom_components.localtuya.common] Failed to update status of device 192.168.1.26
At this point, also trying with tuyadebug gets me these messages:
Failed to receive data from 192.168.1.26. Raising Exception. Failed to get status: [[Errno 104] Connection reset by peer] INFO:localtuya:TIMEOUT: No response from device 860687042462ab30b1e5 [192.168.1.26] after 2 attempts.
This goes on for several seconds, then all of a sudden it starts responding again. It's not due to this PR because it used to happen in the past too, but it's quite repeatable. And it's not a matter of connection or distance since the device is very close to the AP. Anyway, let me check this PR a bit more and I'll give some more feedback.
Bye!

@postlund
Collaborator

Yes, some rounding seems reasonable 😅

My best guess is that it relates to the devices only supporting one TCP connection at a time. We already know that a status update for a double gang socket means two connections. Then you get additional connections every time you send a command, e.g. to turn on or off. There are lots of potential problems that can happen here due to dangling connections, timeouts and whatnot that keep connections living and blocking additional connections. We should aim at having one connection (per device) active at all times and re-use that.

@postlund
Collaborator

I added rounding to two digits in sensor.

@rospogrigio
Owner Author

I added rounding to two digits in sensor.

Mmm, why not using a number of digits related to the scaling factor (1 for 0.1, 3 for 0.001...)?
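
For illustration, one hedged way a scaling-derived precision could work (just a sketch, assuming power-of-ten scaling factors; not what the integration does):

import math


def digits_for_scale(scale: float) -> int:
    """Return the number of decimals to keep for a power-of-ten scaling factor."""
    return max(0, -int(math.floor(math.log10(scale))))


print(digits_for_scale(0.1))    # 1
print(digits_for_scale(0.001))  # 3
print(digits_for_scale(1))      # 0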

@postlund
Collaborator

I added rounding to two digits in sensor.

Mmm, why not using a number of digits related to the scaling factor (1 for 0.1, 3 for 0.001...)?

If you know any good way to deal with rounding based on that, I'm all for it! Just took an easy solution now.

@rospogrigio
Owner Author

If you know any good way to deal with rounding based on that, I'm all for it! Just took an easy solution now.

Mmm, I would just have done some kind of hack, converting the Scaling param to a string and calculating the difference between the positions of the last "1" and the ".", but I agree that we can keep it simple for now. Also I noticed that the Voltage/Current values are hard-coded to be divided by 10, while they can be configured if used as sensors... but I agree we can keep it simple as it is for now and focus on more important things such as the connection handling part, speaking of which:

My best guess is that it relates to the devices only supporting one TCP connection at a time. We already know that a status update for a double gang socket means two connections. Then you get additional connections every time you send a command, e.g. to turn on or off. There are lots of potential problems that can happen here due to dangling connections, timeouts and whatnot that keep connections living and blocking additional connections. We should aim at having one connection (per device) active at all times and re-use that.

I think there's something else going on. Just to be sure, I stopped my official HA instance so I only have the development one running, and this issue keeps happening. Also, the TuyaCache object, even if rudimentary, should be preventing concurrent requests, so I don't know why this happens. And finally, I see this happening only on my Smart Plugs (0x0a devices) and never on my cover devices (0x0d type). I don't know if there is still something fishy in the TuyaInterface, or if it's just some kind of bad firmware thing. Need to investigate more.
Anyway, #49 seems to be ready for prime time, so I'd merge it, and I'll start to have a look at #10 since it's been waiting for a long time. I think then we could also send out a release, so that people can try it and provide feedback on the platforms we couldn't test ourselves, what do you think? Or maybe it's better to first finish the "single connection" part? Let me know!

@rospogrigio
Owner Author

Mmm... sorry for bringing bad news, but actually at the moment I have all my devices unavailable (both the YAML-imported and the config_flow-created ones), and I don't understand what's going on!! It will take some more tests before merging; I need to figure out what's happening...

@rospogrigio
Owner Author

Actually it was (as usually happens) my fault, I was tinkering with the Lock/Unlock calls... I do still get long numbers for the Voltage, and the Precision doesn't help; we need to truncate the numbers in some way. For the rest, I think we can merge.

@postlund
Collaborator

Mmm, I would just have done some kind of hack, converting the Scaling param to a string and calculating the difference between the positions of the last "1" and the ".", but I agree that we can keep it simple for now. Also I noticed that the Voltage/Current values are hard-coded to be divided by 10, while they can be configured if used as sensors... but I agree we can keep it simple as it is for now and focus on more important things such as the connection handling part, speaking of which:

I feel that it's not general enough. What would be the result of scaling factor 3 or 0.25 for instance? It's just easier to keep it a user-specific thing as you say for now and deal with it later.

I think there's something else going on. Just to be sure, I stopped my official HA instance so I only have the development one running, and this issue keeps happening. Also, the TuyaCache object, even if rudimentary, should be preventing concurrent requests, so I don't know why this happens. And finally, I see this happening only on my Smart Plugs (0x0a devices) and never on my cover devices (0x0d type). I don't know if there is still something fishy in the TuyaInterface, or if it's just some kind of bad firmware thing. Need to investigate more.

I think something is not working correctly with something, somewhere (to be extremely precise), since I always get a lot of "Failed to connect" errors. With my changes to move polling out from the platforms and into the component instead, these errors have disappeared completely. My dev instance has been running for several hours today and I haven't seen a single one. The poll frequency is also five seconds just to stress test a bit, so pretty good I would say. As I haven't looked into the details of the current implementation I can't say much, but this is at least a good way forward. Trying to decrease the retries and timeout time would also be good, as when these times get longer than the update frequency we might run into problems...

Anyway, #49 seems to be ready for prime time, so I'd merge it, and I'll start to have a look at #10 since it's been waiting for a long time. I think then we could also send out a release, so that people can try it and provide feedback on the platforms we couldn't test ourselves, what do you think? Or maybe it's better to first finish the "single connection" part? Let me know!

Yes, would be great to get #49 merged. Totally agree! Would also be great if you continued with #10 as I have a few things to do.

My suggestion regarding release is that we wait until we have a few more things in place that I want to fix (single connection being one of them). Then we make a pre-release so that people can try it out. Since this release is just a big breaking change, it's important that we are clear about that so people don't upgrade and expect things to continue to work. We also need to spend some time on updating the documentation as it's very outdated now.

@rospogrigio
Owner Author

I feel that it's not general enough. What would be the result of scaling factor 3 or 0.25 for instance? It's just easier to keep it a user-specific thing as you say for now and deal with it later.

Ooops, you are super right, didn't think of other scaling factors! OK let's keep it like this.

I think something is not working correctly with something, somewhere (to be extremely precise), since I always get a lot of "Failed to connect" errors. With my changes to move polling out from the platforms and into the component instead, these errors have disappeared completely. My dev instance has been running for several hours today and I haven't seen a single one. The poll frequency is also five seconds just to stress test a bit, so pretty good I would say. As I haven't looked into the details of the current implementation I can't say much, but this is at least a good way forward. Trying to decrease the retries and timeout time would also be good, as when these times get longer than the update frequency we might run into problems...

Cool, can't wait to see this in action on my environment then! 👍 I always felt that there was something strange in TuyaCache implementation...

Anyway, #49 seems to be ready for prime time, so I'd merge it, and I'll start to have a look at #10 since it's been waiting for a long time. I think then we could also send out a release, so that people can try it and provide feedback on the platforms we couldn't test ourselves, what do you think? Or maybe it's better to first finish the "single connection" part? Let me know!

Yes, would be great to get #49 merged. Totally agree! Would also be great if you continued with #10 as I have a few things to do.

My suggestion regarding release is that we wait until we have a few more things in place that I want to fix (single connection being one of them). Then we make a pre-release so that people can try it out. Since this release is just a big breaking change, it's important that we are clear about that so people don't upgrade and expect things to continue to work. We also need to spend some time on updating the documentation as it's very outdated now.

OK I agree 100%. I can work on #10 and update the documentation 👍

@rospogrigio
Owner Author

Closed, accomplished in #49.
