Osmose used to handle only a few analysers that produce several hundred thousand warnings each (8280/11, for instance).
Open data continuously reinforces OSM, which will lead to many more warnings in the coming months.
This will shortly be the case for the power poles dataset, once we are able to select the valid entries from this 5M-record file:
https://data.enedis.fr/explore/dataset/position-geographique-des-poteaux-hta-et-bt/
Since it's not desirable to push several million warnings at once, I wonder whether it would be worthwhile to limit the number of warnings produced by each analyser.
Say we kept only the first 100,000 out of 5M warnings, and each resolved warning allowed a supplementary one to show up.
Such a process would make it possible to define analysers that generate tremendous numbers of warnings without overloading Osmose.
Given the current process, would this solve anything?
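To make the proposal concrete, here is a minimal sketch of the cap-and-backlog idea. All names (`select_warnings`, `on_warning_resolved`, `WARNING_CAP`) are hypothetical, not part of Osmose; it only illustrates publishing the first N warnings and promoting one from the backlog each time a published warning is resolved.

```python
from collections import deque

WARNING_CAP = 100_000  # hypothetical per-analyser limit from the proposal


def select_warnings(all_warnings, cap=WARNING_CAP):
    """Split an analyser's output into a published slice and a backlog.

    Only the first `cap` warnings are pushed; the rest wait in a queue
    and surface one at a time as published warnings get resolved.
    """
    published = list(all_warnings[:cap])
    backlog = deque(all_warnings[cap:])
    return published, backlog


def on_warning_resolved(published, backlog, resolved):
    """When a published warning is fixed, promote one from the backlog."""
    published.remove(resolved)
    if backlog:
        published.append(backlog.popleft())
    return published, backlog
```

With a cap of 3 over 10 warnings, resolving one published warning immediately surfaces the next queued one, so the visible count stays at the cap until the backlog is empty.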