Reduce local search operators scope #509
For reference, this is known as "granular neighborhoods" in the literature. Here is a reference implementation for CVRP: https://github.com/acco93/cobra
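The core idea of granular neighborhoods can be sketched as follows. This is a minimal illustration, not VROOM's or cobra's actual code; `nearest_neighbors` and `candidate_moves` are hypothetical names, and the move structure is reduced to job pairs:

```python
# Granular neighborhood filter (sketch): precompute, for each job, its k
# closest other jobs, then only evaluate moves between jobs i and j when
# j belongs to i's neighbor list. This replaces the exhaustive O(n^2)
# pair scan with an O(n * k) one.

def nearest_neighbors(dist, k):
    """For each job index, return the set of its k closest other jobs."""
    n = len(dist)
    neighbors = []
    for i in range(n):
        ranked = sorted((j for j in range(n) if j != i), key=lambda j: dist[i][j])
        neighbors.append(set(ranked[:k]))
    return neighbors

def candidate_moves(dist, k):
    """Yield only the (i, j) job pairs the granular filter allows."""
    neighbors = nearest_neighbors(dist, k)
    for i in range(len(dist)):
        for j in neighbors[i]:
            yield (i, j)
```

With `n` jobs this evaluates `n * k` pairs instead of `n * (n - 1)`, which is where the speedup comes from when `k` is small relative to `n`.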
I've been playing around with this in the […]. We'll probably need more advanced reasoning for other moves, especially the […].
When using "only" the simple approach described above (low-hanging fruits), we can cut down a lot of move lookups in an efficient way while only slightly decreasing solution quality, meaning we're mostly ruling out "unnecessary" moves. Comparison on the usual CVRP benchmarks across all exploration factors:
Since that puts us in a much faster solving situation, we can definitely live with the slight degradation for […].

Next steps

I also experimented a bit in the PR with sparsification, i.e. filtering moves based on some edge having a cost over a given threshold. This gives promising results, but I'm potentially seeing more quality degradation, so it's more of a trade-off. Thus I've reverted those commits since:
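The sparsification experiment mentioned above can be sketched like this. The helper names and the percentile-based cutoff choice are assumptions for illustration, not the PR's actual implementation:

```python
# Cost-threshold sparsification (sketch): skip evaluating any move that
# would introduce an edge more expensive than a cutoff. Here the cutoff
# is derived from the distance distribution as a percentile, so only the
# cheapest fraction of potential edges is considered.

def edge_cutoff(dist, percentile=0.2):
    """Pick a cutoff so roughly `percentile` of all edges pass the filter."""
    n = len(dist)
    costs = sorted(dist[i][j] for i in range(n) for j in range(n) if i != j)
    idx = min(len(costs) - 1, int(percentile * len(costs)))
    return costs[idx]

def move_allowed(dist, i, j, cutoff):
    """Only evaluate a move inserting job i next to job j if the new
    edge (i, j) is at most the cutoff."""
    return dist[i][j] <= cutoff
```

The trade-off noted in the comment shows up directly in the `percentile` knob: a lower value prunes more aggressively but risks discarding improving moves.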
The current way local search operators look up neighboring solutions is a bit naive, in that we exhaustively check all options for routes / pairs of routes. Obviously, relocating a job into another route at a rank where tasks are very far away does not make sense. In that case we stop early without any validity check because the gain itself is not interesting, yet probably a lot of time is still spent evaluating gains for silly moves.
I recall toying with more advanced filtering in the early days of the implementation, e.g. only trying a relocate move to the nearest routes or ranks in routes. From what I recall, it did have a detrimental impact on quality (some interesting moves were missed in the process), and the computing gain was not that relevant because the overall approach was "fast enough" as a first implementation.
Now I think it would be a good time to evaluate the speedups we could get from this idea. The question is whether we'll be able to easily find the sweet spot between unnecessary evaluations and missed moves. Either we can come up with some threshold where we'll only miss a marginal number of moves, or we can make this configurable to some extent.
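To make the "sweet spot" question concrete, here is a back-of-the-envelope helper (hypothetical, purely illustrative) showing how a configurable neighborhood size `k` directly controls the share of job pairs still evaluated:

```python
# With n jobs, an exhaustive scan evaluates all n - 1 candidate partners
# per job; a k-nearest filter evaluates only k of them. The fraction below
# is the tunable knob between wasted lookups (high k) and missed moves
# (low k).

def evaluated_fraction(n, k):
    """Fraction of candidate partners per job a k-nearest filter keeps."""
    return min(k, n - 1) / (n - 1)
```

For instance, with 101 jobs and `k = 10`, only 10% of candidate pairs are evaluated, which bounds the best-case speedup of the granular filter on the gain-evaluation phase.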