Reduce local search operators scope #509

Closed
jcoupey opened this issue May 13, 2021 · 3 comments · Fixed by #1008

Comments

@jcoupey
Collaborator

jcoupey commented May 13, 2021

The way local search operators currently look for neighboring solutions is a bit naive: we exhaustively check all options for routes / pairs of routes. Obviously, relocating a job into another route at a rank where the surrounding tasks are very far away does not make sense. In that case we do stop early without any validity check because the gain itself is not interesting, but a lot of time is probably still spent evaluating the gain of such pointless moves.

I recall toying with more advanced filtering in the early days of the implementation, e.g. only trying a relocate move towards the nearest routes or ranks within routes. From what I recall, it had a detrimental impact on quality (some interesting moves were missed in the process), and the computing-time gain was not that relevant because the overall approach was "fast enough" as a first implementation.

Now would be a good time to evaluate the speedups we could get from this idea. The question is whether we'll be able to easily find the sweet spot between unnecessary evaluations and missed moves. Either we come up with some threshold where we only miss a marginal number of moves, or we make this configurable to some extent.
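
For illustration, here is a minimal C++ sketch of the route-level filtering idea mentioned above: relocate candidates for a job are restricted to its k closest routes. All names (`Job`, `Route`, `nearest_routes`, the cost matrix layout) are hypothetical and not VROOM internals.

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

// Hypothetical types for illustration only.
struct Job {
  std::size_t index; // index into the cost matrix
};

struct Route {
  std::vector<Job> jobs;
};

using Cost = long;
using Matrix = std::vector<std::vector<Cost>>;

// Return the k routes closest to job, measured by the cheapest cost from the
// job to any job already in the route. Relocate evaluations would then be
// limited to these candidates instead of all routes.
std::vector<std::size_t> nearest_routes(const Job& job,
                                        const std::vector<Route>& routes,
                                        const Matrix& costs,
                                        std::size_t k) {
  std::vector<std::pair<Cost, std::size_t>> route_costs;
  route_costs.reserve(routes.size());

  for (std::size_t r = 0; r < routes.size(); ++r) {
    Cost best = std::numeric_limits<Cost>::max();
    for (const auto& other : routes[r].jobs) {
      best = std::min(best, costs[job.index][other.index]);
    }
    route_costs.emplace_back(best, r);
  }

  // Keep only the k closest routes as relocate candidates.
  std::sort(route_costs.begin(), route_costs.end());
  route_costs.resize(std::min(k, route_costs.size()));

  std::vector<std::size_t> candidates;
  candidates.reserve(route_costs.size());
  for (const auto& rc : route_costs) {
    candidates.push_back(rc.second);
  }
  return candidates;
}
```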

@ktnr

ktnr commented Aug 25, 2021

For reference, this is known as granular neighborhoods in the literature:

  • Toth, P., & Vigo, D. (2003). The granular tabu search and its application to the vehicle-routing problem. INFORMS Journal on Computing, 15(4), 333-346.
  • Schneider, M., Schwahn, F., & Vigo, D. (2017). Designing granular solution methods for routing problems with time windows. European Journal of Operational Research, 263(2), 493-509.
  • Also used in FILO, which obtains impressive results on large-scale instances: Accorsi, L., & Vigo, D. (2021). A fast and scalable heuristic for the solution of large-scale capacitated vehicle routing problems. Transportation Science, 55(4), 832-856.

Here is a reference implementation for CVRP: https://github.com/acco93/cobra
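
As a rough illustration of the granular idea (a sketch under assumed types, not code from cobra or VROOM): precompute a sparse set of cheap arcs per vertex and only evaluate moves that would introduce at least one arc from that set.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

using Cost = long;
using Matrix = std::vector<std::vector<Cost>>;

// For each vertex i, keep only its k cheapest outgoing arcs. Only moves
// introducing at least one of these "granular" arcs are evaluated.
std::vector<std::vector<std::size_t>> build_sparse_arcs(const Matrix& costs,
                                                        std::size_t k) {
  const std::size_t n = costs.size();
  std::vector<std::vector<std::size_t>> neighbours(n);

  for (std::size_t i = 0; i < n; ++i) {
    std::vector<std::size_t> order(n);
    for (std::size_t j = 0; j < n; ++j) {
      order[j] = j;
    }
    // Sort all vertices by cost from i, then keep the k closest (excluding i).
    std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
      return costs[i][a] < costs[i][b];
    });
    for (std::size_t j : order) {
      if (j != i && neighbours[i].size() < k) {
        neighbours[i].push_back(j);
      }
    }
  }
  return neighbours;
}

// A move gets evaluated only if the arc (i, j) it would create is granular.
bool is_granular(const std::vector<std::vector<std::size_t>>& neighbours,
                 std::size_t i,
                 std::size_t j) {
  const auto& list = neighbours[i];
  return std::find(list.begin(), list.end(), j) != list.end();
}
```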

@jcoupey
Collaborator Author

jcoupey commented Oct 5, 2023

I've been playing around with this in the enhancement/operator-scope branch for a while. So far results are promising. It looks like we can find simple ways to discard costly moves (looking at you SWAP*) without hurting solution quality much.
So far I've been exploring a very simple approach: discard some moves between pairs of routes if the bounding boxes of their jobs do not intersect. This makes perfect sense for the (Reverse)TwoOpt moves and some others.
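
For reference, a minimal sketch of that bounding-box filter, assuming jobs carry lon/lat coordinates; the structs and function names are illustrative, not the actual code in the enhancement/operator-scope branch.

```cpp
#include <algorithm>
#include <vector>

// Illustrative coordinate and bounding-box types, not VROOM's actual structs.
struct Coordinates {
  double lon;
  double lat;
};

struct BoundingBox {
  double min_lon;
  double min_lat;
  double max_lon;
  double max_lat;

  // Assumes a non-empty route.
  static BoundingBox from_jobs(const std::vector<Coordinates>& jobs) {
    BoundingBox bb{jobs.front().lon, jobs.front().lat,
                   jobs.front().lon, jobs.front().lat};
    for (const auto& c : jobs) {
      bb.min_lon = std::min(bb.min_lon, c.lon);
      bb.min_lat = std::min(bb.min_lat, c.lat);
      bb.max_lon = std::max(bb.max_lon, c.lon);
      bb.max_lat = std::max(bb.max_lat, c.lat);
    }
    return bb;
  }

  bool intersects(const BoundingBox& other) const {
    return min_lon <= other.max_lon && other.min_lon <= max_lon &&
           min_lat <= other.max_lat && other.min_lat <= max_lat;
  }
};

// Inter-route operators such as (Reverse)TwoOpt can be skipped entirely when
// the two routes operate in disjoint areas.
bool worth_evaluating(const BoundingBox& route_a, const BoundingBox& route_b) {
  return route_a.intersects(route_b);
}
```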

We'll probably need more advanced reasoning for other moves, especially the Intra* ones, but I've still added this to the next milestone as I feel we can already get a good benefit from the low-hanging fruit. Maybe afterward we should open a new ticket for more refined techniques going beyond that.

@jcoupey
Collaborator Author

jcoupey commented Oct 18, 2023

When using "only" the simple approach described above (the low-hanging fruit), we can cut down a lot of move lookups in an efficient way while only slightly decreasing solution quality, meaning we're mostly ruling out "unnecessary" moves.

Comparison on the usual CVRP benchmarks across all exploration factors

| -x | master at f98080a CT (ms) | master Gap (%) | PR at 4dc9671 CT (ms) | PR Gap (%) | CT delta (%) |
|---:|---:|---:|---:|---:|---:|
| 0 | 436 | 2.6 | 174 | 2.67 | -60.1 |
| 1 | 1592 | 2.26 | 650 | 2.29 | -59.2 |
| 2 | 3663 | 1.92 | 1485 | 1.96 | -59.5 |
| 3 | 6500 | 1.6 | 2614 | 1.66 | -59.8 |
| 4 | 12075 | 1.43 | 5029 | 1.46 | -58.4 |
| 5 | 19119 | 1.33 | 8081 | 1.37 | -57.7 |

*(figure: trade_off plot)*

Since that puts us in a much faster solving situation, we can definitely live with the slight degradation at -x 5. I also tested the impact on the VRPTW Solomon benchmarks: it is interesting too (CT gains between roughly -17% and -25%) but less spectacular, because those instances have a lot of tight time windows that we already leverage for local search pruning since #583.

Next steps

I also experimented a bit in the PR with sparsification, i.e. filtering out moves based on some edge having a cost over a given threshold. This gives promising results but I'm also seeing more quality degradation, so it's more of a trade-off (see the sketch after the list below). I've reverted those commits since:

  • we can now ship a simple, straightforward improvement without further heavy work;
  • if we want to introduce this, it should be tuned in light of Rework solution space jump #874;
  • the potential gains I noticed in terms of absolute computing time are much lower for the moves I tried;
  • those parts of the LS are not the main bottleneck anymore.
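
For completeness, a hedged sketch of the sparsification idea referenced above: a move is skipped whenever the edge it would introduce costs more than a threshold. The threshold rule (a multiple of the average edge cost) and all names are assumptions for illustration, not taken from the reverted commits.

```cpp
#include <cstddef>
#include <vector>

using Cost = long;
using Matrix = std::vector<std::vector<Cost>>;

// Example threshold: a multiple of the average edge cost.
// Assumes a non-empty cost matrix.
Cost sparsification_threshold(const Matrix& costs, double factor) {
  long double total = 0;
  std::size_t count = 0;
  for (const auto& row : costs) {
    for (Cost c : row) {
      total += c;
      ++count;
    }
  }
  return static_cast<Cost>(factor * (total / count));
}

// Skip evaluating a move if the edge (i, j) it would introduce is too costly.
bool skip_move(const Matrix& costs,
               std::size_t i,
               std::size_t j,
               Cost threshold) {
  return costs[i][j] > threshold;
}
```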
