
Implement BaBSR Branching Heuristic as a new Branching Strategy #851

Open · wants to merge 24 commits into master

Conversation

@liamjdavis (Contributor) commented Dec 11, 2024

This pull request introduces a new branching strategy, the BaBSR branching heuristic. Supporting it requires additions to the configuration, option parsing, and engine components.

Key changes include:

Branching Strategy Implementation:

  • Implemented the pickSplitPLConstraintBasedOnBaBsrHeuristic method in the Engine class to split on the ReLU with the highest BaBSR score (a rough sketch follows this list).
  • Added support for the BaBSR heuristic in the ReluConstraint class, including methods to compute the BaBSR score.

Option Parsing:

  • Updated OptionParser to include babsr as a valid branching strategy.
  • Modified Options::getDivideStrategy() to handle the new babsr-heuristic strategy (an illustrative mapping is sketched after this list).

Testing:

  • Added a new unit test file, Test_BaBsrSplitting.h, with unit tests for the branching strategy.
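
For readers who want a feel for the selection step, here is a rough sketch of what it might look like. This is illustrative only: the method and member names used below (computeBaBsr(), _plConstraints) are assumptions rather than the exact code in this PR, and the scoring itself is delegated to ReluConstraint.

// Illustrative sketch, not the exact implementation in this PR.
PiecewiseLinearConstraint *Engine::pickSplitPLConstraintBasedOnBaBsrHeuristic()
{
    PiecewiseLinearConstraint *candidate = NULL;
    double bestScore = 0;

    for ( const auto &constraint : _plConstraints )
    {
        // Only active ReLUs whose phase has not yet been fixed are split candidates.
        if ( !constraint->isActive() || constraint->phaseFixed() )
            continue;

        ReluConstraint *relu = dynamic_cast<ReluConstraint *>( constraint );
        if ( !relu )
            continue;

        // Hypothetical name for the scoring method added to ReluConstraint.
        double score = relu->computeBaBsr();
        if ( candidate == NULL || score > bestScore )
        {
            bestScore = score;
            candidate = relu;
        }
    }

    return candidate; // NULL if no eligible ReLU remains
}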
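Similarly, here is a minimal sketch of how Options::getDivideStrategy() might map the new string, assuming enum names that mirror the existing strategies; DivideStrategy::BaBSR and the fallback value are assumptions, not necessarily what this PR uses.

// Illustrative sketch only; enum names and the fallback are assumed.
DivideStrategy Options::getDivideStrategy() const
{
    String strategyString = getString( Options::SPLITTING_STRATEGY );

    if ( strategyString == "babsr" ) // the string ultimately adopted after review
        return DivideStrategy::BaBSR;
    else if ( strategyString == "polarity" )
        return DivideStrategy::Polarity;
    else if ( strategyString == "earliest-relu" )
        return DivideStrategy::EarliestReLU;
    else if ( strategyString == "relu-violation" )
        return DivideStrategy::ReLUViolation;
    else if ( strategyString == "largest-interval" )
        return DivideStrategy::LargestInterval;
    else if ( strategyString == "pseudo-impact" )
        return DivideStrategy::PseudoImpact;
    else
        return DivideStrategy::Auto; // fallback; the actual default may differ
}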

@MatthewDaggitt (Collaborator) commented:
I think the new option should be documented in the CHANGELOG.md file?

@liamjdavis (Contributor, Author) commented Dec 17, 2024

Fixed!

@wu-haoze (Collaborator) left a review:

Overall it looks good. I just have some comments regarding efficiency and style.

Review comments (since resolved) were left on:
  • src/engine/ReluConstraint.h
  • src/nlr/tests/Test_NetworkLevelReasoner.h
  • src/engine/tests/Test_ReluConstraint.h
  • src/engine/ReluConstraint.cpp
  • src/nlr/NetworkLevelReasoner.cpp
@liamjdavis (Contributor, Author) commented Jan 10, 2025

I made the requested changes and ended up storing the bias as a double instead of a Map, updating the unit tests and other references as needed.

@liamjdavis (Contributor, Author) commented:
I made an optimization to getPreviousBias(): the first time getPreviousBias() is called, the NLR iterates over all the constraints and stores all the previous biases in a map inside the NLR. On every subsequent call, getPreviousBias() pulls the previous bias from that map instead of iterating over all the constraints again.
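
A minimal sketch of that caching pattern, assuming a Map member inside the NLR; the member name (_previousBiases), the helper, the parameter type, and the exact signature are all assumptions.

// Illustrative sketch of the lazy caching described above; names are assumed.
double NetworkLevelReasoner::getPreviousBias( const ReluConstraint *relu )
{
    // First call: walk every constraint once and record each ReLU's previous bias.
    if ( _previousBiases.empty() )
        initializePreviousBiasMap(); // hypothetical helper that populates the map

    // Subsequent calls: a single map lookup instead of another full iteration.
    ASSERT( _previousBiases.exists( relu ) );
    return _previousBiases[relu];
}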

@wu-haoze (Collaborator) left a review:

Just one additional minor comment. Otherwise, LGTM!

@@ -152,7 +152,7 @@ void OptionParser::initialize()
     &( ( *_stringOptions )[Options::SPLITTING_STRATEGY] ) )
         ->default_value( ( *_stringOptions )[Options::SPLITTING_STRATEGY] ),
     "The branching strategy "
-    "(earliest-relu/pseudo-impact/largest-interval/relu-violation/polarity)."
+    "(earliest-relu/pseudo-impact/largest-interval/relu-violation/polarity/babsr-heuristic)."
@wu-haoze (Collaborator) commented on this line:

rename to babsr

@liamjdavis (Contributor, Author) replied:

Fixed!
