
Attack #848

Open · wants to merge 69 commits into master
Conversation

mayaswissa (Collaborator)

No description provided.

@wu-haoze (Collaborator)

There seem to be extra files like .vscode/launch.json that need to be removed from the PR.

@idan0610 (Collaborator) commented Dec 15, 2024

> Does this solve the start up time issues described in #754 when linking libtorch? If so, how did you do it?

We did not actually solve this issue, which is why we added a new CMake flag that allows building Marabou without importing the torch library, for anyone not willing to pay this delay when running many short queries in a row. But for someone running a more difficult query that is expected to take much longer, the delay becomes negligible, and the adversarial attack is expected to significantly improve runtimes, at least for SAT queries.

@MatthewDaggitt (Collaborator) commented Dec 16, 2024

> We did not actually solve this issue, which is why we added a new CMake flag that allows building Marabou without importing the torch library, for anyone not willing to pay this delay when running many short queries in a row. But for someone running a more difficult query that is expected to take much longer, the delay becomes negligible, and the adversarial attack is expected to significantly improve runtimes, at least for SAT queries.

Thanks for your answer @idan0610. While I understand the pragmatics of this decision, this proliferation of build flags seems like a mistake to me in the long term... What happens if the performance incompatibilities with the PyTorch build options get worse? How do we know how this flag interacts with all the other possible flags for Marabou?

If this is the approach taken then, given that we have known compatibility issues with this build config, my advice would be that at the very least we should run the whole test suite twice - once with the PyTorch build flag on and once with it off.

@MatthewDaggitt (Collaborator)
This should also be documented in the Changelog.

remove this file from PR

Also remove this file from the PR.

@@ -20,29 +20,29 @@ option(RUN_SYSTEM_TEST "Run system tests on build" OFF)
option(RUN_MEMORY_TEST "Run cxxtest testing with ASAN ON" ON)
option(RUN_PYTHON_TEST "Run Python API tests if building with Python" OFF)
option(ENABLE_GUROBI "Enable use the Gurobi optimizer" OFF)
option(ENABLE_OPENBLAS "Do symbolic bound tighting using blas" ON) # Not available on Windows
Return the default value to ON

CMakeLists.txt Outdated
option(CODE_COVERAGE "Add code coverage" OFF) # Available only in debug mode

option(BUILD_TORCH "Build libtorch" ON)
Default should be OFF

@@ -78,7 +78,7 @@ const bool GlobalConfiguration::PREPROCESS_INPUT_QUERY = true;
const bool GlobalConfiguration::PREPROCESSOR_ELIMINATE_VARIABLES = true;
const bool GlobalConfiguration::PL_CONSTRAINTS_ADD_AUX_EQUATIONS_AFTER_PREPROCESSING = true;
const bool GlobalConfiguration::NL_CONSTRAINTS_ADD_AUX_EQUATIONS_AFTER_PREPROCESSING = true;
const double GlobalConfiguration::PREPROCESSOR_ALMOST_FIXED_THRESHOLD = 0.00001;
Return the previous default value of 0.0001

@@ -71,6 +73,7 @@ void Options::initializeDefaultValues()
_intOptions[SEED] = 1;
_intOptions[NUM_BLAS_THREADS] = 1;
_intOptions[NUM_CONSTRAINTS_TO_REFINE_INC_LIN] = 30;
_intOptions[ATTACK_TIMEOUT] = 60; // todo?
60 seconds sounds fine to me; remove the todo.

@@ -57,6 +57,7 @@ Marabou::~Marabou()

void Marabou::run()
{
std::cout << "start run time: " << TimeUtils::now().ascii() << std::endl;
Remove this print, or route it through the logging mechanism.

@@ -69,6 +70,8 @@ void Marabou::run()

if ( Options::get()->getBool( Options::EXPORT_ASSIGNMENT ) )
exportAssignment();

std::cout << "end run time: " << TimeUtils::now().ascii() << std::endl;

here as well

if ( _engine->processInputQuery( _inputQuery ) )
{
if (_engine->getExitCode() == Engine::ATTACK_SAT)

Fix coding style (spaces inside parentheses, matching the surrounding code: `if ( _engine->getExitCode() == Engine::ATTACK_SAT )`).

@@ -300,6 +308,11 @@ void Marabou::displayResults( unsigned long long microSecondsElapsed ) const
printf( "\n" );
}
}
else if (result == Engine::ATTACK_SAT)

same

@idan0610 idan0610 closed this Dec 16, 2024
@idan0610 idan0610 reopened this Dec 16, 2024
@idan0610 (Collaborator) commented Dec 16, 2024

> We did not actually solve this issue, which is why we added a new CMake flag that allows building Marabou without importing the torch library, for anyone not willing to pay this delay when running many short queries in a row. But for someone running a more difficult query that is expected to take much longer, the delay becomes negligible, and the adversarial attack is expected to significantly improve runtimes, at least for SAT queries.

> Thanks for your answer @idan0610. While I understand the pragmatics of this decision, this proliferation of build flags seems like a mistake to me in the long term... What happens if the performance incompatibilities with the PyTorch build options get worse? How do we know how this flag interacts with all the other possible flags for Marabou?

> If this is the approach taken then, given that we have known compatibility issues with this build config, my advice would be that at the very least we should run the whole test suite twice - once with the PyTorch build flag on and once with it off.

I understand your concerns, and I'm not sure what the preferred way to handle this is either.
@guykatzz What do you think?

@mayaswissa mayaswissa closed this Dec 18, 2024
@mayaswissa mayaswissa deleted the attack branch December 18, 2024 11:55
@mayaswissa mayaswissa reopened this Dec 18, 2024
@guykatzz (Collaborator)

Hi all, and apologies for the delay in my reply.

Matthew, I'm not sure I understand your particular concern about PyTorch. Marabou uses a bunch of other external packages - most notably Gurobi, but also blas, and boost, and possibly a few others. Sure, it is good to run tests with this flag set to both ON and OFF; but are you especially concerned about this package?

@MatthewDaggitt (Collaborator)

> Matthew, I'm not sure I understand your particular concern about PyTorch. Marabou uses a bunch of other external packages - most notably Gurobi, but also blas, and boost, and possibly a few others. Sure, it is good to run tests with this flag set to both ON and OFF; but are you especially concerned about this package?

I'm concerned about this package in particular because when we build with PyTorch, we know that the performance of Marabou degrades noticeably (I wish we knew why). At least as far as I'm aware, there's no known incompatibility issue with the rest of the code base when building with Gurobi/blas/boost.

@MatthewDaggitt (Collaborator)

I'm definitely not in any way trying to block this PR from merging, but I did want to highlight that it's potentially a hit to the maintainability of the code base!

@guykatzz (Collaborator) commented Jan 2, 2025

Can you elaborate on what you mean by "we know that the performance of Marabou degrades noticeably"?

My understanding was that loading PyTorch incurred a certain overhead for each query, but that would only happen if the flag was set to true, right? And in Maya and Idan's experiments we see a significant speedup for certain queries.

@MatthewDaggitt (Collaborator)

> Can you elaborate on what you mean by "we know that the performance of Marabou degrades noticeably"?

> My understanding was that loading PyTorch incurred a certain overhead for each query, but that would only happen if the flag was set to true, right?

Yes, this overhead was what I was referring to. Sorry, I should have said "degrades noticeably on some types of queries".

For context, the added overhead for each query when building with PyTorch can't be a "we're running extra code" issue; instead it is some incompatibility between the build settings of PyTorch and Marabou, and it should really be fixable.

If we could fix it then we should always build with PyTorch, because then we could use the tensor library in the neural network parsers to fix the existing performance issues. I could then actually make progress in replacing most of the Python parsers with calls to the C++ parser. However, it was (rightly!) judged that the effect of building with the PyTorch flag on was unacceptable.

5 participants