NUP-2401: Check for prediction results discrepancies #3558
Conversation
…94-network-api # Conflicts: # docs/examples/opf/model_params.py
…xamples (OPF, network API and algo)
* Use new network links created in NUP-2396
* Remove customCompute() usage
…94-network-api # Conflicts: # docs/examples/opf/model_params.py
* Add unit test to make sure all 3 examples don't throw any exception
* Change single quotes to double quotes everywhere
* Remove utility script to plot saved prediction results from all 3 examples
* Remove part where examples save predictions to file
* Rename networkapi to network for better readability
* Make the example code easier to follow in the quick-start section.
* RST documentation
* Code snippets
* Use YAML params instead.
…les:
* Rename each complete-example.py module to something more specific to avoid conflicts when importing all examples in test.
* Make the runHotgym() method yield prediction results to turn the method into a generator and minimally impact the docs example code.
* Update examples_test.py tests to check for consistency of prediction results.
* Mark failing tests as skipped for now, until we can figure out why prediction results are not the same between the 3 frameworks (OPF, Algo and Network API)
I just have some questions before approving.
  zip(classifierResult[1], classifierResult["actualValues"]),
  reverse=True
)[0]
print("1-step: {:16} ({:4.4}%)".format(value, probability * 100))
print("1-step: {:16} ({:4.4}%)".format(oneStep, oneStepConfidence * 100))
Did you mean to pass these in a different order?
What do you mean? I just renamed the variables so that they are consistent with the other examples. Am I missing something?
result = (oneStep, oneStepConfidence * 100,
          fiveStep, fiveStepConfidence * 100)
print "1-step: {:16} ({:4.4}%)\t 5-step: {:16} ({:4.4}%)".format(*result)
yield result
What is the purpose of this `yield`?
I want to access the data generated by the `runHotgym()` method in all 3 `complete-example.py` modules from my unit test - the test that checks whether the 3 frameworks output the same prediction results. I'm adding a `yield` at the end of `runHotgym()` to turn the method into a data generator (of HTM predictions) so that I can access the prediction results in the tests.
I could have accumulated the prediction results in a list and returned the list instead. Let me know if you have a preference on how to do this.
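For illustration, here is a minimal, simplified sketch of that generator pattern; the dummy values stand in for the real model output, and only the `runHotgym()` name is taken from the example:

# Simplified sketch of the generator approach described above; the real
# example computes its predictions from the hotgym CSV data and the model.
def runHotgym():
    for i in range(3):
        # Dummy values standing in for the classifier output.
        oneStep, oneStepConfidence = float(i), 0.5
        result = (oneStep, oneStepConfidence * 100)
        print("1-step: {:16} ({:4.4}%)".format(*result))
        yield result  # makes runHotgym() a generator of prediction results

# A unit test can then collect everything the example yields:
predictions = list(runHotgym())

Compared with accumulating results in a list inside the example, the generator keeps the example code close to what the docs already show.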
No preference, just curious.
"""Make sure the algorithm API example does not throw any exception""" | ||
sys.path.insert(0, os.path.join(self.examplesDir, "algo")) # Add to path | ||
_runExample() | ||
@unittest.skip |
Why are they all marked `@unittest.skip`?
Because they are currently failing. This PR focuses on adding tests to see if the OPF, Network and Algo APIs give the same results. They don't, so I am marking these tests as skipped for now, until we figure out why we get different results (that is out of the scope of this PR, and I need help from someone familiar with the algorithms).
Can you add this in a comment?
These tests check whether the OPF, Network and Algorithm APIs give the same results. They don't, so they are marked as skipped for now, until we figure out why we get different results.
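One way to carry that explanation into the test module itself is the optional reason string accepted by `unittest.skip`; the sketch below is illustrative and does not use the actual test names from examples_test.py:

import unittest

class ExamplesTest(unittest.TestCase):

    # These tests check whether the OPF, Network and Algorithm API examples
    # give the same prediction results. They currently don't, so they are
    # skipped until we figure out why the results differ.
    @unittest.skip("OPF, Network and Algo APIs give different prediction results")
    def testExamplesGiveSameResults(self):
        self.fail("Not run while the skip marker is in place")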
…01-results # Resolved conflicts: # docs/examples/opf/complete-example.py # docs/examples/opf/create-model-example.py # docs/source/quick-start/network.rst # tests/unit/nupic/docs/examples_test.py
* `complete-example.py` should be `complete-network-example.py`
* Fix name change (TPRegion.py -> TMRegion.py)
* Restore tests/unit/nupic/docs/examples_test.py
Fixes #3566
@@ -153,12 +153,13 @@ def runHotgym():
)

# Print the best prediction for 1 step out.
probability, value = sorted(
oneStep, oneStepConfidence = sorted(
@marionleborgne Look here, the order was swapped in the call below.
Thanks for catching that!
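To spell the issue out with made-up numbers (a sketch, not the example's real data): `zip` pairs each probability with its value, so the first element of the sorted list unpacks as (probability, value), and with the new variable names the confidence has to come first:

probabilities = [0.1, 0.7, 0.2]    # illustrative classifier likelihoods
actualValues = [20.0, 21.0, 22.0]  # illustrative predicted values

# sorted(...)[0] is the (probability, value) pair with the highest probability,
# so the confidence must be unpacked first and the prediction second.
oneStepConfidence, oneStep = sorted(
    zip(probabilities, actualValues),
    reverse=True
)[0]
print("1-step: {:16} ({:4.4}%)".format(oneStep, oneStepConfidence * 100))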
Fixes #3566
This change adds tests to check the prediction results of the 3 frameworks (OPF, Algo and Network API) as used in the new NuPIC docs examples.
I am marking these tests as skipped for now, since they are failing: the prediction results differ between the OPF, Network API and Algorithm API. The random seed is the same for all 3 examples.
Blocked by #3557
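For reference, a hedged sketch of the kind of consistency check described above; the import paths below are hypothetical placeholders rather than the repo's actual module names:

import unittest

class PredictionConsistencyTest(unittest.TestCase):

    @unittest.skip("Prediction results currently differ between the 3 APIs")
    def testOpfNetworkAndAlgoAgree(self):
        # Hypothetical import paths; each module exposes the runHotgym()
        # generator described earlier in this conversation.
        from opf_example import runHotgym as runOpf
        from network_example import runHotgym as runNetwork
        from algo_example import runHotgym as runAlgo

        opfResults = list(runOpf())
        networkResults = list(runNetwork())
        algoResults = list(runAlgo())

        # The examples should produce identical predictions record by record.
        self.assertEqual(opfResults, networkResults)
        self.assertEqual(opfResults, algoResults)

if __name__ == "__main__":
    unittest.main()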