
NUP-2401: Check for prediction results discrepancies #3558

Merged: 25 commits, Apr 27, 2017

Conversation

@marionleborgne (Member) commented Apr 20, 2017:

Fixes #3566

This change adds tests to check the prediction results of the 3 frameworks (OPF, Algo and Network API) as used in the new NuPIC docs examples.

I am marking these tests as skipped for now, since they are failing; the prediction results differ between the OPF, Network API and Algorithm API examples. The random seed is the same for all 3 examples.

Blocked by #3557
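
For reference, the consistency check these new tests aim at boils down to comparing the prediction streams of the three examples. Below is a minimal sketch of that idea; the module and function names are placeholders, not the actual test code in this PR:

import unittest
from itertools import islice


class PredictionConsistencyTest(unittest.TestCase):

  @unittest.skip("Prediction results currently differ between the 3 frameworks")
  def testFrameworksAgree(self):
    # Placeholder imports: each docs example is assumed to expose a
    # runHotgym() generator that yields prediction tuples.
    from opf_example import runHotgym as runOpf
    from network_example import runHotgym as runNetwork
    from algo_example import runHotgym as runAlgo

    n = 100
    opfResults = list(islice(runOpf(), n))
    self.assertEqual(opfResults, list(islice(runNetwork(), n)))
    self.assertEqual(opfResults, list(islice(runAlgo(), n)))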

Marion Le Borgne added 20 commits April 4, 2017 16:58
…94-network-api

# Conflicts:
#	docs/examples/opf/model_params.py
* Use new network links created in NUP-2396
* Remove customCompute() usage
…94-network-api

# Conflicts:
#	docs/examples/opf/model_params.py
* Add unit test to make sure all 3 examples don't throw any exception
* Change single quotes to double quotes everywhere
* Remove utility script to plot saved prediction results from all 3 examples
* Remove part where examples save predictions to file
* Rename networkapi to network for better readability
* Make the example code easier to follow in the quick-start section.
* RST documentation
* Code snippets
…les:

* Rename each complete-example.py module to something more
  specific to avoid conflicts when importing all examples in test.
* Make runHotgym() method yield prediction results to turn
  method into generator and minimally impact docs example code.
* Update examples_test.py tests to check for consistency
  of results predictions.
* Mark failing tests as skipped for now, until we can figure
  out why prediction results are not the same between the
  3 frameworks (OPF, Algo and Network API)
@rhyolight (Member) left a comment:

I just have some questions before approving.

zip(classifierResult[1], classifierResult["actualValues"]),
reverse=True
)[0]
- print("1-step: {:16} ({:4.4}%)".format(value, probability * 100))
+ print("1-step: {:16} ({:4.4}%)".format(oneStep, oneStepConfidence * 100))
@rhyolight (Member):

Did you mean to pass these in a different order?

@marionleborgne (Member Author):

What do you mean? I just renamed the variables so that they are consistent with the other examples. Am I missing something?

result = (oneStep, oneStepConfidence * 100,
          fiveStep, fiveStepConfidence * 100)
print "1-step: {:16} ({:4.4}%)\t 5-step: {:16} ({:4.4}%)".format(*result)
yield result
@rhyolight (Member):

What is the purpose of this yield?

@marionleborgne (Member Author) commented Apr 26, 2017:

I want to access the data generated by the runHotgym() methods of all 3 complete-example.py modules in my unit test, the one testing whether the 3 frameworks output the same prediction results. I'm adding a yield at the end of runHotgym() to turn the method into a generator of HTM predictions, so that I can access the prediction results in the tests.

I could have accumulated the prediction results in a list and returned the list. Let me know if you have a preference on how to do this.
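
For context, a minimal sketch of the generator pattern being described (illustrative only, not the exact example code): yielding each result lets the test pull predictions one by one instead of the example building up a list.

def runExample(numRecords):
  for i in xrange(numRecords):    # Python 2, matching the examples
    prediction = i * 0.5          # stand-in for a real HTM prediction
    yield prediction              # caller consumes results lazily

# A test can then collect exactly the stream it needs:
results = list(runExample(10))
assert len(results) == 10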

@rhyolight (Member):

No preference, just curious.

"""Make sure the algorithm API example does not throw any exception"""
sys.path.insert(0, os.path.join(self.examplesDir, "algo")) # Add to path
_runExample()
@unittest.skip
@rhyolight (Member):

Why are they all marked @unittest.skip?

@marionleborgne (Member Author):

Because they are currently failing. This PR focuses on adding tests to see if the OPF, Network and Algo APIs give the same results. They don't, so I am marking these tests as skipped for now, until we figure out why we get different results (that is out of the scope of this PR, and I need help from someone familiar with the algos).

@rhyolight (Member) commented Apr 26, 2017:

Can you add this in a comment?

These tests check whether the OPF, Network and Algorithm APIs give the same
results. They don't, so they are marked as skipped for now, until we figure
out why we get different results.
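
One way to keep that explanation right next to the tests is to pass a reason string to the decorator, which unittest then reports when the test is skipped (a sketch, not the exact wording added in the PR):

import unittest

class ExamplesTest(unittest.TestCase):

  @unittest.skip("OPF, Network and Algorithm APIs currently give different "
                 "prediction results; skipped until the discrepancy is resolved.")
  def testPredictionResultsMatch(self):
    pass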

Marion Le Borgne added 4 commits April 26, 2017 14:49
…01-results

# Resolved conflicts:
#	docs/examples/opf/complete-example.py
#	docs/examples/opf/create-model-example.py
#	docs/source/quick-start/network.rst
#	tests/unit/nupic/docs/examples_test.py
* `complete-example.py` should be `complete-network-example.py`
* Fix name change (TPRegion.py -> TMRegion.py)
* Restore tests/unit/nupic/docs/examples_test.py
@marionleborgne (Member Author):
Fixes #3566

@@ -153,12 +153,13 @@ def runHotgym():
)

# Print the best prediction for 1 step out.
- probability, value = sorted(
+ oneStep, oneStepConfidence = sorted(
@rhyolight (Member):

@marionleborgne Look here, the order was swapped in the call below.

@marionleborgne (Member Author):

Thanks for catching that!
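
For clarity: each sorted pair comes out as (probability, value), because zip() puts classifierResult[1] (the probabilities) first, so renaming the unpack targets without swapping them hands the probability to oneStep. A sketch of one consistent ordering, under that assumption about classifierResult:

# Each element is a (probability, value) pair, highest probability first.
oneStepConfidence, oneStep = sorted(
  zip(classifierResult[1], classifierResult["actualValues"]),
  reverse=True
)[0]
print("1-step: {:16} ({:4.4}%)".format(oneStep, oneStepConfidence * 100))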

@rhyolight rhyolight merged commit f4a9074 into numenta:master Apr 27, 2017