NUP-2401: Check for prediction results discrepancies #3558

Merged Apr 27, 2017 (25 commits)
Commits
80bb517  NUP-2394: use YAML file for model params (Apr 4, 2017)
54a9d10  NUP-2394: network API quick-start example (WIP) (Apr 4, 2017)
69841ff  NUP-2394: run inference outside network (Apr 5, 2017)
6956e0c  NUP-2394: save and check all complete-examples.py predictions (Apr 5, 2017)
956f569  Merge branch 'master' of https://github.com/numenta/nupic into NUP-23… (Apr 5, 2017)
fb95987  NUP-2394: use YAML params in "algo" code example (Apr 6, 2017)
dc3cdbf  NUP-2394: update comments of YAML params based on feedback (Apr 6, 2017)
a771b84  NUP-2394: scripts to compare predictions results between the 3 code e… (Apr 6, 2017)
93160cf  NUP-2394: Run classification inside network. Details: (Apr 8, 2017)
4ce9edc  NUP-2394: Show RMSE in plot titles (Apr 8, 2017)
9c77a66  Merge branch 'master' of https://github.com/numenta/nupic into NUP-23… (Apr 14, 2017)
21d82b1  Merge branch 'master' of https://github.com/numenta/nupic into NUP-23… (Apr 17, 2017)
b727b69  Code review feedback: (Apr 17, 2017)
dc72590  NUP-2394: Fix YAML with new CLA model name (HTMPrediction) (Apr 17, 2017)
fac2380  NUP-2394: make model_params camel case for consistency and update cod… (Apr 18, 2017)
3c205af  NUP-2394: re-order network creation logic: (Apr 18, 2017)
d7e8593  NUP-2394: fix indentation (Apr 18, 2017)
0466c73  NUP-2405: quick-start guide for the network API: (Apr 18, 2017)
829db9d  NUP-2405: Fix reference to old modelParams in OPF example: (Apr 18, 2017)
6ad36cc  NUP-2401: unit test checking consistency of predictions in docs examp… (Apr 19, 2017)
1bf77c5  NUP-2401: add comments in unittest.skip decorators (Apr 26, 2017)
c98d85e  Merge branch 'master' of https://github.com/numenta/nupic into NUP-24… (Apr 26, 2017)
d67f4e0  NUP-2401: make result example consistent with complete example. (Apr 26, 2017)
4b8cd85  NUP-2401: Fix merge conflict issues: (Apr 26, 2017)
3ea1d1a  NUP-2401: fix results order (Apr 26, 2017)
Files changed
@@ -153,11 +153,13 @@ def runHotgym():
   )

   # Print the best prediction for 1 step out.
-  probability, value = sorted(
+  oneStepConfidence, oneStep = sorted(
     zip(classifierResult[1], classifierResult["actualValues"]),
     reverse=True
   )[0]
-  print("1-step: {:16} ({:4.4}%)".format(value, probability * 100))
+  print("1-step: {:16} ({:4.4}%)".format(oneStep, oneStepConfidence * 100))

Member commented:

  Did you mean to pass these in a different order?

Member (Author) replied:

  What do you mean? I just renamed the variables so that they are consistent with the other examples. Am I missing something?

+  yield oneStep, oneStepConfidence * 100, None, None

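For context on the thread above: the classifier returns parallel lists of probabilities and candidate values, and the example picks the most likely prediction by sorting zipped (probability, value) pairs in descending order. A minimal standalone sketch of that pattern, with made-up numbers:

  # Toy stand-ins for classifierResult[1] (the probabilities) and
  # classifierResult["actualValues"] (the candidate predictions).
  probabilities = [0.1, 0.7, 0.2]
  actualValues = [4.7, 5.3, 6.1]

  # Tuples sort by their first element, so the probability must come first
  # in each pair; reverse=True puts the most likely pair at index 0.
  oneStepConfidence, oneStep = sorted(
    zip(probabilities, actualValues),
    reverse=True
  )[0]

  print("1-step: {:16} ({:4.4}%)".format(oneStep, oneStepConfidence * 100))
  # Best prediction: 5.3, at 70.0% confidence.
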
@@ -100,7 +100,7 @@ def getPredictionResults(network, clRegionName):
   N = classifierRegion.getSelf().maxCategoryCount
   results = {step: {} for step in steps}
   for i in range(len(steps)):
-    # stepProbabilities: probabilities for this prediction step only.
+    # stepProbabilities are probabilities for this prediction step only.
     stepProbabilities = probabilities[i * N:(i + 1) * N - 1]
     mostLikelyCategoryIdx = stepProbabilities.argmax()
     predictedValue = actualValues[mostLikelyCategoryIdx]
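
The slicing above assumes the classifier emits one flat array holding the category probabilities for every prediction step laid out back to back, N buckets per step. A hedged sketch of that layout with made-up values; note the sketch slices the full chunk with (i + 1) * N, whereas the hunk above stops one entry short at (i + 1) * N - 1:

  import numpy as np

  N = 3            # maxCategoryCount: probability buckets per step
  steps = [1, 5]   # prediction horizons, in layout order
  actualValues = np.array([4.7, 5.3, 6.1])

  # Flat array: the first N entries belong to step 1, the next N to step 5.
  probabilities = np.array([0.1, 0.7, 0.2,   # step 1
                            0.5, 0.2, 0.3])  # step 5

  for i, step in enumerate(steps):
    # Slice out this step's chunk and take its most likely bucket.
    stepProbabilities = probabilities[i * N:(i + 1) * N]
    mostLikelyCategoryIdx = stepProbabilities.argmax()
    print(step, actualValues[mostLikelyCategoryIdx])
  # step 1 -> 5.3, step 5 -> 4.7
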
@@ -143,11 +143,10 @@ def runHotgym():
   fiveStep = results[5]["predictedValue"]
   fiveStepConfidence = results[5]["predictionConfidence"]

-  print("1-step: {:16} ({:4.4}%)\t"
-        "5-step: {:16} ({:4.4}%)".format(oneStep,
-                                         oneStepConfidence * 100,
-                                         fiveStep,
-                                         fiveStepConfidence * 100))
+  result = (oneStep, oneStepConfidence * 100,
+            fiveStep, fiveStepConfidence * 100)
+  print "1-step: {:16} ({:4.4}%)\t 5-step: {:16} ({:4.4}%)".format(*result)
+  yield result

Member commented:

  What is the purpose of this yield?

marionleborgne (Member, Author) replied on Apr 26, 2017:

  I want to access the data generated by the runHotgym() methods in all 3 complete-example.py scripts in my unit test - the thing testing whether the 3 frameworks output the same prediction results. I'm adding a yield at the end of runHotgym() to turn this method into a data generator (of HTM predictions) so that I can access prediction results in the tests.

  I could have accumulated the prediction results in a list and returned the list. Let me know if you have a preference on how to do this.

Member replied:

  No preference, just curious.



 if __name__ == "__main__":
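
The yield-versus-list trade-off the author describes can be seen in a minimal standalone sketch (the prediction tuples are made up; only the shape of the pattern matters):

  def runHotgymAsGenerator():
    # Yield each result as it is computed: callers iterate lazily and the
    # function never has to hold all predictions in memory at once.
    for result in [(5.3, 70.0), (6.1, 55.0)]:
      yield result

  def runHotgymAsList():
    # The alternative mentioned above: accumulate and return a list.
    results = []
    for result in [(5.3, 70.0), (6.1, 55.0)]:
      results.append(result)
    return results

  # A test can consume either form the same way:
  for prediction, confidence in runHotgymAsGenerator():
    print(prediction, confidence)
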
@@ -42,11 +42,10 @@ def runHotgym():
   fiveStep = bestPredictions[5]
   fiveStepConfidence = allPredictions[5][fiveStep]

-  print("1-step: {:16} ({:4.4}%)\t"
-        "5-step: {:16} ({:4.4}%)".format(oneStep,
-                                         oneStepConfidence * 100,
-                                         fiveStep,
-                                         fiveStepConfidence * 100))
+  result = (oneStep, oneStepConfidence * 100,
+            fiveStep, fiveStepConfidence * 100)
+  print "1-step: {:16} ({:4.4}%)\t 5-step: {:16} ({:4.4}%)".format(*result)
+  yield result



docs/examples/opf/results-example.py (3 additions & 6 deletions)
@@ -7,9 +7,6 @@
 oneStepConfidence = allPredictions[1][oneStep]
 fiveStepConfidence = allPredictions[5][fiveStep]

-print("1-step: {:16} ({:4.4}%)\t5-step: {:16} ({:4.4}%)".format(
-  oneStep,
-  oneStepConfidence*100,
-  fiveStep,
-  fiveStepConfidence*100
-))
+result = (oneStep, oneStepConfidence * 100,
+          fiveStep, fiveStepConfidence * 100)
+print "1-step: {:16} ({:4.4}%)\t 5-step: {:16} ({:4.4}%)".format(*result)
docs/source/quick-start/algorithms.rst (1 addition & 1 deletion)
@@ -9,7 +9,7 @@
 Here is the complete program we are going to use as an example. In sections
 below, we'll break it down into parts and explain what is happening (without
 some of the plumbing details).

-.. literalinclude:: ../../examples/algo/complete-example.py
+.. literalinclude:: ../../examples/algo/complete-algo-example.py


 Encoding Data
docs/source/quick-start/network.rst (1 addition & 1 deletion)
@@ -9,7 +9,7 @@
 Here is the complete program we are going to use as an example. In sections
 below, we'll break it down into parts and explain what is happening (without
 some of the plumbing details).

-.. literalinclude:: ../../examples/network/complete-example.py
+.. literalinclude:: ../../examples/network/complete-network-example.py

 Network Parameters
 ^^^^^^^^^^^^^^^^^^
docs/source/quick-start/opf.rst (1 addition & 1 deletion)
@@ -9,7 +9,7 @@
 Here is the complete program we are going to use as an example. In sections
 below, we'll break it down into parts and explain what is happening (without
 some of the plumbing details).

-.. literalinclude:: ../../examples/opf/complete-example.py
+.. literalinclude:: ../../examples/opf/complete-opf-example.py

 Model Parameters
 ^^^^^^^^^^^^^^^^
tests/unit/nupic/docs/examples_test.py (132 additions & 28 deletions)
@@ -24,50 +24,154 @@
 import os
 import sys
 import unittest2 as unittest
+import numpy as np
+import random
+
+SEED = 42
+random.seed(SEED)
+np.random.seed(SEED)


-def _runExample():
-  """Import and run main function runHotgym() in complete-example.py"""
-  mod = __import__("complete-example", fromlist=["runHotgym"])
-  runHotgym = getattr(mod, 'runHotgym')
-  runHotgym()
+def _getPredictionsGenerator(examplesDir, exampleName):
+  """
+  Get predictions generator for one of the quick-start example.
+
+  .. note::
+
+    The examples are not part of the nupic package so we need to manually
+    append the example module path to syspath.
+
+  :param examplesDir:
+    (str) path to the example parent directory.
+  :param exampleName:
+    (str) name of the example. E.g: "opf", "network", "algo".
+  :return predictionsGenerator:
+    (function) predictions generator functions.
+  """
+  sys.path.insert(0, os.path.join(examplesDir, exampleName))
+  modName = "complete-%s-example" % exampleName
+  mod = __import__(modName, fromlist=["runHotgym"])
+  return getattr(mod, "runHotgym")


-class ExamplesTest(unittest.TestCase):
-  """Unit tests for all quick-start examples."""
-
-  def setUp(self):
-    docsTestsPath = os.path.dirname(os.path.abspath(__file__))
-    self.examplesDir = os.path.join(docsTestsPath, os.path.pardir,
-                                    os.path.pardir, os.path.pardir,
-                                    os.path.pardir, "docs", "examples")
-
-
-  def testExamplesDirExists(self):
-    """Make sure the ``examples`` directory is in the correct location"""
-    self.assertTrue(os.path.exists(self.examplesDir),
-                    "Path to examples does not exist: %s" % self.examplesDir)
-
-
-  def testOPFExample(self):
-    """Make sure the OPF example does not throw any exception"""
-    sys.path.insert(0, os.path.join(self.examplesDir, "opf"))  # Add to path
-    _runExample()
-
-
-  def testNetworkAPIExample(self):
-    """Make sure the network API example does not throw any exception"""
-    sys.path.insert(0, os.path.join(self.examplesDir, "network"))  # Add to path
-    _runExample()
-
-
-  def testAlgoExample(self):
-    """Make sure the algorithm API example does not throw any exception"""
-    sys.path.insert(0, os.path.join(self.examplesDir, "algo"))  # Add to path
-    _runExample()
+class ExamplesTest(unittest.TestCase):
+  """Unit tests for all quick-start examples."""
+
+  examples = ["opf", "network", "algo"]
+  oneStepPredictions = {example: [] for example in examples}
+  oneStepConfidences = {example: [] for example in examples}
+  fiveStepPredictions = {example: [] for example in examples}
+  fiveStepConfidences = {example: [] for example in examples}
+
+  docsTestsPath = os.path.dirname(os.path.abspath(__file__))
+  examplesDir = os.path.join(docsTestsPath, os.path.pardir,
+                             os.path.pardir, os.path.pardir,
+                             os.path.pardir, "docs", "examples")
+
+
+  @classmethod
+  def setUpClass(cls):
+    """Get the predictions and prediction confidences for all examples."""
+    for example in cls.examples:
+      predictionsGenerator = _getPredictionsGenerator(cls.examplesDir, example)
+      for (oneStepPrediction, oneStepConfidence,
+           fiveStepPrediction, fiveStepConfidence) in predictionsGenerator():
+        cls.oneStepPredictions[example].append(oneStepPrediction)
+        cls.oneStepConfidences[example].append(oneStepConfidence)
+        cls.fiveStepPredictions[example].append(fiveStepPrediction)
+        cls.fiveStepConfidences[example].append(fiveStepConfidence)
+
+
+  def testExamplesDirExists(self):
+    """Make sure the examples directory is in the correct location"""
+    failMsg = "Path to examples does not exist: %s" % ExamplesTest.examplesDir
+    self.assertTrue(os.path.exists(ExamplesTest.examplesDir), failMsg)
+
+
+  @unittest.skip("Skip test until we figure out why we get different "
+                 "results with OPF, Network and Algorithm APIs.")
+  def testNumberOfOneStepPredictions(self):
+    """Make sure all examples output the same number of oneStepPredictions."""
+
+    self.assertEquals(len(ExamplesTest.oneStepPredictions["opf"]),
+                      len(ExamplesTest.oneStepPredictions["algo"]))
+    self.assertEquals(len(ExamplesTest.oneStepPredictions["opf"]),
+                      len(ExamplesTest.oneStepPredictions["network"]))
+
+
+  @unittest.skip("Skip test until we figure out why we get different "
+                 "results with OPF, Network and Algorithm APIs.")
+  def testOneStepPredictionsOpfVsAlgo(self):
+    """Make sure one-step predictions are the same for OPF and Algo API."""
+    for i in range(len(ExamplesTest.oneStepPredictions["opf"])):
+      self.assertEquals(ExamplesTest.oneStepPredictions["opf"][i],
+                        ExamplesTest.oneStepPredictions["algo"][i])
+
+
+  @unittest.skip("Skip test until we figure out why we get different "
+                 "results with OPF, Network and Algorithm APIs.")
+  def testOneStepPredictionsOpfVsNetwork(self):
+    """Make sure one-step predictions are the same for OPF and Network API."""
+    for i in range(len(ExamplesTest.oneStepPredictions["opf"])):
+      self.assertEquals(ExamplesTest.oneStepPredictions["opf"][i],
+                        ExamplesTest.oneStepPredictions["network"][i])
+
+
+  @unittest.skip("Skip test until we figure out why we get different "
+                 "results with OPF, Network and Algorithm APIs.")
+  def testOneStepPredictionsAlgoVsNetwork(self):
+    """Make sure one-step predictions are the same for Algo and Network API."""
+    for i in range(len(ExamplesTest.oneStepPredictions["algo"])):
+      self.assertEquals(ExamplesTest.oneStepPredictions["algo"][i],
+                        ExamplesTest.oneStepPredictions["network"][i])
+
+
+  @unittest.skip("Skip test until we figure out why we get different "
+                 "results with OPF, Network and Algorithm APIs.")
+  def testFiveStepPredictionsOpfVsNetwork(self):
+    """Make sure five-step predictions are the same for OPF and Network API."""
+    for i in range(len(ExamplesTest.fiveStepPredictions["opf"])):
+      self.assertEquals(ExamplesTest.fiveStepPredictions["opf"][i],
+                        ExamplesTest.fiveStepPredictions["network"][i])
+
+
+  @unittest.skip("Skip test until we figure out why we get different "
+                 "results with OPF, Network and Algorithm APIs.")
+  def testOneStepConfidencesOpfVsAlgo(self):
+    """Make sure one-step confidences are the same for OPF and Algo API."""
+    for i in range(len(ExamplesTest.oneStepConfidences["opf"])):
+      self.assertEquals(ExamplesTest.oneStepConfidences["opf"][i],
+                        ExamplesTest.oneStepConfidences["algo"][i])
+
+
+  @unittest.skip("Skip test until we figure out why we get different "
+                 "results with OPF, Network and Algorithm APIs.")
+  def testOneStepConfidencesOpfVsNetwork(self):
+    """Make sure one-step confidences are the same for OPF and Network API."""
+    for i in range(len(ExamplesTest.oneStepConfidences["opf"])):
+      self.assertEquals(ExamplesTest.oneStepConfidences["opf"][i],
+                        ExamplesTest.oneStepConfidences["network"][i])
+
+
+  @unittest.skip("Skip test until we figure out why we get different "
+                 "results with OPF, Network and Algorithm APIs.")
+  def testOneStepConfidencesAlgoVsNetwork(self):
+    """Make sure one-step confidences are the same for Algo and Network API."""
+    for i in range(len(ExamplesTest.oneStepConfidences["algo"])):
+      self.assertEquals(ExamplesTest.oneStepConfidences["algo"][i],
+                        ExamplesTest.oneStepConfidences["network"][i])
+
+
+  @unittest.skip("Skip test until we figure out why we get different "
+                 "results with OPF, Network and Algorithm APIs.")
+  def testFiveStepConfidencesOpfVsNetwork(self):
+    """Make sure five-step confidences are the same for OPF and Network API."""
+    for i in range(len(ExamplesTest.fiveStepConfidences["opf"])):
+      self.assertEquals(ExamplesTest.fiveStepConfidences["opf"][i],
+                        ExamplesTest.fiveStepConfidences["network"][i])
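
One detail in _getPredictionsGenerator above: the example filenames contain hyphens, so a plain import statement would be a SyntaxError; passing the name as a string to __import__ sidesteps that. A minimal sketch of the pattern, using a hypothetical file my-example.py (the path and function name are illustrative assumptions, mirroring the test helper):

  import os
  import sys

  # Assumption for illustration: /tmp/examples/my-example.py exists and
  # defines runHotgym(). "import my-example" would not parse, but
  # __import__ accepts the hyphenated name as a string; fromlist is
  # passed through just as the test helper does.
  sys.path.insert(0, os.path.join("/tmp", "examples"))
  mod = __import__("my-example", fromlist=["runHotgym"])
  runHotgym = getattr(mod, "runHotgym")
  runHotgym()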