Nback #2617
Conversation
- add "results" dir to path for saving files • composition.py - change CROSS_ENTROPY assignment of LinearCombination operation to 'cross-entropy'
- StandardOutputPorts: fix bug in get_port_dict in which name searched for with "is" rather than "=="
- loss_function -> loss_spec (to be consistent with autodiffcomposition.py
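For context, the get_port_dict bug class is easy to reproduce in plain Python: comparing strings with "is" tests object identity, not equality, so a name lookup can silently fail. The snippet below is a minimal illustration only; the function names and dict layout are simplified stand-ins, not the actual PsyNeuLink StandardOutputPorts code.

```python
# Minimal illustration of the "is" vs "==" lookup bug (simplified stand-in,
# not the actual PsyNeuLink StandardOutputPorts implementation).
ports = [{"name": "MSE"}, {"name": "SUM"}]

def get_port_dict_buggy(name):
    # "is" compares object identity; a runtime-constructed string with the
    # same characters may be a different object, so the match can fail.
    return next((d for d in ports if d["name"] is name), None)

def get_port_dict_fixed(name):
    # "==" compares string values, which is what a name lookup needs.
    return next((d for d in ports if d["name"] == name), None)

lookup = "".join(["M", "S", "E"])      # equal to "MSE", but a distinct object
print(get_port_dict_buggy(lookup))     # likely None (identity check fails)
print(get_port_dict_fixed(lookup))     # {'name': 'MSE'}
```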
…Link into feat/binomial_distort
- docstring mods re: learning
• report.py
  - remove ReportLearning
…Link into feat/binomial_distort
Conflicts: psyneulink/core/globals/utilities.py
…Link into feat/binomial_distort
- test_ContentAddressableMemory_simple_distances:
  - add test for empty list in call to c.distances_by_field
  - add tests for [] in field_weights
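As a rough illustration of what those tests exercise, here is a conceptual numpy sketch of per-field distances where an empty field is skipped. This is not the PsyNeuLink ContentAddressableMemory API; the helper name and the exact handling of empty fields are assumptions for illustration only.

```python
import numpy as np

def distances_by_field(cue, entry, field_weights):
    """Per-field distances between a cue and a stored entry (conceptual sketch).

    Fields given as an empty list (or with an empty/zero weight) are skipped
    rather than contributing a spurious distance -- the behavior the tests
    above are checking for.
    """
    distances = []
    for cue_field, entry_field, weight in zip(cue, entry, field_weights):
        if len(cue_field) == 0 or weight in (None, 0, []):
            distances.append(None)  # empty field: no distance computed
        else:
            diff = np.asarray(cue_field) - np.asarray(entry_field)
            distances.append(weight * np.linalg.norm(diff))
    return distances

# Second field is empty, so only the first field yields a distance.
print(distances_by_field([[1.0, 2.0], []], [[1.0, 0.0], []], [1, []]))
```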
• Project
  - get rid of keywords for MSE, SSE, L0, L1, etc. and rename as Loss.<*>.name
  - NOTE: keyword for CROSS_ENTROPY ('cross-entropy') persists, as it is used for DistanceMetrics and LinearCombination.operation
• composition.py, learningfunctions.py
  - loss_function -> loss_spec (for consistency with autodiff)
• outputport.py
  - StandardOutputPorts: fix bug in get_port_dict in which the name was searched for with "is" rather than "=="
• comparatormechanism.py
  - rename standard_output_port['sum'] as standard_output_port['SUM']
  - NOTE: names of MSE and SSE output_ports use Loss.<*>.name, whereas SUM uses 'sum'.upper() (instead of Loss.L0)
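To make the naming convention concrete, here is a small sketch of the pattern described above. The enum below is a stand-in that mirrors the Loss members named in the commit message, not the library's actual definition, and the assignments are purely illustrative.

```python
from enum import Enum, auto

# Stand-in mirroring the Loss members referred to above
# (illustrative, not the actual PsyNeuLink definition).
class Loss(Enum):
    L0 = auto()
    L1 = auto()
    SSE = auto()
    MSE = auto()
    CROSS_ENTROPY = auto()

# Output-port names now come from Loss.<*>.name instead of
# free-standing string keywords ...
mse_port_name = Loss.MSE.name   # 'MSE'
sse_port_name = Loss.SSE.name   # 'SSE'

# ... except for the 'sum' port, which is renamed to the upper-cased string
# 'SUM' rather than Loss.L0.name, and the 'cross-entropy' keyword, which is
# kept because DistanceMetrics and LinearCombination.operation still use it.
sum_port_name = 'sum'.upper()          # 'SUM'
CROSS_ENTROPY_KEYWORD = 'cross-entropy'
```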
This PR causes the following changes to the html docs (ubuntu-latest-3.7-x64):
See CI logs for the full diff.
CodeQL found more than 10 potential problems in the proposed changes. Check the Files changed tab for more details.
* origin/allclose-clean:
  tests: replace all unaffected assert np.allclose with np.testing.assert_allclose
  tests: replace assert np.allclose with np.testing.assert_allclose where tolerance fails
  tests: test_identicalness_of_control_and_gating: fix allclose shape
  tests: test_log: fix allclose shape
  tests: test_rpy.py test_multilayer: fix allclose
  tests: test_reset_run_2darray: fix allclose
  tests: test_transfer_mech_array_var_normal_array_noise: fix allclose
  tests: test_single_array: fix allclose
  tests: test_connect_outer_composition_to_only_input_node_in_inner_comp_option2: fix allclose
  tests: test_AGTUtility_valid: fix allclose
  tests: test_target_dict_spec_single_trial_scalar_and_lists_rl: fix allclose
  tests: test_transfer_mech_func: fix allclose
  tests: test_input_specification_multiple_nested_compositions: fix allclose
  tests: test_input_not_provided_to_run: fix allclose
  tests: test_example_11: fix allclose
  tests: test_buffer_standalone: fix allclose
  tests: test_assign_value: fix allclose
  tests: TestRunInputSpecifications::test_2_mechanisms_input_5: fix allclose
  tests: test_integrator_multiple_input: fix allclose
  tests: test_nested_composition_run: fix allclose
  tests: tests_output_port_variable_spec_composition: fix allclose
  tests: test_output_ports: fix allclose
  tests: test_LCAMechanism_threshold: fix allclose
  tests: test_user_def_func_numpy: match 2d output to 2d input
  tests: test_linear_combination_function_in_mechanism: fix allclose
  tests: test_connect_compositions_with_simple_states: fix allclose
  tests: TestInputSpecifications test_run_2_mechanisms_reuse_input: fix allclose
  tests: TestRun test_run_2_mechanisms_reuse_input: fix allclose
  tests: TestRunInputSpecifications test_run_2_mechanisms_reuse_input: fix allclose
  tests: test_nested_transfer_mechanism_composition_parallel shape mismatch
  Fix test_ddm_mechanism TestInputPorts
  Fix test_reset_run_array
  Fix TestReduce
  Fix test_reset_state_integrator_mechanism
  Fix more execute tests.
  Fix more composition.run() tests.
  Fix test_four_level_nested_transfer_mechanism_composition_parallel
  Fix test_processing_mechanism_function
  tests: allclose changes
  Fix log tests
  tests: allclose changes
  tests: allclose changes
  Fix test_nested_composition_execution
  Fix expected 3D outputs
  Fix test_run_no_inputs
  tests: allclose changes
  tests: allclose changes
  tests: allclose changes
  Fix expected results shape
  Fix test_xor_training_correctness
  tests: test_nested_composition_run_trials_inputs: undo disable test parametrizations
  tests: allclose changes
  Nback (PrincetonUniversity#2617)
  Fix/contentaddressablememory empty entry (PrincetonUniversity#2616)
  llvm: Add human readable name to _node_wrapper instances
  llvm: Use WeakRefDictionary for node wrappers
  llvm/builder_context: Use proxy object for _node_wrapper owning composition
* origin/devel:
  tests: remove asserts using zip loops where possible
  tests: replace all unaffected assert np.allclose with np.testing.assert_allclose
  tests: replace assert np.allclose with np.testing.assert_allclose where tolerance fails
  tests: test_gating_with_composition: fix allclose
  tests: test_log: fix allclose shape
  tests: test_rpy.py test_multilayer: fix allclose
  tests: test_reset_run_2darray: fix allclose
  tests: test_transfer_mech_array_var_normal_array_noise: fix allclose
  tests: test_single_array: fix allclose
  tests: test_connect_outer_composition_to_only_input_node_in_inner_comp_option2: fix allclose
  tests: test_AGTUtility_valid: fix allclose
  tests: test_target_dict_spec_single_trial_scalar_and_lists_rl: fix allclose
  tests: test_transfer_mech_func: fix allclose
  tests: test_input_specification_multiple_nested_compositions: fix allclose
  tests: test_input_not_provided_to_run: fix allclose
  tests: test_example_11: fix allclose
  tests: test_buffer_standalone: fix allclose
  tests: test_assign_value: fix allclose
  tests: TestRunInputSpecifications::test_2_mechanisms_input_5: fix allclose
  tests: test_integrator_multiple_input: fix allclose
  tests: tests_output_port_variable_spec_composition: fix allclose
  tests: test_output_ports: fix allclose
  tests: test_LCAMechanism_threshold: fix allclose
  tests: test_user_def_func_numpy: match 2d output to 2d input
  tests: test_linear_combination_function_in_mechanism: fix allclose
  tests: test_connect_compositions_with_simple_states: fix allclose
  tests: TestInputSpecifications test_run_2_mechanisms_reuse_input: fix allclose
  tests: TestRun test_run_2_mechanisms_reuse_input: fix allclose
  tests: TestRunInputSpecifications test_run_2_mechanisms_reuse_input: fix allclose
  tests: test_nested_transfer_mechanism_composition_parallel shape mismatch
  Fix test_ddm_mechanism TestInputPorts
  Fix test_reset_run_array
  Fix TestReduce
  Fix test_reset_state_integrator_mechanism
  Fix more execute tests.
  Fix more composition.run() tests.
  Fix test_four_level_nested_transfer_mechanism_composition_parallel
  Fix test_processing_mechanism_function
  tests: allclose changes
  Fix log tests
  tests: allclose changes
  Fix expected 3D outputs
  Fix test_run_no_inputs
  tests: allclose changes
  tests: allclose changes
  tests: allclose changes
  Fix expected results shape
  tests: test_nested_composition_run_trials_inputs: undo disable test parametrizations
  tests: allclose changes
  tests: test_documentation_models: reenable parametrizations
  github-actions: Add job running --benchmark-enable
  tests/AudodiffComposition: Add helper function to return the first set of learning results
  github-actions: Run all tests with --fp-precision=fp32
  tests/Autodiff: Check for execution mode PyTorch in test_optimizer specs
  tests/control: FP32 should only be used in compiled tests
  tests/xor_training_identicalness: Convert to use autodiff_mode
  tests/MemoryFunction: Prefer np.testing.assert_equal to assert np.all(==)
  test/AutodiffComposition: Use np.testing.assert_equal instead of assert np.all(==)
  requirements: update beartype requirement from <0.13.0 to <0.14.0 (PrincetonUniversity#2625)
  requirements: update pytest requirement from <7.2.3 to <7.3.1 (PrincetonUniversity#2627)
  requirements: Drop grpcio-tools
  requirements: Add protobuf to dependency list
  {dev,tutorial}_requirements: Use sharp inequality for jupyter upper bound
  dev_requirements: Use sharp inequality of pytest-profiling upper bound
  requirements: Sort alphabetically
  requirements: Use sharp inequality for leabra-psyneulink upper bound
  requirements: Fix dill dependency upper bound
  requirements: Use sharp inequality for bear type upper bound
  Tentative learning branch (PrincetonUniversity#2623)
  tests/llvm/multiple_executions: Use np.testing.assert_allclose to check resutls (PrincetonUniversity#2621)
  requirements: update pandas requirement from <1.5.4 to <2.0.1 (PrincetonUniversity#2619)
  Add back rate validation in constructor.
  Add back validation I added for offset.
  Remove validation I added for offset.
  Add validate methods as kmantel suggests
  requirements: update pillow requirement from <9.5.0 to <9.6.0 (PrincetonUniversity#2618)
  Remove typecheck-decorator requirment.
  Fix some annotations.
  requirements: update networkx requirement from <3.1 to <3.2 (PrincetonUniversity#2620)
  Nback (PrincetonUniversity#2617)
  Fix/contentaddressablememory empty entry (PrincetonUniversity#2616)
  Fix some annotations on learning methods.
  Fix some issues with merge of composition.
  Fix type annotation on _get_port_value_labels
  Remove check_user_specified in places.
  Replace typecheck decorator with beartype
  Add typecheck decorator back temporarily
  llvm: Add human readable name to _node_wrapper instances
  llvm: Use WeakRefDictionary for node wrappers
  llvm/builder_context: Use proxy object for _node_wrapper owning composition
  remove redefinition of ENTROPY keyword
  Raise KeyError on MechanismList.__seitem__
  fix some lgtm errors
  remove unused imports
  revert ci testing changes completely for now.
  Fix codestyle errors.
  Revert back to testing on Python 3.9.
  Fix specifcation of Python 3.10 in ci workflow
  Fixes for Python 3.7
  Enable tests for python 3.10 in CI.
  Add beartype for runtime type checking.
  Replace typecheck import with beartype
  Removed all the tc.optional typecheck annotations
  replaced all typecheck enums with Literal
  Replace is_function_type with static type
  Replace is_pref_set with static type
  Replace parameter_spec with static type.
  Remove dead code in parameter_spec
  ran some regexes to replace easy type hints
  fix for some tests that expect runtime type checks
  Comment out all typecheck decorations.
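The bulk of those commits swap bare `assert np.allclose(...)` checks for `np.testing.assert_allclose(...)`. The difference matters both for failure diagnostics and for default tolerances; a minimal sketch with made-up values:

```python
import numpy as np

actual = np.array([1.0000001, 2.0])
desired = np.array([1.0, 2.0])

# Old style: a bare boolean assert. On failure it reports nothing useful,
# and np.allclose's default tolerances (rtol=1e-5, atol=1e-8) are fairly loose.
assert np.allclose(actual, desired)

# New style: on failure this raises AssertionError with a mismatch report
# (max absolute/relative error, offending elements), and the default
# tolerance is tighter (rtol=1e-7, no atol). Here an explicit rtol is passed.
np.testing.assert_allclose(actual, desired, rtol=1e-6)
```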
d6d95f3 ("Nback (PrincetonUniversity#2617)") changed the use of cross entropy loss function to use "one hot" format for targts instead of the previously used index. This new format requires torch >= 1.12.0 [0] Fixes: d6d95f3 ("Nback (PrincetonUniversity#2617)") Closes: PrincetonUniversity#2665 [0] https://github.com/pytorch/pytorch/releases/tag/v1.12.0 Signed-off-by: Jan Vesely <[email protected]>
…ch<=1.11.x
1.12.0+ can use the CrossEntropyLoss instance directly. d6d95f3 ("Nback (PrincetonUniversity#2617)") changed the format of the cross entropy target, which requires torch >= 1.12.
Fixes: d6d95f3 ("Nback (PrincetonUniversity#2617)")
Closes: PrincetonUniversity#2665
[0] https://github.com/pytorch/pytorch/releases/tag/v1.12.0
Signed-off-by: Jan Vesely <[email protected]>
…torch<=1.11.x
d6d95f3 ("Nback (PrincetonUniversity#2617)") changed the format of the cross entropy target, which requires torch >= 1.12. 1.12.0+ includes an input-handling path that accepts inputs without a batch dimension, so the CrossEntropyLoss instance can be used directly [0,1,2].
Fixes: d6d95f3 ("Nback (PrincetonUniversity#2617)")
Closes: PrincetonUniversity#2665
[0] https://github.com/pytorch/pytorch/releases/tag/v1.12.0
[1] pytorch/pytorch#77653
[2] pytorch/pytorch@8881d7a
Signed-off-by: Jan Vesely <[email protected]>
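For reference, a minimal sketch of the compatibility issue described in these commits. The tensor values are made up, and the version handling follows the commit messages above: torch >= 1.12 accepts unbatched inputs with a "one hot" (probability) target directly, while older versions need an explicit batch dimension (probability-format targets themselves are a torch >= 1.10 feature).

```python
import torch

loss_fn = torch.nn.CrossEntropyLoss()

logits = torch.tensor([0.2, 1.5, -0.3])          # unbatched: shape (C,)
one_hot_target = torch.tensor([0.0, 1.0, 0.0])   # "one hot" target, same shape

# Compare (major, minor) as integers; string comparison of versions is unreliable.
major, minor = (int(x) for x in torch.__version__.split(".")[:2])

if (major, minor) >= (1, 12):
    # torch >= 1.12 handles unbatched inputs, so the loss instance
    # can be called directly with the one-hot target.
    loss = loss_fn(logits, one_hot_target)
else:
    # Older torch (assumed here to be 1.10/1.11) expects a batch
    # dimension: shape (N, C) for both arguments.
    loss = loss_fn(logits.unsqueeze(0), one_hot_target.unsqueeze(0))

print(loss.item())
```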