Refactor/mechanisms/execute (#753)
* • IntegratorMechanism (#742)

- added input_states and output_states to
    init and assign_args_to_params_dicts

• LinearMatrix
  - keyword:  fixed bug in which assignment of rows or cols
              could not handle scalars
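
A hedged sketch of the new constructor arguments (the state names are
illustrative, not taken from this commit):

import psyneulink as pnl

# input_states and output_states can now be passed directly to the constructor;
# they are routed through assign_args_to_params_dicts like the other arguments.
my_integrator = pnl.IntegratorMechanism(
    name='MY INTEGRATOR',
    input_states=['MY INPUT'],
    output_states=['MY OUTPUT'])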

* Fix/function/stability (#743)

* • Function
  Stability:
  - function: add PROB_INDICATOR option that assigns a value of 1
    to the probabilistically chosen option
  - _validate_params: moved call to super() to end to allow
      matrix param to be evaluated before continuing validation
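
A conceptual sketch of the PROB_INDICATOR behavior described above, in plain
numpy (not the PsyNeuLink implementation):

import numpy as np

def prob_indicator(probabilities):
    """Return a one-hot vector with a 1 at the probabilistically chosen index."""
    p = np.asarray(probabilities, dtype=float)
    chosen = np.random.choice(len(p), p=p)
    indicator = np.zeros_like(p)
    indicator[chosen] = 1.0
    return indicator

# e.g. prob_indicator([0.1, 0.7, 0.2]) returns array([0., 1., 0.]) about 70% of the time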

* Refactor/context/structured (#744)

* • Log
  LogCondition - subset of ContextFlags used to specify logPref

• Context
  ContextFlags - implemented as a common set of context flags for:
  - initialization
  - execution tracking
  - logging
  ContextStatus - aliases to ContextFlags (for backward compatibility)
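
In use, a log condition can now be given as a ContextFlags value; a small,
hedged sketch of the pattern (the same call appears in the EVC-Gratton script
updated below):

import psyneulink as pnl

my_mech = pnl.TransferMechanism(name='MY MECH', function=pnl.Linear)
# log this mechanism's slope parameter whenever it is assigned in a CONTROL context
my_mech.set_log_conditions((pnl.SLOPE, pnl.ContextFlags.CONTROL))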

* • Context
  context.status replaced with specific field assignments
  (.initialization_phase, .execution_phase, and .source)
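
A hedged sketch of the resulting assignment pattern (INITIALIZING appears in
this commit series; the other two ContextFlags members shown are assumptions,
used only to illustrate the split):

import psyneulink as pnl

mech = pnl.TransferMechanism(name='MY MECH')
mech.context.initialization_phase = pnl.ContextFlags.INITIALIZING  # replaces part of context.status
mech.context.execution_phase = pnl.ContextFlags.PROCESSING         # assumed member name
mech.context.source = pnl.ContextFlags.COMMAND_LINE                # assumed member name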

* Refactor/context/deprecate init status (#745)

* Merge branches 'devel' and 'refactor/context/structured' of https://github.com/PrincetonUniversity/PsyNeuLink into refactor/context/structured

# Conflicts:
#	.idea/runConfigurations/Tests.xml
#	Scripts/McClure.py

* • Context
  - consolidated init_status into context.initialization_status

* • Context (#746)

  - initialization_status.UNSET -> initialization_status.INITIALIZING
    where appropriate
  - delete InitStatus class
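
A minimal, hedged sketch of the replacement idiom (InitStatus itself is deleted
by this commit; the check shown is illustrative only):

import psyneulink as pnl

mech = pnl.TransferMechanism(name='MY MECH')
# formerly: if mech.init_status is InitStatus.UNSET: ...
if mech.context.initialization_status == pnl.ContextFlags.INITIALIZING:
    pass  # component is still being initialized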

* test,function/LinearCombination: Rename second test function to prevent overwriting results

Test names should indicate the tested feature.
Fixes: daebb62 ("Linear combination fix (#734)")
Cherry-picked from devel-llvm: fdc686e

* tests,function/LinearCombination: Add tests with absent parameters

Cherry-picked from devel-llvm: 1dcca60

* Scheduling: fix bug where termination conditions persisted across calls to run
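
This matters for scripts that call run() more than once; a hedged sketch of the
now-correct behavior (the mechanism and condition are illustrative):

import psyneulink as pnl

A = pnl.TransferMechanism(name='A')
S = pnl.System(processes=[pnl.Process(pathway=[A])])

# The first run uses an explicit termination condition ...
S.run(inputs={A: [[0]]},
      termination_processing={pnl.TimeScale.TRIAL: pnl.AfterNCalls(A, 2)})
# ... which no longer persists into a later run that does not specify one.
S.run(inputs={A: [[0]]})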

* Feat/mechanism/input target label dicts (#751)

* • Environment
  added _parse_input_labels() and _parse_target_labels() methods
  run(): modified to use _parse_input_labels and _parse_target_labels

* • Mechanism
  added input_labels and output_labels properties
  added _get_state_value_labels() method
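
Usage follows the updated Multilayer-Learning scripts below: a label declared in
INPUT_LABELS_DICT can stand in for the corresponding input value.

import numpy as np
import psyneulink as pnl

Input_Layer = pnl.TransferMechanism(
    name='Input Layer',
    function=pnl.Logistic,
    params={pnl.INPUT_LABELS_DICT: {'red': [-1, 30]}},   # label -> input value
    default_variable=np.zeros((2,)))

# The label can then be used when specifying stimuli, e.g.:
#     stim_list = {Input_Layer: ['red']}
# and Run's _parse_input_labels() translates 'red' to [-1, 30] at run time;
# the new input_labels / output_labels properties expose the label view of a
# Mechanism's current state values.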

* • Merged with devel

* testing: correct pytest setup overwrite that was losing some settings

* testing: Resolve leftover merge conflicts, fixes #747

* Feat/mechanism/input target lable dicts (#752)

* • Environment
  _parse_input_labels(): added error messages

* • Environment
  Run: removed _parse_target_labels (they are just the input_labels
       for a TARGET Mechanism)

• Mechanism
  docstring revs for INPUT_LABELS_DICT and OUTPUT_LABELS_DICT

* • Component
  - _execute:  added **kwargs argument, passed to the call to function
               to accommodate LearningMechanism and EVCControlMechanism,
               whose functions expect additional arguments

• AutoassociativeLearningMechanism
  - _execute calls super(LearningMechanism, self) to skip Learning

• LearningMechanism:
  - _execute calls super()._execute with **kwargs

• PredictionMechanism:
  - _execute calls super()._execute
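
A plain-Python sketch of the delegation pattern described above (not the actual
PsyNeuLink source; the classes are reduced to their _execute chain):

class Component:
    def __init__(self, function):
        self.function = function

    def _execute(self, variable, **kwargs):
        # extra keyword arguments are passed straight through to the function
        return self.function(variable, **kwargs)

class LearningMechanism(Component):
    def _execute(self, variable, **kwargs):
        # forwards **kwargs (e.g., an error signal) to Component._execute
        return super()._execute(variable, **kwargs)

class AutoassociativeLearningMechanism(LearningMechanism):
    def _execute(self, variable, **kwargs):
        # super(LearningMechanism, self) skips LearningMechanism._execute
        # and goes straight to Component._execute
        return super(LearningMechanism, self)._execute(variable, **kwargs)
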
jdcpni authored Apr 13, 2018
1 parent 67133ec commit 6a276e2
Showing 59 changed files with 2,343 additions and 636 deletions.
3 changes: 2 additions & 1 deletion .idea/inspectionProfiles/Project_Default.xml

5 changes: 3 additions & 2 deletions .idea/runConfigurations/EVC_Gratton.xml

3 changes: 1 addition & 2 deletions .idea/runConfigurations/Make_HTML.xml

19 changes: 0 additions & 19 deletions .idea/runConfigurations/Tests.xml

This file was deleted.

5 changes: 3 additions & 2 deletions .idea/runConfigurations/_Multilayer_Learning.xml

6 changes: 3 additions & 3 deletions Scripts/Examples/EVC-Gratton.py
@@ -145,8 +145,8 @@

# Show graph of system (with control components)
# mySystem.show_graph(show_dimensions=pnl.ALL, show_projection_labels=True)
# mySystem.show_graph(show_control=True, show_projection_labels=True)
mySystem.show_graph(show_control=True, show_mechanism_structure=True, show_headers=False)
mySystem.show_graph(show_control=True, show_projection_labels=False)
# mySystem.show_graph(show_control=True, show_mechanism_structure=True, show_headers=False)

# configure EVC components
mySystem.controller.control_signals[0].intensity_cost_function = pnl.Exponential(rate=0.8046).function
@@ -193,7 +193,7 @@

mySystem.controller.reportOutputPref = True

-Flanker_Rep.set_log_conditions((pnl.SLOPE, pnl.ContextStatus.CONTROL))
+Flanker_Rep.set_log_conditions((pnl.SLOPE, pnl.ContextFlags.CONTROL))

mySystem.run(
num_trials=nTrials,
134 changes: 134 additions & 0 deletions Scripts/Examples/Multilayer-Learning FOR FIG.py
@@ -0,0 +1,134 @@
import functools
import numpy as np
import psyneulink as pnl

Input_Layer = pnl.TransferMechanism(
name='Input',
function=pnl.Logistic,
params={pnl.INPUT_LABELS_DICT:{'red': [-1, 30]}},
default_variable=np.zeros((2,)))

Hidden_Layer_1 = pnl.TransferMechanism(
name='Hidden1',
function=pnl.Logistic(),
default_variable=np.zeros((5,)))

Hidden_Layer_2 = pnl.TransferMechanism(
name='Hidden2',
function=pnl.Logistic(),
default_variable=[0, 0, 0, 0])

Output_Layer = pnl.TransferMechanism(
name='Output',
function=pnl.Logistic,
default_variable=[0, 0, 0])

Input_Weights_matrix = (np.arange(2 * 5).reshape((2, 5)) + 1) / (2 * 5)
Middle_Weights_matrix = (np.arange(5 * 4).reshape((5, 4)) + 1) / (5 * 4)
Output_Weights_matrix = (np.arange(4 * 3).reshape((4, 3)) + 1) / (4 * 3)

# This Projection will be used by the Process below by referencing it in the Process' pathway;
# note: sender and receiver args don't need to be specified
Input_Weights = pnl.MappingProjection(
name='Input Weights',
matrix=Input_Weights_matrix
)

# This Projection will be used by the Process below by assigning its sender and receiver args
# to mechanisms in the pathway
Middle_Weights = pnl.MappingProjection(
name='Middle Weights',
sender=Hidden_Layer_1,
receiver=Hidden_Layer_2,
matrix=Middle_Weights_matrix
)

# Treated same as Middle_Weights Projection
Output_Weights = pnl.MappingProjection(
name='Output Weights',
sender=Hidden_Layer_2,
receiver=Output_Layer,
matrix=Output_Weights_matrix
)

z = pnl.Process(
default_variable=[0, 0],
pathway=[
Input_Layer,
# The following reference to Input_Weights is needed to use it in the pathway
# since its sender and receiver args are not specified in its declaration above
Input_Weights,
Hidden_Layer_1,
# Middle_Weights,
# No Projection specification is needed here since the sender arg for Middle_Weights
# is Hidden_Layer_1 and its receiver arg is Hidden_Layer_2
Hidden_Layer_2,
# Output_Weights,
# Output_Weights does not need to be listed for the same reason as Middle_Weights
# If Middle_Weights and/or Output_Weights is not declared above, then the Process
# will assign a default for the missing Projection
Output_Layer
],
clamp_input=pnl.SOFT_CLAMP,
learning=pnl.LEARNING,
target=[0, 0, 1],
prefs={
pnl.VERBOSE_PREF: False,
pnl.REPORT_OUTPUT_PREF: True
}
)


def print_header(system):
    print("\n\n**** Time: ", system.scheduler_processing.clock.simple_time)


def show_target(system):
    i = system.input
    t = system.target_input_states[0].value
    print('\nOLD WEIGHTS: \n')
    print('- Input Weights: \n', Input_Weights.matrix)
    print('- Middle Weights: \n', Middle_Weights.matrix)
    print('- Output Weights: \n', Output_Weights.matrix)

    print('\nSTIMULI:\n\n- Input: {}\n- Target: {}\n'.format(i, t))
    print('ACTIVITY FROM OLD WEIGHTS: \n')
    print('- Middle 1: \n', Hidden_Layer_1.value)
    print('- Middle 2: \n', Hidden_Layer_2.value)
    print('- Output:\n', Output_Layer.value)


mySystem = pnl.System(
processes=[z],
targets=[0, 0, 1],
learning_rate=2.0
)

# Log Middle_Weights of MappingProjection to Hidden_Layer_2
# Hidden_Layer_2.set_log_conditions('Middle Weights')
Middle_Weights.set_log_conditions('matrix')

mySystem.reportOutputPref = True
# Shows graph with full information:
# mySystem.show_graph(show_dimensions=pnl.ALL)
mySystem.show_graph()
# mySystem.show_graph(show_learning=pnl.ALL, show_dimensions=pnl.ALL, show_mechanism_structure=True)
# Shows minimal graph:
# mySystem.show_graph()


stim_list = {Input_Layer: ['red']}
target_list = {Output_Layer: [[0, 0, 1]]}

mySystem.run(
num_trials=10,
inputs=stim_list,
targets=target_list,
call_before_trial=functools.partial(print_header, mySystem),
call_after_trial=functools.partial(show_target, mySystem),
termination_processing={pnl.TimeScale.TRIAL: pnl.AfterNCalls(Output_Layer, 1)}
)

# Print out logged weights for Middle_Weights
# print('\nMiddle Weights (to Hidden_Layer_2): \n', Hidden_Layer_2.log.nparray(entries='Middle Weights', header=False))
print('\nMiddle Weights (to Hidden_Layer_2): \n', Middle_Weights.log.nparray(entries='matrix', header=False))
6 changes: 4 additions & 2 deletions Scripts/Examples/Multilayer-Learning.py
@@ -5,6 +5,7 @@
Input_Layer = pnl.TransferMechanism(
name='Input Layer',
function=pnl.Logistic,
params={pnl.INPUT_LABELS_DICT:{'red': [-1, 30]}},
default_variable=np.zeros((2,)))

Hidden_Layer_1 = pnl.TransferMechanism(
@@ -110,12 +110,13 @@ def show_target(system):
mySystem.reportOutputPref = True
# Shows graph with full information:
# mySystem.show_graph(show_dimensions=pnl.ALL)
-mySystem.show_graph(show_learning=pnl.ALL, show_dimensions=pnl.ALL)
+mySystem.show_graph(show_learning=True)
# mySystem.show_graph(show_learning=pnl.ALL, show_dimensions=pnl.ALL, show_mechanism_structure=True)
# Shows minimal graph:
# mySystem.show_graph()

-stim_list = {Input_Layer: [[-1, 30]]}

+stim_list = {Input_Layer: ['red']}
target_list = {Output_Layer: [[0, 0, 1]]}

mySystem.run(
18 changes: 18 additions & 0 deletions Scripts/Laura Stroop w EVC.py
@@ -0,0 +1,18 @@
import numpy as np
import matplotlib.pyplot as plt
import psyneulink as pnl

ci = pnl.TransferMechanism(size=2, name='COLORS INPUT')
wi = pnl.TransferMechanism(size=2, name='WORDS INPUT')
ch = pnl.TransferMechanism(size=2, function=pnl.Logistic, name='COLORS HIDDEN')
wh = pnl.TransferMechanism(size=2, function=pnl.Logistic, name='WORDS HIDDEN')
tl = pnl.TransferMechanism(size=2, function=pnl.Logistic(gain=pnl.CONTROL), name='TASK CONTROL')
rl = pnl.LCA(size=2, function=pnl.Logistic, name='RESPONSE')
cp = pnl.Process(pathway=[ci, ch, rl])
wp = pnl.Process(pathway=[wi, wh, rl])
tc = pnl.Process(pathway=[tl, ch])
tw = pnl.Process(pathway=[tl, wh])
s = pnl.System(processes=[tc, tw, cp, wp],
               controller=pnl.EVCControlMechanism(name='EVC Mechanism'),
               monitor_for_control=[rl])
s.show_graph()
3 changes: 2 additions & 1 deletion Scripts/Laura Stroop.py
@@ -312,7 +312,8 @@ def pass_threshold(mech1, mech2, thresh):
respond_green_accumulator.reinitialize(0)
respond_red_accumulator.reinitialize(0)
# now run test trial
-my_Stroop.show_graph(show_mechanism_structure=pnl.VALUES)
+my_Stroop.show_graph()
+# my_Stroop.show_graph(show_mechanism_structure=pnl.VALUES)
my_Stroop.run(inputs=CN_incongruent_trial_input, termination_processing=terminate_trial)

