Feat/autodiff various (#2524)
* Add autodiff save/load functionality

* Update autodiffcomposition.py

* Update test_autodiffcomposition.py

* Merge branch 'devel' of https://github.com/PrincetonUniversity/PsyNeuLink into devel

* • Autodiff:
  - add save and load methods (from Samyak)
  - test_autodiffcomposition.py:
    add test_autodiff_saveload, but commented out for now, as it may be causing the PR to hang

* • pytorchcomponents.py:
  - pytorch_function_creator: add SoftMax

• transferfunctions.py:
  - disable changes to ReLU.derivative for now

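The disabled ReLU.derivative change (visible in the transferfunctions.py hunk near the end of this diff) computes the derivative elementwise. A minimal NumPy sketch of that logic, assuming PsyNeuLink's gain/leak/bias semantics for ReLU; an illustration, not the library's code:

    import numpy as np

    # ReLU here is gain * (x - bias) where x - bias > 0, else gain * leak * (x - bias),
    # so its derivative is gain where (x - bias) > 0 and gain * leak elsewhere.
    def relu_derivative(x, gain=1.0, leak=0.0, bias=0.0):
        x = np.asarray(x, dtype=float)
        mask = (x - bias) > 0   # compute the mask once, before any writes
        return np.where(mask, gain, gain * leak)
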
* • utilities.py:
  - iscompatible:
    attempt to replace try and except, commented out for now

* • autodiffcomposition.py:
  - save and load: augment file and directory handling
  - exclude processing of any ModulatoryProjections

* • autodiffcomposition.py
  save(): add projection.matrix.base = matrix
           (fixes test_autodiff_saveload)

* • autodiffcomposition.py:
  - save: return path
• test_autodiffcomposition.py:
  - test_autodiff_saveload: modify to use current working directory rather than tmp

* • autodiffcomposition.py:
  - save() and load(): ignore CIM, learning, and other modulation-related projections

* • autodiffcomposition.py:
  - load(): change test for path (failing on Windows) from PosixPath to Path

* • autodiffcomposition.py:
  - add _runtime_learning_rate attribute
  - _build_pytorch_representation():
      use _runtime_learning_rate attribute for optimizer if provided in call to learn
      else use learning_rate specified at construction
• compositionrunner.py:
  - assign learning_rate to _runtime_learning_rate attribute if specified in call to learn

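A minimal sketch of the runtime override described above; the toy network and values are illustrative, not from this PR:

    from psyneulink import AutodiffComposition, Logistic, TransferMechanism

    inp = TransferMechanism(name='IN', size=2)
    out = TransferMechanism(name='OUT', size=1, function=Logistic)
    net = AutodiffComposition(pathways=[[inp, out]], learning_rate=0.01)

    training = {'inputs': {inp: [[0, 0], [0, 1]]},
                'targets': {out: [[0], [1]]},
                'epochs': 10}
    net.learn(inputs=training, learning_rate=0.001)  # overrides 0.01 for this call only
    net.learn(inputs=training)                       # falls back to the constructor's 0.01
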
* [skip ci]
• autodiffcomposition.py:
  load():  add testing for match of matrix shape

* [skip ci]
• N-back:
  - reset em after each run
  - save and load weights
  - torch epochs = batch size (number of training stimuli) * num_epochs

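For example, at the batch size used in the script below (160 training stimuli), the nback-paper total of 400,000 single-trial updates corresponds to 400,000 / 160 = 2,500 batch-level epochs, the equivalence quoted in the script's NUM_EPOCHS comment.
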
* Feat/add pathway default matrix (#2518)

* • compositioninterfacemechanism.py:
  - _get_source_node_for_input_CIM:
        restore (modeled on _get_source_of_modulation_for_parameter_CIM) but NEEDS TESTS
  - _get_source_of_modulation_for_parameter_CIM: clean up comments, NEEDS TESTS

* • Nback
  - EM uses ContentAddressableMemory (instead of DictionaryMemory)
  - Implements FFN for comparison of current and retrieved stimulus and context

• Project:
  replace all instances of "RETREIVE" with "RETRIEVE"

* • objectivefunctions.py
  - add cosine_similarity (needs compiled version)

* • Project: make COSINE_SIMILARITY a synonym of COSINE
• nback_CAM_FFN:
  - refactor to implement FFN and task input
  - assign termination condition for execution that is dependent on control
  - ContentAddressableMemory: selection_function=SoftMax(output=MAX_INDICATOR,
                                                            gain=SOFT_MAX_TEMP)
• DriftOnASphereIntegrator:
  - add dimension as dependency for initializer parameter

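A sketch of the retrieval configuration these bullets describe, based on the construction calls that appear in the diff below; sizes, field weights, and gain are illustrative stand-ins:

    import numpy as np
    from psyneulink import (ContentAddressableMemory, Distance, DriftOnASphereIntegrator,
                            EpisodicMemoryMechanism, ProcessingMechanism, SoftMax,
                            COSINE, MAX_INDICATOR, NAME, SIZE)

    STIM_SIZE, CONTEXT_SIZE = 20, 25   # illustrative sizes

    # EM with two fields; retrieval weights stimulus vs. context, and selection
    # is a SoftMax over match scores (MAX_INDICATOR picks the single best entry)
    em = EpisodicMemoryMechanism(
        name='EM',
        input_ports=[{NAME: 'STIMULUS_FIELD', SIZE: STIM_SIZE},
                     {NAME: 'CONTEXT_FIELD', SIZE: CONTEXT_SIZE}],
        function=ContentAddressableMemory(
            initializer=[[[0] * STIM_SIZE, [0] * CONTEXT_SIZE]],
            distance_field_weights=[.05, .95],
            selection_function=SoftMax(output=MAX_INDICATOR, gain=10)))

    # drifting spherical context; dimension also determines the initializer's
    # size (dimension - 1 angular coordinates) -- the dependency noted above
    context = ProcessingMechanism(
        name='CONTEXT',
        function=DriftOnASphereIntegrator(initializer=np.random.random(CONTEXT_SIZE - 1),
                                          noise=0.0,
                                          dimension=CONTEXT_SIZE))

    # COSINE_SIMILARITY is now a synonym of COSINE, e.g.:
    similarity = Distance(metric=COSINE)
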
* • test_integrator.py:
  Added identicalness test for DriftOnASphereIntegrator against the nback-paper implementation.

* Parameters: allow _validate_ methods to reference other parameters (#2512)

* • Scripts:
  - Updated N-back to use objective_mechanism, with commented-out code for the version that doesn't use it (to be restored once the bug is fixed)
  - Deleted N-back_WITH_OBJECTIVE_MECH.py

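A sketch of the objective_mechanism pattern this refers to, mirroring the ControlMechanism construction in the diff below; the decision layer here is a stand-in:

    from psyneulink import ObjectiveMechanism, ProcessingMechanism

    # stand-in decision layer that outputs a one-hot match/non-match vector
    decision = ProcessingMechanism(name='DECISION', size=2)

    # monitor the decision layer and convert its one-hot output to a scalar
    # outcome (the same lambda used in the model script)
    obj_mech = ObjectiveMechanism(name='OBJECTIVE MECHANISM',
                                  monitor=decision,
                                  function=lambda x: int(x[0][1] > x[0][0]))
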
* • N-back.py:
  - added stimulus generation per nback-paper protocol

* - N-back.py
  tstep(s) -> trial(s)

* • N-back.py
  - comp -> nback_model
  - implement stim_set() method

* • N-back.py:
  - added training set generation

* • N-back.py
  - modularized script

* • showgraph.py:
  - _assign_processing_components(): fix bug in which nested graphs not highlighted in animation.

* • showgraph.py * composition.py
  - add further description of animation, including note that animation of nested Compositions is limited.

* • showgraph.py * composition.py
  - add animation to N-back doc

* • autodiffcomposition.py
  - __init__(): move pathways arg to beginning, to capture positional assignment (i.e. w/o kw)

* • N-back.py
  - ffn: implement as autodiff; still needs small random initial weight assignment

* • pathway.py
  - implement default_projection attribute

* • utilities.py:
  random_matrix: refactored to allow negative values and use keyword ZERO_CENTER

* • projection.py
  RandomMatrix: added class that can be used to pass a function as matrix spec

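A hedged sketch of using RandomMatrix as a matrix spec; the center/range keyword names are assumed here, not confirmed by this log:

    from psyneulink import MappingProjection, ProcessingMechanism, RandomMatrix

    a = ProcessingMechanism(size=5, name='A')
    b = ProcessingMechanism(size=3, name='B')

    # RandomMatrix is called with the projection's dimensions at construction
    # time, so the weights are drawn to fit sender and receiver automatically
    proj = MappingProjection(sender=a, receiver=b,
                             matrix=RandomMatrix(center=0.0, range=0.1))
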
* • utilities.py
  - RandomMatrix moved here from projection.py

• function.py
  - get_matrix():  added support for RandomMatrix spec

* • port.py
  - _parse_port_spec(): added support for RandomMatrix

* • utilities.py
  - is_matrix(): modified to support random_matrix and RandomMatrix

* • composition.py
  - add_linear_processing_pathway: add support for default_matrix argument
     (replaces default for MappingProjection for any otherwise unspecified projections)
     though still not used.

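A sketch of the default-matrix behavior described above, using the default_projection_matrix name the argument acquires in the add_XXX_pathway() methods later in this log; mechanisms and values are illustrative:

    from psyneulink import Composition, ProcessingMechanism, RandomMatrix

    a, b, c = (ProcessingMechanism(size=4, name=n) for n in 'ABC')

    # any projection the pathway has to create implicitly gets this matrix
    # instead of the MappingProjection default
    comp = Composition()
    comp.add_linear_processing_pathway(
        [a, b, c], default_projection_matrix=RandomMatrix(center=0.0, range=0.1))
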
* - RandomMatrix: moved from Utilities to Function

* [skip ci]
• N-back.py
  - clean up script

* [skip ci]
• N-back.py
  - further script clean-up

* [skip ci]
• BeukersNBackModel.rst:
  - Overview written
  - Needs other sections completed

* [skip ci]
• N-back.py:
  - replace functions of TransferMechanisms with ReLU
  - replace function of Decision Mechanisms with SoftMax
  - more doc cleanup

* [skip ci]
• composition.py:
  implement default_projection_matrix in add_XXX_pathway() methods

* [skip ci]
• test_composition.py:
  - add test_pathway_tuple_specs()

Co-authored-by: jdcpni <pniintel55>
Co-authored-by: Katherine Mantel <[email protected]>

* Feat/add pathway default matrix (#2519)

* [skip ci]
• composition.py:
  - add_linear_processing_pathway: fixed bug when Reinforcement or TDLearning are specified

• test_composition.py:
  - test_pathway_tuple_specs:  add tests for Reinforcement and TDLearning

Co-authored-by: jdcpni <pniintel55>
Co-authored-by: Katherine Mantel <[email protected]>

* autodiff: Use most recent context during save/load

* tests/autodiff: Use portable path join

* autodiff: Add assertions for save/load

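Taken together, these commits give AutodiffComposition a save/load round-trip. A minimal sketch of the resulting usage; 'net' and the node name are placeholders, and the weight access pattern follows the one in the diff below:

    # net is an AutodiffComposition that has been constructed and run
    path = net.save()                  # writes the weight matrices; returns the path
    before = net.nodes['HIDDEN'].path_afferents[0].matrix.base.copy()
    net.load(path)                     # checks matrix shapes; skips CIM, learning,
                                       # and other modulatory projections
    after = net.nodes['HIDDEN'].path_afferents[0].matrix.base
    assert (before == after).all()
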
* • autodiffcomposition, test_autodiff_saveload:
  - merged from feat/autodiff_save

* • autodiffcomposition.py
  - fix path assignment bug

Co-authored-by: SamKG <[email protected]>
Co-authored-by: Katherine Mantel <[email protected]>
3 people authored Nov 9, 2022
1 parent e741055 commit 786cdd9
Showing 29 changed files with 1,065 additions and 114 deletions.
@@ -151,7 +151,7 @@ def construct_model(num_tasks, stim_size, context_size, hidden_size, display=Fal
hidden, decision],
name="WORKING MEMORY (fnn)")
comp = Composition(nodes=[stim, context, task, em, ffn, control],
name="N-Back Model")
name="N-back Model")
comp.add_projection(MappingProjection(), stim, input_current_stim)
comp.add_projection(MappingProjection(), context, input_current_context)
comp.add_projection(MappingProjection(), task, input_task)
537 changes: 537 additions & 0 deletions Scripts/Models (Under Development)/N-back/N-back.py

Large diffs are not rendered by default.

188 changes: 188 additions & 0 deletions Scripts/Models (Under Development)/N-back/Nback Notebook.ipynb

Large diffs are not rendered by default.

@@ -35,19 +35,17 @@
- the stim+context input vector (length 90) projects to a hidden layer (length 80);
- the task input vector (length 2) projects to a different hidden layer (length 80);
- those two hidden layers project (over fixed, nonlearnable, one-one-projections?) to a third hidden layer (length 80) that simply sums them;
-- the third hidden layer projections to the length 2 output layer;
+- the third hidden layer projects to the length 2 output layer;
- a softmax is taken over the output layer to determine the response.
- softmax temp on output/decision layer: 1
- confirm that ReLUs all use 0 thresholds and unit slope
- fix: were biases trained?
- training:
- learning rate: 0.001; epoch: 1 trial per epoch of training
-- state_dict with weights (still needed)
+- fix: state_dict with weights (still needed)
- get empirical stimulus sequences (still needed)
- put N-back script (with pointer to latest version on PNL) in nback-paper repo
-- get rid of objective_mechanism (see "VERSION *WITHOUT* ObjectiveMechanism" under control(...) (fix bug)
-- make termination processing part of the Composition definition (fix bug)
-- pass learning_rate as parameter to train_network() (add feature)
-- fix warnings on run
+- fix: get rid of objective_mechanism (see "VERSION *WITHOUT* ObjectiveMechanism" under control(...)
+- fix: warnings on run
- complete documentation in BeukersNbackModel.rst
- validate against nback-paper results
- after validation:
- try with STIM_SIZE = NUM_STIMS rather than 20 (as in nback-paper)
@@ -62,8 +60,6 @@
import numpy as np

# Settings for running script:
-TRAIN = True
-RUN = False
DISPLAY_MODEL = False # show visual graphic of model

# PARAMETERS -------------------------------------------------------------------------------------------------------
@@ -85,22 +81,22 @@
RETRIEVAL_HAZARD_RATE=0.04 # rate of re=sampling of em following non-match determination in a pass through ffn
RETRIEVAL_STIM_WEIGHT=.05 # weighting of stimulus field in retrieval from em
RETRIEVAL_CONTEXT_WEIGHT = 1-RETRIEVAL_STIM_WEIGHT # weighting of context field in retrieval from em
-DECISION_SOFTMAX_TEMP=1/8 # express as gain # binarity of decision process
+DECISION_SOFTMAX_TEMP=1

# Training parameters:
-NUM_EPOCHS=10 # nback-paper: 400,000, one trial per epoch
-LEARNING_RATE=0.1 # nback-paper: .001
+NUM_EPOCHS=3 # nback-paper: 400,000 @ one trial per epoch = 2,500 @ 160 trials per epoch
+LEARNING_RATE=0.01 # nback-paper: .001

# Execution parameters:
CONTEXT_DRIFT_RATE=.1 # drift rate used for DriftOnASphereIntegrator (function of Context mech) on each trial
-NUM_TRIALS = 48 # number of stimuli presented in a trial sequence
+NUM_TRIALS = 48 # number of stimuli presented in a trial sequence for a given nback_level during run
REPORT_OUTPUT = ReportOutput.OFF # Sets console output during run
-REPORT_PROGRESS = ReportProgress.ON # Sets console progress bar during run
-REPORT_LEARNING = ReportLearning.ON # Sets console progress bar during training
-ANIMATE = True # {UNIT:EXECUTION_SET} # Specifies whether to generate animation of execution
+REPORT_PROGRESS = ReportProgress.OFF # Sets console progress bar during run
+REPORT_LEARNING = ReportLearning.OFF # Sets console progress bar during training
+ANIMATE = False # {UNIT:EXECUTION_SET} # Specifies whether to generate animation of execution

# Names of Compositions and Mechanisms:
NBACK_MODEL = "N-Back Model"
NBACK_MODEL = "N-back Model"
FFN_COMPOSITION = "WORKING MEMORY (fnn)"
FFN_STIMULUS_INPUT = "CURRENT STIMULUS"
FFN_CONTEXT_INPUT = "CURRENT CONTEXT"
@@ -129,6 +125,8 @@ def construct_model(stim_size = STIM_SIZE,
decision_softmax_temp = DECISION_SOFTMAX_TEMP):
"""Construct nback_model"""

+    print(f'constructing {FFN_COMPOSITION}...')
+
# FEED FORWARD NETWORK -----------------------------------------

# inputs: encoding of current stimulus and context, retrieved stimulus and retrieved context,
@@ -161,23 +159,25 @@
input_retrieved_context,
input_task},
hidden, decision],
-                             RANDOM_WEIGHTS_INITIALIZATION,
+                            RANDOM_WEIGHTS_INITIALIZATION,
),
name=FFN_COMPOSITION,
learning_rate=LEARNING_RATE
)

# FULL MODEL (Outer Composition, including input, EM and control Mechanisms) ------------------------

+    print(f'constructing {NBACK_MODEL}...')
+
# Stimulus Encoding: takes STIM_SIZE vector as input
-    stim = TransferMechanism(name=MODEL_STIMULUS_INPUT, size=STIM_SIZE)
+    stim = TransferMechanism(name=MODEL_STIMULUS_INPUT, size=stim_size)

# Context Encoding: takes scalar as drift step for current trial
context = ProcessingMechanism(name=MODEL_CONTEXT_INPUT,
function=DriftOnASphereIntegrator(
-                                      initializer=np.random.random(CONTEXT_SIZE-1),
+                                      initializer=np.random.random(context_size-1),
noise=context_drift_noise,
-                                      dimension=CONTEXT_SIZE))
+                                      dimension=context_size))

# Task: task one-hot indicating n-back (1, 2, 3 etc.) - must correspond to what ffn has been trained to do
task = ProcessingMechanism(name=MODEL_TASK_INPUT,
@@ -188,11 +188,11 @@
# - uses Softmax to retrieve best matching input, subject to weighting of stimulus and context by STIM_WEIGHT
em = EpisodicMemoryMechanism(name=EM,
input_ports=[{NAME:"STIMULUS_FIELD",
-                                               SIZE:STIM_SIZE},
+                                               SIZE:stim_size},
{NAME:"CONTEXT_FIELD",
-                                               SIZE:CONTEXT_SIZE}],
+                                               SIZE:context_size}],
function=ContentAddressableMemory(
-                                     initializer=[[[0]*STIM_SIZE, [0]*CONTEXT_SIZE]],
+                                     initializer=[[[0]*stim_size, [0]*context_size]],
distance_field_weights=[retrieval_stimulus_weight,
retrieval_context_weight],
# equidistant_entries_select=NEWEST,
@@ -211,28 +211,30 @@
# - continue trial
control = ControlMechanism(name=CONTROLLER,
default_variable=[[1]], # Ensure EM[store_prob]=1 at beginning of first trial
-                               # # VERSION *WITH* ObjectiveMechanism:
+                               # ---------
+                               # VERSION *WITH* ObjectiveMechanism:
objective_mechanism=ObjectiveMechanism(name="OBJECTIVE MECHANISM",
monitor=decision,
# Outcome=1 if match, else 0
function=lambda x: int(x[0][1]>x[0][0])),
# Set ControlSignal for EM[store_prob]
function=lambda outcome: int(bool(outcome)
or (np.random.random() > retrieval_hazard_rate)),
+                               # ---------
# # VERSION *WITHOUT* ObjectiveMechanism:
# monitor_for_control=decision,
# # Set Evaluate outcome and set ControlSignal for EM[store_prob]
# # - outcome is received from decision as one hot in the form: [[match, no-match]]
# function=lambda outcome: int(int(outcome[0][1]>outcome[0][0])
-                               #                              or (np.random.random() > HAZARD_RATE)),
+                               #                              or (np.random.random() > retrieval_hazard_rate)),
+                               # ---------
control=(STORAGE_PROB, em))

nback_model = Composition(name=NBACK_MODEL,
nodes=[stim, context, task, ffn, em, control],
-                              # # # Terminate trial if value of control is still 1 after first pass through execution
-                              # # FIX: STOPS AFTER ~ NUMBER OF TRIALS (?90+); SHOULD BE: NUM_TRIALS*NUM_NBACK_LEVELS + 1
-                              # termination_processing={TimeScale.TRIAL: And(Condition(lambda: control.value),
-                              #                                              AfterPass(0, TimeScale.TRIAL))},
+                              # Terminate trial if value of control is still 1 after first pass through execution
+                              termination_processing={TimeScale.TRIAL: And(Condition(lambda: control.value),
+                                                                           AfterPass(0, TimeScale.TRIAL))},
)
# # Terminate trial if value of control is still 1 after first pass through execution
# # FIX: ALL OF THE FOLLOWING STOP AFTER ~ NUMBER OF TRIALS (?90+); SHOULD BE: NUM_TRIALS*NUM_NBACK_LEVELS + 1
@@ -256,6 +258,7 @@ def construct_model(stim_size = STIM_SIZE,
# show_dimensions=True
)

+    print(f'full model constructed')
return nback_model

# ==========================================STIMULUS GENERATION =======================================================
@@ -445,18 +448,41 @@ def get_training_inputs(network, num_epochs, nback_levels):
TARGETS: {network.nodes[FFN_OUTPUT]: target},
EPOCHS: num_epochs}

-    return training_set
+    batch_size = len(target)
+    print(f'num trials (batch_size): {len(target)}')
+    return training_set, batch_size

# ======================================== MODEL EXECUTION ============================================================

def train_network(network,
learning_rate=LEARNING_RATE,
num_epochs=NUM_EPOCHS):
-    training_set = get_training_inputs(network=network, num_epochs=num_epochs, nback_levels=NBACK_LEVELS)
+    print(f"constructing training_set for '{network.name}'...")
+    training_set, batch_size = get_training_inputs(network=network,
+                                                   num_epochs=num_epochs,
+                                                   nback_levels=NBACK_LEVELS)
+    print(f'training_set constructed: {len(training_set)}')
+    print(f"\ntraining '{network.name}'...")
+    import timeit
+    start_time = timeit.default_timer()
network.learn(inputs=training_set,
-                  minibatch_size=NUM_TRIALS,
+                  minibatch_size=batch_size,
report_progress=REPORT_PROGRESS,
# report_learning=REPORT_LEARNING,
learning_rate=learning_rate,
execution_mode=ExecutionMode.LLVMRun)
+    stop_time = timeit.default_timer()
+    print(f"'{network.name}' trained")
+    training_time = stop_time-start_time
+    if training_time <= 60:
+        training_time_str = f'{int(training_time)} seconds'
+    else:
+        training_time_str = f'{int(training_time/60)} minutes'
+    print(f'training time: {training_time_str} for {num_epochs} epochs')
+    # path = network.save()
+    # print(f'saved weights sample: {network.nodes[FFN_HIDDEN].path_afferents[0].matrix.base[0][:3]}...')
+    # network.load(path)
+    # print(f'loaded weights sample: {network.nodes[FFN_HIDDEN].path_afferents[0].matrix.base[0][:3]}...')

def run_model(model,
context_drift_rate=CONTEXT_DRIFT_RATE,
@@ -465,65 +491,17 @@ def run_model(model,
report_progress=REPORT_PROGRESS,
animate=ANIMATE
):
+    print('nback_model executing...')
for nback_level in NBACK_LEVELS:
model.run(inputs=get_run_inputs(model, nback_level, context_drift_rate, num_trials),
# FIX: MOVE THIS TO MODEL CONSTRUCTION ONCE THAT WORKS
# Terminate trial if value of control is still 1 after first pass through execution
termination_processing={TimeScale.TRIAL: And(Condition(lambda: model.nodes[CONTROLLER].value),
AfterPass(0, TimeScale.TRIAL))}, # function arg
report_output=report_output,
report_progress=report_progress,
animate=animate
)
# FIX: RESET MEMORY HERE?
# print("Number of entries in EM: ", len(model.nodes[EM].memory))
assert len(model.nodes[EM].memory) == NUM_TRIALS*NUM_NBACK_LEVELS + 1


-nback_model = construct_model()
-print('nback_model constructed')
-if TRAIN:
-    print('nback_model training...')
-    train_network(nback_model.nodes[FFN_COMPOSITION])
-    print('nback_model trained')
-if RUN:
-    print('nback_model executing...')
-    run_model(nback_model)
-    if REPORT_PROGRESS == ReportProgress.ON:
-        print('\n')
-    print(f'nback_model done: {len(nback_model.results)} trials executed')

-# ===========================================================================
-
-# TEST OF SPHERICAL DRIFT:
-# stims = np.array([x[0] for x in em.memory])
-# contexts = np.array([x[1] for x in em.memory])
-# cos = Distance(metric=COSINE)
-# dist = Distance(metric=EUCLIDEAN)
-# diffs = [np.sum([contexts[i+1] - contexts[1]]) for i in range(NUM_TRIALS)]
-# diffs_1 = [np.sum([contexts[i+1] - contexts[i]]) for i in range(NUM_TRIALS)]
-# diffs_2 = [np.sum([contexts[i+2] - contexts[i]]) for i in range(NUM_TRIALS-1)]
-# dots = [[contexts[i+1] @ contexts[1]] for i in range(NUM_TRIALS)]
-# dot_diffs_1 = [[contexts[i+1] @ contexts[i]] for i in range(NUM_TRIALS)]
-# dot_diffs_2 = [[contexts[i+2] @ contexts[i]] for i in range(NUM_TRIALS-1)]
-# angle = [cos([contexts[i+1], contexts[1]]) for i in range(NUM_TRIALS)]
-# angle_1 = [cos([contexts[i+1], contexts[i]]) for i in range(NUM_TRIALS)]
-# angle_2 = [cos([contexts[i+2], contexts[i]]) for i in range(NUM_TRIALS-1)]
-# euclidean = [dist([contexts[i+1], contexts[1]]) for i in range(NUM_TRIALS)]
-# euclidean_1 = [dist([contexts[i+1], contexts[i]]) for i in range(NUM_TRIALS)]
-# euclidean_2 = [dist([contexts[i+2], contexts[i]]) for i in range(NUM_TRIALS-1)]
-# print("STIMS:", stims, "\n")
-# print("DIFFS:", diffs, "\n")
-# print("DIFFS 1:", diffs_1, "\n")
-# print("DIFFS 2:", diffs_2, "\n")
-# print("DOT PRODUCTS:", dots, "\n")
-# print("DOT DIFFS 1:", dot_diffs_1, "\n")
-# print("DOT DIFFS 2:", dot_diffs_2, "\n")
-# print("ANGLE: ", angle, "\n")
-# print("ANGLE_1: ", angle_1, "\n")
-# print("ANGLE_2: ", angle_2, "\n")
-# print("EUCILDEAN: ", euclidean, "\n")
-# print("EUCILDEAN 1: ", euclidean_1, "\n")
-# print("EUCILDEAN 2: ", euclidean_2, "\n")
-
-# n_back_model()
+    print(f'results: \n{model.results}')
34 changes: 34 additions & 0 deletions Scripts/Models (Under Development)/N-back/SphericalDrift Tests.py
@@ -0,0 +1,34 @@
import numpy as np
from psyneulink import *

NUM_TRIALS = 48

stims = np.array([x[0] for x in em.memory])
contexts = np.array([x[1] for x in em.memory])
cos = Distance(metric=COSINE)
dist = Distance(metric=EUCLIDEAN)
diffs = [np.sum([contexts[i+1] - contexts[1]]) for i in range(NUM_TRIALS)]
diffs_1 = [np.sum([contexts[i+1] - contexts[i]]) for i in range(NUM_TRIALS)]
diffs_2 = [np.sum([contexts[i+2] - contexts[i]]) for i in range(NUM_TRIALS-1)]
dots = [[contexts[i+1] @ contexts[1]] for i in range(NUM_TRIALS)]
dot_diffs_1 = [[contexts[i+1] @ contexts[i]] for i in range(NUM_TRIALS)]
dot_diffs_2 = [[contexts[i+2] @ contexts[i]] for i in range(NUM_TRIALS-1)]
angle = [cos([contexts[i+1], contexts[1]]) for i in range(NUM_TRIALS)]
angle_1 = [cos([contexts[i+1], contexts[i]]) for i in range(NUM_TRIALS)]
angle_2 = [cos([contexts[i+2], contexts[i]]) for i in range(NUM_TRIALS-1)]
euclidean = [dist([contexts[i+1], contexts[1]]) for i in range(NUM_TRIALS)]
euclidean_1 = [dist([contexts[i+1], contexts[i]]) for i in range(NUM_TRIALS)]
euclidean_2 = [dist([contexts[i+2], contexts[i]]) for i in range(NUM_TRIALS-1)]
print("STIMS:", stims, "\n")
print("DIFFS:", diffs, "\n")
print("DIFFS 1:", diffs_1, "\n")
print("DIFFS 2:", diffs_2, "\n")
print("DOT PRODUCTS:", dots, "\n")
print("DOT DIFFS 1:", dot_diffs_1, "\n")
print("DOT DIFFS 2:", dot_diffs_2, "\n")
print("ANGLE: ", angle, "\n")
print("ANGLE_1: ", angle_1, "\n")
print("ANGLE_2: ", angle_2, "\n")
print("EUCILDEAN: ", euclidean, "\n")
print("EUCILDEAN 1: ", euclidean_1, "\n")
print("EUCILDEAN 2: ", euclidean_2, "\n")
Binary file added Scripts/Models (Under Development)/ffn.wts.pnl
Binary file added Scripts/Models (Under Development)/ffn.wts_01.pnl
Binary file added autodiff_composition_matrix_wts.pnl
@@ -1123,7 +1123,6 @@ def _function(self,
# then need to assign it to the default value
# If learning_rate was not specified for instance or composition, use default value
learning_rate = self._get_current_parameter_value(LEARNING_RATE, context)
-        # learning_rate = self.learning_rate
if learning_rate is None:
learning_rate = self.defaults.learning_rate
#
@@ -1620,8 +1620,8 @@ def derivative(self, input, output=None, context=None):
# # MODIFIED 11/5/22 NEW:
# bias = self._get_current_parameter_value(BIAS, context)
# input = np.asarray(input).copy()
-        # input[(input-bias)>0] = gain
-        # input[(input-bias)<=0] = gain * leak
+        # input[(input - bias) > 0] = gain
+        # input[(input - bias) <= 0] = gain * leak
# MODIFIED 11/5/22 END

return input
@@ -466,7 +466,7 @@ class ContentAddressableMemory(MemoryFunction): # ------------------------------
An entry is stored and retrieved as an array containing a set of `fields <EpisodicMemoryMechanism_Memory_Fields>`
each of which is a 1d array. An array containing such entries can be used to initialize the contents of `memory
<ContentAddressableMemory.memory>` by providing it in the **initializer** argument of the ContentAddressableMemory's
-    constructor, or in a call to its `reset <ContentAddressableMemory.reset>` method. The current contents of `memory
+    constructor, or in a call to its `reset <ContentAddressableMemory.reset>` method.  The current contents of `memory
<ContentAddressableMemory.memory>` can be inspected using the `memory <ContentAddressableMemory.memory>` attribute,
which returns a list containing the current entries, each as a list containing all fields for that entry. The
`memory_num_fields <ContentAddressableMemory.memory_num_fields>` contains the number of fields expected for each
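The docstring fixed above describes ContentAddressableMemory's entry API; a minimal sketch of that usage, with illustrative field sizes:

    from psyneulink import ContentAddressableMemory

    # one entry with two fields, given via initializer
    cam = ContentAddressableMemory(initializer=[[[1, 2], [10, 20, 30]]])
    cam.reset([[[0, 0], [0, 0, 0]]])   # re-initialize contents via reset
    print(cam.memory)                  # current entries, each a list of its fields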