
Nback #2540 (Merged)

157 commits merged on Nov 19, 2022

Conversation

@jdcpni (Collaborator) commented Nov 18, 2022

NOTE: This PR has many changes to learning, as well as the nback script.

• nback.py:

  • executable implementation, though it still needs some additional modifications and code support
    to match and validate against the version in Beukers et al., 2022:
    • handle biases
    • implement dropout
    • LLVM code for cross entropy as operation in LinearCombination (_gen_llvm_combine)
    • LLVM code for differential of cross entropy in compiledloss.py (_gen_inject_loss_differential)
    • complete integration of statistical analysis of results
    • integrate SweetPea stimulus generation and Kane et al. stimuli

• transferfunctions.py:

  • SoftMax: correct derivative computations
  • ReLU: modified to use the output (and infer the input) when the output is provided; otherwise use the input
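For orientation, the SoftMax derivative that the correction targets can be sketched generically in NumPy (this illustrates the math only, not the PsyNeuLink implementation; `softmax_jacobian` is a hypothetical helper name):

```python
import numpy as np

def softmax(x):
    # Shift by the max for numerical stability before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def softmax_jacobian(x):
    # Jacobian of softmax: J[i, j] = s_i * (delta_ij - s_j),
    # i.e. diag(s) - outer(s, s).
    s = softmax(x)
    return np.diag(s) - np.outer(s, s)
```

A quick sanity check on any candidate implementation: because softmax outputs sum to one, each row of the Jacobian must sum to zero.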

• learningfunctions.py

  • BackPropagation:
    • add CROSS_ENTROPY loss
    • fix bug in which derivative for default loss (MSE) was computed using L0
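A minimal sketch of a CROSS_ENTROPY loss and its derivative with respect to the prediction, in plain NumPy (a generic illustration, not the PsyNeuLink or compiled code; function names are hypothetical):

```python
import numpy as np

def cross_entropy(target, prediction, eps=1e-12):
    # -sum(t * log(p)); eps guards against log(0).
    p = np.clip(prediction, eps, 1.0)
    return -np.sum(target * np.log(p))

def cross_entropy_grad(target, prediction, eps=1e-12):
    # Gradient w.r.t. the prediction: d/dp [-sum(t * log(p))] = -t / p.
    p = np.clip(prediction, eps, 1.0)
    return -target / p
```

When the prediction is a softmax output, backpropagating through softmax and cross entropy together simplifies the combined gradient to `p - t`, which is why the two are commonly fused in compiled implementations.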

• combinationfunctions.py:

  • LinearCombination: add CROSS_ENTROPY as operation
    • needs compiled version (see _gen_llvm_combine)

• pytorchmodelcreator.py:

  • _gen_llvm_training_function_body():
    • add support for loss_type = CROSS_ENTROPY
    • modify loss_spec to use keywords for loss

• compiledloss.py:

  • _gen_loss_function: add support for CROSS_ENTROPYLoss
    • needs to be debugged
    • need differential implemented

• llvm/__init__.py:

  • ExecutionMode: add ExecutionMode.PyTorch as synonym for ExecutionMode.Python
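The synonym can be illustrated with a plain Python Enum (a sketch only; the real ExecutionMode has additional members and the values shown here are placeholders):

```python
from enum import Enum

class ExecutionMode(Enum):
    Python = 0
    LLVM = 1
    LLVMRun = 2
    # Reusing an existing value creates an alias: PyTorch is the
    # very same member object as Python, not a distinct member.
    PyTorch = 0
```

Because an alias is literally the same member, code that inspects `execution_mode` at run time cannot tell the two apart — which is consistent with the note below about the warning in `run()` being commented out.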

• composition.py

  • modify loss_spec to use keywords for loss
  • add Report for compiled mode
  • _create_terminal_backprop_learning_components:
    • support loss_function = CROSS_ENTROPY
    • fix bugs in handling of MSE, SSE and L0 loss, add explicit assignment of objective_mechanism output_ports for each

• autodiffcomposition.py and keywords.py

  • add keywords for loss functions
  • docstring: substantial reorganization

• ComparatorMechanism:

  • add L0 and CROSS_ENTROPY standard output_ports

• keywords.py:

  • add Loss(Enum) used for specifying learning
  • replace MSE, SSE, L0, L1, CROSS_ENTROPY, KL_DIV, NLL and POISSON_NLL with Loss members
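A sketch of what such a Loss enum might look like (member names follow the keywords listed above; the values and ordering are illustrative, not the actual definition in keywords.py):

```python
from enum import Enum, auto

class Loss(Enum):
    # Members named for the loss keywords they replace; values are arbitrary.
    L0 = auto()
    L1 = auto()
    SSE = auto()
    MSE = auto()
    CROSS_ENTROPY = auto()
    KL_DIV = auto()
    NLL = auto()
    POISSON_NLL = auto()

# Specifying a loss by enum member instead of a bare string keyword:
loss_spec = Loss.CROSS_ENTROPY
```

Using enum members rather than string constants catches typos at attribute-lookup time and makes the set of valid losses discoverable.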

• test_learning.py:

  • test_multilayer_truth:
    • fix bug in which SSE was assigned as loss, but output_port[MSE] was used for objective_mechanism
    • replace with explicit L0 loss and output_port[SUM] for objective_mechanism
    • parameterize to test for L0, SSE and MSE loss
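The parameterization might be sketched with pytest along these lines (hypothetical: `run_model`, the case names, and the expected values are placeholders, not the actual test):

```python
import pytest

# Placeholder expected values; the real test compares against precomputed results.
LOSS_CASES = [
    ("L0", [0.1, 0.2]),
    ("SSE", [0.3, 0.4]),
    ("MSE", [0.5, 0.6]),
]

def run_model(loss):
    # Hypothetical stand-in for building and running the composition
    # with the given loss specification.
    return dict(LOSS_CASES)[loss]

@pytest.mark.parametrize("loss, expected", LOSS_CASES)
def test_multilayer_truth(loss, expected):
    assert run_model(loss) == pytest.approx(expected)
```

Parameterizing this way runs one test per loss, so a failure report names the specific loss that diverged instead of aborting the whole comparison.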

SamKG and others added 30 commits November 1, 2022 17:03
…Link into devel

# Conflicts:
#	Scripts/Models (Under Development)/N-back.py
  - add save and load methods (from Samyak)
  - test_autodiffcomposition.py:
    add test_autodiff_saveload, but commented out for now, as it may be causing hanging on PR
…Link into feat/autodiff_various

# Conflicts:
#	Scripts/Models (Under Development)/N-back.py
#	tests/mdf/model_basic.yml
  - pytorch_function_creator: add SoftMax

• transferfunctions.py:
  - disable changes to ReLU.derivative for now
  - iscompatible:
    attempt to replace try and except, commented out for now
…Link into devel

# Conflicts:
#	Scripts/Models (Under Development)/N-back.py
  - save and load: augment file and directory handling
  - exclude processing of any ModulatoryProjections
  save(): add projection.matrix.base = matrix
           (fixes test_autodiff_saveload)
  - save: return path
• test_autodiffcomposition.py:
  - test_autodiff_saveload: modify to use current working directory rather than tmp
  - save() and load(): ignore CIM, learning, and other modulation-related projections
…Link into devel

 Conflicts:
	.github/actions/install-pnl/action.yml
	.github/actions/on-branch/action.yml
	.github/workflows/pnl-ci-docs.yml
	.github/workflows/pnl-ci.yml
	.github/workflows/test-release.yml
	Scripts/Models (Under Development)/N-back.py
…Link into devel

 Conflicts:
	.github/actions/install-pnl/action.yml
	.github/actions/on-branch/action.yml
	.github/workflows/pnl-ci-docs.yml
	.github/workflows/pnl-ci.yml
	.github/workflows/test-release.yml
	Scripts/Models (Under Development)/N-back.py
  - test_xor_training_identicalness_standard_composition_vs_PyTorch_vs_LLVM:
      replaces test_xor_training_identicalness_standard_composition_vs_Autodiff
  - BackPropagation:
    - fix bug in which derivative for default loss (MSE) was computed using L0
    - add explicit specification for L0 loss

• composition.py:
  - _create_terminal_backprop_learning_components:
    - add explicit assignment of output_port[SUM] for L0 loss

• test_learning.py:
  - test_multilayer:
    - fix bug in which SSE was assigned as loss, but output_port[MSE] was used for objective_mechanism
    - replace with explicit L0 loss and output_port[SUM] for objective_mechanism
• learningfunctions.py
  - BackPropagation: fix bug in if statement for L0 loss
• learningfunctions.py
  - _create_non_terminal_backprop_learning_components:
    - fixed bug in which loss function for hidden layers was set to MSE rather than simple L0
• All tests pass
• test_learning.py:
  - test_multilayer_truth():  parameterize test with expected results
• test_learning.py:
  - test_multilayer_truth():  test for L0, SSE and MSE
• All tests pass
• All tests pass
• keywords.py
  - add Loss enum

• llvm.rst
  - add ExecutionMode

• Project
  - replace MSE, SSE, L0, L1, CROSS_ENTROPY, KL_DIV, NLL and POISSON_NLL with Loss enum members
• composition.py:
  - run(): add warning for use of PyTorch with Composition
• composition.py:
  - run(): commented out the warning, as it can't distinguish ExecutionMode.PyTorch from ExecutionMode.Python
@github-actions bot commented:

This PR causes the following changes to the html docs (ubuntu-latest-3.7-x64):

diff -r docs-base/AutodiffComposition.html docs-head/AutodiffComposition.html
221a222,224
> <li><p><a class="reference internal" href="#autodiffcomposition-llvm"><span class="std std-ref">LLVM mode</span></a></p></li>
> <li><p><a class="reference internal" href="#autodiffcomposition-pytorch"><span class="std std-ref">PyTorch mode</span></a></p></li>
> <li><p><a class="reference internal" href="#autodiffcomposition-nested-modulation"><span class="std std-ref">Nested Execution and Modulation</span></a></p></li>
223d225
< <li><p><a class="reference internal" href="#autodiffcomposition-nested-execution"><span class="std std-ref">Nested Execution</span></a></p></li>
227a230
> <li><p><a class="reference internal" href="#autodiffcomposition-examples"><span class="std std-ref">Examples</span></a></p></li>
234,243c237,245
< <div class="admonition warning">
< <p class="admonition-title">Warning</p>
< <p>As of PsyNeuLink 0.7.5, the API for using AutodiffCompositions has been slightly changed!
< Please see <a class="reference internal" href="RefactoredLearningGuide.html"><span class="doc">this link</span></a> for more details!</p>
< </div>
< <p>AutodiffComposition is a subclass of <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a> used to train feedforward neural network models through integration
< with <a class="reference external" href="https://pytorch.org/">PyTorch</a>, a machine learning library that executes considerably more quickly
< than using the <a class="reference internal" href="Composition.html#composition-learning-standard"><span class="std std-ref">standard implementation of learning</span></a> in a Composition, using its
< <a class="reference internal" href="Composition.html#composition-learning-methods"><span class="std std-ref">learning methods</span></a>. An AutodiffComposition is configured and run similarly to a standard
< Composition, with some exceptions that are described below.</p>
---
> <p>AutodiffComposition is a subclass of <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a> for constructing and training feedforward neural network
> either, using either direct compilation (to LLVM) or automatic conversion to <a class="reference external" href="https://pytorch.org/">PyTorch</a>,
> both of which considerably accelerate training (by as much as three orders of magnitude) compared to the
> <a class="reference internal" href="Composition.html#composition-learning-standard"><span class="std std-ref">standard implementation of learning</span></a> in a Composition.  Although an
> AutodiffComposition is constructed and executed in much the same way as a standard Composition, it largely restricted
> to feedforward neural networks using <a class="reference internal" href="Composition.html#composition-learning-supervised"><span class="std std-ref">supervised learning</span></a>, and in particular the
> the <a class="reference external" href="https://en.wikipedia.org/wiki/Backpropagation">backpropagation learning algorithm</a>. although it can be used for
> some forms of <a class="reference internal" href="Composition.html#composition-learning-unsupervised"><span class="std std-ref">unsupervised learning</span></a> that are supported in PyTorch (e.g.,
> <a class="reference external" href="https://github.com/giannisnik/som">self-organized maps</a>).</p>
247,251c249,257
< <p>An AutodiffComposition can be created by calling its constructor, and then adding <a class="reference internal" href="Component.html"><span class="doc">Components</span></a> using the
< standard <a class="reference internal" href="Composition.html#composition-creation"><span class="std std-ref">Composition methods</span></a> for doing so.  The constructor also includes an number of
< parameters that are specific to the AutodiffComposition. See <a class="reference internal" href="#autodiffcomposition-class-reference"><span class="std std-ref">Class Reference</span></a> for a list of
< these parameters.</p>
< <div class="admonition warning">
---
> <p>An AutodiffComposition can be created by calling its constructor, and then adding <a class="reference internal" href="Component.html"><span class="doc">Components</span></a> using
> the standard <a class="reference internal" href="Composition.html#composition-creation"><span class="std std-ref">Composition methods</span></a> for doing so (e.g., <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.add_node" title="psyneulink.core.compositions.composition.Composition.add_node"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">add_node</span></code></a>,
> <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.add_projections" title="psyneulink.core.compositions.composition.Composition.add_projections"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">add_projection</span></code></a>,  <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.add_linear_processing_pathway" title="psyneulink.core.compositions.composition.Composition.add_linear_processing_pathway"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">add_linear_processing_pathway</span></code></a>, etc.).  The constructor also includes a number of parameters that are
> specific to the AutodiffComposition (see <a class="reference internal" href="#autodiffcomposition-class-reference"><span class="std std-ref">Class Reference</span></a> for a list of these parameters,
> and <a class="reference internal" href="#autodiffcomposition-examples"><span class="std std-ref">examples</span></a> below).  Note that all of the Components in an AutodiffComposition
> must be able to be subject to <a class="reference internal" href="Composition.html#composition-learning"><span class="std std-ref">learning</span></a>, but cannot include any <a class="reference internal" href="Composition.html#composition-learning-components"><span class="std std-ref">learning components</span></a> themselves.  Specifically, it cannot include any <a class="reference internal" href="ModulatoryMechanism.html"><span class="doc">ModulatoryMechanisms</span></a>, <a class="reference internal" href="LearningProjection.html"><span class="doc">LearningProjections</span></a>, or the ObjectiveMechanism &lt;OBJECTIVE_MECHANISM&gt;`
> used to compute the loss for learning.</p>
> <blockquote>
> <div><div class="admonition warning" id="autodiff-learning-components-warning">
253,255c259,279
< <p>Mechanisms or Projections should not be added to or deleted from an AutodiffComposition after it has
< been run for the first time. Unlike an ordinary Composition, AutodiffComposition does not support this
< functionality.</p>
---
> <p>When an AutodiffComposition is constructed, it creates all of the learning Components
> that are needed, and thus <strong>cannot include</strong> any that are prespecified.</p>
> </div>
> </div></blockquote>
> <p>This means that an AutodiffComposition also cannot itself include a <a class="reference internal" href="Composition.html#composition-controller"><span class="std std-ref">controller</span></a> or any
> <a class="reference internal" href="ControlMechanism.html"><span class="doc">ControlMechanisms</span></a>.  However, it can include Mechanisms that are subject to modulatory control
> (see <a class="reference internal" href="ModulatorySignal.html#modulatorysignal-anatomy-figure"><span class="std std-ref">Figure</span></a>, and <a class="reference internal" href="ModulatorySignal.html#modulatorysignal-modulation"><span class="std std-ref">modulation</span></a>) by ControlMechanisms
> <em>outside</em> the Composition, including the controller of a Composition within which the AutodiffComposition is nested.
> That is, an AutodiffComposition can be <a class="reference internal" href="Composition.html#composition-nested"><span class="std std-ref">nested in a Composition</span></a> that has such other Components
> (see <a class="reference internal" href="#autodiffcomposition-nested-modulation"><span class="std std-ref">Nested Execution and Modulation</span></a> below).</p>
> <p>A few other restrictions apply to the construction and modification of AutodiffCompositions:</p>
> <blockquote>
> <div><div class="admonition hint">
> <p class="admonition-title">Hint</p>
> <p>AutodiffComposition does not (currently) support the <em>automatic</em> construction of separate bias parameters.
> Thus, when comparing a model constructed using an AutodiffComposition to a corresponding model in PyTorch, the
> <code class="xref any docutils literal notranslate"><span class="pre">bias</span></code> parameter of PyTorch modules should be set
> to <code class="xref any docutils literal notranslate"><span class="pre">False</span></code>.  Trainable biases <em>can</em> be specified explicitly in an AutodiffComposition by including a
> TransferMechanism that projects to the relevant Mechanism (i.e., implementing that layer of the network to
> receive the biases) using a <a class="reference internal" href="MappingProjection.html"><span class="doc">MappingProjection</span></a> with a <a class="reference internal" href="MappingProjection.html#psyneulink.core.components.projections.pathway.mappingprojection.MappingProjection.matrix" title="psyneulink.core.components.projections.pathway.mappingprojection.MappingProjection.matrix"><code class="xref any py py-attr docutils literal notranslate"><span class="pre">matrix</span></code></a> parameter that
> implements a diagnoal matrix with values corresponding to the initial value of the biases.</p>
259,261c283,284
< <p>When comparing models built in PyTorch to those using AutodiffComposition,
< the <code class="xref any docutils literal notranslate"><span class="pre">bias</span></code> parameter of PyTorch modules
< should be set to <code class="xref any docutils literal notranslate"><span class="pre">False</span></code>, as AutodiffComposition does not currently support trainable biases.</p>
---
> <p>Mechanisms or Projections should not be added to or deleted from an AutodiffComposition after it
> has been executed. Unlike an ordinary Composition, AutodiffComposition does not support this functionality.</p>
262a286
> </div></blockquote>
267,269c291,361
< methods are the same as for a <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a>.</p>
< <p>The following is an example showing how to create a
< simple AutodiffComposition, specify its inputs and targets, and run it with learning enabled and disabled.</p>
---
> methods are the same as for a <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a>.  However, the <strong>execution_mode</strong> in the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a>
> method has different effects than for a standard Composition, that determine whether it uses <a class="reference internal" href="#autodiffcomposition-llvm"><span class="std std-ref">LLVM compilation</span></a> or <a class="reference internal" href="#autodiffcomposition-pytorch"><span class="std std-ref">translation to PyTorch</span></a> to execute learning.
> This <a class="reference internal" href="Composition.html#composition-compilation-table"><span class="std std-ref">table</span></a> provides a summary and comparison of these different modes of execution,
> that are described in greater detail below.</p>
> <section id="llvm-mode">
> <span id="autodiffcomposition-llvm"></span><h3><em>LLVM mode</em><a class="headerlink" href="#llvm-mode" title="Permalink to this headline">¶</a></h3>
> <p>This is specified by setting <strong>execution_mode</strong> = <a class="reference internal" href="LLVM.html#psyneulink.core.llvm.__init__.ExecutionMode.LLVMRun" title="psyneulink.core.llvm.__init__.ExecutionMode.LLVMRun"><code class="xref any py py-attr docutils literal notranslate"><span class="pre">ExecutionMode.LLVMRun</span></code></a> in the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> method
> of an AutodiffCompositon.  This provides the fastest performance, but is limited to <a class="reference internal" href="Composition.html#composition-learning-supervised"><span class="std std-ref">supervised learning</span></a> using the <a class="reference internal" href="LearningFunctions.html#psyneulink.core.components.functions.learningfunctions.BackPropagation" title="psyneulink.core.components.functions.learningfunctions.BackPropagation"><code class="xref any py py-class docutils literal notranslate"><span class="pre">BackPropagation</span></code></a> algorithm. This can be run using standard forms of
> loss, including mean squared error (MSE) and cross entropy, by specifying this in the <strong>loss_spec</strong> argument of
> the constructor (see <a class="reference internal" href="#autodiffcomposition-class-reference"><span class="std std-ref">AutodiffComposition</span></a> for additional details, and
> <a class="reference internal" href="Composition.html#composition-compiled-modes"><span class="std std-ref">Compilation Modes</span></a> for more information about executing a Composition in compiled mode.</p>
> <blockquote>
> <div><div class="admonition note">
> <p class="admonition-title">Note</p>
> <p>Specifying <code class="xref any docutils literal notranslate"><span class="pre">ExecutionMode.LLVMRUn</span></code> in either the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> and <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.run" title="psyneulink.core.compositions.composition.Composition.run"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">run</span></code></a>
> methods of an AutodiffComposition causes it to (attempt to) use compiled execution in both cases; this is
> because LLVM compilation supports the use of modulation in PsyNeuLink models (as compared to <a class="reference internal" href="#autodiffcomposition-pytorch"><span class="std std-ref">PyTorch mode</span></a>; see <a class="reference internal" href="#autodiffcomposition-pytorch-note"><span class="std std-ref">note</span></a> below).</p>
> </div>
> </div></blockquote>
> </section>
> <section id="pytorch-mode">
> <span id="autodiffcomposition-pytorch"></span><h3><em>PyTorch mode</em><a class="headerlink" href="#pytorch-mode" title="Permalink to this headline">¶</a></h3>
> <p>This is specified by setting <a href="#id1"><span class="problematic" id="id2">**</span></a>execution_mode = <a class="reference internal" href="LLVM.html#psyneulink.core.llvm.__init__.ExecutionMode.PyTorch" title="psyneulink.core.llvm.__init__.ExecutionMode.PyTorch"><code class="xref any py py-attr docutils literal notranslate"><span class="pre">ExecutionMode.PyTorch</span></code></a> in the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> method of
> an AutodiffCompositon (see <a class="reference internal" href="BasicsAndPrimer.html#basicsandprimer-rumelhart-model"><span class="std std-ref">example</span></a> in <a class="reference internal" href="BasicsAndPrimer.html"><span class="doc">Basics and Primer</span></a>).  This automatically
> translates the AutodiffComposition to a <a class="reference external" href="https://pytorch.org">PyTorch</a> model an
...

See CI logs for the full diff.

@github-advanced-security bot left a comment


CodeQL found more than 10 potential problems in the proposed changes. Check the Files changed tab for more details.

@lgtm-com bot (Contributor) commented Nov 18, 2022

This pull request introduces 18 alerts and fixes 2 when merging 98e6525 into 89d2561 - view on LGTM.com

new alerts:

  • 5 for Unused local variable
  • 5 for Unused import
  • 4 for Testing equality to None
  • 2 for Unreachable code
  • 1 for Comparison using is when operands support `__eq__`
  • 1 for Syntax error

fixed alerts:

  • 1 for Unused local variable
  • 1 for Unused import

Heads-up: LGTM.com's PR analysis will be disabled on the 5th of December, and LGTM.com will be shut down ⏻ completely on the 16th of December 2022. It looks like GitHub code scanning with CodeQL is already set up for this repo, so no further action is needed 🚀. For more information, please check out our post on the GitHub blog.

  - clean up test_xor_training_identicalness_standard_composition_vs_PyTorch_and_LLVM
< should be set to <code class="xref any docutils literal notranslate"><span class="pre">False</span></code>, as AutodiffComposition does not currently support trainable biases.</p>
---
> <p>Mechanisms or Projections should not be added to or deleted from an AutodiffComposition after it
> has been executed. Unlike an ordinary Composition, AutodiffComposition does not support this functionality.</p>
262a286
> </div></blockquote>
267,269c291,361
< methods are the same as for a <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a>.</p>
< <p>The following is an example showing how to create a
< simple AutodiffComposition, specify its inputs and targets, and run it with learning enabled and disabled.</p>
---
> methods are the same as for a <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a>.  However, the <strong>execution_mode</strong> in the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a>
> method has different effects than for a standard Composition, that determine whether it uses <a class="reference internal" href="#autodiffcomposition-llvm"><span class="std std-ref">LLVM compilation</span></a> or <a class="reference internal" href="#autodiffcomposition-pytorch"><span class="std std-ref">translation to PyTorch</span></a> to execute learning.
> This <a class="reference internal" href="Composition.html#composition-compilation-table"><span class="std std-ref">table</span></a> provides a summary and comparison of these different modes of execution,
> that are described in greater detail below.</p>
> <section id="llvm-mode">
> <span id="autodiffcomposition-llvm"></span><h3><em>LLVM mode</em><a class="headerlink" href="#llvm-mode" title="Permalink to this headline">¶</a></h3>
> <p>This is specified by setting <strong>execution_mode</strong> = <a class="reference internal" href="LLVM.html#psyneulink.core.llvm.__init__.ExecutionMode.LLVMRun" title="psyneulink.core.llvm.__init__.ExecutionMode.LLVMRun"><code class="xref any py py-attr docutils literal notranslate"><span class="pre">ExecutionMode.LLVMRun</span></code></a> in the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> method
> of an AutodiffCompositon.  This provides the fastest performance, but is limited to <a class="reference internal" href="Composition.html#composition-learning-supervised"><span class="std std-ref">supervised learning</span></a> using the <a class="reference internal" href="LearningFunctions.html#psyneulink.core.components.functions.learningfunctions.BackPropagation" title="psyneulink.core.components.functions.learningfunctions.BackPropagation"><code class="xref any py py-class docutils literal notranslate"><span class="pre">BackPropagation</span></code></a> algorithm. This can be run using standard forms of
> loss, including mean squared error (MSE) and cross entropy, by specifying this in the <strong>loss_spec</strong> argument of
> the constructor (see <a class="reference internal" href="#autodiffcomposition-class-reference"><span class="std std-ref">AutodiffComposition</span></a> for additional details, and
> <a class="reference internal" href="Composition.html#composition-compiled-modes"><span class="std std-ref">Compilation Modes</span></a> for more information about executing a Composition in compiled mode.</p>
> <blockquote>
> <div><div class="admonition note">
> <p class="admonition-title">Note</p>
> <p>Specifying <code class="xref any docutils literal notranslate"><span class="pre">ExecutionMode.LLVMRUn</span></code> in either the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> and <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.run" title="psyneulink.core.compositions.composition.Composition.run"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">run</span></code></a>
> methods of an AutodiffComposition causes it to (attempt to) use compiled execution in both cases; this is
> because LLVM compilation supports the use of modulation in PsyNeuLink models (as compared to <a class="reference internal" href="#autodiffcomposition-pytorch"><span class="std std-ref">PyTorch mode</span></a>; see <a class="reference internal" href="#autodiffcomposition-pytorch-note"><span class="std std-ref">note</span></a> below).</p>
> </div>
> </div></blockquote>
> </section>
> <section id="pytorch-mode">
> <span id="autodiffcomposition-pytorch"></span><h3><em>PyTorch mode</em><a class="headerlink" href="#pytorch-mode" title="Permalink to this headline">¶</a></h3>
> <p>This is specified by setting <a href="#id1"><span class="problematic" id="id2">**</span></a>execution_mode = <a class="reference internal" href="LLVM.html#psyneulink.core.llvm.__init__.ExecutionMode.PyTorch" title="psyneulink.core.llvm.__init__.ExecutionMode.PyTorch"><code class="xref any py py-attr docutils literal notranslate"><span class="pre">ExecutionMode.PyTorch</span></code></a> in the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> method of
> an AutodiffCompositon (see <a class="reference internal" href="BasicsAndPrimer.html#basicsandprimer-rumelhart-model"><span class="std std-ref">example</span></a> in <a class="reference internal" href="BasicsAndPrimer.html"><span class="doc">Basics and Primer</span></a>).  This automatically
> translates the AutodiffComposition to a <a class="reference external" href="https://pytorch.org">PyTorch</a> model an
...

See CI logs for the full diff.

@lgtm-com
Copy link
Contributor

lgtm-com bot commented Nov 18, 2022

This pull request introduces 11 alerts and fixes 6 when merging a46be70 into 89d2561 - view on LGTM.com

new alerts:

  • 5 for Unused import
  • 3 for Unused local variable
  • 2 for Testing equality to None
  • 1 for Comparison using is when operands support `__eq__`

fixed alerts:

  • 3 for Unused local variable
  • 1 for Unused import
  • 1 for Unreachable code
  • 1 for Wrong number of arguments in a call

Heads-up: LGTM.com's PR analysis will be disabled on the 5th of December, and LGTM.com will be shut down ⏻ completely on the 16th of December 2022. It looks like GitHub code scanning with CodeQL is already set up for this repo, so no further action is needed 🚀. For more information, please check out our post on the GitHub blog.

@coveralls
Copy link

coveralls commented Nov 18, 2022

Coverage Status

Coverage decreased (-0.8%) to 83.501% when pulling 787e8a4 on nback into 89d2561 on devel.

  - clean up test_xor_training_identicalness_standard_composition_vs_PyTorch_and_LLVM
@github-actions
Copy link

This PR causes the following changes to the html docs (ubuntu-latest-3.7-x64):

diff -r docs-base/AutodiffComposition.html docs-head/AutodiffComposition.html
221a222,224
> <li><p><a class="reference internal" href="#autodiffcomposition-llvm"><span class="std std-ref">LLVM mode</span></a></p></li>
> <li><p><a class="reference internal" href="#autodiffcomposition-pytorch"><span class="std std-ref">PyTorch mode</span></a></p></li>
> <li><p><a class="reference internal" href="#autodiffcomposition-nested-modulation"><span class="std std-ref">Nested Execution and Modulation</span></a></p></li>
223d225
< <li><p><a class="reference internal" href="#autodiffcomposition-nested-execution"><span class="std std-ref">Nested Execution</span></a></p></li>
227a230
> <li><p><a class="reference internal" href="#autodiffcomposition-examples"><span class="std std-ref">Examples</span></a></p></li>
234,243c237,245
< <div class="admonition warning">
< <p class="admonition-title">Warning</p>
< <p>As of PsyNeuLink 0.7.5, the API for using AutodiffCompositions has been slightly changed!
< Please see <a class="reference internal" href="RefactoredLearningGuide.html"><span class="doc">this link</span></a> for more details!</p>
< </div>
< <p>AutodiffComposition is a subclass of <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a> used to train feedforward neural network models through integration
< with <a class="reference external" href="https://pytorch.org/">PyTorch</a>, a machine learning library that executes considerably more quickly
< than using the <a class="reference internal" href="Composition.html#composition-learning-standard"><span class="std std-ref">standard implementation of learning</span></a> in a Composition, using its
< <a class="reference internal" href="Composition.html#composition-learning-methods"><span class="std std-ref">learning methods</span></a>. An AutodiffComposition is configured and run similarly to a standard
< Composition, with some exceptions that are described below.</p>
---
> <p>AutodiffComposition is a subclass of <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a> for constructing and training feedforward neural network
> models, using either direct compilation (to LLVM) or automatic conversion to <a class="reference external" href="https://pytorch.org/">PyTorch</a>,
> both of which considerably accelerate training (by as much as three orders of magnitude) compared to the
> <a class="reference internal" href="Composition.html#composition-learning-standard"><span class="std std-ref">standard implementation of learning</span></a> in a Composition.  Although an
> AutodiffComposition is constructed and executed in much the same way as a standard Composition, it is largely restricted
> to feedforward neural networks using <a class="reference internal" href="Composition.html#composition-learning-supervised"><span class="std std-ref">supervised learning</span></a>, and in particular
> the <a class="reference external" href="https://en.wikipedia.org/wiki/Backpropagation">backpropagation learning algorithm</a>, although it can be used for
> some forms of <a class="reference internal" href="Composition.html#composition-learning-unsupervised"><span class="std std-ref">unsupervised learning</span></a> that are supported in PyTorch (e.g.,
> <a class="reference external" href="https://github.com/giannisnik/som">self-organized maps</a>).</p>
247,251c249,257
< <p>An AutodiffComposition can be created by calling its constructor, and then adding <a class="reference internal" href="Component.html"><span class="doc">Components</span></a> using the
< standard <a class="reference internal" href="Composition.html#composition-creation"><span class="std std-ref">Composition methods</span></a> for doing so.  The constructor also includes an number of
< parameters that are specific to the AutodiffComposition. See <a class="reference internal" href="#autodiffcomposition-class-reference"><span class="std std-ref">Class Reference</span></a> for a list of
< these parameters.</p>
< <div class="admonition warning">
---
> <p>An AutodiffComposition can be created by calling its constructor, and then adding <a class="reference internal" href="Component.html"><span class="doc">Components</span></a> using
> the standard <a class="reference internal" href="Composition.html#composition-creation"><span class="std std-ref">Composition methods</span></a> for doing so (e.g., <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.add_node" title="psyneulink.core.compositions.composition.Composition.add_node"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">add_node</span></code></a>,
> <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.add_projections" title="psyneulink.core.compositions.composition.Composition.add_projections"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">add_projection</span></code></a>,  <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.add_linear_processing_pathway" title="psyneulink.core.compositions.composition.Composition.add_linear_processing_pathway"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">add_linear_processing_pathway</span></code></a>, etc.).  The constructor also includes a number of parameters that are
> specific to the AutodiffComposition (see <a class="reference internal" href="#autodiffcomposition-class-reference"><span class="std std-ref">Class Reference</span></a> for a list of these parameters,
> and <a class="reference internal" href="#autodiffcomposition-examples"><span class="std std-ref">examples</span></a> below).  Note that all of the Components in an AutodiffComposition
> must be able to be subject to <a class="reference internal" href="Composition.html#composition-learning"><span class="std std-ref">learning</span></a>, but cannot include any <a class="reference internal" href="Composition.html#composition-learning-components"><span class="std std-ref">learning components</span></a> themselves.  Specifically, it cannot include any <a class="reference internal" href="ModulatoryMechanism.html"><span class="doc">ModulatoryMechanisms</span></a>, <a class="reference internal" href="LearningProjection.html"><span class="doc">LearningProjections</span></a>, or the ObjectiveMechanism
> used to compute the loss for learning.</p>
> <blockquote>
> <div><div class="admonition warning" id="autodiff-learning-components-warning">
253,255c259,279
< <p>Mechanisms or Projections should not be added to or deleted from an AutodiffComposition after it has
< been run for the first time. Unlike an ordinary Composition, AutodiffComposition does not support this
< functionality.</p>
---
> <p>When an AutodiffComposition is constructed, it creates all of the learning Components
> that are needed, and thus <strong>cannot include</strong> any that are prespecified.</p>
> </div>
> </div></blockquote>
> <p>This means that an AutodiffComposition also cannot itself include a <a class="reference internal" href="Composition.html#composition-controller"><span class="std std-ref">controller</span></a> or any
> <a class="reference internal" href="ControlMechanism.html"><span class="doc">ControlMechanisms</span></a>.  However, it can include Mechanisms that are subject to modulatory control
> (see <a class="reference internal" href="ModulatorySignal.html#modulatorysignal-anatomy-figure"><span class="std std-ref">Figure</span></a>, and <a class="reference internal" href="ModulatorySignal.html#modulatorysignal-modulation"><span class="std std-ref">modulation</span></a>) by ControlMechanisms
> <em>outside</em> the Composition, including the controller of a Composition within which the AutodiffComposition is nested.
> That is, an AutodiffComposition can be <a class="reference internal" href="Composition.html#composition-nested"><span class="std std-ref">nested in a Composition</span></a> that has such other Components
> (see <a class="reference internal" href="#autodiffcomposition-nested-modulation"><span class="std std-ref">Nested Execution and Modulation</span></a> below).</p>
> <p>A few other restrictions apply to the construction and modification of AutodiffCompositions:</p>
> <blockquote>
> <div><div class="admonition hint">
> <p class="admonition-title">Hint</p>
> <p>AutodiffComposition does not (currently) support the <em>automatic</em> construction of separate bias parameters.
> Thus, when comparing a model constructed using an AutodiffComposition to a corresponding model in PyTorch, the
> <code class="xref any docutils literal notranslate"><span class="pre">bias</span></code> parameter of PyTorch modules should be set
> to <code class="xref any docutils literal notranslate"><span class="pre">False</span></code>.  Trainable biases <em>can</em> be specified explicitly in an AutodiffComposition by including a
> TransferMechanism that projects to the relevant Mechanism (i.e., implementing that layer of the network to
> receive the biases) using a <a class="reference internal" href="MappingProjection.html"><span class="doc">MappingProjection</span></a> with a <a class="reference internal" href="MappingProjection.html#psyneulink.core.components.projections.pathway.mappingprojection.MappingProjection.matrix" title="psyneulink.core.components.projections.pathway.mappingprojection.MappingProjection.matrix"><code class="xref any py py-attr docutils literal notranslate"><span class="pre">matrix</span></code></a> parameter that
> implements a diagonal matrix with values corresponding to the initial values of the biases.</p>
259,261c283,284
< <p>When comparing models built in PyTorch to those using AutodiffComposition,
< the <code class="xref any docutils literal notranslate"><span class="pre">bias</span></code> parameter of PyTorch modules
< should be set to <code class="xref any docutils literal notranslate"><span class="pre">False</span></code>, as AutodiffComposition does not currently support trainable biases.</p>
---
> <p>Mechanisms or Projections should not be added to or deleted from an AutodiffComposition after it
> has been executed. Unlike an ordinary Composition, AutodiffComposition does not support this functionality.</p>
262a286
> </div></blockquote>
267,269c291,361
< methods are the same as for a <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a>.</p>
< <p>The following is an example showing how to create a
< simple AutodiffComposition, specify its inputs and targets, and run it with learning enabled and disabled.</p>
---
> methods are the same as for a <a class="reference internal" href="Composition.html"><span class="doc">Composition</span></a>.  However, the <strong>execution_mode</strong> argument of the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a>
> method has different effects than for a standard Composition, determining whether it uses <a class="reference internal" href="#autodiffcomposition-llvm"><span class="std std-ref">LLVM compilation</span></a> or <a class="reference internal" href="#autodiffcomposition-pytorch"><span class="std std-ref">translation to PyTorch</span></a> to execute learning.
> This <a class="reference internal" href="Composition.html#composition-compilation-table"><span class="std std-ref">table</span></a> provides a summary and comparison of these different modes of execution,
> which are described in greater detail below.</p>
> <section id="llvm-mode">
> <span id="autodiffcomposition-llvm"></span><h3><em>LLVM mode</em><a class="headerlink" href="#llvm-mode" title="Permalink to this headline">¶</a></h3>
> <p>This is specified by setting <strong>execution_mode</strong> = <a class="reference internal" href="LLVM.html#psyneulink.core.llvm.__init__.ExecutionMode.LLVMRun" title="psyneulink.core.llvm.__init__.ExecutionMode.LLVMRun"><code class="xref any py py-attr docutils literal notranslate"><span class="pre">ExecutionMode.LLVMRun</span></code></a> in the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> method
> of an AutodiffComposition.  This provides the fastest performance, but is limited to <a class="reference internal" href="Composition.html#composition-learning-supervised"><span class="std std-ref">supervised learning</span></a> using the <a class="reference internal" href="LearningFunctions.html#psyneulink.core.components.functions.learningfunctions.BackPropagation" title="psyneulink.core.components.functions.learningfunctions.BackPropagation"><code class="xref any py py-class docutils literal notranslate"><span class="pre">BackPropagation</span></code></a> algorithm. This can be run using standard forms of
> loss, including mean squared error (MSE) and cross entropy, by specifying this in the <strong>loss_spec</strong> argument of
> the constructor (see <a class="reference internal" href="#autodiffcomposition-class-reference"><span class="std std-ref">AutodiffComposition</span></a> for additional details, and
> <a class="reference internal" href="Composition.html#composition-compiled-modes"><span class="std std-ref">Compilation Modes</span></a> for more information about executing a Composition in compiled mode).</p>
> <blockquote>
> <div><div class="admonition note">
> <p class="admonition-title">Note</p>
> <p>Specifying <code class="xref any docutils literal notranslate"><span class="pre">ExecutionMode.LLVMRun</span></code> in either the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> or <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.run" title="psyneulink.core.compositions.composition.Composition.run"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">run</span></code></a>
> methods of an AutodiffComposition causes it to (attempt to) use compiled execution in both cases; this is
> because LLVM compilation supports the use of modulation in PsyNeuLink models (as compared to <a class="reference internal" href="#autodiffcomposition-pytorch"><span class="std std-ref">PyTorch mode</span></a>; see <a class="reference internal" href="#autodiffcomposition-pytorch-note"><span class="std std-ref">note</span></a> below).</p>
> </div>
> </div></blockquote>
> </section>
> <section id="pytorch-mode">
> <span id="autodiffcomposition-pytorch"></span><h3><em>PyTorch mode</em><a class="headerlink" href="#pytorch-mode" title="Permalink to this headline">¶</a></h3>
> <p>This is specified by setting <strong>execution_mode</strong> = <a class="reference internal" href="LLVM.html#psyneulink.core.llvm.__init__.ExecutionMode.PyTorch" title="psyneulink.core.llvm.__init__.ExecutionMode.PyTorch"><code class="xref any py py-attr docutils literal notranslate"><span class="pre">ExecutionMode.PyTorch</span></code></a> in the <a class="reference internal" href="Composition.html#psyneulink.core.compositions.composition.Composition.learn" title="psyneulink.core.compositions.composition.Composition.learn"><code class="xref any py py-meth docutils literal notranslate"><span class="pre">learn</span></code></a> method of
> an AutodiffComposition (see <a class="reference internal" href="BasicsAndPrimer.html#basicsandprimer-rumelhart-model"><span class="std std-ref">example</span></a> in <a class="reference internal" href="BasicsAndPrimer.html"><span class="doc">Basics and Primer</span></a>).  This automatically
> translates the AutodiffComposition to a <a class="reference external" href="https://pytorch.org">PyTorch</a> model an
...

See CI logs for the full diff.
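The Hint quoted in the docs diff above describes emulating trainable biases with a MappingProjection whose matrix is diagonal. As a plain-Python sketch of the underlying linear algebra (hypothetical values; this is not the PsyNeuLink API):

```python
# Hypothetical sketch of the bias workaround: a separate "bias" node emits a
# vector of ones, and a projection with a diagonal matrix maps those ones onto
# the initial bias values, which are added into the layer's net input.

def matvec(matrix, vec):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

def diagonal(values):
    """Diagonal matrix whose nonzero entries are the initial bias values."""
    n = len(values)
    return [[values[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

W = [[0.5, -0.2],
     [0.1,  0.3]]          # ordinary connection weights
b = [0.25, -0.75]          # desired initial biases
x = [1.0, 2.0]             # input to the layer
ones = [1.0, 1.0]          # fixed output of the bias node

# net input = W @ x + diag(b) @ ones, which equals W @ x + b
net_input = [wx + bias for wx, bias in zip(matvec(W, x), matvec(diagonal(b), ones))]
```

Since `diag(b) @ ones` is just `b`, training the diagonal entries of that projection's matrix is equivalent to training per-unit biases.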

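The LLVM-mode passage in the docs diff notes that learning can use mean squared error or cross-entropy loss via the constructor's `loss_spec` argument. As a hedged, generic illustration of what those losses (and the gradients backpropagation requires) compute — plain Python, not PsyNeuLink's or PyTorch's actual implementation:

```python
import math

def mse_loss(output, target):
    """Mean squared error and its gradient w.r.t. the output."""
    n = len(output)
    loss = sum((o - t) ** 2 for o, t in zip(output, target)) / n
    grad = [2.0 * (o - t) / n for o, t in zip(output, target)]
    return loss, grad

def cross_entropy_loss(probs, target):
    """Cross entropy of predicted probabilities vs. a one-hot target,
    and its gradient w.r.t. the probabilities (-t / p per element)."""
    loss = -sum(t * math.log(p) for p, t in zip(probs, target) if t > 0.0)
    grad = [-t / p for p, t in zip(probs, target)]
    return loss, grad
```

For example, `mse_loss([1.0, 2.0], [0.0, 2.0])` gives a loss of 0.5, and `cross_entropy_loss([0.5, 0.5], [1.0, 0.0])` gives ln 2.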
@lgtm-com
Copy link
Contributor

lgtm-com bot commented Nov 19, 2022

This pull request introduces 10 alerts and fixes 6 when merging 787e8a4 into 89d2561 - view on LGTM.com

new alerts:

  • 5 for Unused import
  • 3 for Unused local variable
  • 2 for Testing equality to None

fixed alerts:

  • 3 for Unused local variable
  • 1 for Unused import
  • 1 for Unreachable code
  • 1 for Wrong number of arguments in a call


@jdcpni jdcpni merged commit fc13a63 into devel Nov 19, 2022
@jdcpni jdcpni deleted the nback branch November 19, 2022 02:48