Fix docs in ReadTheDocs #208

Merged
merged 6 commits into from Aug 13, 2018
42 changes: 8 additions & 34 deletions docs/examples/custom_optimization_loop.rst
@@ -23,24 +23,7 @@ class. 2. Operate on it according to our custom algorithm with the help
of the ``Topology`` class; and 3. Update the ``Swarm`` class with the
new attributes.

.. code:: ipython3

import sys
# Change directory to access the pyswarms module
sys.path.append('../')

.. code:: ipython3

print('Running on Python version: {}'.format(sys.version))


.. parsed-literal::

Running on Python version: 3.6.3 |Anaconda custom (64-bit)| (default, Oct 13 2017, 12:02:49)
[GCC 7.2.0]


.. code:: ipython3
.. code-block:: python

# Import modules
import numpy as np
@@ -64,7 +47,7 @@ Now, the global best PSO pseudocode looks like the following (adapted
from `A. Engelbrecht, "Computational Intelligence: An Introduction,
2002 <https://www.wiley.com/en-us/Computational+Intelligence%3A+An+Introduction%2C+2nd+Edition-p-9780470035610>`__):

.. code:: python
.. code-block:: python

# Python-version of gbest algorithm from Engelbrecht's book
for i in range(iterations):
@@ -90,7 +73,7 @@ Let's make a 2-dimensional swarm with 50 particles that will optimize
the sphere function. First, let's initialize the important attributes in
our algorithm

.. code:: ipython3
.. code-block:: python

my_topology = Star() # The Topology Class
my_options = {'c1': 0.6, 'c2': 0.3, 'w': 0.4} # arbitrarily set
@@ -99,14 +82,14 @@ our algorithm
print('The following are the attributes of our swarm: {}'.format(my_swarm.__dict__.keys()))


.. parsed-literal::
.. code::

The following are the attributes of our swarm: dict_keys(['position', 'velocity', 'n_particles', 'dimensions', 'options', 'pbest_pos', 'best_pos', 'pbest_cost', 'best_cost', 'current_cost'])
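Before wiring the loop through the ``Swarm`` and ``Topology`` helpers, the gbest update from the pseudocode above can be sketched in plain NumPy. This is a self-contained sketch, not the pyswarms backend: the sphere cost is inlined, and the ``c1``/``c2``/``w`` values simply mirror ``my_options``.

```python
import numpy as np

np.random.seed(42)
n_particles, dim = 50, 2
c1, c2, w = 0.6, 0.3, 0.4                # same values as my_options above

pos = np.random.uniform(-1.0, 1.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))

def sphere(x):
    # Sphere cost: one value per particle (row)
    return (x ** 2).sum(axis=1)

pbest_pos = pos.copy()
pbest_cost = sphere(pos)

for _ in range(100):
    cost = sphere(pos)
    improved = cost < pbest_cost              # update personal bests
    pbest_pos[improved] = pos[improved]
    pbest_cost[improved] = cost[improved]
    best_pos = pbest_pos[pbest_cost.argmin()]  # global best (Star topology)

    r1, r2 = np.random.rand(2, n_particles, dim)
    vel = (w * vel
           + c1 * r1 * (pbest_pos - pos)      # cognitive term
           + c2 * r2 * (best_pos - pos))      # social term
    pos = pos + vel

print('best cost: {:.4f}'.format(pbest_cost.min()))
```

The loop below does the same bookkeeping through the ``Topology`` helpers instead of raw arrays.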


Now, let's write our optimization loop!

.. code:: ipython3
.. code-block:: python

iterations = 100 # Set 100 iterations
for i in range(iterations):
@@ -133,7 +116,7 @@ Now, let's write our optimization loop!
print('The best position found by our swarm is: {}'.format(my_swarm.best_pos))


.. parsed-literal::
.. code::

Iteration: 1 | my_swarm.best_cost: 0.0180
Iteration: 21 | my_swarm.best_cost: 0.0023
@@ -147,15 +130,15 @@ Now, let's write our optimization loop!
Of course, we can just use the ``GlobalBestPSO`` implementation in
PySwarms (it has boundary support, tolerance, initial positions, etc.):

.. code:: ipython3
.. code-block:: python

from pyswarms.single import GlobalBestPSO

optimizer = GlobalBestPSO(n_particles=50, dimensions=2, options=my_options) # Reuse our previous options
optimizer.optimize(f, iters=100, print_step=20, verbose=2)


.. parsed-literal::
.. code::

INFO:pyswarms.single.global_best:Iteration 1/100, cost: 0.025649680624878678
INFO:pyswarms.single.global_best:Iteration 21/100, cost: 0.00011046719760866999
@@ -167,12 +150,3 @@ PySwarms (it has boundary support, tolerance, initial positions, etc.):
Final cost: 0.0001
Best value: [0.007417861777661566, 0.004421058167808941]





.. parsed-literal::

(7.457042867564255e-05, array([0.00741786, 0.00442106]))


64 changes: 11 additions & 53 deletions docs/examples/inverse_kinematics.rst
@@ -6,23 +6,7 @@ In this example, we are going to use the ``pyswarms`` library to solve a
it as an optimization problem. We will use the ``pyswarms`` library to
find an *optimal* solution from a set of candidate solutions.

.. code:: python

import sys
# Change directory to access the pyswarms module
sys.path.append('../')

.. code:: python

print('Running on Python version: {}'.format(sys.version))


.. parsed-literal::

Running on Python version: 3.6.0 (v3.6.0:41df79263a11, Dec 23 2016, 07:18:10) [MSC v.1900 32 bit (Intel)]


.. code:: python
.. code-block:: python

# Import modules
import numpy as np
@@ -35,26 +19,8 @@ find an *optimal* solution from a set of candidate solutions.
%load_ext autoreload
%autoreload 2

.. code:: python

%%html
<style>
table {margin-left: 0 !important;}
</style>
# Styling for the text below



.. raw:: html

<style>
table {margin-left: 0 !important;}
</style>
# Styling for the text below


Introduction
============
------------

Inverse Kinematics is one of the most challenging problems in robotics.
The problem involves finding an optimal *pose* for a manipulator given
@@ -77,7 +43,7 @@ trying to solve the problem for 6 or even more DOF can lead to
challenging algebraic problems.

IK as an Optimization Problem
=============================
-----------------------------

In this implementation, we are going to use a *6-DOF Stanford
Manipulator* with 5 revolute joints and 1 prismatic joint. Furthermore,
@@ -117,7 +83,7 @@ And for our end-tip position we define the target vector
We can then start implementing our optimization algorithm.

Initializing the Swarm
======================
~~~~~~~~~~~~~~~~~~~~~~

The main idea for PSO is that we set a swarm :math:`\mathbf{S}` composed
of particles :math:`\mathbf{P}_n` into a search space in order to find
@@ -147,7 +113,7 @@ generate the :math:`N-1` particles using a uniform distribution which is
controlled by the hyperparameter :math:`\epsilon`.
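This initialization scheme can be sketched directly: keep one known candidate and scatter the remaining :math:`N-1` particles uniformly around it, with :math:`\epsilon` bounding the per-coordinate noise. ``x_known`` here is only a hypothetical starting pose, not a value from the notebook.

```python
import numpy as np

np.random.seed(0)
swarm_size, dim = 20, 6
eps = 1.0                                  # hyperparameter controlling the spread

x_known = np.zeros(dim)                    # hypothetical known candidate pose
# Draw the remaining N-1 particles uniformly within +/- eps of the candidate
noise = np.random.uniform(-eps, eps, (swarm_size - 1, dim))
init_pos = np.vstack([x_known, x_known + noise])
```

An array of this shape can then be handed to an optimizer that accepts initial positions, so the known solution stays in the swarm from iteration one.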

Finding the global optimum
==========================
~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to find the global optimum, the swarm must be moved. This
movement is then translated by an update of the current position given
Expand All @@ -174,7 +140,7 @@ point :math:`[-2,2,3]` as our target for which we want to find an
optimal pose of the manipulator. We start by defining a function to get
the distance from the current position to the target position:

.. code:: python
.. code-block:: python

def distance(query, target):
x_dist = (target[0] - query[0])**2
@@ -194,7 +160,7 @@ values and a list of the respective maximal values. The rest can be
handled with variables. Additionally, we define the joint lengths to be
3 units long:

.. code:: python
.. code-block:: python

swarm_size = 20
dim = 6 # Dimension of X
@@ -214,7 +180,7 @@ for that. So we define a function that calculates these. The function
uses the rotation angle and the extension :math:`d` of a prismatic joint
as input:

.. code:: python
.. code-block:: python

def getTransformMatrix(theta, d, a, alpha):
T = np.array([[np.cos(theta) , -np.sin(theta)*np.cos(alpha) , np.sin(theta)*np.sin(alpha) , a*np.cos(theta)],
@@ -228,7 +194,7 @@ Now we can calculate the transformation matrix to obtain the end tip
position. For this we create another function that takes our vector
:math:`\mathbf{X}` with the joint variables as input:

.. code:: python
.. code-block:: python

def get_end_tip_position(params):
# Create the transformation matrices for the respective joints
@@ -252,7 +218,7 @@ actual function that we want to optimize. We just need to calculate the
distance between the position of each swarm particle and the target
point:

.. code:: python
.. code-block:: python

def opt_func(X):
n_particles = X.shape[0] # number of particles
@@ -265,7 +231,7 @@ Running the algorithm

Braced with these preparations we can finally start using the algorithm:

.. code:: python
.. code-block:: python

%%time
# Call an instance of PSO
@@ -295,13 +261,6 @@ Braced with these preparations we can finally start using the algorithm:
Final cost: 0.0000
Best value: [ -2.182725 1.323111 1.579636 ...]



.. parsed-literal::

Wall time: 13.6 s


Now let’s see if the algorithm really worked and test the output for
``joint_vars``:

@@ -314,7 +273,6 @@ Now let's see if the algorithm really worked and test the output for

[-2. 2. 3.]


Hooray! That’s exactly the position we wanted the tip to be in. Of
course this example is quite primitive. Some extensions of this idea
could involve the consideration of the current position of the
11 changes: 11 additions & 0 deletions docs/examples/tutorials.rst
@@ -0,0 +1,11 @@
Tutorials
=========
Below are some examples describing how the PySwarms API works.
If you wish to check the actual Jupyter Notebooks, please go to this `link <https://github.com/ljvmiranda921/pyswarms/tree/master/examples>`_

.. toctree::

basic_optimization
custom_objective_function
custom_optimization_loop
visualization
11 changes: 4 additions & 7 deletions docs/examples/usecases.rst
@@ -1,12 +1,9 @@
Use-case examples
=================
Below are some of the applications where PySwarms can be used.
Use-cases
=========
Below are some examples on how to use PSO in different applications.
If you wish to check the actual Jupyter Notebooks, please go to this `link <https://github.com/ljvmiranda921/pyswarms/tree/master/examples>`_

.. toctree::

basic_optimization
train_neural_network
custom_optimization_loop
feature_subset_selection
visualization
inverse_kinematics