Tried to fix the English of the first few paras #29050

Merged
merged 1 commit into from
Sep 12, 2018
20 changes: 10 additions & 10 deletions doc/src/manual/parallel-computing.md
@@ -7,22 +7,22 @@ the different levels of parallelism offered by Julia. We can divide them in thre
2. Multi-Threading
3. Multi-Core or Distributed Processing

We will first consider Julia [Tasks (aka Coroutines)](@ref man-tasks) and other modules that rely on the Julia runtime library, that allow to suspend and resume computations with full control of inter-`Tasks` communication without having to manually interface with the operative system's scheduler.
Julia also allows to communicate between `Tasks` through operations like [`wait`](@ref) and [`fetch`](@ref).
Communication and data synchronization is managed through [`Channel`](@ref)s, which are the conduit
that allows inter-`Tasks` communication.
We will first consider Julia [Tasks (aka Coroutines)](@ref man-tasks) and other modules that rely on the Julia runtime library, which allow us to suspend and resume computations with full control of inter-`Tasks` communication without having to manually interface with the operating system's scheduler.
Julia also supports communication between `Tasks` through operations like [`wait`](@ref) and [`fetch`](@ref).
Communication and data synchronization is managed through [`Channel`](@ref)s, which are the conduits
that provide inter-`Tasks` communication.

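As a minimal sketch of the `Channel`-based communication described above (the channel capacity and the values produced here are illustrative, not from the manual):

```julia
# One task produces values into a Channel; the main task consumes them.
c = Channel{Int}(32)               # buffered channel holding up to 32 Ints
producer = @async for i in 1:5
    put!(c, i^2)                   # suspends the task if the channel is full
end
results = [take!(c) for _ in 1:5]  # take! suspends until a value is ready
wait(producer)                     # wait for the producer task to finish
```

`@async` schedules the loop as a `Task`; `put!` and `take!` handle the synchronization through the `Channel`, so no manual interaction with the OS scheduler is needed.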
Julia also supports experimental multi-threading, where execution is forked and an anonymous function is run across all
threads.
Described as a fork-join approach, parallel threads are branched off and they all have to join the Julia main thread to make serial execution continue.
Known as the fork-join approach, parallel threads execute independently, and must ultimately be joined in Julia's main thread to allow serial execution to continue.
Multi-threading is supported using the `Base.Threads` module that is still considered experimental, as Julia is
not fully thread-safe yet. In particular segfaults seem to emerge for I\O operations and task switching.
As an un up-to-date reference, keep an eye on [the issue tracker](https://github.com/JuliaLang/julia/issues?q=is%3Aopen+is%3Aissue+label%3Amultithreading).
not yet fully thread-safe. In particular, segfaults seem to occur during I/O operations and task switching.
As an up-to-date reference, keep an eye on [the issue tracker](https://github.com/JuliaLang/julia/issues?q=is%3Aopen+is%3Aissue+label%3Amultithreading).
Multi-Threading should only be used if you take into consideration global variables, locks and
atomics, so we will explain it later.
atomics, all of which are explained later.

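A minimal fork-join sketch using `Base.Threads` (start Julia with the `JULIA_NUM_THREADS` environment variable set to use more than one thread; the atomic counter is used because a plain `Int` would race):

```julia
using Base.Threads

acc = Atomic{Int}(0)          # atomic counter shared by all threads
@threads for i in 1:1000      # iterations are split across the threads...
    atomic_add!(acc, 1)       # ...which join before execution continues
end
acc[]                         # dereference the atomic to read its value
```

With one thread the loop still runs correctly; with several, the `@threads` macro branches the iterations off and joins them back into the main thread, as described above.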
In the end we will present Julia's way to distributed and parallel computing. With scientific computing
in mind, Julia natively implements interfaces to distribute a process through multiple cores or machines.
Finally, we will present Julia's approach to distributed and parallel computing. With scientific computing
in mind, Julia natively implements interfaces to distribute a process across multiple cores or machines.
We will also mention useful external packages for distributed programming, such as `MPI.jl` and `DistributedArrays.jl`.

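A minimal sketch of the distributed interface using the standard `Distributed` library (the worker count and the mapped function are arbitrary choices for illustration):

```julia
using Distributed
addprocs(2)                     # launch two local worker processes

# pmap sends each piece of work to an available worker process
squares = pmap(x -> x^2, 1:10)
```

The same `pmap` call distributes work across machines when the workers are added on remote hosts instead of locally.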
# Coroutines