- Introduction
- Design Goals
- An Execution Flow Tour
- Host Request Processing
- Intuitive User Experience
- Concurrent Invocation of Functions
- Managed Dependencies
- Durable Functions Prototype
Azure Functions is a serverless compute service that allows users to run code on demand without having to provision or manage infrastructure. The goal of this project is to enable PowerShell as a first-class language in Azure Functions 2.0, using the latest version of PowerShell Core.
Azure Functions 2.0 introduced a new language worker model to offer better language extensibility. The Functions runtime was split into a host process and language worker processes: the host manages function events in general, and a language worker process executes functions for a specific language. The two pieces communicate via gRPC as the messaging layer, using a well-defined contract written in Protocol Buffers. To enable PowerShell in Azure Functions 2.0, a PowerShell language worker needs to be implemented, and this document discusses its design.
- Implement the worker side contract to receive, process and respond to requests from the host. The worker process must create a gRPC client to connect to the gRPC server in the host process. That establishes a bidirectional channel for sending/receiving requests and responses between the host and worker processes.
- Expose the Functions concepts with an intuitive experience that is familiar to PowerShell users. This includes how to deal with Functions input/output bindings in the PowerShell way; what implicit type casting is needed for the input/output data; how to offer the Functions logging capability in a PowerShell friendly manner; how regular scripts (.ps1) and script modules (.psm1) fit in with the Functions, and more.
- Support concurrent function execution within a single PowerShell worker. It is a requirement for a language worker to be able to handle concurrent function invocations. PowerShell is a single-threaded scripting language by default, but concurrency can be achieved by having multiple Runspaces in the same process.
- Make it easy to extend the worker to support Durable Functions. Durable Functions is an extension of Azure Functions that lets you write stateful functions in a serverless environment. It does so by managing state, checkpoints, and restarts for you. PowerShell worker doesn't support Durable Functions yet, but a prototype has been done to support its Function Chaining pattern.
Actions on the red nodes are described below:
- Read the next incoming request.
- Upon receiving the first FunctionLoadRequest, the worker performs a one-time initialization based on the Function App's metadata, to populate the pool with the first PowerShell Manager instance. The Function App's user profile 'profile.ps1' will be executed in the PowerShell Manager's constructor.
- Per every FunctionLoadRequest received, the worker loads the metadata of the Function, including the function name and ID, script path, entry point name, input/output binding info, script parameters and more. The worker also does pre-validation by analyzing the script based on the binding info.
- Per every InvocationRequest received, the worker checks out a PowerShell Manager from the pool for the subsequent invocation. A PowerShell Manager manages a PowerShell Runspace and is responsible for invoking the script defined in a Function.
- The worker retrieves the metadata of the Function from the Function Loader.
- The worker creates a Task for doing the real invocation. The task is scheduled on a thread-pool thread, and the worker's main thread immediately moves on to read the next request without blocking.
- After an invocation, the PowerShell Manager used for it is reset and returned to the pool.
- Write the response back to the host. (Logging, which also goes through gRPC, is discussed later.)
Today, a language worker must receive and respond to three types of requests throughout its life-cycle: WorkerInitRequest, FunctionLoadRequest, and InvocationRequest.
Some background context on the Function App: a Function App may consist of one or multiple functions, each being a sub-folder under the Function App root directory. A function must have a 'function.json' present in its folder, which defines the location of the executable code as well as the input and output bindings for the function. A Function App has a 1:N mapping with language workers - one language worker serves one Function App exclusively, while one Function App may be served by multiple language workers due to auto-scaling.
The host process sends only one WorkerInitRequest right after starting up the language worker process. It's for the language worker to perform static initialization. The PowerShell worker simply returns a WorkerInitResponse with the "OK" status upon receiving this request, because the static initialization is taken care of by constructors.
The host process sends a FunctionLoadRequest for every function declared in a Function App. This request carries information about a specific function as well as the Function App. For each such request, the PowerShell worker will extract and save the function's metadata, perform some pre-validation on the function script based on the binding information received, and finally send back a FunctionLoadResponse with the "OK" or "Fail" status depending on the pre-validation.
The pre-validation currently includes
- Check if the function script has any parsing errors;
- Check if there is 1-to-1 mapping between script parameters and output bindings (More about this requirement is discussed in the "Input and Output Binding" section below);
- Check if the settings in 'function.json' are semantically supported by the PowerShell worker (More about this is discussed in the "scriptFile and entryPoint" section below).
Until the first FunctionLoadRequest arrives, a language worker doesn't know anything about the Function App. For the PowerShell worker, there is some additional initialization that depends on the Function App root directory, including setting up some well-known paths and creating the first PowerShell Manager instance in the pool. This additional work is done upon receiving the first such request.
The host process sends an InvocationRequest every time a function is triggered to run. This request contains the ID of the function to be invoked, the input binding data, and the metadata about the trigger that fired this invocation. Upon receiving such a request, the PowerShell worker will retrieve the metadata about the function from the Function Loader using the function ID, and then check out an available PowerShell Manager instance from the PowerShell Manager Pool. After that, a task will be created to perform the real invocation asynchronously on a thread-pool thread, and the main thread is freed up to process the next incoming request.
The real invocation happens mainly in the PowerShell Manager. It transforms the input binding data to the appropriate types that are easy to work with in PowerShell, and then invokes the function script with the transformed value as arguments. Once the invocation finishes, it collects the results for the output bindings. The output results will then be converted to the pre-defined gRPC data type, and sent to the Host within an InvocationResponse.
As part of the InvocationRequest, the host also sends TraceContext, which contains the Traceparent (the current Activity id), Tracestate, and Attributes (the current map of tags from the Activity). Users can access this data by declaring a 'TraceContext' parameter, as shown in the example below.
param($req, $TraceContext)
$activityId = $TraceContext.Traceparent
$traceState = $TraceContext.Tracestate
$attributes = $TraceContext.Attributes
Please refer to TraceContext for more details on its properties. These values can be used in conjunction with the Application Insights SDK to enable distributed tracing scenarios.
The top priority is to make PowerShell Functions offer an intuitive user experience that is already familiar to PowerShell users. To achieve this, concepts of Azure Functions need to be exposed in a PowerShell friendly way. Also, unique functionalities are added to PowerShell Functions due to its characteristics.
The 'scriptFile' and 'entryPoint' are properties in 'function.json' for referencing the code to run when a function is triggered - 'scriptFile' points to the file that contains the code, and 'entryPoint' optionally points to an execution unit within that file, such as a method or function. Neither property is mandatory. By default, when the two properties are missing, the host will look for a 'run.ps1' file under the function folder. If that file exists, the host implicitly makes 'scriptFile' point to it and sets 'entryPoint' to an empty string, which means the 'run.ps1' itself is invoked as a single execution unit.
The 'scriptFile' can also point to a '.psm1' module file, with 'entryPoint' pointing to a function defined in the module. In this case, when the PowerShell Azure Function is triggered, the module file is loaded and then the function pointed to by 'entryPoint' is invoked. This is very useful in a scenario where helper scripts are shared by the functions of the same Function App. Instead of keeping redundant copies of the shared scripts in multiple 'run.ps1' files under each function folder, all scripts can be put in a '.psm1' module file right under the Function App root, including the entry point functions and the helper functions. Then you just need to set the 'scriptFile' and 'entryPoint' properties in each 'function.json' file to point to the script module file and the respective entry point function. This reduces redundancy and makes the PowerShell Azure Function scripts easier to maintain.
In PowerShell, a '.ps1' script file is by convention a single execution unit, while a '.psm1' module file is a group of execution units (functions) intended for reuse. Given that, the PowerShell language worker allows 'entryPoint' to be specified only if 'scriptFile' points to a '.psm1' module file. This is enforced by the pre-validation that happens when processing a FunctionLoadRequest.
The following are examples of two 'function.json' files. The first one depends on the 'run.ps1' file existing in the same folder; the second one explicitly specifies the entry point function to run. A sketch of the module file referenced by the second example follows it.
// invoke 'run.ps1' by default
{
"disabled": false,
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}
// invoke a function in the module file
{
"disabled": false,
"scriptFile": "../azfunction.psm1",
"entryPoint": "Run",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}
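To complete the second example, the following is a hypothetical sketch of what the 'azfunction.psm1' module at the Function App root could contain; the entry point function name matches the 'entryPoint' property above, and 'Push-OutputBinding' is the output cmdlet described in the "Input and Output Binding" section below.
<#
azfunction.psm1 - hypothetical module content at the Function App root
#>
# Entry point function referenced by 'entryPoint' in 'function.json'
function Run {
    param($req, $TriggerMetadata)
    $name = $req.Query.Name
    if (-not $name) { $name = $req.Body.Name }
    Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
        StatusCode = 200
        Body = (Get-Greeting -Name $name)
    })
}
# Helper function shared by the entry point functions of this Function App
function Get-Greeting {
    param($Name)
    "Hello $Name"
}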
Input and output bindings are defined in the 'function.json' file for every function within a Function App. The input bindings describe the trigger of the function and the additional data sources to be consumed by the function. The output bindings describe where the data produced from the function are to be sent or saved.
Naturally, input bindings are represented by parameters in a PowerShell function script. A parameter is defined for each input binding of the function to accept the data payload, and the parameter name is the same as the input binding name. Pre-validation during the processing of FunctionLoadRequest makes sure there is a 1-to-1 mapping between the script parameters and the output bindings. In addition, an optional parameter named 'TriggerMetadata' can be used for a function script to get the metadata properties about the trigger. Different triggers come with different metadata properties. For example, the blob trigger provides information about the URI of the incoming blob, and more.
For output bindings, Azure Functions allows the results to be passed out in two ways - implicitly via the 'return' statement of a language, or explicitly by associating the value with an output binding name. The former fits in with most languages naturally, but not PowerShell because of two characteristics of the PowerShell language:
- A 'return' statement is not the only place in PowerShell where a result is actually returned. Unlike the other supported languages in Azure Functions, a PowerShell script can write results to the output stream at any point of the execution, in various ways, and the 'return' statement is just one of them. After invoking a PowerShell script, all objects in the output stream are returned to the caller in a collection, and by then, there is no way to know which of them actually came from the 'return' statement.
- A collection object written to the output stream in PowerShell is unraveled by default. This means the output stream ends up with the elements of the collection instead of the original collection object. This behavior is very convenient for chaining commands in a pipeline, but in the Azure Functions context, the caller loses the information about the real return type, which makes it hard to do the post-invocation data transformation properly for the function's output target.
Given the ambiguity that results from 'return' in PowerShell, we decided to support only explicitly passing output data out of a PowerShell function script, at least for now. Two cmdlets were created to help with that, 'Push-OutputBinding' and 'Get-OutputBinding', both exposed from the worker's helper module 'Microsoft.Azure.Functions.PowerShellWorker'.
Push-OutputBinding [-Name] <String> [-Value] <Object> [-Clobber] [<CommonParameters>]
Get-OutputBinding [[-Name] <String[]>] [-Purge] [<CommonParameters>]
Within the PowerShell function script, 'Push-OutputBinding -Name <binding-name> -Value <value>' should be used to associate data with an output binding by name. It essentially saves the <Name, Value> pair in a Hashtable held by the module. After the script invocation, the PowerShell worker calls 'Get-OutputBinding -Purge' to retrieve the output binding data; the '-Purge' parameter instructs the cmdlet to clear the data afterwards, so no leftovers carry over to the next function invocation.
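For illustration, a minimal sketch of how the two cmdlets interact (the 'Get-OutputBinding' call is normally issued by the worker itself rather than by user code, and the binding name 'res' is just an example):
# Inside a function script: associate values with output bindings by name
Push-OutputBinding -Name res -Value 'first'
Push-OutputBinding -Name res -Value 'second' -Clobber   # '-Clobber' overwrites the value previously set for 'res'
# After the invocation, the worker effectively runs:
$outputs = Get-OutputBinding -Purge   # returns the accumulated Hashtable and clears it for the next invocation
$outputs['res']                       # 'second'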
The following is an example of a 'function.json' file and the corresponding PowerShell function script.
// function.json
{
"disabled": false,
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}
<#
run.ps1
#>
# Trigger the function by running Invoke-RestMethod :
# (via get method): Invoke-RestMethod `
# -Uri http://localhost:7071/api/MyHttpTrigger?Name=Joe
# (via post method): Invoke-RestMethod `
# -Uri http://localhost:7071/api/MyHttpTrigger `
# -Method Post `
# -Body (ConvertTo-Json @{ Name="Joe" }) `
# -Headers @{ 'Content-Type' = 'application/json' }
# Input bindings are passed in via param block.
param($req, $TriggerMetadata)
# You can interact with query parameters, the body of the request, etc.
$name = $req.Query.Name
if (-not $name) { $name = $req.Body.Name }
$status = 200
$body = "Hello " + $name
# You associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
StatusCode = $status
Body = $body
})
Data received from the host are raw RPC data, so they must be converted to the proper types that make the most sense in the PowerShell context. Similarly, data produced by the function script need to be converted back to the raw RPC data before a response is sent on the wire. There are two levels of data transformation. As shown in the picture below, (1) and (4) are the more general type conversions between RPC and .NET data, while (2) and (3) are more specific data conversions based on the parameter types of the to-be-invoked script and the input/output binding information.
Upon receiving the raw data, the conversion from RPC to .NET is shown in the following table.
[incoming] RPC data type (.NET type) | [incoming] Result .NET data |
---|---|
JSON (string) | Hashtable * via 'JsonObject.ConvertFromJson' |
Bytes (ByteString) | byte[] via 'ByteString.ToByteArray' |
Stream (ByteString) | byte[] via 'ByteString.ToByteArray' |
Http (RpcHttp) | HttpRequestContext * |
String (string) | string (unchanged) |
Double (double) | double (unchanged) |
Int (long) | long (unchanged) |
* For a JSON payload, the worker converts it to a Hashtable, using the same API that the 'ConvertFrom-Json' command depends on, to ensure consistent behavior. We chose to convert the JSON string to a Hashtable instead of a PSObject based on two considerations:
- Hashtable is lightweight. Its key-value pairs are accessible in the same way as object properties in PowerShell, and it's easy to work with directly in C#.
- Though not common, a JSON payload may contain empty string keys or keys that differ only in case. Such keys can be preserved when converting to a case-sensitive Hashtable.
PowerShell is a case-insensitive language. To keep a consistent experience, the worker first tries a case-insensitive Hashtable, and falls back to a case-sensitive one if that attempt fails (see the illustration after these notes).
* For the incoming Http payload, the worker extracts the metadata and constructs an immutable object of the type 'HttpRequestContext', which is a custom type defined in the worker.
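The case-sensitivity fallback mentioned above can be illustrated with a hypothetical payload whose keys differ only in case:
# Hypothetical JSON payload: { "name": "foo", "Name": "bar" }
# A case-insensitive Hashtable cannot hold both keys - adding the second one fails:
$insensitive = [hashtable]::new([System.StringComparer]::OrdinalIgnoreCase)
$insensitive.Add('name', 'foo')
$insensitive.Add('Name', 'bar')   # throws: the key already exists when compared case-insensitively
# A case-sensitive Hashtable preserves both keys, so the worker falls back to it:
$sensitive = [hashtable]::new([System.StringComparer]::Ordinal)
$sensitive.Add('name', 'foo')
$sensitive.Add('Name', 'bar')
$sensitive.Count                  # 2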
Before sending out a response, the conversion from .NET to RPC is shown in the following table.
[outgoing] .NET data | [outgoing] RPC data type (.NET type) |
---|---|
byte[] | Bytes (ByteString) via 'ByteString.CopyFrom' |
Stream | Stream (ByteString) via 'ByteString.FromStream' |
string when it's a valid Json string | JSON (string) via 'JsonObject.ConvertToJson' |
string when it's NOT a valid Json string | String (string - unchanged) |
HttpResponseContext * | Http (RpcHttp) |
double | Double (double) |
long | Int (long) |
int | Int (long) via casting to long |
All other objects | JSON (string) via 'JsonObject.ConvertToJson' |
* For http output binding, the PowerShell worker defines the custom type 'HttpResponseContext' to represent the http response data. A 'RpcHttp' object is then constructed using the response data.
Before invoking a function script, the input data may be further transformed based on the input binding information and the parameter type. Today, we take advantage of this step to allow flexible consumption of blob input. When using a blob trigger, the blob content received is always 'Bytes', even if the blob contains plain text only. If the corresponding parameter in the script is of the type 'string', then we convert the byte array to a string using the UTF-8 encoding when passing the argument for the invocation. Similarly, if the parameter type is 'Stream', then the byte array is wrapped into a 'MemoryStream' before being passed in as the argument. Examples are shown below.
<#
run.ps1 - consume the blob content as string
because the blob is in fact a JSON file
#>
param([string]$blobData, $TriggerMetadata)
$json = ConvertFrom-Json -InputObject $blobData
...
<#
run.ps1 - consume the blob content as Stream
because the blob is actually a zip file
#>
param([IO.Stream]$blobData, $TriggerMetadata)
$zip = [IO.Compression.ZipArchive]::new($blobData)
...
After the script invocation finishes, the produced output data may be further transformed to what the language worker expects, based on the output binding information. This step allows a Hashtable to be used directly for the HTTP response output, even though the worker expects an object of the type 'HttpResponseContext'. After retrieving the output data, the worker validates the output for an HTTP response and converts the result to 'HttpResponseContext' as needed. The difference in a function script is shown as follows.
<#
run.ps1 - no post-invocation data conversion
#>
param($req, $TriggerMetadata)
...
Push-OutputBinding -Name res -Value `
([HttpResponseContext]@{
StatusCode = $status
Body = $body
})
<#
run.ps1 - with post-invocation data conversion
#>
param($req, $TriggerMetadata)
...
Push-OutputBinding -Name res -Value `
@{
StatusCode = $status
Body = $body
}
With more feedback, we can do more useful data conversions in (2) and (3) based on the parameter types and input/output bindings, which would further enhance the user experience.
Azure Functions supports 6 logging levels. In the new language worker model, logs are sent to the host by the language worker through the gRPC channel. In PowerShell, the logging capability is made available to the user's script by introducing the cmdlet 'Write-AzFuncLog' to the helper module.
Write-AzFuncLog [-Message] <string> [-Level] <string> [<CommonParameters>]
PowerShell users are already familiar with writing diagnostic messages to different PowerShell streams via the convenient 'Write-*' cmdlets, such as 'Write-Error', 'Write-Verbose' and 'Write-Information'. We want to keep the same experience in PowerShell Azure Functions, and we can do so by intercepting the PowerShell streams and sending the records in those streams to the host as logs. By doing that, a user can use existing scripts in Azure Functions without worrying about diagnostic messages getting lost.
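For example, within a function script either style produces log entries that flow to the host over gRPC (the messages are illustrative):
# Explicit logging with the helper cmdlet, specifying the log level:
Write-AzFuncLog -Message 'Processing the request' -Level 'Information'
# Regular PowerShell stream cmdlets are intercepted and forwarded as logs as well:
Write-Warning 'Falling back to the default configuration'
Write-Verbose 'Request payload parsed' -Verbose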
The following is a mapping between the PowerShell built-in 'Write-*' cmdlet and the log levels used for logging in PowerShell Azure Function.
Write-* cmdlet (the target stream) | Log level |
---|---|
Write-Verbose (verbose stream) | Trace |
Write-Progress (progress stream) | Trace |
Write-Debug (debug stream) | Debug |
Write-Information (information stream) | Information |
Write-Host (information stream) | Information |
Write-Warning (warning stream) | Warning |
Write-Error (error stream) | Error |
Write-Output (output stream) * | Information |
* The output stream is special in two respects:
- Objects written to the output stream will be piped to the downstream command, if there is one.
- When invoking a script, the invocation result is a collection of objects from the output stream.
As has been discussed in the "Input and Output Binding" section, the returned value of an invocation is currently not treated as the output binding value for a Function. But the objects written to the output stream are still valuable and should be made visible to the users. Therefore, we need to log the output objects using the default PowerShell formatting so that users will see the output objects displayed in the log in the same way as in a PowerShell console.
There are two requirements for logging the output objects:
- The logging of output objects should happen in a streaming manner. Namely, an object should be logged right after it's written to the output stream. This is required so that the logs can retain the same order of execution.
- The output objects must remain in the output stream unchanged after being logged. Though not supported today, we may enable passing output binding values via 'return' in the future with the proper semantics, so we still want to collect the returned values after the script invocation.
The solution is to introduce another cmdlet 'Trace-PipelineObject' to the helper module, which behaves similarly to 'Tee-Object' by splitting the incoming pipeline objects to two directions.
Trace-PipelineObject [-InputObject] <Object> [<CommonParameters>]
When getting a pipeline object from the upstream command, 'Trace-PipelineObject' first pushes the object through a steppable pipeline that is an equivalent of 'Out-String | Write-Information'. 'Out-String' will convert the object to a string using the default PowerShell formatting. 'Write-Information' will then log the string with the 'information' log level. After processing with the steppable pipeline, 'Trace-PipelineObject' writes the pipeline object back to the output stream intact.
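Conceptually, the cmdlet behaves like the following PowerShell proxy around a steppable pipeline. This is only a simplified sketch, not the actual implementation (the real cmdlet is written in C# inside the helper module); 'Out-String' is used here with '-Stream' and a 'ForEach-Object' wrapper so that each object is formatted and logged as it arrives.
# Simplified sketch of the 'Trace-PipelineObject' behavior
function Trace-PipelineObjectSketch {
    [CmdletBinding()]
    param([Parameter(ValueFromPipeline = $true)] [object] $InputObject)
    begin {
        # Steppable pipeline that formats each object and logs it via the information stream
        $tracer = { Out-String -Stream | ForEach-Object { Write-Information -MessageData $_ } }.GetSteppablePipeline(
            [System.Management.Automation.CommandOrigin]::Internal)
        $tracer.Begin($true)
    }
    process {
        $tracer.Process($InputObject)   # format the object and log it
        $InputObject                    # write the original object back to the output stream intact
    }
    end {
        $tracer.End()
    }
}
# The worker effectively invokes:  <function script> | Trace-PipelineObject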
Note that, even though 'Trace-PipelineObject' is exposed from the helper module, it is used by the language worker and is not supposed to be invoked by users directly in their function scripts. It does no harm when used directly in a function script, but it also does nothing meaningful.
With the help of 'Trace-PipelineObject', all the PowerShell language worker needs to do is chain 'Trace-PipelineObject' after the function script in a pipeline before the invocation. The following is an example that shows the streaming logging of objects written to the output stream.
<#
run.ps1 - Write output between verbose logging
#>
# Input bindings are passed in via param block.
param($req, $TriggerMetadata)
Write-Verbose "Start to invoke ..." -Verbose
Get-Process | Select-Object -First 5
# You can interact with query parameters, the body of the request, etc.
$name = $req.Query.Name
if (-not $name) { $name = $req.Body.Name }
Write-Verbose "Name passed in: $name." -Verbose
# You associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
StatusCode = 200
Body = "Hello $name"
})
// Verbose and output logs are in the order of execution
Executing 'Functions.MyHttpTrigger' (Reason='This function was programmatically called via the host APIs.', Id=2242c6a8-a5a6-4156-813d-71fda654210b)
VERBOSE: Start to invoke ...
OUTPUT:
OUTPUT: NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName
OUTPUT: ------ ----- ----- ------ -- -- -----------
OUTPUT: 57 34.80 26.57 615.28 11060 1 AlertusDesktopAlert
OUTPUT: 24 15.88 21.28 4.83 14672 1 ApplicationFrameHost
OUTPUT: 8 2.12 6.92 0.00 9988 0 AppVShNotify
OUTPUT: 17 3.43 8.65 0.00 4556 0 armsvc
OUTPUT: 44 95.29 97.70 42.31 880312 1 AzureStorageEmulator
VERBOSE: Name passed in: joe.
Executed 'Functions.MyHttpTrigger' (Succeeded, Id=2242c6a8-a5a6-4156-813d-71fda654210b)
The 'PSModulePath' environment variable stores the paths to the locations of modules that are installed on the disk. PowerShell uses this variable to locate modules during operations like module loading and command auto-discovery. The default module path used by the PowerShell language worker consists of two directory paths: 'func-app-module-folder;worker-module-folder'
- Function App module folder - the folder 'Modules' right under the Function App root directory.
- Worker module folder - the folder 'Modules' right under the application base of the worker.
As of today, a few modules are bundled with the PowerShell language worker in the worker-level module folder, including the PowerShell built-in modules, the worker helper module, and 'ThreadJob'. A PowerShell Function App can bundle extra modules itself, in the Function App module folder.
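For example, a function script can inspect the effective search path at run time (the folder names in the comments are illustrative):
# Inside run.ps1: show the module search path used by the worker
$env:PSModulePath -split [System.IO.Path]::PathSeparator
# Typically yields, in order:
#   <FunctionAppRoot>/Modules   - Function App module folder
#   <WorkerAppBase>/Modules     - worker-level module folder (built-in modules, helper module, ThreadJob)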
Like many other shells, PowerShell has the concept of a user profile - a 'profile.ps1' file that runs on startup of PowerShell to do user-specific initialization in the session. It is a very convenient feature for running preparation code that is a prerequisite for all subsequent work. The profile concept is familiar to PowerShell users and is also useful in Azure Functions scenarios.
For example, a recent update of the Azure PowerShell module renamed all the Azure PowerShell cmdlets to replace the 'AzureRm' prefix with 'Az', such as 'Login-AzAccount'. At the same time, a new cmdlet 'Enable-AzureRmAlias' was added to make the original names aliases of the new names, so that existing scripts that use the old names keep working without changes as long as 'Enable-AzureRmAlias' is called beforehand. If the functions of a PowerShell Function App use existing scripts that depend on the old Azure cmdlet names, 'Enable-AzureRmAlias' would have to be placed at the beginning of every 'run.ps1' of those functions, which is redundant and cumbersome. A user profile is the perfect place for running one-time prerequisite code like 'Enable-AzureRmAlias'.
In a PowerShell Function App, a 'profile.ps1' file can optionally be placed right under the Function App folder. If the file exists, it is executed by a PowerShell Manager instance during its instantiation, so that the user-specific initialization is done in the associated Runspace. When there are multiple PowerShell Manager instances in the pool, 'profile.ps1' is executed multiple times, once per PowerShell Manager.
The following is an example of the profile that we use as a template.
# Azure Functions profile.ps1
#
# This profile.ps1 will get executed every "cold start" of your Function App.
# "cold start" occurs when:
#
# * A Function App starts up for the very first time
# * A Function App starts up after being de-allocated due to inactivity
#
# You can define helper functions, run commands, or specify environment variables
# NOTE: any variables defined that are not environment variables will get reset after the first execution
# Authenticate with Azure PowerShell using MSI.
# Remove this if you are not planning on using MSI or Azure PowerShell.
if ($env:MSI_SECRET -and (Get-Module -ListAvailable Az.Accounts)) {
Connect-AzAccount -Identity
}
# Uncomment the next line to enable legacy AzureRm alias in Azure PowerShell.
# Enable-AzureRmAlias
# You can also define functions or aliases that can be referenced in any of your PowerShell functions.
When a PowerShell script is invoked, it runs on a single thread within a Runspace. Every PowerShell Manager instance wraps a Runspace and it can invoke only one script at a time. To support concurrent invocations, the PowerShell language worker has a PowerShell Manager Pool, which hosts reusable PowerShell Manager instances.
The default pool size is currently one, but it is configurable through an environment variable named 'PSWorkerInProcConcurrencyUpperBound', which the user can set in the Application Settings of a Function App. PowerShell Manager instances are created in the pool lazily: the pool starts with a single instance. Upon receiving an InvocationRequest, the worker's main thread tries to check out a PowerShell Manager instance from the pool for the subsequent invocation. At this point, if the pool has no available instance and has not reached its size limit, a new PowerShell Manager instance is created and returned from the pool. In this way, additional resources are allocated only when necessary.
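For local testing with Azure Functions Core Tools, for example, the upper bound can be raised by setting the environment variable before starting the host; in Azure, the same name is set as an Application Setting of the Function App (the value below is illustrative):
# Allow up to 10 concurrent invocations per PowerShell worker
$env:PSWorkerInProcConcurrencyUpperBound = '10'
func start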
Once a PowerShell Manager instance is checked out, the worker's main thread creates a task with it for the actual invocation, which happens on a thread-pool thread. The worker's main thread itself immediately goes back to processing the next incoming request. After the function invocation is done, the PowerShell Manager instance is reset and reclaimed by the pool, waiting to be checked out again for the next invocation.
Note that checking out a PowerShell Manager instance from the pool is a blocking operation. When the number of concurrent invocations reaches the pool size, the worker's main thread blocks on the checkout operation for the next invocation request until a PowerShell Manager instance becomes available. The initial design was to let the checkout operation happen on a thread-pool thread too, so the main thread would just create a task and go back to processing the next request. However, that turned out to cause high lock contention due to too many tasks competing in the checkout operation. Therefore, the design was changed so that the checkout operation happens only on the main thread, and a task gets created only after a PowerShell Manager instance becomes available.
The goal is to let the user declare the dependencies required by functions, and rely on the service automatically locating and installing the dependencies from the PowerShell Gallery or other sources, taking care of selecting the proper versions, and automatically upgrading the dependencies to the latest versions (if allowed by the version specifications provided by the user).
Dependencies are declared in the 'requirements.psd1' file (manifest) as a collection of pairs (<name>, <version specification>). Currently, the version specification should either be an exact and complete version, or strictly match the pattern '<major version>.*'. So, a typical manifest may look like this:
@{
'Az' = '2.*'
'PSDepend' = '0.*'
'Pester' = '5.0.0-alpha3'
}
When the '<major version>.*' format is used, the worker will retrieve the latest available module version (within the specified major version) from the PowerShell Gallery, ignoring prerelease versions.
When the exact version is specified, the worker will retrieve the specified version only, ignoring any other version. Prerelease versions are allowed in this case.
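The following is a minimal sketch, not the worker's actual implementation, of how a manifest entry could be matched against an installed module version:
# Hypothetical helper: does an installed version satisfy a manifest version specification?
function Test-VersionSpec {
    param([string] $InstalledVersion, [string] $Spec)
    if ($Spec.EndsWith('.*')) {
        # '<major version>.*' matches any version with the given major number
        # (the real worker additionally ignores prerelease versions in this case)
        $major = [int] $Spec.Substring(0, $Spec.Length - 2)
        return ([version] ($InstalledVersion -split '-')[0]).Major -eq $major
    }
    # Otherwise the specification is an exact version, prerelease suffix included
    return $InstalledVersion -eq $Spec
}
Test-VersionSpec -InstalledVersion '2.1.0' -Spec '2.*'                    # True
Test-VersionSpec -InstalledVersion '3.0.0' -Spec '2.*'                    # False
Test-VersionSpec -InstalledVersion '5.0.0-alpha3' -Spec '5.0.0-alpha3'    # True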
The number of entries in the requirements.psd1 file should not exceed 10. This limit is not user-configurable.
Installing and upgrading dependencies should be performed automatically, without requiring any interaction with the user, and without interfering with the currently running functions. This represents an important design challenge. In a different context, dependencies could be stored in a single location on the file system, managed by regular PowerShell tools ('Install-Module'/'Save-Module', 'PSDepend', etc.), with that same file system location added to 'PSModulePath' to make all the modules available to scripts running on the machine. This is what PowerShell users normally do, and this approach looks attractive because it is simple and conventional. However, in contexts where multiple independent workers load modules and execute scripts concurrently, while at the same time some module versions are being added, upgraded, or removed, this simple approach causes many known problems. The root causes of these problems lie in the fundamentals of PowerShell and PowerShell module design, and the managed dependencies design in Azure Functions must take this into account. The problems are solved if we satisfy the following conditions:
- Only one writer at a time. No concurrent executions of 'Save-Module', 'PSDepend', or anything else that could perform changes on the same target file path.
- Atomic updates. All workers executing a PowerShell script should always observe a state of dependency files that is the result of a successful and complete execution of 'Save-Module' or a similar tool. The workers should never observe any partial results.
- Immutable view. As soon as a set of dependency files is exposed to a worker for module loading for the first time, this set of files should never change during the lifetime of this worker. Deletions, additions, or modifications are not acceptable.
The main design idea is to partition the installed dependencies in such a way that every PowerShell worker gets exactly one complete and immutable set of dependencies for the lifetime of this worker (until restart). The same set of dependencies can be shared by multiple PowerShell workers, but each worker is strictly tied to a single set.
On the first function load request, the PowerShell worker reads the manifest and installs all the required dependencies into a dedicated folder on a file storage shared between all PowerShell workers within the Function App. The subsequent function invocation requests are blocked until the installation of all the dependencies is completed successfully. After the successful installation, the PowerShell worker inserts the path of the folder into the 'PSModulePath' variable of the PowerShell worker process, so that the installed modules become available to all functions running on this worker, and allows function invocation requests to proceed. The path to this folder is inserted into 'PSModulePath' before the 'functionAppRoot\Modules' path, so that the managed dependencies folder is discovered first when modules are imported. This entire set of dependencies becomes an immutable snapshot: once created and exposed to PowerShell workers, it never changes; no modules or module versions are ever added to or removed from this snapshot.
On the next function load request (normally received on a worker start), the worker reads the manifest and checks if the latest snapshot contains the dependencies satisfying the manifest. If the latest snapshot is acceptable, the worker makes 'PSModulePath' point to this snapshot folder and allows the next function invocation to proceed immediately. At the same time, the worker starts a background installation of all the dependencies into a new folder, which after successful completion becomes the latest snapshot. At this point, other starting workers will be able to find and use the new snapshot. The workers that were already running when the new snapshot was installed will be able to use it after restart.
A snapshot is considered acceptable if it contains any version allowed by the manifest for each required dependency. For example, Az 2.1 is allowed by the manifest containing 'Az' = '2.*', so the snapshot containing Az 2.1 will be considered acceptable, even if Az 2.2 is published on the PowerShell Gallery. As a result, the next function invocation will be allowed to proceed with Az 2.1 without waiting for any other Az version to be installed, and without even trying to contact the PowerShell Gallery to discover the latest module version. All these activities can be performed in the background, without blocking function invocations.
However, if the latest snapshot is not acceptable (i.e. it does not contain module versions required by the manifest), the worker starts installing the dependencies into a new snapshot, and all the subsequent function invocation requests are blocked, waiting for the new snapshot installation to complete.
When a snapshot installation starts, the dependencies are first installed into a folder with a name following a special pattern ('*.ri'), so that this snapshot is not picked up by any worker prematurely, before the installation is complete. After successful completion, the content of the snapshot is compared with the content of the latest installed snapshot (using module names and versions only). If the new snapshot is different, it is atomically promoted by renaming the folder to follow a different pattern ('*.r'), which indicates to other workers that this snapshot is ready to use. However, if the new snapshot is equivalent to the latest installed snapshot (all module names and versions are equal), the new snapshot is immediately removed instead of being promoted, and the creation time of the latest installed snapshot is updated to the current time. If the installation fails or cannot complete for any reason (for example, the worker restarts, crashes, or gets decommissioned), the folder stays in the installing state until removed.
Background snapshot installation is initiated on worker start, and then every 'MDMaxBackgroundUpgradePeriod', which makes sure that new snapshots eventually get installed even if workers do not restart for a long time.
Background snapshot installation is not performed if there is any installing or installed snapshot with the creation timestamp newer than 'MDMinBackgroundUpgradePeriod', in order to avoid excessive snapshot downloads.
Every 'MDNewSnapshotCheckPeriod', each worker checks if there is any snapshot installed after the one that is currently used by this worker. If it finds a newer snapshot, it exits with a special process exit code (200) that makes the Functions Host restart this worker without counting this situation as an error, and the new worker process picks up the latest installed snapshot.
Incomplete and old snapshots that are no longer in use are periodically removed from the file storage. In order to allow detecting unused snapshots, each PowerShell worker keeps "touching" a file named '.used' at the root of the used snapshot folder every 'MDHeartbeatPeriod'. Before and after installing any new snapshot, every PowerShell worker looks for unused snapshots by checking the folder creation time and the '.used' file modification time. If both these time values are older than ('MDHeartbeatPeriod' + 'MDOldSnapshotHeartbeatMargin'), the snapshot is considered unused, so the PowerShell worker removes it. The latest 'MDMinNumberOfSnapshotsToKeep' snapshots will never be removed, regardless of usage.
'MDHeartbeatPeriod', 'MDOldSnapshotHeartbeatMargin', 'MDNewSnapshotCheckPeriod', 'MDMinBackgroundUpgradePeriod', 'MDMaxBackgroundUpgradePeriod', and 'MDMinNumberOfSnapshotsToKeep' are environment variables configurable via the Application Settings of a Function App. 'MDHeartbeatPeriod', 'MDOldSnapshotHeartbeatMargin', 'MDNewSnapshotCheckPeriod', 'MDMinBackgroundUpgradePeriod', and 'MDMaxBackgroundUpgradePeriod' are expected to contain strings in the format that can be parsed by the 'System.TimeSpan.Parse' method.
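For example, values like the following are valid for the time-span settings (the values themselves are illustrative):
# Parsed with System.TimeSpan.Parse, e.g. 'd.hh:mm:ss' or 'hh:mm:ss'
[System.TimeSpan]::Parse('7.00:00:00')   # 7 days
[System.TimeSpan]::Parse('01:30:00')     # 1 hour 30 minutes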
No other changes are ever performed to the snapshots that were once installed.
In this design, upgrading dependencies is conceptually decoupled from executing functions. Upgrading dependencies can be performed on any schedule by one or multiple agents, (almost) without any coordination between each other or with the workers executing functions. This allows us to make independent decisions on whether to run it from a separate service or keep it in PowerShell workers, and schedule the upgrade as often as we want. For now, upgrading dependencies is still performed by the PowerShell workers, just to avoid the overhead of deploying and maintaining a separate service. However, the design keeps this option open.
See How to try experimental Durable PowerShell functions to try this out.
Durable Functions is essentially a stateful workflow in a serverless environment. It consists of an orchestrator function and one or more activity functions. Durable Functions runs in a completely asynchronous way: the orchestrator function stops after triggering an activity function, and is restarted later once the activity function finishes execution. The act of triggering each activity function, as well as the activity function's result, is saved along the way. After the orchestrator function is restarted, it replays the execution from the very beginning, but skips the actions that are already done and uses the results directly from the saved logs.
Durable Functions targets five usage patterns that usually require long-running jobs:
- Function Chaining - schedule and wait for jobs one by one.
- Fan-out/Fan-in - schedule jobs, then wait for all or any to finish.
- Async HTTP APIs - trigger a function by an HTTP call that returns a status endpoint for polling.
- Monitoring - flexible timer trigger that can manage its own life-cycle based on status change.
- Human Interaction - introduce "external event" for human interaction.
We built a prototype of Durable Functions in the PowerShell worker to enable the 'Function Chaining' pattern. A few problems we had to solve in the prototype were:
- Allow the user to start an activity function from within the orchestrator function.
- Make the orchestrator function able to be stopped safely at a certain point.
- Make the PowerShell worker able to replay an orchestrator function, namely skipping the actions that were done in previous runs/replays based on the saved logs.
They were solved by introducing a new cmdlet, 'Invoke-DurableActivity', and by having the worker invoke the orchestrator function with the asynchronous PowerShell API.
Internally, the worker shares context information with the cmdlet, including the existing saved logs sent from the host about the running orchestrator function, and a wait handle - let's call it A - which the cmdlet sets after it triggers an activity function. The asynchronous API used to start the orchestrator function returns another wait handle - let's call it B - which is set when the invocation finishes. The invoking thread then calls 'WaitHandle.WaitAny' on those two wait handles.
- If the call to 'WaitAny' returns because wait handle A was set, an activity function was just triggered and the orchestrator function should be stopped now (it will be triggered again later after the activity function finishes). So, in this case, the invoking thread stops the orchestrator function that is running asynchronously.
- If the call to 'WaitAny' returns because wait handle B was set, the orchestrator function has run to completion.
The cmdlet 'Invoke-DurableActivity' has the following syntax, with '-FunctionName' being the name of the activity function to invoke and '-Input' being the argument to pass to that activity function.
Invoke-DurableActivity [-FunctionName] <string> [[-Input] <Object>] [<CommonParameters>]
When it is invoked to trigger an activity function, it first checks the existing logs shared by the worker to see if this invocation of the activity function has already been done previously. If so, the cmdlet simply returns the saved result. If not, the cmdlet will
- trigger the activity function by sending a message to the host;
- set the wait handle shared by the worker, to notify the worker that the activity function has been triggered;
- wait on a private wait handle that will only be set when the 'StopProcessing' method of the cmdlet is called. That method is called only when the pipeline in which this cmdlet is running is being stopped.
The third step is very important in this stop-and-replay model of Durable Functions, because when stopping an invocation that is running asynchronously, we don't want to interrupt arbitrary code execution that is happening in the pipeline. By having the cmdlet 'Invoke-DurableActivity' wait for 'StopProcessing' to be called, we know for sure that the pipeline execution has paused at a safe place, ready to be stopped by the invoking thread.
The following is an example of PowerShell Durable Function that runs in the Function Chaining pattern:
<#
orchestrator function
#>
param($context)
$output = @()
$output += Invoke-DurableActivity -FunctionName "E1_SayHello" -Input "Tokyo"
$output += Invoke-DurableActivity -FunctionName "E1_SayHello" -Input "Seattle"
$output += Invoke-DurableActivity -FunctionName "E1_SayHello" -Input "London"
return $output
<#
activity function
E1_SayHello
#>
param($name)
return "Hello $name"
As you may notice, the PowerShell Durable Functions scripts above use 'return'. This is because both the orchestrator and activity functions are special: they don't have output bindings, and the return value of the invocation is the only way to pass state back to the host. This is why we mentioned in the "Input and Output Binding" section that the 'return' syntax for passing output binding values is currently not supported, but may be considered in the future with the appropriate semantics.
This document discusses the current design of the PowerShell language worker. This is an ongoing project, and more iterations will follow to add new features. This document will be updated regularly as the project progresses.