Cherry-pick 7 changes into 1.14.1 (#14762)
This cherry-picks the following 7 changes into 1.14.1:


1b7f654

b539c36

12d9117

ff3aed8

3d79b1f

c0d2472

e9ec4c0

---------

Signed-off-by: Cliff Woolley <[email protected]>
Co-authored-by: Sheil Kumar <[email protected]>
Co-authored-by: Ye Wang <[email protected]>
Co-authored-by: Ubuntu <[email protected]>
Co-authored-by: Lei Zhang <[email protected]>
Co-authored-by: Misha Chornyi <[email protected]>
Co-authored-by: Cliff Woolley <[email protected]>
Co-authored-by: cao lei <[email protected]>
Co-authored-by: Lei Cao <[email protected]>
Co-authored-by: Tianlei Wu <[email protected]>
Co-authored-by: Vincent Wang <[email protected]>
Co-authored-by: Jian Chen <[email protected]>
12 people authored Feb 23, 2023
1 parent 073332b commit 3e4a471
Showing 106 changed files with 2,927 additions and 781 deletions.
4 changes: 4 additions & 0 deletions cmake/onnxruntime_rocm_hipify.cmake
@@ -19,6 +19,10 @@ set(contrib_ops_excluded_files
"bert/fast_gelu_impl.h"
"bert/fast_gelu.cc"
"bert/fast_gelu.h"
"bert/relative_attn_bias.cc"
"bert/relative_attn_bias.h"
"bert/relative_attn_bias_impl.cu"
"bert/relative_attn_bias_impl.h"
"bert/skip_layer_norm.cc"
"bert/skip_layer_norm.h"
"bert/skip_layer_norm_impl.cu"
63 changes: 59 additions & 4 deletions docs/ContribOperators.md
@@ -30,6 +30,7 @@ Do not modify directly.*
* <a href="#com.microsoft.FusedConv">com.microsoft.FusedConv</a>
* <a href="#com.microsoft.FusedGemm">com.microsoft.FusedGemm</a>
* <a href="#com.microsoft.FusedMatMul">com.microsoft.FusedMatMul</a>
* <a href="#com.microsoft.GatedRelativePositionBias">com.microsoft.GatedRelativePositionBias</a>
* <a href="#com.microsoft.GatherND">com.microsoft.GatherND</a>
* <a href="#com.microsoft.Gelu">com.microsoft.Gelu</a>
* <a href="#com.microsoft.GemmFastGelu">com.microsoft.GemmFastGelu</a>
@@ -152,7 +153,7 @@ This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
<dd>Attention mask with shape (batch_size, 1, max_sequence_length, max_sequence_length), (batch_size, total_sequence_length) or (batch_size, sequence_length, total_sequence_length), or index with shape (batch_size) or (2 * batch_size)</dd>
<dt><tt>past</tt> (optional) : T</dt>
<dd>past state for key and value with shape (2, batch_size, num_heads, past_sequence_length, head_size). When past_present_share_buffer is set, its shape is (2, batch_size, num_heads, max_sequence_length, head_size)</dd>
-<dt><tt>extra_add</tt> (optional) : T</dt>
+<dt><tt>relative_position_bias</tt> (optional) : T</dt>
<dd>additional add to QxK' with shape (batch_size, num_heads, sequence_length, total_sequence_length)</dd>
<dt><tt>past_sequence_length</tt> (optional) : M</dt>
<dd>When past_present_share_buffer is used, it is required to specify past_sequence_length (could be 0).</dd>
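
To make the past/present shape contract described above concrete, here is a minimal PyTorch sketch of the shared key/value buffer used when past_present_share_buffer is set, with illustrative sizes. It only demonstrates the shapes involved, not how the kernel actually updates the cache.

```python
import torch

# Illustrative sizes (assumptions, not fixed by the operator).
batch_size, num_heads, head_size = 2, 4, 16
max_sequence_length = 32
past_sequence_length = 5       # entries already present in the buffer
sequence_length = 1            # new tokens added in this step

# With past_present_share_buffer, past and present share one preallocated buffer
# of shape (2, batch_size, num_heads, max_sequence_length, head_size).
kv_buffer = torch.zeros(2, batch_size, num_heads, max_sequence_length, head_size)

# New key/value projections for the current step land right after the
# past_sequence_length entries that are already valid.
new_key = torch.randn(batch_size, num_heads, sequence_length, head_size)
new_value = torch.randn(batch_size, num_heads, sequence_length, head_size)
end = past_sequence_length + sequence_length
kv_buffer[0, :, :, past_sequence_length:end] = new_key
kv_buffer[1, :, :, past_sequence_length:end] = new_value

print(kv_buffer.shape)  # torch.Size([2, 2, 4, 32, 16])
```
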
@@ -1608,6 +1609,58 @@ This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
</dl>


### <a name="com.microsoft.GatedRelativePositionBias"></a><a name="com.microsoft.gatedrelativepositionbias">**com.microsoft.GatedRelativePositionBias**</a>

  query_layer = (query_layer + query_bias).reshape(batch_size, seq_len, num_heads, head_size).transpose(1, 2)
  gate_u, gate_r = torch.sigmoid(
    self.gate_ur_linear(query_layer).view(batch_size, num_head, seq_len, 2, D/2).sum(-1, keepdim=False)
  ).chunk(2, dim=-1)
  gate_u_1 = gate_u * (gate_r * self.eco_a - 1.0) + 2.0
  rel_pos_bias = gate_u_1 * rel_pos

#### Version

This version of the operator has been available since version 1 of the 'com.microsoft' operator set.

#### Attributes

<dl>
<dt><tt>num_heads</tt> : int (required)</dt>
<dd>Number of attention heads</dd>
</dl>

#### Inputs

<dl>
<dt><tt>query_layer</tt> : T</dt>
<dd>tensor with shape (batch_size, seq_len, num_heads x head_size)</dd>
<dt><tt>query_bias</tt> : T</dt>
<dd>1-d tensor with shape (num_heads x head_size)</dd>
<dt><tt>rel_pos</tt> : T</dt>
<dd>tensor with shape (1, num_head, seq_len, seq_len)</dd>
<dt><tt>weight</tt> : T</dt>
<dd>gemm weight for the gated_ur_linear, shape (head_size, D), D is divisible by 2</dd>
<dt><tt>bias</tt> : T</dt>
<dd>bias for the gated_ur_linear, shape (D)</dd>
<dt><tt>eco_a</tt> : T</dt>
<dd>tensor of shape (1, num_heads, 1, 1)</dd>
</dl>

#### Outputs

<dl>
<dt><tt>output</tt> : T</dt>
<dd>output tensor with shape (batch_size, num_heads, seq_len, seq_len)</dd>
</dl>

#### Type Constraints

<dl>
<dt><tt>T</tt> : tensor(float), tensor(float16)</dt>
<dd>Constrain input and output types to float tensors.</dd>
</dl>
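
For readers who want to trace the gating pseudocode above end to end, here is a minimal self-contained PyTorch sketch with illustrative sizes. It assumes gate_ur_linear applies the (head_size, D) weight and (D) bias described above as a plain matmul plus add; it illustrates the math only, not the actual kernel.

```python
import torch

# Illustrative sizes (assumptions); D must be divisible by 2.
batch_size, seq_len, num_heads, head_size, D = 2, 8, 4, 16, 32

query_layer = torch.randn(batch_size, seq_len, num_heads * head_size)
query_bias = torch.randn(num_heads * head_size)
rel_pos = torch.randn(1, num_heads, seq_len, seq_len)
weight = torch.randn(head_size, D)      # gate_ur_linear weight
bias = torch.randn(D)                   # gate_ur_linear bias
eco_a = torch.randn(1, num_heads, 1, 1)

# (batch, seq, heads*size) -> (batch, heads, seq, size)
q = (query_layer + query_bias).reshape(batch_size, seq_len, num_heads, head_size).transpose(1, 2)

# gate_ur_linear, then sum each trailing pair of D/2 values and squash with sigmoid
gates = torch.sigmoid(
    (q @ weight + bias).reshape(batch_size, num_heads, seq_len, 2, D // 2).sum(-1)
)
gate_u, gate_r = gates.chunk(2, dim=-1)           # each (batch, heads, seq, 1)

gate_u_1 = gate_u * (gate_r * eco_a - 1.0) + 2.0
output = gate_u_1 * rel_pos                       # broadcasts to (batch, heads, seq, seq)
print(output.shape)                               # torch.Size([2, 4, 8, 8])
```
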


### <a name="com.microsoft.GatherND"></a><a name="com.microsoft.gathernd">**com.microsoft.GatherND**</a>

Given `data` tensor of rank r >= 1, and `indices` tensor of rank q >= 1, gather
@@ -2222,7 +2275,7 @@ This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
<dd>Number of attention heads</dd>
</dl>

-#### Inputs (2 - 5)
+#### Inputs (2 - 6)

<dl>
<dt><tt>query</tt> : T</dt>
@@ -2235,6 +2288,8 @@ This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
<dd>Bias tensor with shape (hidden_size + hidden_size + v_hidden_size) from input projection</dd>
<dt><tt>key_padding_mask</tt> (optional) : M</dt>
<dd>Key padding mask with shape (batch_size) or (batch_size, kv_sequence_length)</dd>
<dt><tt>relative_position_bias</tt> (optional) : T</dt>
<dd>relative position bias: addition to QxK' with shape (batch_size, num_heads, sequence_length, total_sequence_length) or (1, num_heads, sequence_length, total_sequence_length)</dd>
</dl>
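
A rough sketch of where this new input enters the computation, assuming standard scaled dot-product attention and relying on ordinary PyTorch broadcasting when the bias has a leading dimension of 1 (illustrative only, not the operator's kernel):

```python
import torch

# Illustrative sizes (assumptions, not fixed by the operator).
batch_size, num_heads, sequence_length, total_sequence_length, head_size = 2, 4, 6, 6, 16

q = torch.randn(batch_size, num_heads, sequence_length, head_size)
k = torch.randn(batch_size, num_heads, total_sequence_length, head_size)

# The bias may carry a batch dimension of 1 and broadcast across the batch.
relative_position_bias = torch.randn(1, num_heads, sequence_length, total_sequence_length)

scores = q @ k.transpose(-1, -2) / head_size ** 0.5   # scaled QxK'
scores = scores + relative_position_bias              # the additional add to QxK'
attention_probs = torch.softmax(scores, dim=-1)
print(attention_probs.shape)  # torch.Size([2, 4, 6, 6])
```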

#### Outputs
@@ -3221,7 +3276,7 @@ This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
left-side padding, mask_index has shape (2 * batch_size), where the values are the exclusive end positions followed by
the inclusive start positions. When unidirectional is 1, each token attends only to previous tokens. For GPT-2, both past
and present state are optional. Present state could appear in output even when past state is not in input.
-Current version does not support past/present, extra_add and qkv_hidden_sizes.
+Current version does not support past/present, relative_position_bias and qkv_hidden_sizes.
TODO: Support them if needed in the future.

#### Version
@@ -3286,7 +3341,7 @@ This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
<dd>Attention mask with shape (batch_size, 1, max_sequence_length, max_sequence_length), (batch_size, past_sequence_length + sequence_length) or (batch_size, sequence_length, past_sequence_length + sequence_length), or index with shape (batch_size) or (2 * batch_size).</dd>
<dt><tt>past</tt> (optional) : Q</dt>
<dd>past state for key and value with shape (2, batch_size, num_heads, past_sequence_length, head_size).</dd>
-<dt><tt>extra_add</tt> (optional) : S</dt>
+<dt><tt>relative_position_bias</tt> (optional) : S</dt>
<dd>additional add to QxK' with shape (batch_size, num_heads, sequence_length, sequence_length).</dd>
</dl>
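
As a small illustration of the (2 * batch_size) mask_index form described above for left-side padding (exclusive end positions followed by inclusive start positions), here is a minimal sketch assuming two left-padded sequences of a common maximum length; the exact packing convention should be checked against the operator documentation.

```python
import torch

# Two left-padded sequences of maximum length 8; the real tokens occupy the
# last 5 and last 3 positions respectively.
max_sequence_length = 8
valid_lengths = [5, 3]

starts = [max_sequence_length - n for n in valid_lengths]  # inclusive start positions
ends = [max_sequence_length] * len(valid_lengths)          # exclusive end positions

# (2 * batch_size): exclusive end positions first, then inclusive start positions.
mask_index = torch.tensor(ends + starts, dtype=torch.int32)
print(mask_index)  # tensor([8, 8, 3, 5], dtype=torch.int32)
```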
