Adding sparse attention news index item (microsoft#376)
Co-authored-by: Jeff Rasley <[email protected]>
arashashari and jeffra authored Sep 10, 2020
1 parent 59ce90d commit c76769c
Showing 1 changed file with 15 additions and 0 deletions.
15 changes: 15 additions & 0 deletions docs/_posts/2020-09-08-sparse-attention-news.md
@@ -0,0 +1,15 @@
---
layout: single
title: "Powering 10x longer sequences and 6x faster execution through DeepSpeed Sparse Attention"
excerpt: ""
categories: news
new_post: true
date: 2020-09-09 00:00:00
---

DeepSpeed offers sparse attention kernels, an instrumental technology for supporting long input sequences, whether of text, image, or sound. Compared with classic dense Transformers, it powers input sequences an order of magnitude longer and obtains up to 6x faster execution with comparable accuracy. It also outperforms state-of-the-art sparse implementations with 1.5-3x faster execution. Furthermore, our sparse kernels support efficient execution of flexible sparse formats and empower users to innovate on their custom sparse structures.

* For a brief overview, see our [press release]({{ site.press_release_v3 }}).
* For a detailed technology deep dive, see our [blog post](https://www.deepspeed.ai/news/2020/09/08/sparse-attention.html).
* For a tutorial on how to use sparse attention, see our [Sparse attention tutorial](https://www.deepspeed.ai/tutorials/sparse-attention/).
* The source code for our sparse attention kernels can be found in the [DeepSpeed repo](https://github.com/microsoft/DeepSpeed), and BERT pre-training code using sparse attention can be found in the [DeepSpeedExamples repo](https://github.com/microsoft/DeepSpeedExamples).
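
As a rough sketch of how these kernels are invoked from PyTorch, the snippet below replaces a dense attention call with a block-sparse one. It assumes the `SparseSelfAttention` and `FixedSparsityConfig` interfaces described in the sparse attention tutorial; exact argument names, defaults, and tensor layouts may differ across versions.

```python
# Minimal sketch: block-sparse self-attention with DeepSpeed sparse attention kernels.
# Assumes a CUDA device and fp16 inputs, since the kernels run as block-sparse GPU ops.
import torch
from deepspeed.ops.sparse_attention import SparseSelfAttention, FixedSparsityConfig

batch, heads, seq_len, head_dim = 2, 4, 1024, 64  # seq_len must be a multiple of the block size

# "Fixed" block-sparse pattern: local attention blocks plus a few global blocks per head.
sparsity_config = FixedSparsityConfig(num_heads=heads, block=16)
sparse_attn = SparseSelfAttention(sparsity_config=sparsity_config)

# Query/key/value laid out as (batch, heads, sequence, head_dim).
q = torch.randn(batch, heads, seq_len, head_dim, device="cuda", dtype=torch.half)
k = torch.randn_like(q)
v = torch.randn_like(q)

context = sparse_attn(q, k, v)  # same shape as q; only the sparse blocks are computed
print(context.shape)            # torch.Size([2, 4, 1024, 64])
```

Swapping in another layout (for example a BigBird- or Longformer-style config, or a custom `SparsityConfig` subclass) changes only the sparsity pattern, which is how custom sparse structures plug into the same kernels.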
