diff --git a/src/content/changelogs/pipelines.yaml b/src/content/changelogs/pipelines.yaml
new file mode 100644
index 000000000000000..086cef981a08afa
--- /dev/null
+++ b/src/content/changelogs/pipelines.yaml
@@ -0,0 +1,11 @@
+---
+link: "/pipelines/reference/changelog/"
+productName: Pipelines
+productLink: "/pipelines/"
+productArea: Developer Platform
+productAreaLink: "/pipelines/"
+entries:
+  - publish_date: "2025-01-30"
+    title: Pipelines is now in public beta.
+    description: |-
+      Pipelines, a new product to ingest and store real-time streaming data, is now in public beta. The public beta is available to any user with a [free or paid Workers plan](/workers/platform/pricing/). Create a pipeline, and you'll be able to post data to it via HTTP or from a Cloudflare Worker. Pipelines handles batching, buffering, and partitioning the data before writing it to an R2 bucket of your choice. Use it to collect clickstream data, or to ingest logs from a service. Start building with our [get started guide](/pipelines/get-started/).
diff --git a/src/content/docs/pipelines/configuration/batching.mdx b/src/content/docs/pipelines/configuration/batching.mdx
new file mode 100644
index 000000000000000..9176675d682195b
--- /dev/null
+++ b/src/content/docs/pipelines/configuration/batching.mdx
@@ -0,0 +1,31 @@
+---
+pcx_content_type: concept
+title: Batching
+sidebar:
+ order: 10
+---
+
+Pipelines automatically batches records received via HTTP or from a Worker. Batching reduces the number of output files written to your destination, which makes the data more efficient to query.
+
+There are three ways to define how requests are batched:
+
+1. `batch-max-mb`: The maximum amount of data that will be batched, in megabytes. Default is 10 MB, maximum is 100 MB.
+2. `batch-max-rows`: The maximum number of rows or events in a batch before data is written. Default, and maximum, is 10,000 rows.
+3. `batch-max-seconds`: The maximum duration of a batch before data is written, in seconds. Default is 15 seconds, maximum is 600 seconds.
+
+Pipelines batch definitions are hints. A pipeline follows these hints closely, but batches may not match the configured limits exactly.
+
+All three batch definitions work together. Whichever limit is reached first triggers the delivery of a batch.
+
+For example, with `batch-max-mb` set to 100 and `batch-max-seconds` set to 600, a batch is delivered as soon as 100 MB of events are posted to the pipeline. However, if it takes longer than 600 seconds for 100 MB of events to arrive, a batch containing all the messages posted during those 600 seconds is created and delivered.
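+
+For instance, here is a sketch of creating a pipeline with these two hints set through Wrangler (the pipeline and bucket names are illustrative):
+
+```sh
+npx wrangler pipelines create clickstream-pipeline \
+  --r2 my-bucket \
+  --batch-max-mb 100 \
+  --batch-max-seconds 600
+```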
+
+
+## Batch settings
+
+You can configure the following batch-level settings to adjust how your pipeline creates batches:
+
+| Setting | Default | Minimum | Maximum |
+| ----------------------------------------- | ----------- | --------- | ----------- |
+| Maximum Batch Size `batch-max-mb` | 10 MB | 0.001 MB | 100 MB |
+| Maximum Batch Timeout `batch-max-seconds` | 15 seconds | 0 seconds | 600 seconds |
+| Maximum Batch Rows `batch-max-rows` | 10,000 rows | 1 row | 10,000 rows |
diff --git a/src/content/docs/pipelines/configuration/index.mdx b/src/content/docs/pipelines/configuration/index.mdx
new file mode 100644
index 000000000000000..06fe350e080ded6
--- /dev/null
+++ b/src/content/docs/pipelines/configuration/index.mdx
@@ -0,0 +1,12 @@
+---
+title: Configuration
+pcx_content_type: navigation
+sidebar:
+ order: 4
+ group:
+ hideIndex: true
+---
+
+import { DirectoryListing } from "~/components"
+
+<DirectoryListing />
\ No newline at end of file
diff --git a/src/content/docs/pipelines/configuration/partition-filenames.mdx b/src/content/docs/pipelines/configuration/partition-filenames.mdx
new file mode 100644
index 000000000000000..04d5f89e2dde1fb
--- /dev/null
+++ b/src/content/docs/pipelines/configuration/partition-filenames.mdx
@@ -0,0 +1,30 @@
+---
+pcx_content_type: concept
+title: Partitions and Prefixes
+sidebar:
+ order: 11
+
+---
+
+## Partitions
+Partitioning organizes data into directories based on specific fields to improve query performance. It helps by reducing the amount of data scanned for queries, enabling faster reads. By default, Pipelines partitions data by event date. This will be customizable in the future.
+
+For example, the output from a Pipeline in your R2 bucket might look like this:
+```sh
+- event_date=2024-09-06/hr=15/37db9289-15ba-4e8b-9231-538dc7c72c1e-15.json.gz
+- event_date=2024-09-06/hr=15/37db9289-15ba-4e8b-9231-538dc7c72c1e-16.json.gz
+```
+
+## Prefix
+You can specify an optional prefix for all the output files stored in your specified R2 bucket. The data will remain partitioned by date.
+
+To modify the prefix for a Pipeline using Wrangler:
+```sh
+wrangler pipelines update [PIPELINE-NAME] --prefix "test"
+```
+
+All the output records generated by your pipeline will be stored under the prefix "test", and will look like this:
+```sh
+- test/event_date=2024-09-06/hr=15/37db9289-15ba-4e8b-9231-538dc7c72c1e-15.json.gz
+- test/event_date=2024-09-06/hr=15/37db9289-15ba-4e8b-9231-538dc7c72c1e-16.json.gz
+```
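+
+You can also set the prefix when you create a pipeline. A sketch, with illustrative names:
+
+```sh
+npx wrangler pipelines create my-pipeline --r2 my-bucket --prefix "test"
+```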
diff --git a/src/content/docs/pipelines/examples/index.mdx b/src/content/docs/pipelines/examples/index.mdx
new file mode 100644
index 000000000000000..f92017ab10cd4b4
--- /dev/null
+++ b/src/content/docs/pipelines/examples/index.mdx
@@ -0,0 +1,12 @@
+---
+title: Examples
+pcx_content_type: navigation
+sidebar:
+ order: 4
+ group:
+ hideIndex: false
+---
+
+import { DirectoryListing } from "~/components"
+
+<DirectoryListing />
\ No newline at end of file
diff --git a/src/content/docs/pipelines/get-started.mdx b/src/content/docs/pipelines/get-started.mdx
new file mode 100644
index 000000000000000..4f7eaf671048f52
--- /dev/null
+++ b/src/content/docs/pipelines/get-started.mdx
@@ -0,0 +1,100 @@
+---
+title: Get started
+pcx_content_type: get-started
+sidebar:
+ order: 2
+head:
+ - tag: title
+ content: Get started
+---
+
+import { Render, PackageManagers } from "~/components";
+
+Pipelines lets you ingest real-time data streams, such as click events on a website or logs from a service. You can send data to a pipeline from a Worker or via HTTP. Your pipeline handles batching requests and scales in response to your workload. Finally, it delivers the output to R2 as JSON files, automatically handling partitioning and compression for efficient querying.
+
+By following this guide, you will:
+
+1. Create your first Pipeline.
+2. Connect it to your R2 bucket.
+3. Post data to it via HTTP.
+4. Verify the output file written to R2.
+
+:::note
+
+Pipelines is in **public beta**, and any developer with a [paid Workers plan](/workers/platform/pricing/#workers) can start using Pipelines immediately.
+
+:::
+
+## Prerequisites
+
+To use Pipelines, you will need:
+
+<Render file="prereqs" product="workers" />
+
+## 1. Set up an R2 bucket
+
+Pipelines lets you ingest records in real time and load them into an R2 bucket. Create a bucket by following the [get started guide for R2](/r2/get-started/). Save the bucket name for the next step.
+
+## 2. Create a Pipeline
+
+To create a pipeline using Wrangler, run the following command in the terminal, and specify:
+
+- The name of your Pipeline
+- The name of the R2 bucket you created in step 1
+
+```sh
+npx wrangler pipelines create [PIPELINE-NAME] --r2 [R2-BUCKET-NAME]
+```
+
+After running this command, you will be prompted to authorize Cloudflare Workers Pipelines to create R2 API tokens on your behalf. Your pipeline uses these tokens when loading data into your bucket. You can approve the request through the browser link, which opens automatically.
+
+If you prefer not to authenticate this way, you may pass your [R2 API Tokens](/r2/api/s3/tokens/) to Wrangler:
+```sh
+npx wrangler pipelines create [PIPELINE-NAME] --r2 [R2-BUCKET-NAME] --access-key-id [ACCESS-KEY-ID] --secret-access-key [SECRET-ACCESS-KEY]
+```
+
+When choosing a name for your Pipeline:
+
+1. Ensure it is descriptive and relevant to the type of events you intend to ingest. You cannot change the name of the pipeline after creating it.
+2. The name must be between 1 and 63 characters long.
+3. The name can contain only letters, numbers, and dashes (`-`).
+4. The name must start and end with a letter or a number.
+
+Once you create your pipeline, you will receive an HTTP endpoint that you can post data to. You should see output similar to the following:
+
+```sh output
+🌀 Authorizing R2 bucket "[R2-BUCKET-NAME]"
+🌀 Creating pipeline named "[PIPELINE-NAME]"
+✅ Successfully created pipeline [PIPELINE-NAME] with ID [PIPELINE-ID]
+
+You can now send data to your pipeline with:
+ curl "https://.pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'
+```
+
+## 3. Post data to your pipeline
+
+Use a curl command in your terminal to post an array of JSON objects to the endpoint you received in step 2.
+
+```sh
+curl -H "Content-Type:application/json" \
+ -d '[{"account_id":"test", "other_data": "test"},{"account_id":"test","other_data": "test2"}]' \
+
+```
+
+Once the Pipeline successfully accepts the data, you will receive a success message.
+
+Pipelines handles batching the data, so you can continue posting to the same endpoint. Once a batch fills up, the data is partitioned by date and written to your R2 bucket.
+
+## 4. Verify in R2
+
+Go to the R2 bucket you created in step 1 via [the Cloudflare dashboard](https://dash.cloudflare.com/). You should see a prefix for today's date. Click through, and you will find a file containing the JSON data you posted in step 3.
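+
+Alternatively, you can download an output file with Wrangler. A sketch, assuming an object key copied from the dashboard (the key shown is illustrative):
+
+```sh
+npx wrangler r2 object get "[R2-BUCKET-NAME]/event_date=2025-01-30/hr=15/[FILE-NAME].json.gz" --file output.json.gz
+```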
+
+## Summary
+
+By completing this guide, you have:
+
+- Created a pipeline.
+- Connected the pipeline to an R2 bucket as its destination.
+- Posted data to the pipeline via HTTP.
+- Verified the output in the R2 bucket.
+
diff --git a/src/content/docs/pipelines/index.mdx b/src/content/docs/pipelines/index.mdx
new file mode 100644
index 000000000000000..da12a8b314e5c0b
--- /dev/null
+++ b/src/content/docs/pipelines/index.mdx
@@ -0,0 +1,61 @@
+---
+title: Overview
+type: overview
+pcx_content_type: overview
+sidebar:
+ order: 1
+ badge:
+ text: Beta
+head:
+ - tag: title
+ content: Pipelines
+---
+
+import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct } from "~/components";
+
+
+
+Ingest and load real-time data streams into R2 with Cloudflare Pipelines.
+
+
+
+
+
+Pipelines lets you ingest and load real-time data streams into R2 without managing any infrastructure. You can send data to a pipeline via HTTP or from a Worker. Your pipeline handles batching the data, generating compressed JSON files, and delivering them to an R2 bucket.
+
+Refer to the [get started guide](/pipelines/get-started/) to start building with Pipelines.
+
+***
+## Features
+
+
+Create your first Pipeline, and send data to it.
+
+
+
+Each pipeline generates an HTTP endpoint to use for ingestion.
+
+
+
+A pipeline buffers records before creating JSON files and delivering them to R2.
+
+
+***
+
+## More resources
+
+
+
+
+Learn about Pipelines limits.
+
+
+
+Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers.
+
+
+
+Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.
+
+
+
\ No newline at end of file
diff --git a/src/content/docs/pipelines/observability/index.mdx b/src/content/docs/pipelines/observability/index.mdx
new file mode 100644
index 000000000000000..c1576788609ddc5
--- /dev/null
+++ b/src/content/docs/pipelines/observability/index.mdx
@@ -0,0 +1,12 @@
+---
+title: Observability
+pcx_content_type: navigation
+sidebar:
+ order: 5
+ group:
+ hideIndex: true
+---
+
+import { DirectoryListing } from "~/components"
+
+<DirectoryListing />
\ No newline at end of file
diff --git a/src/content/docs/pipelines/observability/metrics.mdx b/src/content/docs/pipelines/observability/metrics.mdx
new file mode 100644
index 000000000000000..e5487c0341f7ac0
--- /dev/null
+++ b/src/content/docs/pipelines/observability/metrics.mdx
@@ -0,0 +1,66 @@
+---
+pcx_content_type: concept
+title: Metrics
+sidebar:
+ order: 10
+
+---
+
+Pipelines metrics are split across three different nodes under `viewer` > `accounts`. Refer to [Explore the GraphQL schema](/analytics/graphql-api/getting-started/explore-graphql-schema/) to learn how to navigate a GraphQL schema and discover which data are available.
+
+To learn more about the GraphQL Analytics API, refer to [GraphQL Analytics API](/analytics/graphql-api/).
+
+You can use the GraphQL API to measure metrics for data ingested, as well as data delivered.
+
+## Write GraphQL queries
+
+The following examples show how to explore your pipeline metrics.
+
+### Measure total bytes and records ingested over a time period
+
+```graphql
+query PipelineIngestion($accountTag: string!, $pipelineId: string!, $datetimeStart: Time!, $datetimeEnd: Time!) {
+  viewer {
+    accounts(filter: {accountTag: $accountTag}) {
+      pipelinesIngestionAdaptiveGroups(
+        limit: 10000
+        filter: {
+          pipelineId: $pipelineId
+          datetime_geq: $datetimeStart
+          datetime_leq: $datetimeEnd
+        }
+      ) {
+        sum {
+          ingestedBytes
+          ingestedRecords
+        }
+      }
+    }
+  }
+}
+```
+
+### Measure volume of data delivered
+
+```graphql
+query PipelineDelivery($accountTag: string!, $pipelineId: string!, $datetimeStart: Time!, $datetimeEnd: Time!) {
+  viewer {
+    accounts(filter: {accountTag: $accountTag}) {
+      pipelinesDeliveryAdaptiveGroups(
+        limit: 10000
+        filter: {
+          pipelineId: $pipelineId
+          datetime_geq: $datetimeStart
+          datetime_leq: $datetimeEnd
+        }
+      ) {
+        sum {
+          deliveredBytes
+        }
+      }
+    }
+  }
+}
+```
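+
+To run one of these queries, send it in a POST request to the GraphQL API endpoint, along with your variables. Below is a minimal sketch using curl for the ingestion query; it assumes an API token with Analytics read access, and the account and pipeline IDs are placeholders:
+
+```sh
+curl https://api.cloudflare.com/client/v4/graphql \
+  -H "Authorization: Bearer ${API_TOKEN}" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "query": "query ($accountTag: string!, $pipelineId: string!, $datetimeStart: Time!, $datetimeEnd: Time!) { viewer { accounts(filter: {accountTag: $accountTag}) { pipelinesIngestionAdaptiveGroups(limit: 10000, filter: {pipelineId: $pipelineId, datetime_geq: $datetimeStart, datetime_leq: $datetimeEnd}) { sum { ingestedBytes ingestedRecords } } } } }",
+    "variables": {
+      "accountTag": "[ACCOUNT-ID]",
+      "pipelineId": "[PIPELINE-ID]",
+      "datetimeStart": "2025-01-01T00:00:00Z",
+      "datetimeEnd": "2025-01-31T00:00:00Z"
+    }
+  }'
+```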
\ No newline at end of file
diff --git a/src/content/docs/pipelines/pipelines-api.mdx b/src/content/docs/pipelines/pipelines-api.mdx
new file mode 100644
index 000000000000000..09c4cfbbe132130
--- /dev/null
+++ b/src/content/docs/pipelines/pipelines-api.mdx
@@ -0,0 +1,7 @@
+---
+pcx_content_type: navigation
+title: Pipelines REST API
+sidebar:
+ order: 10
+
+---
\ No newline at end of file
diff --git a/src/content/docs/pipelines/reference/changelog.mdx b/src/content/docs/pipelines/reference/changelog.mdx
new file mode 100644
index 000000000000000..e3fcb90c64098cc
--- /dev/null
+++ b/src/content/docs/pipelines/reference/changelog.mdx
@@ -0,0 +1,15 @@
+---
+pcx_content_type: changelog
+title: Changelog
+changelog_file_name:
+ - pipelines
+sidebar:
+ order: 99
+
+---
+
+import { ProductChangelog } from "~/components"
+
+{/* */}
+
+<ProductChangelog/>
\ No newline at end of file
diff --git a/src/content/docs/pipelines/reference/index.mdx b/src/content/docs/pipelines/reference/index.mdx
new file mode 100644
index 000000000000000..a6f575945f80a9d
--- /dev/null
+++ b/src/content/docs/pipelines/reference/index.mdx
@@ -0,0 +1,12 @@
+---
+pcx_content_type: navigation
+title: Platform
+sidebar:
+ order: 8
+ group:
+ hideIndex: true
+---
+
+import { DirectoryListing } from "~/components"
+
+<DirectoryListing />
\ No newline at end of file
diff --git a/src/content/docs/pipelines/reference/limits.mdx b/src/content/docs/pipelines/reference/limits.mdx
new file mode 100644
index 000000000000000..016cadeaf36a095
--- /dev/null
+++ b/src/content/docs/pipelines/reference/limits.mdx
@@ -0,0 +1,23 @@
+---
+pcx_content_type: concept
+title: Limits
+sidebar:
+ order: 2
+---
+
+import { Render } from "~/components"
+
+:::note
+
+Many of these limits will increase during Pipelines' public beta period. [Follow our changelog](/pipelines/reference/changelog/) to keep up with the changes.
+
+:::
+
+
+| Feature | Limit |
+| --------------------------------------------- | ------------------------------------------------------------- |
+| Requests per second | 10,000 |
+| Maximum payload per request | 1 MB |
+| Maximum batch size | 100 MB |
+| Maximum batch records                          | 10,000                                                          |
+| Maximum batch duration                         | 600 seconds                                                     |
diff --git a/src/content/docs/pipelines/reference/pricing.mdx b/src/content/docs/pipelines/reference/pricing.mdx
new file mode 100644
index 000000000000000..3f2050626157260
--- /dev/null
+++ b/src/content/docs/pipelines/reference/pricing.mdx
@@ -0,0 +1,11 @@
+---
+pcx_content_type: concept
+title: Pricing
+sidebar:
+ order: 1
+head:
+ - tag: title
+ content: Pipelines Pricing
+---
+
+TODO
\ No newline at end of file
diff --git a/src/content/docs/pipelines/reference/wrangler-commands.mdx b/src/content/docs/pipelines/reference/wrangler-commands.mdx
new file mode 100644
index 000000000000000..53a355123c785e2
--- /dev/null
+++ b/src/content/docs/pipelines/reference/wrangler-commands.mdx
@@ -0,0 +1,8 @@
+---
+pcx_content_type: navigation
+title: Wrangler commands
+external_link: /workers/wrangler/commands/#pipelines
+sidebar:
+ order: 80
+
+---
\ No newline at end of file
diff --git a/src/content/docs/pipelines/sources/http.mdx b/src/content/docs/pipelines/sources/http.mdx
new file mode 100644
index 000000000000000..3840a89b074bb2d
--- /dev/null
+++ b/src/content/docs/pipelines/sources/http.mdx
@@ -0,0 +1,61 @@
+---
+title: HTTP
+pcx_content_type: concept
+sidebar:
+ order: 1
+head:
+ - tag: title
+ content: Pipeline Source - HTTP
+---
+
+import { Render, PackageManagers } from "~/components";
+
+Pipelines supports ingesting data via HTTP. When you create a new pipeline, you'll receive an HTTP endpoint that you can make POST requests to.
+
+
+```sh
+$ npx wrangler pipelines create [PIPELINE-NAME] --r2 [R2-BUCKET-NAME] --access-key-id [ACCESS-KEY-ID] --secret-access-key [SECRET-ACCESS-KEY]
+
+🌀 Creating pipeline named "[PIPELINE-NAME]"
+✅ Successfully created pipeline [PIPELINE-NAME] with ID [PIPELINE-ID]
+
+You can now send data to your pipeline with:
+ curl "https://.pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'
+```
+
+## Turning HTTP ingestion off
+By default, ingestion via HTTP is turned on for all Pipelines. You can turn it off by setting `--http false` when creating or updating a Pipeline.
+
+```sh
+$ npx wrangler pipelines create [PIPELINE-NAME] --r2 [R2-BUCKET-NAME] --access-key-id [ACCESS-KEY-ID] --secret-access-key [SECRET-ACCESS-KEY] --http false
+```
+
+Ingestion URLs are tied to your Pipeline ID. Turning HTTP off, and then turning it back on, will not change the URL.
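+
+For example, a sketch of turning HTTP ingestion back on for an existing pipeline (substitute your pipeline's name):
+
+```sh
+npx wrangler pipelines update [PIPELINE-NAME] --http true
+```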
+
+## Authentication
+You can secure your HTTP ingestion endpoint using Cloudflare API tokens. By default, authentication is turned off. To enable authentication, use `--authentication true` while creating or updating a Pipeline.
+
+```sh
+$ npx wrangler pipelines create [PIPELINE-NAME] --r2 [R2-BUCKET-NAME] --access-key-id [ACCESS-KEY-ID] --secret-access-key [SECRET-ACCESS-KEY] --authentication true
+```
+
+Once authentication is turned on, you will need to include a Cloudflare API token in your request headers.
+
+### Get API token
+1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
+2. Go to your [API tokens page](https://dash.cloudflare.com/profile/api-tokens).
+3. Select *Create Token*.
+4. Choose the template for Workers Pipelines. Select *Continue to summary*, and then *Create Token*. Make sure to copy the API token and save it securely.
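+
+For example, you can store the token in an environment variable so the authenticated request below can reference it (the value shown is a placeholder):
+
+```sh
+export API_TOKEN="[API-TOKEN]"
+```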
+
+### Making authenticated requests
+Include the API token you created in the previous step in the headers for your request:
+
+```sh
+curl https://[PIPELINE-ID].pipelines.cloudflare.com \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer ${API_TOKEN}" \
+ -d '[
+ {"key1": "value1", "key2": "value2"},
+ {"key1": "value3", "key2": "value4"}
+ ]'
+```
diff --git a/src/content/docs/pipelines/sources/index.mdx b/src/content/docs/pipelines/sources/index.mdx
new file mode 100644
index 000000000000000..2ddffca7bfa595b
--- /dev/null
+++ b/src/content/docs/pipelines/sources/index.mdx
@@ -0,0 +1,12 @@
+---
+pcx_content_type: navigation
+title: Sources
+sidebar:
+ order: 3
+ group:
+ hideIndex: true
+---
+
+import { DirectoryListing } from "~/components"
+
+<DirectoryListing />
\ No newline at end of file
diff --git a/src/content/docs/pipelines/sources/worker.mdx b/src/content/docs/pipelines/sources/worker.mdx
new file mode 100644
index 000000000000000..cacf034813fc1fa
--- /dev/null
+++ b/src/content/docs/pipelines/sources/worker.mdx
@@ -0,0 +1,121 @@
+---
+title: Workers
+pcx_content_type: concept
+sidebar:
+ order: 2
+head:
+ - tag: title
+ content: Pipeline Source - Worker
+---
+
+import { Render, PackageManagers } from "~/components";
+
+You can send records to your Pipeline directly from a [Cloudflare Worker](/workers/). To do so, you need to:
+1. Create a Worker
+2. Create a Pipeline
+3. Add your Pipeline as a binding in your Worker's `wrangler.toml` file
+4. Write your Worker, to send records to your Pipeline
+5. Deploy your Worker
+6. Verify in R2
+
+## 1. Create a Worker
+Create a Cloudflare Worker if you don't already have one. This Worker will send records to your Pipeline.
+
+To create a Worker, run:
+
+
+<PackageManagers type="create" pkg="cloudflare@latest" args={"pipeline-worker"} />
+
+
+This will create a new directory, which includes both a `src/index.ts` Worker script and a [`wrangler.toml`](/workers/wrangler/configuration/) configuration file. Navigate into the newly created directory:
+
+```sh
+cd pipeline-worker
+```
+
+## 2. Create a Pipeline
+Create a new pipeline if you don't already have one. If this is your first time using Pipelines, follow the instructions in the [get started guide](/pipelines/get-started/).
+
+By default, Worker bindings are enabled on all Pipelines. Keep track of the name you gave your Pipeline in this stage; we'll use it in the next step.
+
+## 3. Add a Binding
+To connect your Worker to your Pipeline, you need to create a binding. [Bindings](/workers/runtime-apis/bindings/) allow you to grant specific capabilities to your Worker.
+
+Open your newly generated `wrangler.toml` configuration file and add the following:
+
+```toml
+[[pipelines]]
+ binding = "MY_PIPELINE"
+ pipeline = ""
+```
+
+Replace `[PIPELINE-NAME]` with the name of the pipeline you created in step 2. Next, replace `MY_PIPELINE` with the name you want for your binding. The binding must be a valid JavaScript variable name. This is the variable you will use to reference this pipeline in your Worker.
+
+## 4. Write your Worker
+You will now configure your Worker to send records to your Pipeline. Your Worker will:
+
+1. Take a request it receives from the browser
+2. Transform the request to JSON
+3. Send the resulting record to your Pipeline
+
+In your Worker project directory, open the `src` folder and add the following to your `index.ts` file:
+```ts
+export interface Env {
+  MY_PIPELINE: Pipeline;
+}
+
+export default {
+  async fetch(req, env, ctx): Promise<Response> {
+    let record = {
+      url: req.url,
+      method: req.method,
+      headers: Object.fromEntries(req.headers),
+    };
+    await env.MY_PIPELINE.send([record]);
+    return new Response('Success');
+  },
+} satisfies ExportedHandler<Env>;
+```
+
+Replace `MY_PIPELINE` with the name of the binding you set in step 3. If sending the record to the pipeline fails, your Worker will raise an exception. If sending succeeds, it returns `Success` with an HTTP `200` status code to the browser.
+
+In a production application, you would likely use a [`try...catch`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/try...catch) statement to catch the exception and handle it directly (for example, return a custom error or even retry).
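+
+Here is a minimal sketch of that pattern, reusing the `Env` interface and hypothetical `MY_PIPELINE` binding from above:
+
+```ts
+export default {
+  async fetch(req, env, ctx): Promise<Response> {
+    const record = {
+      url: req.url,
+      method: req.method,
+      headers: Object.fromEntries(req.headers),
+    };
+    try {
+      await env.MY_PIPELINE.send([record]);
+      return new Response('Success');
+    } catch (err) {
+      // Handle the failure directly: log it, and return a custom error
+      console.error('Failed to send record to pipeline', err);
+      return new Response('Failed to ingest record', { status: 500 });
+    }
+  },
+} satisfies ExportedHandler<Env>;
+```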
+
+## 5. Publish your Worker
+With your `wrangler.toml` file and `index.ts` file configured, you are ready to publish your Worker. To publish it, run:
+
+```sh
+npx wrangler deploy
+```
+
+You should see output like the following, with a `*.workers.dev` URL by default:
+
+```sh output
+Uploaded [WORKER-NAME] (0.76 sec)
+Published [WORKER-NAME] (0.29 sec)
+  https://[WORKER-NAME].[SUBDOMAIN].workers.dev
+```
+
+Copy your `*.workers.dev` subdomain and paste it into a new browser tab. Refresh the page a few times to send records to your pipeline. Your browser should display the `Success` response after each request.
+
+## 6. Verify in R2
+Go to the R2 bucket connected to the pipeline you created in step 2 via [the Cloudflare dashboard](https://dash.cloudflare.com/). You should see a prefix for today's date. Click through, and you will find one or more files containing the records you sent in step 5.
+
+## Local development
+:::note
+Known issue: When running your Worker locally, sending data to your Pipeline currently results in an error.
+:::
diff --git a/src/content/docs/workers/wrangler/commands.mdx b/src/content/docs/workers/wrangler/commands.mdx
index 688dbf9f236adc1..d9c8e8f0065fb79 100644
--- a/src/content/docs/workers/wrangler/commands.mdx
+++ b/src/content/docs/workers/wrangler/commands.mdx
@@ -2012,6 +2012,88 @@ wrangler pages secret bulk [] [OPTIONS]
---
+## `pipelines`
+:::note
+
+Pipelines is currently in public beta. Report Pipelines bugs in [GitHub](https://github.com/cloudflare/workers-sdk/issues/new/choose).
+
+:::
+
+Manage your [Pipelines](/pipelines/) configurations.
+
+### `create`
+
+Create a new pipeline
+
+```txt
+wrangler pipelines create <name> --r2 <bucket-name> [OPTIONS]
+```
+
+- `name` string required
+ - The name of the pipeline to create
+- `--r2` string required
+ - The name of the R2 bucket used as the destination to store the data.
+- `--batch-max-mb` number optional
+  - The maximum size of a batch before data is written, in megabytes. Default is 10 MB, maximum is 100 MB.
+- `--batch-max-rows` number optional
+  - The maximum number of rows in a batch before data is written. Default, and maximum, is 10,000 rows.
+- `--batch-max-seconds` number optional
+  - The maximum duration of a batch before data is written, in seconds. Default is 15 seconds, maximum is 600 seconds.
+- `--compression` string optional
+ - Type of compression to apply to output files. Choices: "none", "gzip", "deflate"
+- `--prefix` string optional
+ - Optional base path to store files in the destination bucket.
+- `--filepath` string optional
+ - The path to store partitioned files in the destination bucket. Defaults to `event_date=${date}/hr=${hr}`
+- `--filename` string optional
+ - The name of the file in the bucket. Must contain `${slug}`. File extension is optional. Defaults to `${slug}${extension}`
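+
+For example, a sketch that creates a pipeline with custom batching, compression, and a prefix (all names are illustrative):
+
+```sh
+wrangler pipelines create clickstream-pipeline \
+  --r2 my-bucket \
+  --batch-max-seconds 60 \
+  --compression gzip \
+  --prefix "clickstream"
+```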
+
+### `update`
+
+Update an existing pipeline
+
+```txt
+wrangler pipelines update <name> [OPTIONS]
+```
+
+- `name` string required
+ - The name of the pipeline to update
+- `--r2` string required
+ - The name of the R2 bucket used as the destination to store the data.
+- `--batch-max-mb` number optional
+  - The maximum size of a batch before data is written, in megabytes. Default is 10 MB, maximum is 100 MB.
+- `--batch-max-rows` number optional
+  - The maximum number of rows in a batch before data is written. Default, and maximum, is 10,000 rows.
+- `--batch-max-seconds` number optional
+  - The maximum duration of a batch before data is written, in seconds. Default is 15 seconds, maximum is 600 seconds.
+- `--compression` string optional
+ - Type of compression to apply to output files. Choices: "none", "gzip", "deflate"
+- `--prefix` string optional
+ - Optional base path to store files in the destination bucket.
+- `--filepath` string optional
+ - The path to store partitioned files in the destination bucket. Defaults to `event_date=${date}/hr=${hr}`
+- `--filename` string optional
+ - The name of the file in the bucket. Must contain `${slug}`. File extension is optional. Defaults to `${slug}${extension}`
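+
+For example, a sketch that shortens the batch window for an existing pipeline (the names are illustrative):
+
+```sh
+wrangler pipelines update clickstream-pipeline --r2 my-bucket --batch-max-seconds 30
+```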
+
+### `delete`
+
+Delete an existing pipeline
+
+```txt
+wrangler pipelines delete <name> [OPTIONS]
+```
+
+- `name` string required
+ - The name of the pipeline to delete
+
+### `list`
+
+List all pipelines in your account.
+
+```txt
+wrangler pipelines list [OPTIONS]
+```
+
## `queues`
Manage your Workers [Queues](/queues/) configurations.
diff --git a/src/content/products/pipelines.yaml b/src/content/products/pipelines.yaml
new file mode 100644
index 000000000000000..6a0d113c63a5233
--- /dev/null
+++ b/src/content/products/pipelines.yaml
@@ -0,0 +1,12 @@
+name: Pipelines
+
+product:
+ title: Pipelines
+ url: /pipelines/
+ group: Developer platform
+ preview_tryout: true
+
+meta:
+ title: Cloudflare Pipelines Docs
+ description: Ingest, transform, and store real-time data streams in R2.
+ author: '@cloudflare'
\ No newline at end of file
diff --git a/src/icons/pipelines.svg b/src/icons/pipelines.svg
new file mode 100644
index 000000000000000..98412a8959540a1
--- /dev/null
+++ b/src/icons/pipelines.svg
@@ -0,0 +1 @@
+
\ No newline at end of file